Model-Independent Prediction of $R(\eta_c)$

We present a model-independent prediction for $R(\eta_c) \equiv \mathcal{BR}(B_c \rightarrow \eta_c \, \tau^+\nu_\tau)/\mathcal{BR}(B_c \rightarrow \eta_c \, \mu^+\nu_\mu)$. This prediction is obtained from the form factors through a combination of dispersive relations, heavy-quark relations at zero recoil, and the limited existing determinations from lattice QCD. The resulting prediction, $R(\eta_c) = 0.29(5)$, agrees with the weighted average of previous model predictions, but with reduced uncertainties.

I. INTRODUCTION

The Higgs interaction is the only source of lepton-universality violation within the Standard Model, but the observation of neutrino masses implies that at least one form of beyond-Standard-Model modification exists. The ratios of semileptonic heavy-meson decay rates for distinct lepton flavors are particularly sensitive to new physics, because the QCD dynamics of the heavy-meson decays decouple from the electroweak interaction at leading order. This factorization implies that, at this level of precision, the ratios of semileptonic heavy-meson decay rates can differ from unity only through kinematic factors, although it is possible to further remove this dependence [1-8]. Measurements from BaBar, Belle, and LHCb of the ratios R(D^(*)) of the heavy-light meson decays B → D^(*) ℓν, for ℓ = τ relative to ℓ = µ, exhibit tension with theoretical predictions. The HFLAV averages [9] of the experimental results, R(D^*) = 0.306(13)(7) [10-18] and R(D) = 0.407(39)(24) [10-12], represent a combined 3.8σ discrepancy [9] with the HFLAV-suggested Standard-Model value R(D^*) = 0.258(5) [9], obtained from an average [7,19,20] that utilizes experimental form factors, lattice-QCD results, and heavy-quark effective theory, and with R(D) = 0.300(8) [21], an average of lattice-QCD results [22,23], as well as with the value R(D) = 0.299(3) obtained by also including experimentally extracted form factors [24]. Recently, the LHCb collaboration measured R(J/ψ) = 0.71(17)(18) [25], which agrees with the Standard-Model bound 0.20 ≤ R(J/ψ) ≤ 0.39 at 1.3σ [26]. In the future, it would be useful to consider the b̄c → c̄c analog of the B → D process, B_c^+ → η_c. Alas, measurements of R(η_c) are substantially harder than those of R(J/ψ) for a few reasons, foremost of which is that there is no clean process like J/ψ → µ^+µ^- with which to reconstruct the η_c, so backgrounds will be larger. Additionally, the transition to the η_c from excited states is poorly understood, which further complicates the extraction of signals [27]. Despite these present experimental difficulties, it would be valuable to have a Standard-Model prediction for R(η_c) ready in advance. The current state of affairs, though, is limited to model-dependent calculations (collected in Table I) [3,28-39]. Although most models' central values cluster in the range 0.25-0.35, one notes a wide spread in their estimated uncertainties, which typically account only for parameter fitting. We take as a reasonable estimate the weighted average of the results, R(η_c) = 0.33(17). These results rely upon approximations to obtain the B_c^+ → η_c transition form factors.
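To make the averaging step concrete, the sketch below forms an inverse-variance weighted mean of those Table I entries that quote uncertainties (asymmetric errors symmetrized, correlations ignored, error-free entries omitted). These choices are assumptions for illustration only; the text does not specify the procedure behind the quoted R(η_c) = 0.33(17), whose larger uncertainty presumably also reflects the scatter of the error-free models.

```python
# Sketch: inverse-variance weighted average of model predictions for R(eta_c).
# Assumptions: uncertainties symmetrized and treated as uncorrelated; entries
# quoting no uncertainty are omitted. Illustrative only; this is not claimed
# to be the averaging procedure behind the quoted R(eta_c) = 0.33(17).
import math

# (central value, symmetrized uncertainty) from Table I, where quoted
models = [(0.30, 0.09),   # QCDSR [29]
          (0.30, 0.115),  # NRQCD [34], (0.11 + 0.12)/2
          (0.31, 0.12),   # pQCD  [35]
          (0.60, 0.30),   # pQCD  [36]
          (0.30, 0.10),   # pQCD  [37], (0.12 + 0.08)/2
          (0.25, 0.08)]   # CQM   [39]

weights = [1.0 / s**2 for _, s in models]
mean = sum(w * r for (r, _), w in zip(models, weights)) / sum(weights)
err = math.sqrt(1.0 / sum(weights))
print(f"weighted average: {mean:.3f} +/- {err:.3f}")
```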
Without a clear understanding of the systematic uncertainties these assumptions introduce, the reliability of these predictions is suspect. We fill a gap in the literature by computing a model-independent Standard-Model prediction, R(η_c) = 0.29(5), in which all uncertainties are quantifiable. In order to obtain this result, we begin in Sec. II with a discussion of the V−A structure of the Standard Model and the form factors. In Sec. III we explain how heavy-quark spin symmetry can be applied at the zero-recoil point to relate the form factors, using the method of [29]. The initial lattice-QCD results of the HPQCD collaboration [40] for the transition form factors are discussed in Sec. IV. The dispersive-analysis framework utilized to constrain the form factors as functions of momentum transfer is presented in Sec. V. The results of our analysis, as well as future projections, appear in Sec. VI, and we conclude in Sec. VII. After this calculation was completed, a similar calculation appeared [41] that is in good agreement with ours.

In the Standard Model, the factorization of Eq. (1) into a leptonic and a hadronic tensor reduces the problem of calculating R(η_c) to the computation of the hadronic matrix element ⟨η_c|(V−A)^µ|B_c^+⟩. Using this factorization, the hadronic matrix element can be written in terms of two transition form factors. These form factors enter the matrix element in combination with the meson masses, M ≡ M_{B_c^+} and m ≡ M_{η_c}, and the corresponding meson momenta P^µ and p^µ. The form factors themselves depend only upon t ≡ q² = (P − p)², the squared momentum transfer to the leptons. The hadronic matrix element in our convention is given in terms of f_+(t) and f_−(t) by

⟨η_c(p)|V^µ|B_c^+(P)⟩ = f_+(t) (P + p)^µ + f_−(t) (P − p)^µ.   (2)

In this work, we exchange f_− for f_0, which is given by

f_0(t) = f_+(t) + t/(M² − m²) f_−(t).

In this convention, one sees that f_+(0) = f_0(0), which should be imposed when fitting the functions. We further introduce the two important kinematic values t_± = (M ± m)². This convention differs from that utilized by HPQCD for their lattice-QCD results [40] by the mass dimension of f_0, and a simple conversion factor relates the two. Using Eq. (2) or an equivalent basis, form factors are computed from models with uncontrolled approximations. Some models construct wave functions for the two mesons, while others compute a perturbative distribution amplitude at q² → 0 and then extrapolate to larger values. In addition, some models violate delicate form-factor relations, such as the heavy-quark spin-symmetry relations discussed below. Due to these issues, it is potentially treacherous to take the remarkably good agreement among the model predictions as a genuine estimate of the true Standard-Model value rather than as a theoretical prejudice in modeling. The differential decay rate for the semileptonic decay is given by Eq. (5), in terms of the spatial momentum p of the η_c in the B_c^+ rest frame. Inspecting Eq. (5), one can see that in the light leptonic channels (ℓ = e, µ) the contribution from f_0 can be neglected, while in the τ channel it cannot.

TABLE I. Model predictions for R(η_c).
Model        R_theory            Year
CQM [28]     0.33                1998
QCDSR [29]   0.30 ± 0.09         1999
RCQM [30]    0.28                2000
QCDSR [31]   0.30                2003
RCQM [32]    0.27                2006
NRQM [33]    0.35 +0.02          2006
NRQCD [34]   0.30 +0.11 −0.12    2013
pQCD [35]    0.31 ± 0.12         2013
pQCD [36]    0.6 ± 0.3           2016
pQCD [37]    0.30 +0.12 −0.08    2017
CQM [38]     0.26                2017
CQM [39]     0.25 ± 0.08         2018
RCQM [3]     0.26                2018
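For reference, a standard form of the differential rate referred to as Eq. (5) above is the generic expression for a pseudoscalar-to-pseudoscalar semileptonic decay; conventions (notably the mass dimension of f_0) may differ from the paper's.

```latex
% Standard pseudoscalar-to-pseudoscalar semileptonic rate; conventions
% (in particular the mass dimension of f_0) may differ from the paper's.
\frac{d\Gamma}{dt} = \frac{G_F^2 |V_{cb}|^2\,|\vec p\,|}{24\pi^3}
\left(1-\frac{m_\ell^2}{t}\right)^{\!2}
\left[\left(1+\frac{m_\ell^2}{2t}\right)|\vec p\,|^2 f_+^2(t)
 + \frac{3m_\ell^2}{8t}\,\frac{(M^2-m^2)^2}{M^2}\, f_0^2(t)\right]
```

The explicit m_ℓ²/t prefactor of the f_0 term makes manifest why that contribution is negligible for ℓ = e, µ but not for ℓ = τ.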
III. HEAVY-QUARK SPIN SYMMETRY

Decays of heavy-light Q̄q systems possess enhanced symmetries in the heavy-quark limit, because operators that distinguish between heavy quarks of different spin and flavor are suppressed by 1/m_Q, and their matrix elements vanish when m_Q → ∞. Consequently, all transition form factors ⟨Q̄′q|O|Q̄q⟩ in this limit are proportional to a single, universal Isgur-Wise function ξ(w) [42,43], whose momentum-transfer argument w is the dot product of the initial and final heavy-light hadron 4-velocities, v^µ ≡ p_M^µ/M and v′^µ ≡ p_m^µ/m, respectively. At the zero-recoil point t = (M − m)², or w = 1, the daughter hadron m is at rest with respect to the parent M. Indeed, one notes that w equals the Lorentz factor γ_m of m in the M rest frame. The maximum value of w corresponds to the minimum momentum transfer t through the virtual W to the lepton pair, which occurs when the leptons are created with minimal energy, t = m_ℓ². In heavy-light systems, the heavy-quark approximation corresponds to a light quark bound in a nearly static, spin-independent color field. In the weak decay Q → Q′ between two very heavy quark flavors, the momentum transfer t to the light quark is insufficient to change its state, and therefore the wave function of this light spectator quark remains unaffected. One thus concludes that ξ(1) = 1 at the zero-recoil (Isgur-Wise) point, yielding an absolute normalization for the form factors. These results are accurate up to corrections of O(Λ_QCD/m_Q). In the decay B_c^+ → η_c, the light spectator quark is replaced by another heavy quark, the c, and several of these conclusions change. This substitution reduces the enhanced symmetries of the heavy-quark limit [44]. First, the difference between the heavy-quark kinetic-energy operators produces energies that are no longer negligible compared to those of the spectator c, spoiling the flavor symmetry in heavy-heavy systems. Furthermore, the spectator c receives a momentum transfer from the b̄ → c̄ decay of the same order as the momentum imparted to the c̄, so one cannot justify a normalization of the form factors at the zero-recoil point based purely upon symmetry. While the heavy-flavor symmetry is lost, the separate spin symmetries of the b̄ and c̄ quarks remain, with an additional spin symmetry from the heavy spectator c. Furthermore, the presence of the heavy c suggests a system that is closer to a nonrelativistic limit than heavy-light systems. In the B_c^+ → η_c semileptonic decays, one further finds that the kinematically accessible range of w is narrow, suggesting that an expansion about the zero-recoil point may still be reasonable. Together, the spin symmetries imply that the two form factors are related to a single, universal function h (∆ in Ref. [44]), but only at the zero-recoil point, and no symmetry-based normalization for h can be derived [44]. Using the trace formalism of [45], Ref. [44] showed how to compute the relative normalization between the four Q̄q → Q̄′q form factors near the zero-recoil point [i.e., where the spatial momentum transfer to the spectator q is O(m_q)]. Using these relations, h was derived for a color-Coulomb potential in Ref. [44]. This approximation was improved in Ref. [46], where a constituent quark-model calculation of BR(B_c^+ → η_c ℓ^+ν_ℓ) was performed for ℓ = e, µ, but not τ. The heavy-quark spin-symmetry relations were generalized in [29] to account for a momentum transfer to the spectator quark occurring at leading order in NRQCD.
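For reference, the recoil variable used throughout this section is related to the momentum transfer by a relation that follows directly from t = (P − p)² and the 4-velocity definitions above:

```latex
w = v \cdot v' = \frac{M^2 + m^2 - t}{2Mm},
\qquad
w\big(t=(M-m)^2\big) = 1 \;\;\text{(zero recoil)},
\qquad
w_{\max} = w\big(t=m_\ell^2\big).
```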
We reproduce here the relation of [29], in which the form factors f_+(w = 1) and f_0(w = 1) are related in terms of the ratios r ≡ m/M, ρ ≡ m_{Q′}/m_Q, and σ ≡ m_q/m_Q. These relations reproduce the standard Isgur-Wise result [42,43,47] when σ = 0. Terms that break these relations should be O(m_c/m_b, Λ_QCD/m_c) ≈ 30%, and we conservatively allow for up to 50% violations. Heavy-quark spin symmetry further relates the zero-recoil form factors of B_c^+ → η_c to those of B_c^+ → J/ψ, which will be useful in the future for obtaining further constraints on all six form factors.

IV. LATTICE QCD RESULTS

The state-of-the-art lattice-QCD calculations for B_c^+ → η_c are limited to preliminary results from the HPQCD Collaboration for f_+(q²) at 4 q² values and f_0(q²) at 5 q² values [40]. These results, which were obtained using 2+1+1 HISQ ensembles with smallest lattice spacing a ≈ 0.09 fm and with the b quark treated via NRQCD, are reproduced in Fig. 2. For q² = t_− and 0, f_0(q²) has also been computed on coarser lattices and on ensembles with lighter dynamical b-quark masses, which are used to check the accuracy and assess the uncertainty of the a ≈ 0.09 fm NRQCD results. In contrast to the situation for R(J/ψ), for R(η_c) both form factors have some lattice determinations, so the complications of treating unknown form factors are not required. Instead, the dispersive relations are sufficiently constraining that a rigorous error budget smaller than our naive 20% is the easiest way to reduce the error in R(η_c).

V. DISPERSIVE RELATIONS

In this work we fit the form factors of B_c^+ → η_c using analyticity and unitarity constraints on two-point Green's functions and a conformal parameterization, in the manner implemented by Boyd, Grinstein, and Lebed (BGL) [48] for the decays of heavy-light hadrons. This parameterization was extended to heavy-heavy systems in [26] with a slightly different set of free parameters to simplify the computation, which we utilize here. Here we briefly sketch the necessary components. Consider the two-point momentum-space Green's function Π_J^{µν} of a vectorlike quark current, J^µ ≡ Q̄Γ^µQ′. Π_J^{µν} can be decomposed in different ways [47,49-52]; in this work we decompose Π_J^{µν} into spin-1 (Π_J^T) and spin-0 (Π_J^L) pieces [47]. From perturbative QCD (pQCD), the functions Π_J^{L,T} require subtractions in order to be rendered finite, yielding finite dispersion relations for the subtracted quantities χ(q²). The freedom to choose a value of q² allows us to compute χ(q²) reliably in pQCD, far from the region where the two-point function receives nonperturbative contributions. The formal condition for q² to lie in the perturbative regime requires q² to be far below the pair-production threshold, and for Q, Q′ = c, b the choice q² = 0 is clearly sufficient. Existing calculations of the two-loop pQCD χ(q² = 0), modified by nonperturbative vacuum contributions [53-57] and used in Ref. [47], can be applied here. An example of the state of the art in this regard (although slightly different from the approach used here) appears in Ref. [24]. The spectral functions Im Π_J can be decomposed into a sum over the complete set of states X that can couple the current J^µ to the vacuum. Each term in the sum is semipositive definite, thereby producing a strict inequality for each X in Eqs. (11). These inequalities can be made stronger by including multiple X at once, as discussed in Refs. [7,20,47]. For X we include only below-threshold B_c^+ poles and a single two-body channel, B_c^+ + η_c, implying that our results provide very conservative bounds.
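The decomposition and subtracted dispersion relations referred to above take the following standard forms in the BGL framework (conventions vary between references, so this is a reference sketch rather than the paper's exact equations):

```latex
% Standard BGL decomposition and subtracted dispersion relations:
\Pi_J^{\mu\nu}(q) =
\left(\frac{q^\mu q^\nu}{q^2} - g^{\mu\nu}\right)\Pi_J^T(q^2)
+ \frac{q^\mu q^\nu}{q^2}\,\Pi_J^L(q^2),
\qquad
\chi_J^T(q^2) \equiv \frac{1}{2}\frac{\partial^2 \Pi_J^T}{\partial(q^2)^2}
= \frac{1}{\pi}\int_0^\infty dt\,\frac{\operatorname{Im}\Pi_J^T(t)}{(t-q^2)^3},
\qquad
\chi_J^L(q^2) \equiv \frac{\partial \Pi_J^L}{\partial q^2}
= \frac{1}{\pi}\int_0^\infty dt\,\frac{\operatorname{Im}\Pi_J^L(t)}{(t-q^2)^2}.
```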
For B_c^+ + η_c, there are lighter two-body thresholds with the correct quantum numbers that must be taken into consideration. The first physically prominent two-body production threshold in t occurs at B + D (see Table II). With this fact in mind, we define a new variable t_bd ≡ (M_B + M_D)², which corresponds to the first branch point in a given two-point function, while the B_c^+ + η_c branch point occurs at t_+ > t_bd. With these variables, one maps the complex t plane to the unit disk in a variable z (with the two sides of the branch cut forming the unit circle C) using the conformal variable transformation

z(t; t_0) = [√(t* − t) − √(t* − t_0)] / [√(t* − t) + √(t* − t_0)],   (14)

where t* is the branch point around which one deforms the contour, and t_0 is a free parameter used to improve the convergence of functions at small z. In this mapping, z is real for t ≤ t* and a pure phase for t ≥ t*. Prior work that computed the form factors between baryons whose threshold lies above that of the lightest pair in the channel (e.g., [47,51]) set t* = t_+, which introduces into the region |z| < 1 a subthreshold branch cut, meaning that the form factors have complex nonanalyticities that cannot trivially be removed. To avoid this issue, we instead set t* = t_bd, which is possible because we are only interested in the semileptonic decay region, m_ℓ² ≤ t ≤ t_−, which always lies below t_bd. This choice ensures that the only nonanalytic features within the unit circle |z| = 1 are simple poles corresponding to the single particles B_c^{(*)+}, which can be removed by the Blaschke factors described below. The need to remove branch cuts, but not poles, from |z| < 1 derives from a unique feature of the Blaschke factors: they can remove each pole given only its location (i.e., mass), independent of its residue.¹ In contrast, correctly accounting for a branch cut requires knowledge of both the location of the branch point and the function along the cut. To remove these subthreshold poles, one multiplies by z(t; t_s) [using the definition of Eq. (14)], a Blaschke factor, which eliminates a simple pole at t = t_s. Using this formalism, the bound on each form factor F_i(t) can be written as in Eq. (15). The function P_i(t) in Eq. (15) is a product of Blaschke factors z(t; t_p) that remove dynamical singularities due to the presence of subthreshold resonant poles. Masses corresponding to the poles that must be removed in B_c^+ → J/ψ are found in Table II, organized by the channel to which each one contributes. These masses are from model calculations [60], with uncertainties that are negligible for our purposes. The weight function φ_i(t; t_0) is called an outer function in complex analysis; it is given by Eq. (16), where j = T, L (for which n_j = 3, 2, respectively), the function P̃_i(t) is a product of factors z(t; t_s) or √(z(t; t_s)) designed to remove kinematical singularities at points t = t_s < t_bd from the other factors in Eq. (15), and W_i(t) is a computable weight function depending upon the particular form factor F_i. The outer function can be reexpressed in a general form for any particular F_i as in Eq. (17), where n_I is an isospin Clebsch-Gordan factor, which is 1 for B_c^+ → η_c. The remaining factors are found in Table III. Transforming the dispersion-relation inequality, Eq. (15), into z space gives Eq. (18), which, upon dividing out the nonanalytic terms, allows the expansion in z of an analytic function, Eq. (19).
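A quick numeric check (sketch) of the reconstructed map (14): with placeholder, PDG-like meson and lepton masses (assumptions; the paper's exact inputs are not reproduced here), choosing t_0 so that the semileptonic region maps symmetrically onto [−z_max, z_max] reproduces the z_max values quoted in the next section.

```python
# Sketch: conformal map z(t; t0) of Eq. (14) with t* = t_bd = (M_B + M_D)^2.
# All masses in GeV are assumed, PDG-like placeholder values.
import math

M, m = 6.2749, 2.9839           # M_{B_c^+}, M_{eta_c}
MB, MD = 5.2797, 1.8696         # M_B, M_D define the first branch point
t_bd = (MB + MD)**2             # t* used in the map
t_minus = (M - m)**2            # zero-recoil point

def z(t, t0):
    a, b = math.sqrt(t_bd - t), math.sqrt(t_bd - t0)
    return (a - b) / (a + b)

for name, ml in (("mu", 0.10566), ("tau", 1.77686)):
    t_min = ml**2               # minimum momentum transfer
    # t0 chosen so that z(t_min) = -z(t_minus): symmetric, optimal range
    t0 = t_bd - math.sqrt((t_bd - t_min) * (t_bd - t_minus))
    print(f"{name}: z(t_min) = {z(t_min, t0):+.4f}, "
          f"z(t_minus) = {z(t_minus, t0):+.4f}")
```

This gives |z|_max ≈ 0.030 for µ and ≈ 0.022 for τ, matching the z_max,µ and z_max,τ values quoted below.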
Inserting the expansion (19) into Eq. (18), one finds that the bound can be compactly written as a constraint on the Taylor-series coefficients,

Σ_n a_{in}² ≤ 1.   (20)

All possible functional dependences of the form factor F_i(t) consistent with Eqs. (11) are now incorporated into the coefficients a_{in}. It is useful to introduce a number of dimensionless parameters that are functions of the meson masses, along with a parameter N related to t_0 in Eq. (14) by Eq. (22). It is straightforward to compute the kinematical range for the semileptonic process in terms of z, Eq. (23). The minimal (optimized) truncation error is achieved when z_min = −z_max, which occurs when N_opt = λκ. Evaluating at N = N_opt, one finds Eq. (24). From these expressions, we find that the semileptonic decays have z_max,τ ≈ 0.022 and z_max,µ ≈ 0.030, where each has a 1.3% variation, depending upon whether the BD or B*D threshold is the lowest branch point t_bd.

VI. RESULTS

Before presenting our prediction for R(η_c), we summarize the constraints that the form factors f_0 and f_+ are required to satisfy:
• The coefficients a_n of each form factor are constrained by Σ_n a_n² ≤ 1 from Eq. (20); in particular, we investigate the truncations n = 1, 2, 3 here.
• Using Eq. (9), the value of f_+(t_−) is required to agree with f_0(t_−), which is calculated from lattice QCD, within 50%.
Imposing these constraints, we perform our fit. Our third assumption, relating the form factors through heavy-quark spin symmetry, is not imposed in [41]; imposing it allows us to reduce the uncertainty on f_+(t_−). Gaussian-distributed points are sampled for the form factors f_0 and f_+, with means given by the HPQCD results. The combined uncertainties are given by the quadrature sum of the reported uncertainty δ_lat of the form-factor points and an additional systematic uncertainty f_lat (expressed as a percentage of the form-factor point value) that we use to estimate the uncomputed lattice uncertainties (i.e., finite-volume corrections, quark-mass dependence, discretization errors). f_lat is taken to be 1, 5, or 20% of the value of the form factor from the lattice. This is a more conservative method than the χ² procedure of [41]. For our final result, we suggest using f_lat = 20%, while the other two values are helpful for understanding future prospects with improved lattice data. Using these sample points, we compute lines of best fit, from which we produce the coefficients a_n. The resulting bands of allowed form factors are shown for f_lat = 20% in Fig. 2.

[Fig. 2 caption: B_c^+ → η_c form factors f_+(q²) (red circles) and f_0(q²) (blue triangles) from the HPQCD collaboration. The interior bars represent the statistical uncertainty quoted by HPQCD; the exterior bars include our f_lat = 20% systematic uncertainty. The colored DA (dispersive analysis) bands represent our one-standard-deviation (1σ) best-fit region.]

Having computed the form factors, we present predicted values for R(η_c) as a function of the truncation power n = 1, 2, 3 in the dispersive-analysis coefficients of Eq. (19), and of the 1, 5, or 20% systematic uncertainty f_lat associated with the lattice data. The full results are presented in Table IV; as a conservative value, we suggest using the n = 3, f_lat = 20% value of R(η_c) = 0.29(5). In contrast to the case of R(J/ψ), we have more than three data points, and can therefore investigate the convergence more carefully. For f_+, the series appears to converge rapidly, such that neither a_2 nor a_3 can be distinguished from zero.
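The sampling-and-fit procedure just described can be sketched as follows. The lattice central values, the outer-function normalization, and the truncation order are all synthetic stand-ins here (the real analysis expands φ_i P_i F_i with the outer function of Eq. (16)), so this illustrates only the logic of Gaussian sampling, quadrature error combination, and the unitarity cut of Eq. (20).

```python
# Sketch of the fit: Gaussian-sample form-factor points, fit a truncated z
# expansion, keep samples obeying the unitarity bound. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)

q2 = np.array([0.0, 2.7, 5.4, 8.1, 10.8])        # GeV^2, illustrative
f0 = np.array([0.60, 0.68, 0.78, 0.90, 1.04])    # synthetic central values
delta_lat = 0.02 * np.ones_like(f0)              # "reported" errors
f_lat = 0.20                                     # 20% flat systematic
sigma = np.hypot(delta_lat, f_lat * f0)          # quadrature combination

t_bd, t0 = 51.1, 5.7                             # GeV^2, cf. z-map sketch
def z(t):
    a, b = np.sqrt(t_bd - t), np.sqrt(t_bd - t0)
    return (a - b) / (a + b)

phi0 = 10.0   # crude constant stand-in for the outer function |phi_i P_i|
kept = []
for _ in range(2000):
    sample = rng.normal(f0, sigma) / phi0        # Gaussian-distributed points
    a = np.polynomial.polynomial.polyfit(z(q2), sample, deg=1)  # n = 1 fit
    if np.sum(a**2) <= 1.0:                      # unitarity cut, Eq. (20)
        kept.append(a)
kept = np.array(kept)
print(f"acceptance: {len(kept)/2000:.0%}; <a_n> = {kept.mean(axis=0).round(3)}")
```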
For f_+, the value we obtain, Σ_n a²_{f_+,n} = 0.0016(2), could be used to slightly strengthen bounds in future dispersive analyses in the vector channel. For f_0, the typical value of Σ_n a_n² for n = 1 is O(10⁻²), but for n = 2, 3 we find that a_2² ≈ 1 despite a_2 = 0.0(7), reflecting that while on average a_2 should be negligible, large fluctuations are permitted by the present uncertainties. Although the dispersive constraint is saturated in the n ≥ 2 case, the predictions for R(η_c) are not observed to change outside of the uncertainties with increasing n. This confirms that while neglected higher-order terms can potentially have a_n² ≈ 1, the suppression even for z_max ≈ 0.03 is sufficient that rapid convergence is still secured. All model-dependent values for R(η_c) presented in Table I are compatible with our result of R(η_c) = 0.29(5), although some, e.g., the anomalously large value R(η_c) = 0.6(3) of [36], have seen their parameter space reduced. This general agreement gives us confidence in our result. The B_c^+ → η_c process has sufficient q² data to compute R(η_c), with the notable exception of f_+(t_−). Following [26], we reanalyze our dispersive fits with a synthetic data point f_+(t_−) = 1 ± f_lat to investigate its potential constraining power. The resulting fits are indistinguishable from our current results within uncertainties. Therefore, the best avenue for improvement would be future lattice results that fully account for the systematics we have attempted to estimate.

VII. DISCUSSION AND CONCLUSION

In this work we have presented a model-independent prediction of R(η_c) = 0.29(5). While the near-term outlook for an experimental measurement of R(η_c) at LHCb is poor, near-term lattice results promise to reduce the theoretical uncertainty sufficiently to require consideration of electroweak corrections. Even without improved lattice-QCD calculations, potential areas of improvement exist. Experience in the heavy-light sector, and the fact that the R(J/ψ) bounds saturate the dispersive relations, suggest that including multiple states that appear in the dispersion relation can provide complementary information to help constrain the form factors further; additionally, one could include the lattice results for B → D^(*) [22,23,61-64] and Λ_b → Λ_c [65]. This would allow for a global, coupled set of predictions for the semileptonic ratios.
Problem of Determining the Anisotropic Conductivity in Electrodynamic Equations

For a system of electrodynamic equations, the inverse problem of determining an anisotropic conductivity is considered. It is supposed that the conductivity is described by a diagonal matrix σ(x) = diag(σ₁(x), σ₂(x), σ₃(x)), with σ(x) = 0 outside of the domain Ω = {x ∈ ℝ³ : |x| < R}, R > 0, and that the permittivity ε and the permeability μ of the medium are positive constants everywhere in ℝ³. Plane waves coming from infinity and impinging on an inhomogeneity localized in Ω are considered. For the determination of the unknown functions σ₁(x), σ₂(x), and σ₃(x), information related to the vector of electric intensity is given on the boundary S of the domain Ω. It is shown that this information reduces the inverse problem to three identical problems of X-ray tomography.

Consider the nonstationary Maxwell equations

curl H = ε E_t + σ(x) E,   curl E = −μ H_t.   (1)

Here, E and H are the electric and magnetic field strengths, σ(x) = diag(σ₁(x), σ₂(x), σ₃(x)) is a positive semidefinite diagonal matrix, and ε and μ are positive constants. Assume that σ(x) = 0 outside the domain Ω = {x ∈ ℝ³ : |x| < R}, where R > 0. Let c = 1/√(εμ) denote the velocity of propagation of electromagnetic waves. Let ν be a unit vector and j a unit vector orthogonal to ν, i.e., j · ν = 0. For Maxwell's equations (1) in a homogeneous medium (σ ≡ 0), there exist solutions of the form

E⁰ = j f(t − ν·x/c),   H⁰ = √(ε/μ) (ν × j) f(t − ν·x/c),   (2)

where f(t) is an arbitrary generalized function.
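A short check of the reconstructed Eqs. (1)-(2) above (the reconstruction itself is an assumption based on the standard theory): for σ ≡ 0 the plane wave satisfies both equations, since

```latex
\nabla\times E^0 = -\tfrac{1}{c}\,(\nu\times j)\,f' = -\mu\,\partial_t H^0
\quad\text{(using }\mu\sqrt{\varepsilon/\mu}=\sqrt{\varepsilon\mu}=1/c\text{)},
\qquad
\nabla\times H^0
= -\tfrac{1}{c}\sqrt{\varepsilon/\mu}\;\nu\times(\nu\times j)\,f'
= \varepsilon\,\partial_t E^0
\quad\text{(using }\nu\cdot j = 0\text{)}.
```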
Each such solution represents a plane wave propagating in the direction of the vector ν and is a weak solution of Maxwell's equations for a homogeneous medium. Consider the Cauchy problem (3) for an anisotropic medium, in which the incident fields E⁰ and H⁰ are defined by formulas (2) and f is a smooth function that vanishes for negative values of its argument. Thus, the total fields coincide with the incident plane wave for all x and t < 0. Let Π be the plane corresponding to the front of the plane wave at the time t = 0, when this front touches the domain Ω. Let S be the boundary of Ω and S⁻ be its shadow part with respect to light propagating in the direction ν. Below, problem (3) is considered for three different vectors j_k, k = 1, 2, 3, and for corresponding orthogonal vectors ν_k depending on the angular parameter ϕ, namely,

j₁ = (1, 0, 0), ν₁(ϕ) = (0, cos ϕ, sin ϕ), ϕ ∈ [0, π),
j₂ = (0, 1, 0), ν₂(ϕ) = (cos ϕ, 0, sin ϕ), ϕ ∈ [0, π),
j₃ = (0, 0, 1), ν₃(ϕ) = (cos ϕ, sin ϕ, 0), ϕ ∈ [0, π),

with the data (4) given for all x ∈ S, all ϕ, and all t up to T_k(x, ϕ) = t₀(x, ϕ) + δ₀, where t₀(x, ϕ) is the arrival time of the wave front at x and δ₀ > 0 is an arbitrary number (possibly small). In other words, the task is to find σ(x) from the given functions (5). For stationary electrodynamic equations, inverse problems of determining a conductivity depending on one variable were studied by Tikhonov [1-4] and Cagniard [5]. For nonstationary equations, the theory of inverse electrodynamic problems based on the full system of Maxwell equations was developed in [6-8]. The problem of determining the permittivity of an anisotropic medium was considered in [9]. Additionally, phaseless inverse problems of determining permittivity from the magnitude of the electric or magnetic component of a stationary electromagnetic field have been studied (see [10] and the review article [11]). The following result holds for the inverse problem stated above.

Theorem 1. Suppose that the matrix σ(x) is smooth and vanishes outside Ω, while the function f has the form f(t) = θ₀(t) g(t), where g is a smooth function and θ₀(t) is the Heaviside step function, i.e., θ₀(t) = 1 for t ≥ 0 and θ₀(t) = 0 for t < 0. Then all elements of σ(x) in Ω are uniquely determined by data (5).

The study of the inverse problem is based on analyzing the structure of the solution to problem (3). In this case, it is convenient to use an integro-differential equation for the vector E. To derive it, we apply the curl operator to the second equation in (3) and use the first equation to eliminate the emerging term curl H_t. Then we obtain Eq. (6). Computing div E with the help of the first equation in (3) yields Eq. (7). It follows from (6), (7), and (3) that the function E is a solution of the Cauchy problem (8). The following result holds for problem (8).

Theorem 2. Suppose that the matrix σ(x) and the function f satisfy the conditions of Theorem 1. Then the function E can be represented in the form (9), where the leading amplitude is a solution of the Cauchy problem (10), and the remainder is a bounded function of x and t for 0 ≤ t ≤ T, for any T > 0.

Equation (10) is derived by substituting representation (9) into Eq. (8) and equating the coefficients of the leading singularity to zero, while the initial data for the amplitude follow from the initial data for E. This amplitude describes E at the electromagnetic wave front. Integrating Eq. (10) along straight lines in the direction ν yields explicit formulas for it. These formulas imply that the integrals (11) are known for all k = 1, 2, 3 and all ϕ. Thus, for each k, the right-hand side of (11) is known along any straight line intersecting Ω in the direction ν_k(ϕ). By varying ϕ, we conclude that, in each section of Ω by the corresponding plane, the integrals along all possible straight lines lying in this plane are known. As a result, we obtain an X-ray tomography problem for determining σ_k(x), k = 1, 2, 3. It is well known that this problem is uniquely solvable (see [12-14]).
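As an illustration of this final reduction, the sketch below runs one 2-D slice of such an X-ray tomography problem: line integrals of a toy conductivity component (standing in for the data (11)) are inverted by filtered back-projection. The phantom and the library choice (scikit-image) are assumptions for illustration only.

```python
# Sketch: the data (11) amount to line integrals of each sigma_k over all
# lines in a family of planes, invertible slice by slice, e.g. by filtered
# back-projection. The disk phantom stands in for one 2D slice of sigma_k.
import numpy as np
from skimage.transform import radon, iradon

n = 128
yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
sigma_slice = ((xx - 0.2)**2 + yy**2 < 0.3**2).astype(float)  # toy sigma_k

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(sigma_slice, theta=theta)        # line integrals: data (11)
recon = iradon(sinogram, theta=theta, filter_name="ramp")

err = np.abs(recon - sigma_slice).mean()
print(f"mean absolute reconstruction error: {err:.4f}")
```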
This establishes Theorem 1 on the uniqueness of a solution to the inverse problem and, at the same time, provides an algorithm for its solution.
The minimum mass of a charged spherically symmetric object in D dimensions, its implications for fundamental particles, and holography

We obtain bounds for the minimum and maximum mass/radius ratio of a stable, charged, spherically symmetric compact object in a D-dimensional space-time in the framework of general relativity, and in the presence of dark energy. The total energy, including the gravitational component, and the stability of objects with minimum mass/radius ratio are also investigated. The minimum-energy condition leads to a representation of the mass and radius of the charged objects with minimum mass/radius ratio in terms of the charge and vacuum energy only. As applied to the electron in the four-dimensional case, this procedure allows one to re-obtain the classical electron radius from purely general relativistic considerations. By combining the lower mass bound, in four space-time dimensions, with minimum length uncertainty relations (MLUR) motivated by quantum gravity, we obtain an alternative bound for the maximum charge/mass ratio of a stable, gravitating, charged quantum mechanical object, expressed in terms of fundamental constants. Evaluating this limit numerically, we again obtain the correct order-of-magnitude value for the charge/mass ratio of the electron, as required by the stability conditions. This suggests that, if the electron were either less massive (with the same charge) or if its charge were any higher (for fixed mass), a combination of electrostatic and dark energy repulsion would destabilize the Compton radius. In other words, the electron would blow itself apart. Our results suggest the existence of a deep connection between gravity, the presence of the cosmological constant, and the stability of fundamental particles.

Introduction

The existence of a minimum length is an important prediction of phenomenological quantum gravity. A fundamental bound yielding the smallest resolvable length scale could help solve several outstanding problems in theoretical physics, for example, by providing a natural cutoff to regularize divergent integrals in the renormalization of quantum field theories, or by preventing matter from collapsing to form a singularity at the center of a black hole. Furthermore, the existence of both minimum and maximum length scales in nature or, at least, at a given epoch (for example, R_U ≈ 1.3 × 10²⁸ cm is the current size of the horizon and acts as a de facto maximum length scale for physical phenomena in the universe today) is naturally linked to the existence of upper and lower bounds on the mass-energy scales of physical processes. In this paper, we determine bounds on the mass/radius ratio of stable charged objects, both classically and quantum mechanically, and investigate their implications for fundamental particles. One way to introduce a minimum length is via a Generalized Uncertainty Principle (GUP) that extends the usual Heisenberg Uncertainty Principle (HUP) to include nonlinear terms, which may then be interpreted as quantum gravity effects. A GUP of the form

Δx Δp ≥ (ħ/2) [1 + A (Δp)² + B],   (1)

where A and B are positive constants, was proposed in [1], and many different modifications of the HUP have since been considered in the literature. The GUP in Eq. (1) gives rise to an effective minimum length, in the form of a minimum positional uncertainty, which is proportional to √A, but the existence of a minimum bound is a general feature of these models.
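Assuming the reconstructed form of Eq. (1), the minimum positional uncertainty quoted above follows from a one-line minimization over Δp:

```latex
\Delta x \;\ge\; \frac{\hbar}{2}\left[\frac{1+B}{\Delta p} + A\,\Delta p\right],
\qquad
\frac{\partial(\Delta x)}{\partial(\Delta p)} = 0
\;\Rightarrow\;
\Delta p_* = \sqrt{\frac{1+B}{A}},
\qquad
(\Delta x)_{\min} = \hbar\sqrt{A(1+B)} \;\propto\; \sqrt{A}.
```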
Collectively, such modified relations are referred to either as GUPs or as minimum length uncertainty relations (MLURs) (for general reviews of GUP phenomenology, see [2,3], and see [4,5] for reviews of minimum length scenarios in quantum gravity). The existence of an absolute bound of the form Δx ≥ (Δx)_min implies that Δx cannot be made arbitrarily small, irrespective of the uncertainties in any other physical observables. It is interesting to note that the idea of a minimum length induced by quantum gravitational effects was first proposed long ago [6]. By investigating the quantum mechanical measurement of the Γ⁰₀₁ component of the Christoffel symbols, Bronstein obtained a fundamental limit for the temporal uncertainty Δt inherent in the measurement process, in which ρ and V denote the density and the volume of a massive body, respectively. This, in turn, may be related to the spatial uncertainty via Δx ≤ c Δt. By taking into account that M = ρV is the particle mass, we obtain an equivalent mass-time-density uncertainty relation. Since the existence of lower bounds for physical quantities is a natural characteristic of quantum processes, the presence of similar bounds in the framework of classical physics appears, at first sight, somewhat unusual. Nonetheless, lower bounds on the ratios of physical quantities do occur naturally in classical general relativity, as a form of stability condition for compact objects. Two such bounds are of particular interest both for astrophysics/cosmology and for the study of subatomic particles: the minimum mass/radius ratio for a compact object in the presence of dark energy and that for a charged compact object. Classical (3+1)-dimensional general relativity, with no dark energy component (Λ = 0), imposes an upper bound on the mass/radius ratio of any compact object, the Buchdahl limit [7], which requires a sphere of matter with arbitrary equation of state to satisfy the stability constraint

2GM/(c²R) ≤ 8/9.   (5)

If this condition is violated, the object will inevitably collapse under its own gravity to form a black hole. (Typically, this process occurs for stars when the mass of the star exceeds approximately 3.2 M_⊙ [8].) The Buchdahl limit, and its extensions, have been intensively investigated, including the study of the effects of the cosmological constant [9], and of sharp limits on the mass/radius bounds [10-12]. D-dimensional extensions of the Buchdahl limit in the presence of a cosmological constant were obtained in [13,14], while the mass/radius ratios for compact objects in five-dimensional Gauss-Bonnet gravity and in f(R) gravity were considered in [15,16], respectively. In the presence of dark energy, a minimum bound for the mass/radius ratio of a stable compact object also exists. This result follows rigorously from the generalized Buchdahl inequalities for a compact object in the presence of a nonzero cosmological constant (Λ ≠ 0) [17]. For Λ > 0, the existence of a lower bound admits an intuitive explanation: if the stability condition is violated, the self-gravity of the object is insufficient to overcome the repulsive force due to dark energy. Remarkably, a minimum mass also exists for Λ < 0 [17]. Physically, this is due to the balancing of both gravitational and dark energy attraction with the local pressure in the matter distribution, induced by non-gravitational forces. In [17], it was shown that an uncharged compact object is stable against dark energy repulsion when its density is above a certain minimum value, for Λ > 0.
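Rearranged, the Buchdahl bound (5) quoted above sets a minimum radius for a given mass; the 3.2 solar-mass figure cited in the text is used here purely as an illustrative input:

```latex
\frac{2GM}{c^2 R} \le \frac{8}{9}
\;\Longleftrightarrow\;
R \ge \frac{9}{8}\,\frac{2GM}{c^2} = \frac{9GM}{4c^2}
\approx 10.6\ \mathrm{km}
\quad \text{for } M = 3.2\,M_\odot.
```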
A similar condition follows from the generalized Buchdahl inequality for a charged compact object, even in the absence of dark energy [18]. For Λ = 0, this gives a minimum bound on 2GM/(c²R) in terms of the charge, Eq. (6); for Λ ≠ 0, this result generalizes to a Λ-dependent bound on 2GM/(c²R) [18], Eq. (7). Hence, for ΛR² ≪ 1, the effect of dark energy is subdominant to the electrostatic repulsion. Equation (7) can also be Taylor expanded (Eqs. (8) and (9)), so that, to leading order, we recover the standard expression for the classical radius of a charged body with mass M and charge Q,

R_Q = Q²/(Mc²),

that is, the radius at which the electrostatic potential energy associated with the object is equal to its rest mass, Mc². We recall that this is roughly the radius the object would have if its mass were due only to electrostatic potential energy. Several general restrictions on the total charge Q of a stable compact object can also be obtained from the study of the behavior of the Ricci invariants r₀ = Rⁱᵢ = R, r₁ = R_{ij}R^{ij}, and r₂ = R_{ijkl}R^{ijkl}. For example, by considering that the surface density must vanish, it may be shown that Q satisfies a condition involving ρ_c and p_c, the central density and pressure of the object, respectively. Though most investigations of stellar structure have been done under the assumption of charge neutrality, there are a number of physical processes that could lead to the formation of charged regions inside compact objects. One of these processes could be mass accretion by a neutron star [19], if it happens that accretion produces luminosities very close to the Eddington limit L_E = 4πGMm_p c/σ_T [20], where M is the mass of the star, σ_T is the Thomson scattering cross section, and m_p is the mass of the proton. Let us assume that the star undergoes spherical accretion, and that the accreting material is ionized hydrogen. If the accreting luminosity of the star is L, then infalling electrons, at a distance r from the center of the star, experience a radiative force F_rad = σ_T L/(4πcr²) [19]. On the other hand, the radiation drag acting on the protons is smaller by a factor (m_e/m_p)² ≈ 3 × 10⁻⁷, where m_e is the mass of the electron, so that electrons and protons are subject to different accelerations. Therefore, a star can acquire a net positive charge, Q = (GMm_p/e)(L/L_E), through accretion [19]. Another possibility giving rise to the existence of charged macroscopic objects is related to quark deconfinement inside dense neutron matter [21]. If deconfinement occurs inside a dense neutron star, the strange beta-equilibrated quark matter consists of an approximately equal mixture of three quarks, the up, down and strange quarks, with a slight deficit in the number of strange quarks. This composition of quark matter could lead to a net positive charge inside the neutron star or quark star. In deriving the results quoted above, it was assumed that the pressure within the object is isotropic. Interestingly, anisotropies in the pressure distribution inside compact objects, in the presence of a cosmological constant, can significantly modify both the upper and the lower bounds for the mass. These bounds are strongly dependent on the anisotropy parameter Δ, which is defined as the difference between the tangential and radial pressure at the surface of the object. Pressure anisotropies modify the lower bound on the minimum density of a stable spherical mass distribution for Λ > 0 [22]. Hence, the presence of an anisotropic pressure distribution weakens the lower bound on the mass.
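For a sense of scale, the sketch below evaluates the accretion-induced charge Q = (GMm_p/e)(L/L_E) quoted above at L = L_E, rewritten in SI units; the 1.4 solar-mass input is an assumed, illustrative value.

```python
# Order-of-magnitude estimate (sketch) of the net charge of an accreting
# neutron star radiating at the Eddington limit: Q = G M m_p / e in Gaussian
# units, i.e. Q = 4*pi*eps0 * G * M * m_p / e in SI.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
eps0 = 8.854e-12     # F/m
m_p = 1.673e-27      # kg
e = 1.602e-19        # C
M_sun = 1.989e30     # kg

M = 1.4 * M_sun      # assumed, illustrative mass
Q = 4 * math.pi * eps0 * G * M * m_p / e   # L = L_E case
print(f"Q ~ {Q:.0f} C for a {M/M_sun:.1f} solar-mass star")
```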
Remarkably, even anisotropic objects may still be stable, as long as their mass exceeds an absolute classical minimum value, determined by both Λ and Δ. The existence of a cosmological constant therefore has profound consequences for the stability of matter, even at the classical level. The nature of the cosmological constant or, more generally, of dark energy is one of the most fundamental problems in contemporary physics. In particular, the important question of whether Λ represents a true fundamental constant of nature, or simply an approximation (for example, an approximately constant field configuration that arises as a solution to the, as yet unknown, equations of motion for a dynamical scalar field), remains unanswered. However, even if we take the existence of the cosmological constant, as implied by the Cosmological Concordance, or ΛCDM, model (cf. [23-27]), at face value, yet another question remains: is Λ an independent constant of nature, or can it be expressed in terms of other, known constants of nature? In [28], it was shown that, if the minimum mass in nature is M_W = (ħ/c)√(Λ/3) ≈ 3.5 × 10⁻⁶⁶ g, as proposed by Wesson [29], then a particle with mass M_W and density ρ_Λ, given by Eq. (6), has a classical radius given by

R ∼ (R_P² R_W)^{1/3} ≈ 10⁻¹³ cm,   (13)

where R_P is the Planck length and R_W = √(3/Λ) ≈ R_U ≈ 10²⁶ m. This is of the same order of magnitude as the classical electron radius, r_e = e²/(m_e c²) ≈ 2.8 × 10⁻¹³ cm. Based on this observation, it was suggested in [28] that the radius R, given by Eq. (13), should be formally identified with r_e and taken as a minimum possible length scale in nature. The cosmological constant may then be formally identified with the 'standard' set of physical constants,

Λ = R_P⁴/r_e⁶ = G² m_e⁶/(α⁶ ħ⁴),   (15)

where α = e²/q_P² is the fine structure constant and q_P = √(ħc) is the Planck charge. Evaluating this numerically gives Λ = 1.4 × 10⁻⁵⁶ cm⁻², in good agreement with the value inferred from various cosmological observations [23-27]. In [28], the formal identification R = r_e was justified on the basis of a 'small number hypothesis', which represents an extension of the large number hypothesis proposed by Dirac [30], and which proposes that a numerical equality between two very small quantities with a very similar physical meaning cannot be a coincidence. Interestingly, the same identification was also obtained in [31] using information theory, in which a set of axioms for the cosmological constant was formulated by analogy with the Khinchin axioms [32], by formally replacing the dependency of the information measure on the probabilities of events by a dependency of Λ on the other fundamental constants of nature. These results raise the interesting questions of whether there is an intrinsic relation between electromagnetic phenomena and dark energy, and of what form the possible interaction/coupling of the electric charge e with the cosmological constant may be. In this work, we aim to show concretely that, by consistently combining results from general relativity, canonical quantum theory, and the MLURs predicted by phenomenological quantum gravity, the identification (15) can be explicitly obtained by saturating the quantum gravitational stability condition for the electron. Furthermore, our results show this identification to be broadly consistent with the results obtained by various early pioneers of quantum gravity research (see [4,5] for reviews), including those of Bronstein [6] and those obtained by Károlyházy et al. [33-35].
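A numeric check (sketch) of the identification reconstructed as Eq. (15) above: evaluating R_P⁴/r_e⁶ in cgs units indeed gives the quoted 1.4 × 10⁻⁵⁶ cm⁻².

```python
# Numeric check (sketch) of the identification Lambda ~ R_P^4 / r_e^6,
# reconstructed above as Eq. (15); cgs units throughout.
import math

hbar = 1.0546e-27    # erg s
G = 6.674e-8         # cm^3 g^-1 s^-2
c = 2.9979e10        # cm/s
m_e = 9.109e-28      # g
e = 4.803e-10        # statC

R_P = math.sqrt(hbar * G / c**3)   # Planck length, cm
r_e = e**2 / (m_e * c**2)          # classical electron radius, cm

Lam = R_P**4 / r_e**6
print(f"R_P = {R_P:.3e} cm, r_e = {r_e:.3e} cm")
print(f"Lambda ~ {Lam:.2e} cm^-2")   # ~1.4e-56, cf. the observed value
```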
However, some assumptions present in the existing quantum gravity literature are shown to be inconsistent with canonical quantum mechanics. Specifically, certain assumptions regarding the nature of MLURs imply quantum gravity effects that manifest on scales larger than the observed Compton wavelengths of elementary particles. (Clearly, this cannot be the case; otherwise, quantum gravity effects would already have been observed in the lab.) Interestingly, when these assumptions are revised in order to ensure the consistency of MLURs with the canonical theory (i.e., by ensuring that quantum gravity effects are subdominant to 'standard' quantum effects), the results obtained are inconsistent with both Bronstein's formulation, Eqs. (3) and (4), and Károlyházy's original results [33,34]. The reasons for these discrepancies are discussed in detail in Sect. 6. The structure of this paper is as follows. In Sects. 2 and 3, we obtain the generalized Buchdahl inequalities, in arbitrary space-time dimensions, for a charged spherically symmetric object embedded in a space-time with general nonvanishing (i.e., positive or negative) dark energy. This extends previous results given in [36], in which the D-dimensional generalized Buchdahl inequalities for uncharged matter were obtained in both the asymptotically de Sitter and the anti-de Sitter cases. Specifically, in Sect. 2, the gravitational field equations and the hydrostatic equilibrium equations, also known as the Tolman-Oppenheimer-Volkoff (TOV) equations, are obtained. The general form of the mass limits is given in Sect. 3, and various limiting cases of special physical interest are considered in Sect. 4. In Sect. 4.5, we use our previous results to derive bounds on the minimum and maximum densities of static, asymptotically de Sitter and anti-de Sitter space-times. [These results are interesting because, even if the real universe is an expanding (3+1)-dimensional space-time with a positive cosmological constant, these static, asymptotically de Sitter and anti-de Sitter spaces still have essential interpretations from the viewpoint of holographic duality.] The thermodynamic stability of higher-dimensional charged objects is investigated in Sect. 5. By minimizing the gravitational energy of charged objects with minimum mass/radius ratio, we show that the ratio of the square of the charge of the object to its mass, Q²/M, is proportional to the radius of the object, R (to leading order). In Sect. 6, we investigate the quantum mechanical implications of the lower mass bound for charged objects in the standard (3+1)-dimensional scenario, leading to the identification of the cosmological constant in terms of other fundamental constants of nature, as in Eq. (15). This identification is seen to arise as a consequence of the stability bound for the electron, viewed as a charged, gravitating, quantum mechanical particle, and extends the results obtained in [36], in which the minimum mass of an uncharged, gravitating, quantum mechanical particle was determined. Section 6.3 shows that the MLUR leads to holography in arbitrary noncompact dimensions, and discusses its relation to the results previously obtained in Sect. 6. Section 7 contains a summary of our main results and a brief discussion of possibilities for future work.
Geometry, field equations, and the TOV equations for charged objects in D dimensions

In the following, we assume that the line element of a spherically symmetric, D-dimensional, static space-time can be represented in a generic form as in [37]. Here x⁰ = ct and x¹ = r, where r is the radial coordinate in D space-time dimensions, with domain 0 ≤ r < ∞, while the angular coordinates are defined according to 0 ≤ θ_i ≤ π, i = 1, …, D − 3, and 0 ≤ φ ≤ 2π, respectively. The Einstein gravitational field equations are given in terms of κ = 8πG_D/c⁴, and the energy-momentum tensor contains three components, corresponding to matter (M), dark energy (DE), and the electromagnetic field (EM). We also assume that the matter and dark energy parts may be expressed in terms of fluid variables, with the dark energy obeying the equation of state P_DE = wρ_DE c², where ρ_DE = Λ_D c²/(8πG_D) and w = const. Finally, the electromagnetic energy-momentum tensor is given in terms of F_{µν} = ∇_ν A_µ − ∇_µ A_ν, where A_µ denotes the electromagnetic vector potential. The electromagnetic field tensor F_{µν} satisfies the cyclic identity and the Maxwell equations, in which j^ν denotes the electric current four-vector. We choose the rest frame of the fluid, so that the D-velocity has only a time component, normalized according to u² = g_{tt} u^t u^t = 1. We also introduce the electric charge density ρ_e, which is related to the time component of the current four-vector. From the Maxwell equation (22), and with the use of the charge density defined above, we can construct the 'proper charge' Q(r), whose definition, we note, does not contain the angular volume Ω_{D−2}. By direct substitution, we obtain the nonzero components of the energy-momentum tensor of the electromagnetic field. For later use, we also compute F² = F_{αβ}F^{αβ}, which contains only contributions from F_{rt} and F_{tr}. Thus, for the components of the electromagnetic energy-momentum tensor, where i = 1, 2, …, D − 2 denote the angular variables of a D-dimensional space-time, we find that the electric field generated by the charge density produces a positive energy density and a radial pressure equal to Q²/(2r^{2(D−2)}), and a negative transverse pressure with exactly the same magnitude. The conservation of the total energy-momentum tensor gives an equation that can be rewritten in the more compact form (35), where an effective pressure P_eff(r) is defined. In this formulation, the charge-dependent term is manifest, and the conservation equation reduces to the uncharged case when Q = 0. The Einstein field equations for the G^t_t, G^r_r, and G^{θ_i}_{θ_i} components then follow, respectively.

2.1 The TOV equation for a charged sphere in D-dimensional space-times

Equation (37) can be integrated immediately to obtain the 'accumulated' mass M(r) inside the radius r, which is a function of the cosmological constant Λ_D and of the charge integral, Eq. (40). The charge integral in Eq. (40) can be transformed by integration by parts, giving Eq. (41). In the integral, no surface term at infinity appears, and the result is valid not for all space but only for r < R, where R is the radius of the sphere. The second term on the right-hand side of Eq. (41) is the electromagnetic mass contribution, to be included in the total mass of the sphere.
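In component form, the structure described in words above (positive energy density and radial pressure, negative transverse pressure of equal magnitude) reads as follows; the index placement and signs here follow the verbal description, not necessarily the paper's exact conventions:

```latex
\rho_{EM}\, c^2 = p_r^{EM} = \frac{Q^2(r)}{2\, r^{2(D-2)}},
\qquad
p_{\theta_i}^{EM} = -\frac{Q^2(r)}{2\, r^{2(D-2)}},
\quad i = 1,\dots,D-2.
```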
Substituting these results, one obtains an explicit expression for e^{−λ(r)}. We now define the total mass M_T(r) as the sum of the matter mass M(r) and of the electromagnetic mass generated by the charge Q(r). The total mass-energy density inside the fluid sphere, including the electromagnetic contribution ρ_q(r), can then be defined accordingly, where ρ_q(r) has been defined implicitly. Using the definition of M_T(r), e^{−λ(r)} can then be written in a simpler form. By substituting this expression into Eq. (38), and with the use of Eqs. (35) and (47), we obtain the TOV equation describing the structure of a charged sphere in D dimensions, written in terms of the Buchdahl variables. We note that the right-hand side of the TOV equation contains an extra term QQ′/r^{2(D−2)}, vis-à-vis the uncharged case [36], which represents the charge contribution to the hydrostatic equilibrium.

Mass limits in D dimensions for charged spherically symmetric objects in the presence of dark energy

In terms of the Buchdahl variables, Eqs. (47) and (35) can be written as Eqs. (49) and (50), respectively, where (…)′(x) denotes differentiation with respect to x. More elegantly, Eq. (50) can be reformulated in another way; this relation will be important later, in the derivation of the Buchdahl inequality. Furthermore, there exists an additional relation between ρ_T, ρ_DE, and the function w(x) = w(r²). From Eq. (49), we then obtain Eq. (54). After differentiating Eq. (54) with respect to x [and with the use of Eq. (51)], and using ρ_T = ρ + ρ_q together with Eq. (53), we arrive at the result (57). Furthermore, expressing y(x) in terms of the Buchdahl variables and using Eqs. (58)-(61) in Eq. (57), we finally obtain Eq. (62). This is the generalized Buchdahl equation for spherically symmetric, charged compact objects in D-dimensional space-time. As compared to the uncharged case [36], it contains an additional term on the right-hand side, which is the extra contribution due to the presence of the electric charge.

Buchdahl inequality in D space-time dimensions

In the following, we introduce four new variables z, γ, ψ, and η, where η(r) is defined in terms of an integral. The function introduced in Eq. (63) is defined explicitly as well. In terms of the new variables introduced above, the Buchdahl equation (62) can be written as Eq. (68). For a stable charged object, the assumption that the average total density ρ̄ does not increase with r implies that M_T/r^{D−1} is a decreasing function. Therefore, we assume that, for all r′ < r, the conditions (70) and (71) are satisfied. From Eq. (68), we can then obtain the generalized Buchdahl inequalities from the condition that, for any physical density profile of the charged object, the corresponding positivity relations hold, together with the condition that (Q²)′ > 0 inside the object. The inequality therefore holds, for all r in the range 0 ≤ r ≤ R, where R is the radius of the fluid sphere, for any static charged object. Using the mean value theorem, the resulting relation can be written explicitly as the inequality (76). All the integrals in the inequality (76) can be evaluated using the conditions (70) and (71). By using (70), it follows that the denominator of the right-hand side of (76) is bounded. The second term on the left-hand side of (76) also has an upper bound, involving γ₀ ≡ γ(r = 0), the central value of γ.
The term involving γ(r) also has a lower bound. Consequently, the term in the numerator on the right-hand side of (76) is bounded from below. Plugging the integral bounds (77), (79), and (80) into (76), and using the relation expressing y² as one minus the rescaled mass function over r^{D−3}, we next obtain the inequality (82). At the surface of the compact object, r = R; thus, using Eq. (49) together with the value of the dark energy at the surface, we obtain the inequality (83), where we have used the fact that arcsin(√(1 − y²)) ≤ √(1 − y²)/y, and have assumed that γ ≥ 0 for 0 ≤ r ≤ R. In the γ₀ term, we also assume that ζ(R) ≥ y(R), as a result of the energy condition ρc² + P ≥ 0, which allows us to replace 1/ζ by 1/y. For γ₀ = 0, the inequality (83) properly reduces to the uncharged case discussed in [36]. Even though y is bounded between 0 and 1, the last term on the left-hand side is unbounded, since it is proportional to 1/y. The metric becomes the black-hole metric for y = 0, but we expect the Buchdahl limit to set in before this point, at the upper bound where the rescaled mass function reaches 1 − y².

Dimensionless form of the mass bounds

We now introduce a new set of dimensionless variables, Eqs. (84) and (85), respectively. In terms of these dimensionless quantities, the inequality (83) can be expressed in the simple form (87), with coefficients given by Eq. (88). As a cross-check of our main result, we compare these bounds with the known result for uncharged objects obtained in [36]. When Q = 0, the charge-dependent quantities vanish, and we recover complete agreement with the previous result [36]. In order to determine the influence of the charge on the mass bounds, we need to evaluate the charge-dependent quantity, which contains the unknown parameter γ₀ and the upper bound u₊ [through (1/y)_upper in (84)]. Since ζ(r = 0) = 1, an estimate of γ₀ can be obtained from Eq. (64), where λ_e = ħ/(m_e c) is the Compton wavelength of the electron. This is equivalent to the statement that a charge e cannot be compressed to within a radius smaller than λ_e. Consequently, we can approximate γ₀ accordingly. The dimensionless quantity scaling as R³ can have values as small as 10⁻⁸⁷ (for M_T ≈ 2 × 10³ g and R = 1.5R_S, where R_S = 2GM/c² is the corresponding Schwarzschild radius) or as large as 1600 (for M_T ≈ M_⊙ = 2 × 10³³ g and R = 1.5R_S) when D = 4 and for very small Λ. Thus, we cannot take it to be a generically small quantity. On the other hand, the cosmological constant in four space-time dimensions is extremely small in the real physical world. The quantity b is thus very small, since R is typically much less than, or at most comparable to, the size of the universe, R ≲ R_U ≈ R_W = √(3/Λ). Therefore, it is interesting to explore the mass bounds for charged spherical objects when both dimensionless parameters are small. Another important limit is when the dark energy becomes a cosmological constant with w = −1. We explore the mass limits in these situations in the following section.

Mass limits for ΛR² ≪ 1

In this section, we assume the dark energy density to be very small, i.e., ΛR² ≪ 1. We first consider the case in which dark energy corresponds to a cosmological constant with w = −1. The general case, with arbitrary w, is then investigated.

Cosmological constant dark energy

If dark energy corresponds to a cosmological constant and b is very small, we can set w = −1 in Eq. (88). The upper (lower) mass limit is then given by u₊ (u₋), respectively. For B < 0 and C > 0, u₋ is positive and the lower limit exists. It is straightforward to see that B is always negative for D ≥ 2 and that C is always positive for D ≥ 0, since the charge-dependent parameters are nonnegative.

Generic dark energy

A more general situation arises when we allow a small w-dependent term to exist for w ≠ −1.
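The combination 1 − 4AC/B² used in the next step suggests that the bounds arise as roots of a quadratic in the dimensionless mass variable; assuming the generic form Au² + Bu + C = 0 (an inference, since the explicit Eq. (88) is not reproduced here), the limits read

```latex
% Inferred quadratic-root form of the mass bounds (assumption):
u_\pm = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A},
% for B < 0 and C > 0 (with A > 0), both roots are real and positive
% whenever B^2 \ge 4AC.
```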
In this case, we may approximate the quantity 1 − 4 AC/B 2 as where Since C 0 > 0, the second term on the right-hand side of Eq. (95) will determine whether the lower bound u − exists. Thus, for > (<) 0, a nontrivial positive lower bound u − will exist if Charged sphere with no dark energy We may also set b = 0 and simply consider the mass limits for a charged sphere in asymptotically flat space. From Eq. (88), it follows that and Hence, for D ≥ 2, both the upper and the lower limits always exist according to Eq. (98). Due to the electrostatic repulsion between charged fluid elements, the minimum mass/radius power ratio is required in order for gravitational attraction to counteract the repulsive force, which enables the object to maintain a static configuration. The maximum mass/radius power ratio denotes the limit before gravitational collapse. For a charged object, we expect the maximum mass/radius power to be greater than that for an uncharged object, i.e. the Buchdahl limit Eq. (5), due to the repulsive effect of the charge density. Small objects with , 1 As long as the size R of the object is sub-astronomical (i.e. small compared to the size of a typical star), the numerical value of in (3 + 1) dimensions is generically much smaller than unity. We now consider the condition 1, together with 1, in arbitrary dimensions, and derive explicit expressions for the mass bounds in this limit. For , 1, we can approximate 1 − 4 AC/B 2 as and the mass limits u ± become The minimum mass/radius power can thus be approximated by where we define the total charge contribution Q tot ≡ D−2 Q. Note again that the total mass M T is the sum of usual matter mass and the electromagnetic mass given by Eq. (43). For zero dark energy, b = 0, and for very small the minimum mass/radius power simplifies to The minimum mass/radius power ratio can also be cast in the form of a minimum average density for a static spherical object, i.e. Remarkably, with the contribution from the electric charge density, the lower bound (i.e. nontrivial positive values of u − ) exists, for a wide range of values of D , in both the asymptotically de Sitter and the anti-de Sitter cases. For positive (negative) D , as long as w is not much less (more) negative than −(D − 2)/(D − 1), the lower bound always exists. On the other hand, the maximum mass/radius power ratio is Thus, the maximum mass/radius power becomes larger in the presence of the electric charges, but the presence of dark energy could either enhance or weaken the upper bound, depending on the sign of w. For zero dark energy and very small , the maximum mass/radius power limit reduces to For convenience, we now present some results in (3 + 1) dimensions with positive and w = −1. where we have assumed small contributions from the dimensionless quantity . At this point, in order to facilitate the comparison between the previous four-dimensional results obtained in [18] and the results of the present paper, we would like to mention that due to the different choice of units and scaling of physical parameters the results of [18] can be re-obtained once the substitutions 4π Q 2 /c 2 → Q 2 and c 2 → 8π G B are performed in all the equations of the present paper. We end this section with a discussion of the holographic interpretation of the minimum and maximum mass/radius power ratio, focusing on the scenario in which the object is embedded in an asymptotically anti-de Sitter (AdS) spacetime, with D < 0. 
In this case, the maximum mass bound for a given radius of a charged object corresponds to the Hawking temperature, T H , of a charged (i.e. a Reissner-Nordström) black hole (RNBH), with the same mass. At this radius, any object with larger mass than the maximum mass will inevitably collapse to form a black hole. For a large RNBH with positive heat capacity, T H is an increasing function of the black hole mass and can be determined once the mass is known (see, for example, [38] for explicit formulas). From the viewpoint of holographic duality, the Hawking temperature can be identified with the temperature of the dual gauge plasma in the deconfined phase (for example, the quark-gluon plasma in QCD). Specifically, it can be interpreted as the maximum temperature of the dual gauge matter in the confined phase before it undergoes an inevitable phase transition into a deconfined phase. Or, in other words, as the confinement/deconfinement phase transition temperature. Generically, a static configuration in the bulk gravity picture of the background AdS space is dual to a thermal phase in the boundary gauge picture. To give a few examples: An empty bulk AdS space is dual to the confined phase of gauge matter on the boundary. A black hole in the bulk is dual to the thermal phase of the gauge matter on the boundary, in which the Hawking temperature of the black hole is identified with the temperature of the thermal gauge phase. The mass of a static AdS star made of fermions is dual to the conformal dimension of the multitrace operator in the dual conformal field theory (CFT) [39,40]. In [41], it was shown that the mass of the bulk AdS star is linearly proportional to the number density of particles on the boundary when the mass is large. This provides a holographic correspondence between the bulk mass in the gravity picture and the particle density on the boundary in the gauge field picture. With this in mind, we can interpret the minimum mass at a given radius, derived in this section, to be the dual of the minimum density of the gauge matter living on the boundary space. If the density of the gauge matter is too low, it will evaporate into a 'hadron' gas. This gauge picture corresponds to the gravity picture in which the mass of the spherical object scatters into the entire AdS space, since it is lower than the minimum mass required for stability at a given radius. Thus, the minimum mass/radius power ratio gives the critical density of the dual gauge 'nucleus', under which it will evaporate into the 'hadron' gas phase. Bounds on the static universe We can apply the condition (87) to the entire universe by setting R → {R U , ∞}. For asymptotically dS space, this is not allowed since r is limited by the cosmic horizon radius R U = √ (D − 1)(D − 2)/2 D . For asymptotically AdS space, R → ∞ is physically viable. Generically, the bounds given by Eq. (87) may yield bounds on the average density of static, asymptotically dS and AdS universes by letting R = R U and R → ∞, respectively. First, let us consider the AdS case. Since , given by Eq. (92), is proportional to R 3 /y, dividing by R 2 gives an interesting constraint on the average density of a static universe, with D > 3. For R → ∞, the parameters in Eq. (88) become leading to the degeneracy of u + and u − . This implies the uniqueness of the average density of the static asymptotically AdS universe, which is given bȳ where This bound only exists, for D < −κc 2ρAdS , w −1, if the universe is charged. 
For the uncharged case, Q = 0, the maximum mass/radius power ratio of the entire universe (R → ∞) gives the average density bound for the static AdS universe. For asymptotically dS space, we can set R = R U = √ (D − 1)(D − 2)/2 D to obtain an approximate average densitȳ For a static uncharged dS universe, starting from Eq. (90) and setting R = R U , we obtain These limits exist only for w > −(D − 2) 2 /(D − 1) 2 . Even if, in reality, the universe is an expanding (3+1)-dimensional space-time with positive , the static dS and AdS space still have essential interpretations from the viewpoint of holographic duality (cf. discussion of the AdS case in Sect. 4). Total energy and gravitational stability of charged objects with minimum mass/radius ratio in arbitrary dimensions In the present section, we investigate the stability of charged gravitating objects in arbitrary dimensions. As a first step in this study, we derive an explicit expression for the total energy of a compact, charged, general relativistic object, which includes the contribution from the gravitational energy. For the sake of notational convenience, we use a system of units such that c = G = 1 and κ = 8π throughout the remainder of this section. A definition of gravitational field energy E G , with interesting properties, was proposed in [42], and further developed in [43,44]. The derivation of E G proceeds as follows: Let us assume that T μ ν is the energy-momentum tensor of a stationary system with mass M, embedded within a space-time with a time-like Killing vector ξ ν . Then the matter-energy E M of the system is defined as [42,43] where is any space-like surface over which the energy is to be evaluated. If M is the total energy of the system, then the gravitational energy of the system E G is defined as [42]. This definition of the gravitational energy can be reformulated in terms of the theory of surface layers [43,44] as follows: let the surface be a closed surface, which cuts the space-time in such a way that the exterior space-time remains unchanged, while the interior space-time is flat. The internal energies of the matter and of the gravitational field are then replaced by the surface energy of , so that E G = E − E T , where E G is the energy of the gravitational field inside , E is the energy of , and E T is the energy of the matter inside [43]. The matter-energy inside the surface is given by E T = V T μ ν ξ ν u μ dV , where V is the invariant volume inside , and u μ is the four-velocity field of points that are fixed in . Next, we introduce the unit normal vector field n of . The exterior curvature tensor of is defined by K i j = ∇ j n i , where we have introduced the set of intrinsic coordinates x i , x j on . The surface energy tensor of (the Lanc- where [] denotes the discontinuity at , and K = K i i [43]. The energy of the cut is given by E = S μ ν ξ ν u μ d , where is an invariant surface element. For a vacuum solution of Einstein's field equations, E gives the gravitational field energy inside . If there is a nonvanishing energymomentum density tensor inside , then the gravitational field energy is given by [43] This definition is manifestly coordinate invariant. 
In the case of spherical symmetry, and in arbitrary dimensions, S 0 and thus the energy inside the surface is given by In the exterior of a higher-dimensional charged matter distribution, the vacuum metric functions satisfy the condition ν + λ = 0, and the metric is the generalized D-dimensional Reissner-Nordström-de Sitter metric, with coefficients where the dimensionless parameters u, and b are given by Eq. (85). Therefore, the gravitational energy inside a surface of radius R, where R is the radius of the charged object, is given by where . For an object with minimum mass/radius power ratio, and with small , b and negligible , the condition (102) is satisfied. Thus, eliminating the total mass using Eq. (102), the gravitational energy becomes For a stable configuration, the total gravitational energy should have a minimum, defined by The resulting expression can be rearranged into a cubic equation in D , and thus solved analytically for D in arbitrary dimensions. However, the expression is long and complicated, so we will give only the approximate form, valid to leading order in Q 2 and D . Thus, we have giving as the radius of a stable charged object, with minimum mass/radius power ratio, which also minimizes the gravitational energy of the configuration. Following the method presented in [18], for compact objects in (3 + 1) dimensions, we obtain the minimum mass of a charged object as a function of the D-dimensional cosmological constant D , and of its electric charge, in the form Using Eqs. (122) and (123), we can now eliminate the cosmological constant and obtain the ratio of the square of the charge of the object to its mass, as a function of the radius: For D = 4, this gives From the expression for R in Eq. (122), we see that stability can be achieved only when D > 0 for negative w. Furthermore, if we fix the mass M the of object to the minimum allowed at a given radius R, and consider a change δ R, then any change in the charge contribution Q 2 must be compensated by a corresponding change in the D contribution, in such a way that as to keep the gravitational energy of the system constant. In the case of the electron, for which Q = e and M = m e , Eq. (126) automatically recovers the classical electron radius, r e = 1.28e 2 /m e (in CGS units). In classical, relativistic, but non-gravitational physics, this is obtained from the requirement that the electrostatic energy e 2 /r e equals the rest massenergy m e . In the present approach, this result is obtained by minimizing the total gravitational energy of a charged system with minimum mass/radius ratio. A complementary stability analysis Consider a charged static object with the minimum mass/radius power ratio. One way to obtain a stability condition for this object is by minimizing the quantity M/R D−3 min with respect to the radius R. If the object has the minimum mass/radius power ratio and we make a slight change in R, its mass has to change in such a way that the mass/radius power ratio remains constant. By setting and using condition (102), we obtain the stability condition yielding the radius . (129) Substituting back into Eq. (102) gives the mass at the minimum mass/radius power ratio configuration, The dark energy constant D can be eliminated using Eqs. (129) and (130) to give For D = 4, this gives Although the stability analysis based on the minimization of gravitational energy allows only positive D , for negative w, the stability analysis presented here allows both D > 0 and D < 0. 
However, for D < 0, the equation of state parameter w must satisfy the condition w > −(D −1)/(D − 2), which gives w > −3/2 for D = 4. Quantum implications of a classical minimum mass for charged objects In this section, we investigate the quantum implications of the existence of a classical minimum mass for charged objects in (3 + 1) dimensions. In Sect. 6.1, we begin by reviewing a series of quantum gravity arguments that give rise to 'cubic' MLURs, in which the minimum positional uncertainty ( x) min is given by the cube root of three phenomenologically significant length scales. In general, such relations may be derived by minimizing the total uncertainty, due to both canonical quantum mechanical and gravitational effects, with respect to the mass M of the system. In Sect. 6.2, we combine the mass minimization condition giving rise to cubic MLURs with phenomenological results from canonical quantum mechanics, namely, the existence of a minimum Compton radius for any object (i.e. 'particle') of mass M, and consider charged objects subject to the bound (10). By combining all three mass bounds, we obtain the condition for quantum gravitational stability of a charged particle. Applying this to the electron, we find that saturation of this condition requires the existence of a 'new' fundamental length scale in nature, R * , of order R W . Furthermore, setting R * = R W = √ 3/ , we recover the expression for , written in terms of the fundamental constants {c, G,h, e, m e }, Eq. (15). For later convenience, we now define the Planck length R P , mass M P , and charge q P , expressed in terms of the independent constants {c, G,h}, via and For the sake of clarity, all fundamental constants are written explicitly throughout the remainder of this section. Following [29], but adopting the notation and terminology used in [36], we also define two mass scales, M W and M W , associated with the cosmological constant , From here on, we refer to these as the first and second Wesson masses, respectively. The associated lengths are Cubic MLURs in phenomenological quantum gravity In addition to those proposed by Bronstein [6], at least three sets of heuristic arguments based on quantum gravitational phenomenology give rise to the cubic MLURs. The first is based on an extension of a gedanken experiment first proposed by Salecker and Wigner [46], which proceeds as follows. Suppose we attempt to measure a length d using a special 'clock', consisting of a mirror and a device that both emits and detects photons. The photons are reflected by the mirror, placed at some unknown length d from the device, which emits a photon and re-absorbs it after a time t = 2d/c. Assuming that the recoil velocity of the device is well below the speed of light, it may be modeled non-relativistically. Then, by the standard HUP, the uncertainty in its velocity v, at the time of emission, is of order where M is its mass and x is the initial uncertainty in its position. We note that the 'device' considered here may still be small enough to behave quantum mechanically. For example, we may consider a two-state system involving a charged particle, embedded within a broader experimental set-up, that emits and re-absorbs photons. In this case, x and p = M v refer to the positional and momentum uncertainty of the charged particle, which, together with the mirror that reflects the photons, measures (or 'probes') the distance d. 
During the time required for the photon to travel to the mirror and back, the particle acquires an additional positional uncertainty ( x) = 2d v/c [i.e., in addition to the standard positional uncertainty x h/(2M v)], so that the total positional uncertainty is given by Minimizing this expression with respect to x, and using (neglecting numerical factors of order unity), where λ C = h/(Mc) denotes the Compton wavelength of the particle, so that If we then require d > R S = 2G M/c 2 (i.e. that our measuring device is not inside a black hole), we obtain ( x tot ) min = R P . However, more realistically, we may require d > λ C , so that the measurement process devised by Salecker and Wigner gives rise to a MLUR which is consistent with the standard Compton bound. The original argument presented in [46] may also be modified to explicitly include the classical 'uncertainty' in the position of the measuring device due to gravitational effects. Assuming that this is proportional to the Schwarzschild radius of the device R S , the total uncertainty due to canonical quantum mechanical effects, plus gravity, is where β > 0 [47]. Minimizing this with respect to M yields and, substituting this back into Eq. (141), we obtain where (again) we have neglected numerical factors of order unity in the preceding expressions. One disadvantage of the approach described above is that it appears to apply only to the specific measurement process envisaged in [46]. However, in [48,49], it was shown that the expression for ( x) min given in Eq. (139) may be obtained from general principles in canonical quantum mechanics. For V = 0, the time evolution of the position operatorx(t) given by the Schrödinger equation (in the Heisenberg picture) is This may be solved directly to givê The spectra of any two Hermitian operators, andB, must obey the general uncertainty relation [50] and In the Heisenberg picture, we have ( x) 2 = x(0) x(t), so that, again setting t = d/c, ( x) min is given by Eq. (139). As with Salecker and Wigner's gedanken experiment, we have again considered performing two measurements of the position of an object, one at t = 0 and the other at some time t > 0, and can relate this to the uncertainty inherent in the measurement of a length scale d = ct. However, in this case, no assumptions have been made about the details of the measurement procedure, so that Eq. (139) may be considered as a general result in canonical quantum mechanics (i.e. not accounting for the effects of gravity). As such, the arguments presented in [47], and hence the cubic MLUR (143), may be considered to have general validity for gravitating quantum mechanical systems. Cubic MLURs of the form (143) (with β = 1) were also obtained in [33,34] by considering a gedanken experiment to measure the lengths of geodesics with minimum quantum uncertainty. This derivation relies on the fact that the mass of the measuring device M distorts the background spacetime. Equating the uncertainty in momentum of the device with the uncertainty in its mass then implies an irremovable uncertainty or 'fuzziness' in the space-time in the vicinity of the device itself. This results in an absolute minimum uncertainty in the precision with which a gravitating measuring device can measure the length of any given world-line, d. 
As with the results proposed in [47], in this scenario the value β ∼ O(1) arises as a direct result of the assumption that the Schwarzschild radius of body of mass M, R S = 2G M/c 2 , represents the minimum classical 'gravitational uncertainty' in its position. In fact, for cubic MLURs of the form (143), it is usually assumed that β ∼ O(1) in most of the existing quantum gravity literature [5]. For all of the scenarios leading to Eq. (143) considered above, this is directly equivalent to assuming a minimum classical gravitational uncertainty, given by R S . However, since Eq. (143) holds if and only if Eq. (142) also holds, it is straightforward to check that setting β = 1 is inconsistent with the requirement that quantum gravity effects, stemming from MLURs of the form (143), be subdominant to 'standard' quantum effects. Since quantum gravity has not been observed in the lab, we require ( x) min (β R 2 P d) 1/3 ≤h/(Mc), or, equivalently Substituting the minimization condition for x, Eq. (142), into this inequality then gives Clearly, for β ∼ O(1), this contradicts the weak gravitational limit of the theory, represented by Eq. (139), and which yields d ≥ ( x) min = R P . This implies that the arguments of Károlyházy et al. [33,34], which automatically assume β = 1, are also inconsistent with the weak field limit and the condition that quantum gravity effects from MLURs become subdominant to canonical quantum uncertainty in this regime. In fact, subsequent work has claimed that Károlyházy's quantum space-time MLUR is incompatible with observa-tions in yet another sense, in that it implies a vacuum energy density of the order of the neutron star density [35]. While it would be interesting to repeat the arguments presented in [35] using the more general relation, Eq. (143), and to consider the value of β required to reduce the neutron star density to the observed vacuum density, this task lies outside the scope of the present paper and is left to future work. However, for now, we note that only the more general relation (143), with β 1, is compatible with current observations. Although the arguments presented in this subsection do not allow us to fix the value of β, or even the minimum value of β required for consistency with the weak field limit, we note that they yield two conditions on the mass of the measuring device M in relation to the distance to be measured d, Eqs. (142) and (149), and that these relations involve only a single free constant. In the next subsection, we combine these with the condition relating the mass M and radius R of a gravitationally stable charged body, and explicitly consider an object of charge e and mass m e (i.e. an electron). In so doing, we see that the consistency of all three relations implies the identification of fundamental constants given in Eq. (15). Quantum gravitational bounds for stable charged objects One possible definition of the quantum gravity regime is the requirement that the positional uncertainty of an object, due to combined canonical quantum and gravitational effects, be greater than or equal to its classical radius, ( x) min ≥ R. (This is essentially the inverse of the requirement for classicality: that the macroscopic radius of an object be larger than its quantum uncertainty.) Thus, the conditions correspond to a regime in which the 'particle' behaves quantum mechanically and gravitationally, but in which specific quantum gravitational effects are subdominant the standard Compton uncertainty. 
In this regime, we may therefore assume that where γ ≤ 1. Likewise, we may set where ξ ≥ 1, if we expect the object to display no classical behavior. Clearly, with equality holding if and only if γ = ξ = 1. For convenience, we now rewrite the three independent expressions we have obtained for M throughout the preceding sections of this work, namely 131), this is also the radius at which both the classical mass/radius ratio and the classical gravitational energy are both minimized. We now investigate the properties of a charged particle for which the combined uncertainty (due to both canonical quantum and gravitational effects) and the classical mass/radius ratio and gravitational energy are minimized. Thus, we proceed by equating the three expressions for M in Eqs. (155a) and (155c). The physical picture is that we use a 'particle' of mass M and classical radius R as a probe to measure a distance d: the minimum uncertainty in the position of the particle is also the minimum uncertainty in the measurement of d. Equations (155b) and (155c) immediately imply or, equivalently, This gives a nice (and self-consistent) interpretation of the Planck charge q P as the leading order contribution to a sum of terms that determine the maximum possible charge of a stable, gravitating, quantum mechanical object. The bound (157) may also be obtained in a more direct way by combining a general relativistic result with canonical quantum theory: rewriting Eq. (142) as Q 2 q 2 P R M/(M P R P ) and invoking a Compton-type relation between R and M (i.e. taking R to be the Compton radius of the sphere), yields exactly the same result. For convenience, we now rewrite where R * is an arbitrary length scale. We note that, if β is independent of d, R * is simply proportional to d, but, if β ∝ d −1 , R * is a constant. (Also note that setting β ∝ d −1 would in no way alter the argument for the minimization of x with respect to M, proposed in Sect. 6.1.) Equations (155a)-(155c) then become and respectively, where M * = M P R P /R * . Equating the two then yields The expression for M, Eq. (160), also implies that the Compton wavelength of the particle is given by The self-consistent solution to Eqs. (155a)-(155c), written in terms of R * = βd (158), is therefore yielding where λ C =h/(Mc) and together with To summarize our results, we have shown that, to probe a distance d, given in terms of some length scale R * by Eq. (166), we can minimize the combined quantum mechanical and gravitational uncertainty inherent in the measurement of d by choosing an appropriate probe 'particle', with mass M, given by Eq. (165). This mass corresponds to a classical radius R given by Eq. (164). In addition, we found that the minimum value of the combined uncertainty, incorporating gravitational effects, is equal to the classical radius, ( x) min = R. [Alternatively, if we take a particle of mass M, given in terms of Q and M * = M P R P /R * by Eq. (165), the value of d obtained in Eq. (166) represents the length scale with is naturally 'probed' by such a particle, with minimum uncertainty ( x) min = R.] Mathematically, the three length scales λ C , ( x) min = R, and d, are related via We also showed that the Planck charge acts as a maximum possible charge for any stable, gravitating, quantum mechanical object, regardless of its mass M and associated Compton radius λ C . Therefore, for Q < q P , the positional uncertainty induced by quantum gravitational effects is strictly less than the Compton scale, for a stable body of any mass M. 
Let us now reverse this argument by asking the following question: If we suppose that M represents the mass, not of a composite body, but of a true fundamental particle, what is the inherent length scale that such a particle can probe, with minimal uncertainty? To answer this question, we first note that, in order for M and Q in Eq. (165) to be constants of nature, R * must also be a constant of nature. As a test case, let us now consider the electron by setting M = m e and Q = e, for which Eq. (167) recovers the well known relation r e = αλ e , were α ≈ 1/137 is the fine structure constant. For convenience, let us also associate R * with another universal constant, which we denote * , via the relation * := From Eq. (165), we then have * = 3 Evaluating this numerically gives * ≈ 1.4 × 10 −56 cm −2 , and we recall that is the value of the cosmological constant implied by various observations [23][24][25][26][27]. This strongly suggests the identification * = , R * = R W . We stress, however, that this identification is not based purely, or even primarily, on a numerical coincidence. Rather, our requirement that the total uncertainty x, incorporating canonical quantum and gravitational effects, be minimized for all stable bodies, including fundamental particles, requires the existence of a fundamental length scale in nature which is many orders of magnitude larger than the Planck length. Specifically, the minimization of the combined canonical quantum and gravitational uncertainty of the electron requires the existence of a fundamental constant, with dimensions L −2 , of the form (169). Formally identifying * = and substituting this back into Eq. (9), we obtain the bound Q 2 M 3h 2 G 2 c 6 1/6 ≈ e 2 m e = 2.52 × 10 8 Fr/g, (172) to leading order. The fulfillment of this condition therefore indicates the stability of a general, charged, gravitating, quantum mechanical object, as claimed. Finally, let us now consider the role of the length scale d, given by Eq. (166), when R * is a universal constant and so is Q. Combining this with the requirement that R * = βd, Eq. (158), we obtain the following expression for β: The preceding arguments, given in Sect. 6.1, then imply that the gravitational uncertainty of the particle is given by rather than ( x) grav ≈ R S = 2G M/c 2 , as assumed in [34,35,47]. This implies that an additional, self-consistent interpretation of the classical radius R of a charged object, is that it represents the minimum value of the classical gravitational 'disturbance' induced by the objects mass M. As an additional check on the consistency of this result, we note that imposing the general condition β = R * /d on Eq. (141) yields Clearly, minimizing this expression with respect to M (i.e. treating M and d as independent variables), yields However, since the length scale that may be probed with minimum total uncertainty using a particle of canonical quantum width λ C is d = (Q 4 /q 4 P )λ C (Eq. 167), where Q 2 ≤ q 2 P , it is reasonable to ask, what happens in the 'canonical quantum limit', where d → λ C , so that d and M can no longer be considered independent variables? In this case, for a charged particle, Eq. (167) requires Q 2 = q 2 P , and Eq. (175) becomes We also note that, in the limit d → λ C , Eq. (173) implies R * → λ 3 C /R 2 P . Substituting these values into Eq. (175) yields and Since we require λ C R P , in order to avoid black hole formation, all quantities { x, R, d, R * , λ C } remain above the Planck scale in this limit. 
Finally, before concluding this section, we note that Bronstein's bound (3) is also a form of cubic MLUR, which may be rewritten as after identifying x ≈ c t, where we have used the fact that ρ = M R −3 denotes the classical density. It is clear that this is compatible with our result, Eq. (167), only when M = M P . Thus, in general, our results are incompatible with those presented in [6]. Although Bronstein did not explicitly consider charged particles, so that our results are not directly comparable to his, the origin of the difference appears to lie the fact that his results imply the gravitational field of an object gives rise to an additional uncertainty in its momentum, of order ( p) grav ≈ Gρ 2 V x t. Though it is beyond the scope of the present work to investigate this discrepancy further, it would be interesting to consider extending Bronstein's original arguments to the case of charged bodies, to see whether they are compatible with those presented here. MLUR and holography in arbitrary dimensions In this section, we will demonstrate that the MLUR which represents the minimum possible uncertainty due to combined canonical quantum and gravitational effects, implies holography involving quantum gravity 'bits', in space-times with an arbitrary number of noncompact dimensions. We have seen, in Sect. 6.1, that the minimum canonical quantum mechanical uncertainty is proportional to M −1/2 , where M is the mass of the object (cf. Eq. (175)). It was also shown in Sect. 4 that, for a given mass of a static object, its classical radius has a minimal possible value before the object collapses to form a black hole. (This because the ratio M/R D−3 has an upper bound; see also [36]). Thus, the minimum classical gravitational uncertainty is given by the minimum radius of the object, which is roughly the same as the horizon radius of the corresponding black hole. This is proportional to M 1/(D−3) in D dimensions. Therefore, the MLUR of the D-dimensional space-time can be expressed in the form where ζ , χ are positive constants. This expression contains the canonical quantum mechanical term and the classical gravitational term. By minimizing x with respect to M, we obtain the minimum length which corresponds to the mass For a measuring apparatus with size , the quantum mechanical uncertainty term and the classical gravitational term have parameters By using the D-dimensional Planck length we can express the maximum number of degrees of freedom in an D−1 volume as Remarkably, the result satisfies a holographic relation. Thus, the maximum number of degrees of freedom in a (D − 1)-dimensional volume is proportional to the (D − 2)dimensional 'area' of the boundary in which the volume is enclosed. Specifically, it is equal to the number of quantum gravity bits, ( /R P(D) ) D−2 , on the (D − 2)-dimensional surface. Hence, we prove that cubic MLURs, combining the minimum possible uncertainties arising from both canonical quantum and gravitational effects, inevitably lead to holography in arbitrary, noncompact, D-dimensional space-time. Discussions and final remarks In the present work, we have investigated the possibility of the existence of a minimum mass/radius ratio for charged, stable, compact general relativistic objects in arbitrary dimensions, in the presence of dark energy in the form of a cosmological constant. 
We have shown that for a static, spherically symmetric mass distribution, such a minimum ratio does indeed exist, and that it arises as a direct consequence of the Ddimensional Buchdahl inequality, which also gives rise to an upper bound for the mass/radius ratio. In the case of the minimum mass/radius ratio, we obtained an explicit inequality giving the lower bound on M/R in arbitrary dimensions, as an explicit function of the charge Q and the D-dimensional cosmological constant D . In order to obtain both the upper and the lower bounds, we generalized the approach introduced in [36] for uncharged objects to include nonzero charge, Q = 0. For Q = 0, all our results reduce properly to the bounds obtained in [36]. In addition, we have investigated the condition of the thermodynamic stability for objects with minimum mass/radius ratio, which requires that they are in the minimum energy state. To estimate the total energy of these objects, we have used the definition of gravitational energy introduced in [42]. In D = 4 dimensions, imposing the condition of minimum stability, for charged objects with minimum mass/radius ratio, leads to an explicit expression, Eq. (126), in which the ratio of the square of the charge of the object to its mass is proportional to the radius, Q 2 /M ∝ R. The same bound was also obtained as a stability condition for charged bodies in [18]. We have also investigated the quantum implications of the existence of a classical minimum mass for charged objects in four space-time dimensions, by starting from a series of quantum gravity arguments that give rise to cubic MLURs of the form x ≥ ( x) min = (β R 2 P d) 1/3 , Eq. (143), where β is positive numerical constant which is related to the positional uncertainty of the object induced by its gravitational field. In these approaches, x represents not only the uncertainty in the position of the object, but also the irremovable quantum uncertainty inherent in any measurement of the physical length d. We have combined the mass minimization condition, giving rise to the cubic MLURs, with phenomenological results from canonical quantum mechanics, namely, the existence of a minimum (canonical) quantum radius (the Compton radius), and have considered objects subject to the minimum mass/radius bound for charged bodies (10). By combining all three mass bounds, we have obtained the condition for quantum gravitational stability of a charged particle, Q 2 /M 3h 2 G 2 c 6 / 1/6 ≈ e 2 /m e = 2.52 × 10 8 Fr/g, Eq. (172). Physically, we may interpret this as meaning that, if the electron were any less massive (for fixed charge e), or more highly charged (for fixed mass m e ), a combination of electrostatic and dark energy repulsion would lead to instability. In other words, the electron would blow itself apart, as claimed in the Introduction. In addition, saturation of this condition yields an expression for in terms of the constants {c,h, G, e, m e }, given by Eq. (15). Specifically, by combining the mass bound obtained from purely classical considerations with the cubic MLURs, moti-vated by quantum gravity, and applying the result to body of charge e and mass m e (i.e. an electron), we obtained a prediction of a 'new' constant of nature, * , which may be expressed in terms of other fundamental constants. Physically, the existence of this constant is required in order to ensure the consistency of MLURs with the weak field limit of canonical quantum theory and with the classical stability bounds for charged, gravitating 'particles'. 
Evaluating * numerically, we have shown that it has the same order of magnitude value as the observed cosmological constant, which motivates the identification * ≡ ≈ 10 56 cm −2 . Crucially, this implies that, if the cosmological constant can be expressed as a function of the set of the 'standard' constants {c,h, G, e, m e }, it cannot be interpreted as a fundamental constant of nature. It is also interesting to note that the cubic MLURs used in this work, which, together with the classical stability bounds obtained for charged objects, imply a fundamental relationship between the existence of the cosmological constant and the stability of fundamental particles, also imply the existence of a holographic relationship between the maximum number of degrees of freedom in a bulk space-time and the number of quantum gravity 'bits' on the boundary. This is proved explicitly, for arbitrary D-dimensional space-times (with noncompact dimensions), in Sect. 6.3. Finally, we note that the formalism developed in this paper can be easily extended to the case of non-electromagnetic interactions. For example, by interpreting the charge Q as a generalized charge, corresponding to a Yang-Mills field, we can apply our results even to the case of strongly interacting particles, based on the fundamental QCD Lagrangian [51] L QCD = 1 4 a F a μν F aμν where the subscript f denotes the various quark flavors u, d, s etc., and the corresponding quark masses m f . The nonlinear gluon field strength is defined as In Eqs. (187) and (188), ψ is the (spinor) wave function of the quarks, γ μ are the Dirac matrices, f abc are the structure constants of the group SU(3), and α s is the strong interaction coupling constant. In the first order perturbation theory, one can neglect the quark masses, so that the equation of state for zero temperature quark matter can be obtained [51,52]: where B is interpreted physically as the difference between the energy density of the perturbative and non-perturbative QCD vacua (the bag constant), while ρ q and p q denote the energy density and thermodynamic pressure of the quark matter, respectively. Equation (189) [51,52]. On the other hand, after a neutron matter-quark matter phase transition, which can take place, for example, in the dense core of neutron stars, the energy density of strange quark matter is of the order of ρ q ≈ 5 × 10 14 g/cm 3 . However, it is important to note that, in the case of the QCD description of strong interactions, the strong coupling constant α s is a function of the particle (i.e. quark) momenta, and of their energy density. For the simplest hadronic models, the quark-gluon coupling constant is of the order of α s ≈ 0.12 [51]. If we define the generalized QCD charge as Q QCD ≈ α 1/2 s , we may obtain an estimate for the mass of a quark, interpreted as an electric and color-charged particle having a minimum mass/radius ratio by applying the formalism developed in this paper and identifying the constant * with B, where B is the bag constant introduced in the simple MIT bag model, Eq. (189). This yields a value of order m q ≈ 67.75 MeV [18], which represents a reasonable approximation to the predicted mass of the s quark [52].
16,839.2
2016-02-27T00:00:00.000
[ "Physics" ]
Structure refinement of (NH4)3Al2(PO4)3 prepared by ionothermal synthesis in phosphonium based ionic liquids – a redetermination The crystal structure of (NH4)3Al2(PO4)3 was refined by powder XRD synchrotron data. (NH4)3Al2(PO4)3 is a member of the structural family with formula A 3Al2(PO4)3 where A is a group 1 element, of which the K and Rb forms are also known. Chemical context Following the discovery of the microporous AlPO 4 -n series of materials (Wilson et al., 1982), many efforts have been directed toward the synthesis of novel phases utilizing traditional hydrothermal (Wilson, 2007;Yu & Xu, 2006) and solvothermal syntheses (Das et al., 2012). Recently, ionothermal synthesis has been added to the stable of synthetic methods. Ionothermal synthesis is an extension of the solvothermal method of synthesis using an ionic liquid as the solvent (replacing, for example, water or ethylene glycol) where a portion of the organic structure-directing agent from a typical zeolite synthesis is derived from the ionic liquid (Morris, 2009). Many new materials have been synthesized by ionothermal synthesis, with new aluminophosphate materials among the most common (Parnham & Morris, 2007;Xing et al., 2008Xing et al., , 2010). An important issue in ionothermal synthesis is control of water (Ma et al., 2008). Excess water often leads to synthesis of dense AlPO 4 phases such as the one with a tridymite-type of structure, which we observed as well during syntheses utilizing 85% wt H 3 PO 4 . To control the level of water in the synthesis, thereby allowing easy recycling of the ionic liquid solvent and to intentionally prepare ammonium aluminophosphates, we used (NH 4 ) 2 HPO 4 as the phosphorous source in the synthesis. Ammonium is a good structure-directing agent for aluminophosphate frameworks; multiple ammonium aluminum phosphates are known (Byrne et al., 2009;Vaughan et al., 2012). In the current phosphonium-based ionothermal synthesis, the presence of an ammonium cation in the relative absence of water provokes the formation of a 2/3 Al/P framework with the formula (NH 4 ) 3 Al 2 (PO 4 ) 3 . A structurally unrelated compound with the formula (NH 4 ) 3 Al 2 (PO 4 ) 3 has previously been synthesized via a solvothermal approach (Medina et al., 2004). The aluminophosphate database at Jilin (Li et al., 2019) currently lists 21 framework structures with a 2:3 ratio of Al:P. A framework with sub-stoichiometric Al content is by necessity anionically charged and must be cation-balanced, so most of the known frameworks, such as UT-3, UT-4 and UT-5 (Oliver et al., 1996) are charge-balanced by organoammonium cations. Low-water-content syntheses clearly favor 2:3 compounds as most of the known materials are synthesized from low-water-content preparations. Structural commentary and survey of related compounds The (NH 4 ) 3 Al 2 (PO 4 ) 3 phase synthesized here is related to the series of A 3 Al 2 (PO 4 ) 3 materials synthesized via hightemperature solid-state methods (Devi & Vidyasagar, 2000) with varying monocations on the A site. Additionally, an independent synthesis previously yielded a (NH 4 ) 3 Al 2 (PO 4 ) 3 material called SIZ-2 whose structure was solved and refined from single-crystal data (Cooper et al., 2004) and possesses nearly the same structure as refined from the current powder data of (NH 4 ) 3 Al 2 (PO 4 ) 3 . A polyhedral representation of the crystal structure of (NH 4 ) 3 Al 2 (PO 4 ) 3 is shown in Fig. 1. 
SIZ-2 crystallized from a choline chloride/urea eutectic mixture where decomposition of urea was proposed to be the source of ammonium in the structure. The refinement of Cooper et al. (2004) included the ammonium N atoms, but made no attempt to find or model the corresponding H atoms. Devi & Vidyasagar (2000) utilized Li, Na, K, Rb, Cs, and Tl as the A cation and succeeded in crystallizing compounds with A = Na, K, Rb, Tl. The thallium derivative yielded a completely different structure with trigonal-bipyramidal coordination of Al. The A = Na structure was not solved, but apparently crystallizes in an unrelated orthorhombic spacegroup type from that observed for A = K, Rb in their work, and for A = NH 4 here. Devi & Vidyasagar (2000) utilized (NH 4 ) 2 HPO 4 as the phosphate source in their high-temperature preparations of A 3 Al 2 (PO 4 ) 3 , but did not obtain (NH 4 ) 3 Al 2 (PO 4 ) 3 , likely due to the volatility of NH 3 at high temperatures. As in the K and Rb forms of the A 3 Al 2 (PO 4 ) 3 series, aluminum and phosphorus are both tetrahedrally coordinated and connected through corners throughout the (NH 4 ) 3 Al 2 (PO 4 ) 3 structure. The NH 4 + cations reside in a channel along the c-axis direction made from a 12 T-site ring of alternating AlO 4 and PO 4 tetrahedra ( Ball and stick representation of (NH 4 ) 3 Al 2 (PO 4 ) 3 showing the 12membered ring with three phosphate groups protruding inward with close contact to ammonium cations. Figure 1 Polyhedral representation of (NH 4 ) 3 Al 2 (PO 4 ) 3 , showing the overall connectivity and ion channels in the crystal structure. Al is in the center of blue tetrahedra, P in gray tetrahedra, and N is represented by blue spheres. Table 1 Hydrogen-bond geometry (Å , ). D-HÁ solvent is present within the pores of the (NH 4 ) 3 Al 2 (PO 4 ) 3 framework. Without the NH 4 + groups, the structure would have 24% void volume. The framework is triply negatively charged and charge-balanced by the ammonium cations. Three of the six phosphate groups in the ring protrude inward such that the closest contact distance between the H atom of an ammonium group and the O atom of the nearest phosphate is between 1.83 and 1.87 Å , indicating significant hydrogenbonding interactions. The full range of HÁ Á ÁO hydrogen-bond lengths is between 1.83 and 1.97 Å (Table 1). Crystallizing in space-group type Pna2 1 , (NH 4 ) 3 Al 2 (PO 4 ) 3 is isostructural to, but with a slightly larger unit cell than the K form synthesized by Devi & Vidyasagar (2000). Lattice expansion of $0.1-0.2 Å occurs along each of the three axes, leading to an overall 6.6% increase in cell volume from 1245 to 1327 Å 3 . A lattice expansion is no surprise as the ionic radius of NH 4 + is between 1.4 and 1.67 Å depending on the coordination number (Sidey, 2016). This is slightly larger than the reported 1.37 to 1.55 Å range for K + (Shannon, 1976). Much of the relative lattice expansion for (NH 4 ) 3 Al 2 (PO 4 ) 3 occurs along the a and c axes. Tilting of tetrahedra accounts for a significantly smaller expansion of the long b axis. In addition, an isostructural K/As form is also known where two-thirds of the phosphate groups have been replaced by arsenate (Boughzala et al., 1997). Arsenate included on the phosphate sites increases the cell volume to 1307 Å 3 , just smaller than that recorded here for (NH 4 ) 3 Al 2 (PO 4 ) 3 . The pure arsenate form K 3 Al 2 (AsO 4 ) 3 was reported by Stö ger & Weil (2012), which has a cell volume of 1328 Å 3 , essentially equivalent to that here. 
An overlay plot of atomic positions of (NH 4 ) 3 Al 2 (PO 4 ) 3 (red) versus SIZ-2 (blue) shows that although the independent refinements of the two (NH 4 ) 3 Al 2 (PO 4 ) 3 materials were performed via different methods at different temperatures, most atom positions are similar, with no more than about 0.004 fractional position differences along the a or c axes (for these axes, about 0.03-0.04 Å , Fig. 3). One area stands out in the A 3 Al 2 (PO 4 ) 3 series. Fig. 4 shows the key area surrounding O11 where the largest position movement is observed in the two independent refinements of (NH 4 ) 3 Al 2 (PO 4 ) 3 . The P3-O11 bond is always among the shortest P-O bonds found in the crystal structure, here at 1.487 (5) Å . Two clusters of P-O bond lengths occur; one at about 1.49 Å and another at 1.55 Å . These distances are relatively typical for aluminophosphates (Richardson & Vogt, 1992;Wei et al., 2012). Each of the O atoms protruding into the pore possess short P-O bonds and hydrogen bonds to two ammonium ions (Table 1). In particular, N2, N3, O11, and P3 are effectively in a plane so that with the hydrogen bonding present in our refined model from N3 and N2 through the attached H atoms to O11, O11 moves closer to P3 while N2 and N3 move slightly further away versus the positions in the SIZ-2 refinement. Table 2 Ball and stick representation of the key area surrounding O11 where the largest position movement takes place in the two independent refinements of (NH 4 ) 3 Al 2 (PO 4 ) 3 . Boughzala et al. (1997) For each of the compounds, the atomic numbering scheme of the current (NH 4 ) 3 Al 2 (PO 4 ) 3 refinement has been utilized. For the first two compounds, A = NH 4 , while for the second two, A = K. For the As-containing compound, the P3 site is reported to have the highest occupancy of As at 0.86. four isostructural A 3 Al 2 (PO 4 ) 3 compounds. Other bond lengths and angles are otherwise relatively unremarkable versus other members of the structural class although we note that As/P-O distances are longer than P-O as expected. Rb 3 Al 2 (PO 4 ) 3 is structurally related to the NH 4 and K forms, but crystallizes in a higher symmetry space-group type (Cmc2 1 ), accompanied with higher overall coordination numbers around Rb + and a mirror plane perpendicular to a. The ionic radius of Rb + is similar to that of NH 4 + , reported as 1.52-1.63 Å (Shannon, 1976). Lithium and cesium forms of the series have not yet been synthesized, likely because of the relatively small and large, respectively, ionic radii versus those of the fitting A cations. Our initial attempts at ion-exchange of (NH 4 ) 3 Al 2 (PO 4 ) 3 with LiNO 3 or CsNO 3 in aqueous solution to form the Li or Cs form failed, with partial structural degradation and no ion-exchange observed. Synthesis and crystallization In a typical preparation, 1.65 g (NH 4 ) 2 HPO 4 was added to a 125 ml polytetrafluoroethene (PTFE) lined autoclave containing 24.02 g of ethyl tri(butyl)phosphonium diethyl phosphate. The mixture was stirred at room temperature for 2 min. To this mixture were added 0.49 g of Al(OH) 3 , and the contents were stirred at room temperature for 2 min. The contents of the autoclave were digested at 423 K for 24 h prior to isolating the product by filtration. Analytical results show this material has a molar ratio Al:P of 0.725. The X-ray diffraction pattern is shown in Fig. 5. Scanning electron microscopy (SEM) revealed agglomerated stacks of irregularly shaped blocky crystals of from 500 nm to 2-4 mm in length (Fig. 6). 
Calcination of (NH 4 ) 3 Al 2 (PO 4 ) 3 at temperatures of 773 K or higher causes the formation of an AlPO 4 phase with a tridymite-type structure. Ethyl tributyl phosphonium diethyl phosphate (Cyphos 169) was acquired from Cytec; aluminum hydroxide was acquired from Pfaltz and Bauer. XRD pattern ( = 0.373811 Å ) of (NH 4 ) 3 Al 2 (PO 4 ) 3 synthesized ionothermally in ethyl tributylphosphonium diethylphosphate and Rietveld residuals following structure refinement. Part A shows the fit to the overall pattern, and inset B shows the fit to high-angle regions. Computer programs: local program at 11BM, GSAS (Larson & Von Dreele, 2000), coordinates from an isotypic structure, CrystalMaker (Palmer, 2005), publCIF (Westrip, 2010). Refinement Crystal data, data collection and structure refinement details are summarized in Table 3. Following initial survey scans on in-house Cu source powder XRD instruments, final data were acquired from samples packed in thin glass capillaries on 11-BM at the Advanced Photon Source at Argonne National Laboratory. Starting atomic positions for the refinement were adapted from the literature examples. Starting positions for the ammonium cations were located in a difference-Fourier map and subsequently refined using GSAS (Larson & Von Dreele, 2000) as tetrahedral rigid bodies with N-H bond lengths held at 0.9526 Å and tetrahedrality enforced, leading to HÁ Á ÁH distances of 1.5556 Å . No soft constraints were applied to the framework positions. All atoms in the structure were refined with a common U iso parameter. Two low-intensity reflections in the region 4.00-4.22 /2 were excluded from the refinement as belonging to an impurity phase after assessment of multiple (NH 4 ) 3 Al 2 (PO 4 ) 3 batches. Refinement trials with a higher symmetry model (space-group type Cmc2 1 ) were attempted but showed poor agreement with the experimental data, with R wp > 0.16.
3,108
2019-11-19T00:00:00.000
[ "Materials Science", "Chemistry" ]
Subwavelength hyperspectral THz studies of articular cartilage Terahertz-spectroscopy probes dynamics and spectral response of collective vibrational modes in condensed phase, which can yield insight into composition and topology. However, due to the long wavelengths employed (λ = 300 μm at 1THz), diffraction limited imaging is typically restricted to spatial resolutions around a millimeter. Here, we demonstrate a new form of subwavelength hyperspectral, polarization-resolved THz imaging which employs an optical pattern projected onto a 6 μm-thin silicon wafer to achieve near-field modulation of a co-incident THz pulse. By placing near-field scatterers, one can measure the interaction of object with the evanescent THz fields. Further, by measuring the temporal evolution of the THz field a sample’s permittivity can be extracted with 65 μm spatial resolution due to the presence of evanescent fields. Here, we present the first application of this new approach to articular cartilage. We show that the THz permittivity in this material varies progressively from the superficial zone to the deep layer, and that this correlates with a change in orientation of the collagen fibrils that compose the extracellular matrix (ECM) of the tissue. Our approach enables direct interrogation of the sample’s biophysical properties, in this case concerning the structure and permittivity of collagen fibrils and their anisotropic organisation in connective tissue. In the last two decades, THz radiation has attracted a lot of attention due its unique properties [1][2][3] . For example, there have been non-invasive inspections of semiconductor surfaces 4 , space shuttle panels 5 , electronics 6 , paintings 7 and pharmaceutical tablets 8 . Unlike X-rays, the photon energies are non-ionizing, hence the great interest in using THz for biological tissue evaluation 9,10 and also for cancer diagnosis 3,11 . Moreover, many low-frequency vibrational modes of biological molecules in aqueous media lie in this frequency range, allowing THz spectroscopy to identify and characterize inter-molecular bonding in amino acids 12 , sugars 13 , DNA 14 and proteins 15 , as well as dynamics at biomolecule-water interfaces 16 and in photoactive proteins 17 . There are also the THz investigations of corneal diseases by Taylor et al. 18 and the diabetic foot studies by Hernandez-Cardoso et al. 19 . Furthermore, long-range collective vibrational modes, which mediate structural changes and the reaction coordinates critical to the function of active proteins 20 , normally manifest themselves at THz frequencies. Whilst THz spectroscopy can readily identify such collective vibrational modes 21 there are several difficulties, in addition the broadband nature of the resonances, in determining structural features of these systems. Firstly, samples have to be kept hydrated for normal biological function to be maintained, which is problematic due to the large THz absorption of water 22 . Secondly, owing to the long wavelengths employed (λ = 300 μm at 1 THz), near-field approaches are generally required to get sub-mm resolution. However, invasive imaging techniques such as those involving scanning tips or apertures [23][24][25] are not suited for biological applications. 
Furthermore, it is usually necessary to encapsulate biological samples to maintain hydration, severely restricting the resolution achievable by scanning tips or apertures, and the apertures themselves typically have a very strong frequency response 26 , making them unusable for spectroscopic applications. For these reasons, subwavelength spectroscopic THz measurements of biological samples [27][28][29] have been plagued by problems, and biological imaging has, for the most part, been restricted to large structures such as organs 30,31 . Apertureless near-field THz measurements offer an intriguing solution to many of these problems. In the approach previously described in refs 29,32 , a sample is placed directly onto a crystalline electro-optic THz detector, and the near-field THz radiation is observed via a femtosecond optical detection pulse incident from the rear of the crystal. This strategy is highly advantageous for biological imaging as the crystal detector itself can be used to encapsulate the sample 29 , and an image is readily obtained by raster scanning the detection pulse. However, a major shortcoming of this approach 29,32 is that the electro-optic crystal must be transparent for optical detection, hence the sample is exposed to the intense femtosecond visible pulse. Moreover, since the sample is in contact with the detector, the latter influences the reflection of the detection pulse, hence a measured image can be composed of both optical and THz responses, which may be of comparable magnitudes in biological samples. An alternative apertureless approach involves the use of a photoconductive modulator to spatially modify a THz beam 33 . Here, an optical pump beam is projected simultaneously with a THz beam onto a thin photoconductive modulator such as a semiconductor wafer (see below; Fig. 1), switching the THz material response from dielectric to conductor through electron-hole pair photoexcitation 34 . The photoconductive regions generated by the pump behave as scatterers for THz radiation in the vicinity of a sample, which is placed on the rear interface of the modulator. This approach offers several clear advantages: firstly, there is no mechanical raster scanning involved. Moreover, the spatial resolution of a sample placed directly after the modulator is determined primarily by the thickness of the photo-modulator 35 , and such sub-wavelength THz measurements have been achieved in a variety of solid state systems [35][36][37] . Furthermore, this approach enables Hadamard transform imaging, where binary intensity patterns spatially modulate a beam of radiation, allowing the formation of an image by analysis of the transmitted or reflected light 38,39 . Hadamard approaches can significantly improve image quality and acquisition times 35 , which proves particularly advantageous for imaging biological samples due to the rather problematic THz absorption of water therein. Articular cartilage is a connective tissue composed of a dense extracellular matrix (ECM) rich in water, collagen and proteoglycans, with sparse specialised cells called chondrocytes 40 . It provides a smooth and lubricated surface for articulation and facilitates the transmission of loads through the distinctive regional orientation of the collagen fibrils, showing a change in alignment going from the articular surface through to deeper within the tissue.
For this reason, cross-sections of articular cartilage are suitable candidates to test the capabilities of the THz imaging technique with polarization resolution. The thin superficial zone is made primarily of collagen fibrils aligned parallel to the articular surface, whilst the middle zone is composed of thicker collagen fibrils with an oblique alignment, and the deep zone consists of collagen fibrils aligned orthogonal to the articular surface 41 . Clinical conditions such as osteoarthritis and rheumatoid arthritis are characterized by degradation of the cartilage matrix, resulting in a disruption of the organised collagen structure 42 . Techniques that are able to detect changes in structure at the fibril level have potential for diagnosis of these pathologies. In this article, we present a subwavelength THz measurement technique, based on the photoconductive modulator approach from refs [35][36][37] , which is applicable to histological sections of biological tissues. We project binary intensity patterns from a femtosecond laser source onto an ultrathin (6 μm-thick) photoconductive silicon wafer in order to modulate a coincident picosecond THz pulse. Cross-sections of healthy articular cartilage are placed on the rear interface of the silicon wafer for maximal near-field interaction. By varying the arrival time of the incident THz pulse and using time domain detection, we measure the full temporal evolution of the THz field. With both amplitude and phase of the scattered THz pulse determined, we are able to extract the frequency-dependent complex THz permittivity of our sample with subwavelength resolution. We show that the THz permittivity of articular cartilage, made essentially of type-II collagen, varies across tens to hundreds of micrometres depending on the protein fibril orientation. This demonstrates the advantage of our approach in mapping the micro-structure of anisotropic samples, previously unattainable using far-field approaches. Note that this technique, in transmission geometry, is only applicable to histologically sectioned samples and hence is only suitable for ex-vivo studies. However, we do point out that it may be possible to apply similar principles to study THz reflection from surfaces such as skin. Figure 1 illustrates the experimental setup (a more detailed schematic is presented in ref. 35 ). We use a typical THz time domain spectrometer (THz-TDS) to launch and subsequently detect a THz pulse. Briefly: an amplified 800 nm (90 fs) Ti:sapphire femtosecond laser running at a repetition rate of 1050 Hz is used to power the THz-TDS, using optical rectification and electro-optic sampling in ZnTe crystals for generation and detection of our terahertz pulses, respectively 43,44 . The femtosecond pulses also provide a third optical excitation beam with a fluence of 100 μJ/cm². This pump pulse is spatially modulated via a digital micromirror device (DLP3000 with the DLP Lightcrafter from Texas Instruments) and a single lens so as to project an optical intensity pattern on the surface of a highly resistive silicon wafer (8000 Ω·cm, 6 μm thick). This projection is coordinated at the sample with the arrival of a THz beam. The biological sample, articular cartilage composed mainly of type II collagen fibrils, consists of 40 μm-thick histological cryosections of bovine cartilage (see Materials and Methods).
The hydrated sample is placed on the rear interface of the photomodulator, which is in turn sandwiched between two optically transparent polystyrene coverslips to maintain sample hydration and structural integrity. The photoconductive properties of the silicon wafer allow one to optically render some regions opaque to THz radiation 34 , and scatter the incident THz light in the vicinity of the sample. Then, by measuring the far-field THz transmission for different spatial photo-excitation patterns, the near-field THz response of the object at different spatial locations can be obtained. Optimal signal-to-noise ratio is achieved via the use of an orthogonal set of binary patterns derived from Hadamard matrices 38,39 (see Materials and Methods). Moreover, by varying the relative arrival time of our electro-optic sampling pulse, we measure the full temporal evolution of the transmitted THz field with 100 fs temporal resolution. Combined with a reference scan taken in the absence of a sample, we are able to extract the frequency-dependent complex THz permittivity (see Materials and Methods for mathematical details) of the sample with a spatial resolution determined by the optical pattern on the photomodulator. We find scatterers of size 65 μm to be sufficient to resolve the spatial variations of interest in the cartilage sample. Plane Wave Analysis A standard approach to analyzing THz-TDS spectra is to extract the complex permittivity (or equivalent) via analysis of the Fresnel transmission equations 44 . However, this approach assumes a plane wave approximation, something that is questionable for the near field. In this section, we test the validity of such an approximation to our experimental approach. We analytically model a system similar to that in our experiment (full mathematical details in supplementary information). In brief, we analyze the transmission through a single aperture in a conducting film in contact with a lossy dielectric layer of thickness h, as represented in Fig. 2a. Here, the region with the aperture is tailored to have similar transmissive properties to those of the experimental photomodulator, while the lossy dielectric is given a permittivity ε. We set the permittivity of the incident and transmitted regions to ε s = 2.5, i.e. similar to that of the plastic coverslips encapsulating our sample. Using a modal matching model 37 which assumes an incident THz plane wave, we simulate the experiment by finding the transmitted far field for the two cases where ε = 7.5 + 2i (i.e. similar to our cartilage sample discussed below) and ε = 1 (representing our reference). To replicate the multi-aperture approach used in our experiment, we carry out a complex summation of fields transmitted through different sized apertures (see supplementary information). We then analyze the total transmitted fields via the approach outlined in the Materials and Methods section in order to extract the permittivity of the lossy dielectric layer. By comparing the extracted permittivity to that introduced in the model, we can assess the validity of the plane wave approximation. In Fig. 2b, we plot the real and imaginary parts of the recovered permittivity versus frequency for three different sample thicknesses. We see that at higher frequencies, the recovered permittivity is generally very close to the input value used in the model. However, a greater discrepancy is found at lower THz frequencies, pronounced in both real and imaginary parts of the permittivity.
This discrepancy arises from the presence of near fields, which are neglected in the plane wave approximation made to extract the permittivity. The longer decay lengths of the low frequency evanescent field components 45 lead to a greater discrepancy than the high frequency fields. We also see that the thin samples exhibit greater discrepancy: for thinner samples, the amplitude of evanescent field components at the exit interface is larger. We discuss in more detail the origin of these effects in the supplementary information. One should note that discrepancies due to the plane wave approximation are expected to be less severe in our experiment, owing to the much lower, finite conductivity of the photomodulator 35,37 , which will act to relax the aperture boundary conditions 46 and reduce the amplitude of evanescent field components. Nevertheless, for sample thicknesses on the order of μm (such as those used in the experiment), one has to question the validity of the plane wave approximation at low THz frequencies. For this reason, we do not consider the very low frequency part of our spectra, below ~0.6 THz. Note that for higher resolution images or thinner samples, one needs to develop a more elaborate analysis procedure, incorporating all near field effects, in order to reliably extract values of local permittivity. Figure 3a shows a photomicrograph of a cross-section of articular cartilage taken with a polarized visible light microscope. The sample contains three main regions with distinct orientations of the collagen fibrils, similar to samples studied previously with other imaging techniques 47,48 . In the superficial zone, collagen fibrils are aligned parallel to the articular surface. In the middle zone, the fibrils have an oblique arrangement, ending orthogonal to their starting alignment in the deep zone, which shows high intensity of the transmitted polarized light. While articular cartilage has a collagen ultrastructure with spatial dimensions of ~100 nm 49 , which cannot be resolved here, we concern ourselves primarily with resolving the orientation of the collagen fibrils, which also occurs on a subwavelength scale for THz radiation. Figure 3b-e show the subwavelength THz response of cartilage measured with polarization parallel and perpendicular to the articular surface. Measurements were performed at discrete locations, from the superficial through to the deep zone, encompassing the different orientations of the collagen fibrils indicated in Fig. 3a. As a comparison, we also plot the permittivity of the sample measured in the far field (i.e. a spatial average measured through the entire sample) and the permittivity of pure water (taken from ref. 22 ). Note that water alone accounts for nearly 80% of the wet weight of articular cartilage 40 , and that, due to the THz diffraction limit, the far-field spatially averaged measurement is carried out over a sample length of ~0.5 mm, a length scale over which both the protein concentration and fibril orientation can be expected to vary substantially, owing to the heterogeneity of the biological sample on a micro-scale. The water spectral response shows a decreasing permittivity with increasing frequency 22 . However, both the spatially averaged and subwavelength THz response at all points across the depth of the cartilage exhibit broad features that are not apparent in the spectrum of pure water. Here, the broad peak at 1.5 THz (50 cm−1) in the real part of the permittivity spectrum (
1.7 THz in the imaginary part) is not due to bulk water and hence is a feature associated with hydration water and the fibrils themselves (note that the smaller oscillatory peaks in the spectrum are artefacts of the finite Fourier transform used in the analysis, depending on the temporal length of the THz measurement). Finally, we note that the far-field measured permittivity is not characteristic of the spatial average of the near-field permittivities. We believe there are two origins to this effect. Firstly, in a spatially inhomogeneous sample, the coherent averaging of transmitted fields is not expected to be representative of the spatially averaged permittivities themselves. Moreover, the THz spot size is larger than the sample itself, which makes any far-field measurement unreliable. Results and Discussion When we compare the cartilage's local permittivity, measured as a function of the distance from the superficial zone to the deep layer, to the spatially averaged measurement, we see a number of striking traits. Firstly, for horizontal THz polarization (Fig. 3b,c), the real part of the THz permittivity increases going from the superficial to the deep zone (top to bottom in Fig. 3a), whilst the imaginary part decreases. This indicates that the sample is most polarizable when the THz field is oriented along the fibril direction, i.e. in the superficial zone, and suggests that the collagen fibrils have a THz frequency dipole moment oriented along their principal axis. This assignment is corroborated by measurements with THz polarization rotated by 90 degrees (Fig. 3d,e): here the spatial dependence of the permittivity is essentially reversed and the sample is most polarizable at a deep location where the THz field is oriented along the fibril axis. It is important to note that the variation between the two sets of measurements in Fig. 3 most likely arises from the response of two slightly different areas of the sample, and is indeed representative of the day-to-day variation when measuring in the lab, a problem that arises from the inherent difficulty of positioning the sample on such small length scales. Hyperspectral measurements of a second sample are shown for comparison in supplementary section S3, which exhibit similar features to the results presented here. It is important to note that slight variations in sample thickness or hydration level will lead to slightly different values for the extracted real and imaginary parts of the permittivity. This is a well-known problem in phase-resolved measurements, since the optical thickness of a sample will determine to a large degree the phase of the transmitted wave. Nevertheless, we again observe a clear resonance at 1.6 THz in regions where the polarisation is aligned to the collagen fibril axis. It has been shown that proteins have low-frequency vibrational modes in the far-IR region 50 , as well as coupled solute-solvent modes of the solvated solute 51 . For a biological tissue such as cartilage, both fibrous type-II collagen and water in proximity to the protein (i.e. hydration water) may contribute to the total THz response. Markelz et al. have shown that collagen (lyophilised powder) has a rapidly increasing absorbance with increasing frequency in the range 0.3 to 1.25 THz 14 .
Our data are in line with those findings, and we speculate that this broad absorption band is associated with the intermolecular structure of collagen. The strong dependence of the spectral response upon the THz field polarization may be associated with (water-mediated) collagen interstrand coupling 14 , which is stronger when the fibrils are aligned parallel to one another. It is possible that such interstrand coupling could play an important role in stabilizing the collagen structure 52 . Alternatively, the alignment of the water network along the fibril direction may give rise to the observed polarisation effect; further studies of the localized polarization-sensitive THz response observed here could provide greater insight. Conclusions We have demonstrated for the first time subwavelength hyperspectral THz imaging of articular cartilage using the photoconductive properties of a silicon photomodulator. We study articular cartilage, composed of collagen which is the most abundant structural protein in the human body, and find that its THz dielectric function varies on a sub-THz-wavelength scale depending on collagen fibril orientation, which could be due to the presence of a THz dipole moment along the primary axis of the fibril or to the collagen being birefringent. We point out that such a detailed observation is impossible to deduce from far-field measurements, demonstrating the value of this subwavelength approach with regard to the diagnosis of pathologies that alter the collagen structure. It is interesting to note that the fundamental imaging resolution limit of our measurement is determined by the diffraction of the optical pump pulse; we therefore believe that our approach, where sub-micron resolution may even be possible, holds promise as a future microscopy tool with potential for applications in the biomedical sciences, even on subcellular scales. However, while the presence of a THz resonance in oriented regions of articular cartilage is certainly a promising observation, we acknowledge that it is as yet unclear whether this additional information could potentially be useful for diagnosis. Moreover, to implement such a THz imaging technique in real-world applications, improvements to both the data acquisition rates as well as the current costs of THz measurement systems will be required. Methods Sample preparation. Bovine metacarpophalangeal joint cartilage was obtained from a local abattoir and washed in phosphate-buffered saline (PBS; pH 7.4) before cryosectioning. A cartilage segment was immersed in Bright cryo-m-bed compound and frozen before cryosections were cut. Cross-sections of cartilage were cut perpendicular to the articular surface and analyzed. The geometry of the section was recorded in polarized light microscope images, obtained using a 10X objective on a standard polarized light microscope and a CCD camera (QImaging Retiga 2000R). Orthogonal patterns. We observe THz transmission via a single-element detector in the far-field. Hence, as mentioned in the main text, our sub-wavelength resolution is achieved by modulating our THz beam with different encoding patterns in the near-field of our sample. To achieve optimal signal-to-noise ratio, we use an orthogonal set of binary patterns derived from Hadamard matrices 38,39 .
We now consider the construction of an N-pixel image Ψ; our i th measurement, φ i , is the dot product of the object transmission function and the i th mask configuration, mathematically expressed as φ_i = Σ_{j=1}^{N} w_ij ψ_j, where w_ij holds the spatial information of the i th mask and ψ_j is the j th pixel of the image. This can be represented by the matrix equation Φ = WΨ, where the rows of matrix W are reformatted into the projected masks. For invertible matrices W, the image vector Ψ can be obtained through matrix inversion Ψ = W −1 Φ, which then has to be reshaped into a 2D matrix of pixel values. Further, the matrix equation Φ = WΨ represents the image being expanded in some basis given by W. For this study, we use Hadamard matrices as the basis expansion, i.e. W is a Hadamard matrix of order N. A Hadamard matrix H_n is defined as an n × n matrix of +1s and −1s with the property that the scalar product between any two distinct rows is 0 (each row is orthogonal to every other one). Thus H_n satisfies H_n H_n^T = n I_n, where I_n is the n × n identity matrix. Moreover, a Hadamard basis minimizes the mean square error of each pixel in the image 38 . Here, masks are created via the photoexcitation of silicon, thereby rendering some pixels opaque and leaving the rest transmissive. This means that the physical masks are composed of 1s and 0s, whereas Hadamard matrices are made of +1s and −1s. This prevents us from doing a fully orthogonal measurement. However, as is outlined in 39 , we can still perform such a measurement with our system. For this, we carry out sequential measurements of a mask directly followed by its inverse and record the difference in THz transmission via a lock-in amplifier. The signal acquisition time for each mask and its inverse is 100 ms. Note that the THz transmission is recorded within a 5 ps window after photoexcitation to minimize electron diffusion in the silicon photomodulator (see supplementary of ref. 35 ) and subsequent smearing and broadening of spatial features. Calculating The Permittivity. To obtain the permittivity of a sample using THz-TDS, one typically performs two measurements: one measuring the temporal waveform transmitted through a sample and the other to obtain a reference waveform without the sample. However, we cannot assume a homogeneous beam. For this reason, our reference is recorded for each pixel, performing the same measurement on the same system without the sample in place. After Fourier transformation of the time axis, one can divide the signal by the reference to obtain the frequency-dependent amplitude transmission coefficients. These are then equated to the transmission functions of the system, calculated using the transfer matrix method 44 : t(ω) = 2√ε_i / (M_21 + M_11 √ε_i + M_22 √ε_f + M_12 √(ε_i ε_f)), (3) where ε_i and ε_f are the permittivities of the initial and final media, respectively, enclosing the multilayer system and M is a 2 × 2 matrix associated with the propagation through the entire multilayer system. This matrix is given by the product of the individual layer matrices, M ≡ M_1 M_2 M_3 …M_N , describing the propagation through each layer. The characteristic matrix of the j th layer, M_j , with thickness l_j and dielectric function ε_j is given by M_j = [[cos β_j , −i sin β_j /√ε_j ], [−i √ε_j sin β_j , cos β_j ]], where β_j = ω √ε_j l_j / c is the phase delay associated with light propagation inside the j th layer. By equating the experimental amplitude transmission coefficients with (3), we can then solve for the permittivity of the sample as a function of space and frequency.
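To make the permittivity-extraction procedure concrete, here is a minimal numerical sketch of the characteristic-matrix formalism and of Eq. (3), assuming normal incidence and a single sample layer between the two coverslip media, with the reference modelled as an empty (ε = 1) layer of the same thickness. The function names are illustrative rather than taken from the authors' code, and the numbers (ε_s = 2.5, ε = 7.5 + 2i, 40 μm thickness) are reused from the text purely for a synthetic round-trip test.

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.99792458e8  # speed of light (m/s)

def layer_matrix(omega, eps, thickness):
    """Characteristic 2x2 matrix of a single layer at normal incidence."""
    n = np.sqrt(eps + 0j)
    beta = omega * n * thickness / C  # phase delay beta_j = omega*sqrt(eps_j)*l_j/c
    return np.array([[np.cos(beta), -1j * np.sin(beta) / n],
                     [-1j * n * np.sin(beta), np.cos(beta)]])

def transmission(omega, eps_i, eps_f, layers):
    """Amplitude transmission through a multilayer stack, as in Eq. (3)."""
    M = np.eye(2, dtype=complex)
    for eps, thickness in layers:
        M = M @ layer_matrix(omega, eps, thickness)
    ni, nf = np.sqrt(eps_i + 0j), np.sqrt(eps_f + 0j)
    return 2 * ni / (M[1, 0] + M[0, 0] * ni + M[1, 1] * nf + M[0, 1] * ni * nf)

def extract_eps(omega, t_ratio, eps_i, eps_f, thickness):
    """Solve for the complex sample permittivity that reproduces a measured
    sample/reference transmission ratio at one frequency."""
    t_ref = transmission(omega, eps_i, eps_f, [(1.0, thickness)])  # empty reference

    def residual(x):
        t = transmission(omega, eps_i, eps_f, [(x[0] + 1j * x[1], thickness)]) / t_ref
        return [np.real(t - t_ratio), np.imag(t - t_ratio)]

    sol = least_squares(residual, x0=[3.0, 0.5])
    return sol.x[0] + 1j * sol.x[1]

# Synthetic round trip: generate a "measurement" with eps = 7.5 + 2i, recover it.
omega = 2 * np.pi * 1.0e12          # 1 THz
eps_true, h = 7.5 + 2.0j, 40e-6     # sample permittivity, 40 um section thickness
t_meas = (transmission(omega, 2.5, 2.5, [(eps_true, h)])
          / transmission(omega, 2.5, 2.5, [(1.0, h)]))
print(extract_eps(omega, t_meas, 2.5, 2.5, h))  # ~ (7.5+2j)
```

In an experiment one would run this inversion at each pixel and frequency, seeding the solver with the solution from the neighbouring frequency point to keep the phase-branch choice consistent.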
5,966.8
2018-05-02T00:00:00.000
[ "Engineering", "Materials Science", "Medicine", "Physics" ]
ideal-types, paradigms, models and ‘good practices’: repertoire of conceptual tools for public administration? INTRODUCTION We have concluded the previous chapter with More's masterpiece which introduced the notion of utopia and utopian thinking as a way of practising teleological thinking in the study of public governance. In Aristotle's framework of the four causes (introduced in Chapter 2 and examined for application to PA in Chapter 6), this approach entails starting the analysis from the final cause - that is, the goal or end, the reason why something is brought about - to then turn to the other causes, like the material cause (what enables a thing to be transformed from a potentiality into actuality) and the efficient cause (the forces that bring about change). A utopian approach also entails taking as incipit of the analysis the potentiality (what might be, but does not yet exist in actuality), rather than actuality (what exists here and now). At the opposite pole we can find the notion of a practice that works, a practice (too often and erroneously qualified as 'best' in much of the grey literature and consultancy papers) which exists in actuality and is predicated to produce certain effects, at least in the given context where it is operating.
'Best practices' or 'good practices', as they are often called, exist in actuality rather than in potentiality like utopias, and the starting point is the efficient cause: the causal mechanism which brings about the effect the practice produces. Conceptually, 'practices' can be seen to lie at the opposite pole from utopias: practices exist in actuality (here and now), utopias exist as potentials; practices are characterised primarily by a logic of efficient cause, utopias by a logic of final cause. We can also consider there are other conceptual tools that enjoy currency in PA that are located at intermediate points in-between utopias and practices (see Figure 8.1). These are the notions of: model, ideal-type, and paradigm (definitions are provided later in the chapter as the concepts are introduced and examined in turn). In this chapter, we revisit these five notions - utopias, paradigms, ideal-types, models and practices - and their usages in PA in an integrated way. We argue that the combined use of these notions may be beneficial to the progress of PA, and we observe that over time in the PA debate attention may shift and the emphasis may be placed on one or the other of these notions at the risk of overlooking the others: we hope that revisiting in a joint way these conceptual tools for PA may enable scholars and practitioners to resort more systematically to the whole gamut, and to employ these conceptual tools in a complementary and integrated way for tackling complex public governance problems. We start from the notion of utopia and its possible usages in PA. UTILISING THE NOTION OF UTOPIA IN PA Is utopian thinking utilised in contemporary PA discourse? It is hard to answer such a question because it would demand a wide-scope textual analysis of public discourses, and a compelling definition of what exactly could be placed under the label of 'utopian thinking'. However, a tentative, possibly provocative, statement put forward in this book is that, with a few notable exceptions (one definitely being the contribution by Bouckaert, 2020, aimed at reviving utopian thinking in and for PA, also drawing on Achten et al., 2016; see also Jacoby, 2005), utopian thinking is little used as a systematic conceptual tool for the critique and reforming of PA, and yet, at the same time, both utopias and dystopias surface copiously in contemporary PA debates. Some of these utopias/dystopias are worked out by practitioners. As noticed by a practitioner intervening in the debate on the usage of utopias in PA, one recurrent utopia is the 'smart', small (mid-sized) city, socially inclusive, highly innovative and well-administered (Lucas, 2015). Interestingly, if we look at a network of (self-asserted) 'forward-looking' cities which was promoted between the end of the 1990s and the early 2000s by the Bertelsmann Foundation (a German-based foundation active in supporting applied research in the field of public governance and management) and called 'The Cities of Tomorrow', 1 it is worth noticing that almost all of them were medium-sized cities. Since the 2010s, the main rhetoric revolves around the appealing label of 'smart cities'. It may be worth considering whether these notions of 'the cities of tomorrow' and the 'smart cities' are all utopias (dystopias?) floating around in disguised forms.
If the 'mid-sized smart city' might be an example of a practitioner-made utopia, two more categories may be envisaged: 2 scholar-made utopias (and dystopias); and institutions-made utopias (dystopias). In both cases, unfortunately, dystopias may be more abundant than utopias. Starting from scholarly-made utopias/dystopias, one may think of 'governance without government': this might obviously be just a catchy slogan mainly coined to convey a strong message, but to the extent that government is imagined to be useless to good governance, it may swiftly translate into a utopia or, for those who think public administrative apparatuses and governmental action are necessary to good governance, into a dystopia. Certain depictions of the citizens as 'honest plus smart plus engaged', the citizen 'maker of good governance', also appear to display rather more the traits of a blueprint utopia easily morphing into dystopia than to provide an appropriate characterisation of real citizens (as perhaps shown by the rise of populism in the 2010s in the Western world and beyond). It may be noticed that these utopian/dystopian representations are far from the carefully crafted governance arrangements adopted by More's Island of Utopia to regulate the relationship between government and citizens. Other examples of scholarly-made utopias are more thoroughly crafted and also more explicit in adopting utopian thinking as a conceptual tool. Garofalo and Geuras (2015) identify a number of utopias including 'a covenant between practitioners and scholars', which 'encompasses the hopes and concerns of public administration scholars and practitioners about their lack of connection with one another … in which they collaborate to frame and resolve management and organizational problems' (Garofalo and Geuras, 2015, p. 86, drawing on Posner, 2009). This utopia is iconoclastic in its thrust to enable the critique of current ways of mutual engagement between practitioners and academics, conceived as a means of identifying ways forward to better bridge PA scholars and practitioners. There are, fortunately, a number of examples of scholarly-made utopias that point to constructive usages of utopian thinking as a conceptual tool for critical analysis and forward-looking thinking. These include at least some of the contributions to envisioning the future of PA that came out of the Minnowbrook conferences, and the work of Geert Bouckaert and Werner Jann (Bouckaert and Jann, 2020), which also developed probably one of the best wrought-out and most self-conscious usages of the notion of utopia and utopian thinking in and for PA (Bouckaert, 2020). 3 Turning to institution-made utopias, one obvious, and major, example is the Sustainable Development Goals (SDGs) of the United Nations. One might also wonder whether such ambitious charts of principles and goals are useful or useless utopias. The first set of the United Nations 'Millennium Development Goals', which were to be achieved by 2015, looked in a number of respects like a blueprint: one which did not always allow for learning and adaptability, which did not cope with inherent contradictions or trade-offs, and which was hardly usable for considering critically the local circumstances as the point of departure for improvement and development. So, were those goals a useless utopia? Possibly, but not necessarily. The 2015 version of the goals, the seventeen Sustainable Development Goals to be attained by 2030, may represent a different story.
In fact, a more optimistic view sees them as one of the most ambitious collective undertakings of humankind ever attempted (to achieve a better world), underpinned by multilateralism and by a vision of humanity taking its destiny in its hands collectively (very much in line with Kant's framing of multilateralism as a condition for the attainment of conditions of peace in the world, as in Kant, 1795/2013, 'On Perpetual Peace'); and, significantly, the Sustainable Development Goals have been approved by all UN Member States. Seen in this way, the Sustainable Development Goals can be interpreted as positive utopias: a way of both challenging the current state of affairs in the world and of envisioning a world which is other from the extant one, a way of thinking teleologically starting from the ultimate goals to attain rather than the extant circumstances. In this sense, they might be interpreted as positive utopias, and not just as a form of 'Management by Objectives' (dys)topian list. As part of these utopias, Sustainable Development Goal number 16 concerning the development of strong and resilient public institutions enabling peace and justice might be interpreted as a collectively endorsed utopia inspiring PA scholars and practitioners alike to envisage paths for the betterment of PA. There is a link, but also a clear distinction, between foresight and utopian thinking. Strategic and policy foresight and other forward-looking exercises developed by public institutions (as may be the case with think tanks or policy units, at times outside government but influential over it, at times embedded into the very administration of the core government, as happens in some countries, or the supranational polity of the European Union: for example the European Political Strategy Centre of the European Commission, or the policy lab of the Joint Research Centre, again of the European Commission, which produces foresight studies and scenarios on the future of government and of citizen-government relations) are ultimately driven by an attempt at forecasting, at anticipating futures with diverse grades of likelihood and resemblance to the present. Utopian thinking deliberately breaks all the bridges with the extant state of affairs to enable re-thinking, a thinking afresh of how government and society could be organised. From the perspective of the notion of utopia, we can revisit three other famous concepts used in the field of PA as well as across the social and natural sciences: these are the notions of ideal-type, paradigm, and model - notions which have a huge history and range of usages. Consistent with the purposes of the book, we confine this revisiting of the three notions to their application to the field of PA. REVISITING THE NOTIONS OF IDEAL-TYPE, PARADIGM, AND MODEL Given the significance for the PA debate, it may be worth distinguishing utopias from the notion of 'ideal-type', famously associated with the work of Max Weber. Interestingly, Weber mentions that ideal-types are in a sense a utopia: It [the ideal type] is not a description of reality but it aims to give unambiguous means of expression to such a description … An ideal-type is formed by the one-sided accentuation of one or more points of view and by the synthesis of a great many diffuse, discrete, more or less present and occasionally absent concrete individual phenomena, which are arranged according to those one-sidedly emphasized viewpoints into a unified analytical construct (Gedankenbild).
In its conceptual purity, this mental construct (Gedankenbild) cannot be found empirically anywhere in reality. It is a utopia. (Weber, 1949, p. 90, emphasis added) It should be noticed that here 'ideal' does not mean 'normative/prescriptive' (that is, something that ought to be achieved); it simply means that it is mental, and in its conceptual purity, this mental construct cannot be found empirically anywhere in reality. It is in this specific sense that Weber referred to it as utopia. Ideal-types are culturally meaningful, value-laden representations of social phenomena, and yet, different to the utopias as delineated in the previous section of this chapter, ideal-types are not whole worlds 'other' from this world; rather, they keep their umbilical cord with the social phenomena, of which they represent a unified analytical construct. The ideal-type is not a description of reality but it aims to give unambiguous means of expression to such a description: its usefulness lies in that ideal-types can be used as yardsticks - investigators can arrive at an interpretative understanding of a concrete empirical observation by comparing its differences with the initially constructed yardstick. Weber famously theorised the ideal-type of 'bureaucracy under legal domination' (that is, where legitimacy lies in the supremacy of the law, rather than in charisma or tradition). The process of ideal-typing is a matter of imagining and contrasting the worked-out analytical construct with experience: 'It is a matter here of constructing relationships which our imagination accepts as plausibly motivated and hence as "objectively possible" and which appear as adequate from the nomological standpoint' (Weber, 1949, p. 92, emphasis added). It has to do with generic patterns (behaviour and structure) of culturally significant features (that are necessary for understanding causal relationships, and are significant for the social scientist or a larger social-cultural group), which are given a unique meaning (these are indicated as the genetic features that make the ideal-type unique: an ideal-type is a unique 'creature' in the realm of the ideal). In this sense, ideal-typing may be claimed to be an approach to theory building. The ideal-type is based on logical coherence, at logical and value (axiological) level, which entails that the set of values upheld by the social scientist must be made explicit to the reader. 4 As aptly summed up by Stout (2010), in order to engineer the ideal-type method, first a specific social phenomenon of interest must be identified. Second, a culturally significant organising characteristic must be chosen and specified as the frame of reference. Third, the generic elements essential for identifying causal relationships must be identified; the set should be culturally significant, as comprehensive as possible, and the manner in which these elements are thought to be related must be explicated in a logical manner. Fourth, mutually exclusive meanings of each element must be interpreted so that the genetic character of the ideal-type is clear. These meanings must also be logical and coherent in their relationships with one another and plausible in comparison to experience (Stout, 2013).
The art of working out new ideal-types might be deemed to be a lost art rather than something in which contemporary PA scholars are engaging, but Stout and Love systematically resort to the use of ideal-typing to work out their ideal-type of 'integrative governance' as a synthesis of four primary governance approaches (Stout and Love, 2019, pp. 46-9 in particular). Their work is thus an example both of ideal-typing as a practised and contemporary art in public governance, and of a book-manifesto which makes explicit the philosophical foundations of the proposed argument, indeed one in which philosophical knowledge underpins and informs the argument: an example of philosophy of public administration, in the framework worked out in Chapter 1 (incidentally it may be noticed my ontology is different from the authors', yet this consideration does not detract anything from my appreciation of their writing one of the rare books in recent times proposing a philosophy of public administration). Both ideal-types and utopias may be used for framing empirics and gaining insights, although ideal-types are more geared to theory building while utopias are also meant to arouse passions and social action for change towards a different state of affairs than the extant one; utopias are a radical way of utilising teleological thinking. Both are rather context-insensitive, but ideal-types are amenable to mental experimentation of what would happen when placed in context 'A' or context 'B', while utopias set their own context, and replace the real ones. Utopias totally reverse the logic of path dependency; they embody the converse of historical institutionalism. Both utopias and ideal-types are to be distinguished from the notions of 'model' and of 'paradigm'. A model can be defined as a selective reduction of reality in order to highlight key relations and connections for purposes of understanding and highlighting key causal relations as well as for guiding action. Models are ubiquitous in the study of PA; at times they aim at providing description, explanation and interpretation of administrative phenomena; other times they take up a normative and prescriptive thrust and aim at providing guidance for change and reform of public administration and management. The 'New Public Management' (Hood, 1991), and so on, may be labelled as models (more or less internally consistent), with differential emphases and either leaning towards the descriptive and explanatory (descriptive-analytical models), or towards the prescriptive and normative (prescriptive models). Modelling, when it takes a normative thrust, is for action: it is a form of bracketing wider aspects of reality to focus action on those aspects that are causally more directly linked to the expected outcomes to be attained, purposefully forgetting that reality is more complex than what the model depicts (the main problem here lies in the fact that the forgotten part of reality sooner or later strikes back). When models also take up a normative dimension, they can be likened functionally to ideal-types and utopias in that they can be used as yardsticks for the critical analysis of the present situation in view of the pursuit of a journey - a reform trajectory - towards a more desirable destination (the obvious problem applies - desirable for whom? - which brings us back to the issue of the legitimacy of a governance system discussed in Chapter 5).
The notion of paradigm is pitched at a different level: a paradigm can be defined as a coherent pattern of core ideas and premises (assumptions or hypotheses) that governs scientific inquiry in the discipline at a given time: these are scientific paradigms (Kuhn, 1962; Riccucci, 2010). The notion of paradigm may also take a normative and prescriptive thrust, and thence in PA paradigms can be defined as sets of core tenets about how to organise the public sector. Drechsler has called attention to the significance of three main paradigms in PA, from a historical viewpoint: the Western PA paradigm (itself highly composite; as a very minimum, Anglo-American PA should be distinguished from continental European 'Weberian' PA), the Confucian PA paradigm, and the Islamic PA paradigm. Over more recent centuries the (highly composite and varied) Western paradigm of PA has spread widely across the world, and in many respects it has been either coercively forced upon far-flung countries ('far' as seen from Western Europe, of course), or more or less willingly adopted by a number of countries because of its alleged qualities and attributes (one can think here of the Meiji revolution/restoration in 19th-century Japan, or post-WWII processes of Westernisation of institutions and administration in South Korea). However, at least from a historical perspective, it is possible to observe that in the history of PA there have been at least two paradigms distinct and possibly 'alternative' to the Western one: the Confucian PA paradigm and the Islamic PA paradigm (the reader can find more on paradigms in PA and their usage in the postscript by Wolfgang Drechsler at the end of this book). The border between 'model' and 'paradigm' may not be so easy to draw in practice. Some authors use the notion of paradigm to work out what they refer to as the contemporary 'public governance paradigms' (Andersen et al., 2020) to outline the features of doctrines about the reform of PA that we have in this book placed under the label of models (namely: New Public Management, Neo-Weberian State, Digital Era Governance, Public Value Management, and New Public Governance). Interestingly, they refer to public governance paradigms as 'quasi-paradigms': they retain the property of having a core of propositions (like a paradigm) and then a set of declensions of these core propositions can be made to flesh out the implications drawn from the core tenets.
They are defined as 'relatively coherent and comprehensive norms and ideas about how to govern, organize and lead the public administration' and operationalised along five dimensions, defined as follows (Andersen et al., 2020): the extent of centralised control (the degree of recommended centralised control in the vertical chain of command); the emphasis placed on horizontal coordination (the degree of recommended horizontal interagency coordination and collaboration); the extent of use of value articulation (the degree to which public governance should be based on the articulation of public values); the extent to which use is made of incentives (the degree to which public governance should be based on conditional positive and negative incentives); and the extent to which societal involvement is resorted to (the degree to which private for-profit or non-profit actors, including citizens, should be involved in public governance). According to the authors, these quasi-paradigms [are not] paradigms in the Kuhnian sense of the term. However, we agree with Dunleavy and Margetts (2013) that public governance paradigms behave like ordinary paradigms in two important respects. First, they tend to have two levels, with an overall macro-level theory based on a few propositions that pull together and give direction to a wider range of supplementary concepts, detailed recommendations and preferred methods. Second, they develop in response to the problems of their predecessor, enter a period of relatively successful 'normal governance' and are problematized by the accumulation of problems to which they cannot provide an appropriate response. These resemblances to Kuhnian-type scientific paradigms serve to justify the notion of public governance paradigms. (Andersen et al., 2020, n.p.) The introduction of the notion of 'quasi-paradigm' points to the consideration that the border between what constitutes a paradigm, on one hand, and what constitutes a model, on the other hand, may be porous, and intermediate concepts may be usefully wrought out and employed. PRACTICES: GOOD AND BEST Utopias, ideal-types, paradigms and models have crucial significance for the field of PA. However, words like utopias, ideal-types, paradigms and models have been looked at with suspicion in more recent times, partly as a sensible reaction to the failures of utopian-inspired social designs as well as (on a smaller scale) the apparent failure to fulfil the expectations raised by models of PA reform like the New Public Management and a spate of others which followed suit. It is also partly as a reaction to reform models having been deemed to have fallen short of the expectations they raised that international organisations like the Organisation for Economic Co-operation and Development (OECD), which was very active in spreading 'global models of public management reform' during the 1990s, seem to have more recently orientated themselves towards the opposite approach, namely: the search for 'practices that work', which are often in practitioners' discourse called 'best practices'.
It seems that nowadays the practices approach - the extrapolation-based approach - is the prevailing one, notably in practitioners' discourse; its core tenet can be summarised as: 'rather than looking for new models (paradigms, ideal-types, utopias), we must search for practices that work and extrapolate them for replication (properly adapted) elsewhere'. There is much more than meets the eye, however, and the logic of best practices may be both seductive and highly misleading: first, truly 'best' practices are (very) rare and, second, the process of extrapolation and transfer of a practice (better: of the mechanisms that, incorporated into the practice, enable it to achieve certain results in the extant situation) to a target domain in order to replicate the results elsewhere is a major, complex process that may also lead to unexpected consequences (Bardach, 1994 and 1998, Chapter 2; Barzelay, 2007; Bretschneider et al., 2005; Ferlie and Ongaro, 2015, Chapter 8). It is for these reasons that many academics have claimed it better to conceive of the search for 'good' or 'smart' practices, rather than allegedly 'best' ones, that is, the search for practices that work well enough and can be replicated elsewhere, provided context and contextual influences are appropriately taken into account (Behn, 1991; Bardach, 1998; Barzelay, 2007; Ferlie and Ongaro, 2015). The practices approach is appealing to practitioners, notably for its apparent sensibleness and 'pragmatism'. However, even when taking into account the warning against the seductions of naïve interpretations of the logic of 'best' practices, the practice approach may soon reach its limits. This occurs for a deeper reason: a practice-driven focus is unlikely to be equipped with the intellectual resources for escaping the traps of path-dependency. The practice agenda is inherently likely to be drawn into the scouting of the nearby terrain, and to lose sight of the possible alternative views about PA and how the public sector could - and should - be organised. An approach self-confined to detecting practices that work and not complemented by the other approaches is unable to provide breakthrough solutions, or to furnish guidance on how to organise public governance in ways that can anticipate major economic, societal or environmental changes (Pollitt, 2016b). TOWARDS AN INTEGRATED APPROACH: UTOPIAS, IDEAL-TYPES, PARADIGMS, MODELS, AND PRACTICES AS REPERTOIRE OF CONCEPTUAL TOOLS FOR THE BETTERMENT OF PUBLIC ADMINISTRATION The history of public governance and public administration has been made by the combined usage of different, complementary approaches; in this sense, nowadays partly neglected approaches like the usage of models, paradigms, ideal-types and utopias bear continued significance for the field of PA, as does the approach of practices extrapolation, which seems to enjoy wide currency at the time the second edition of this book is being completed. We might indeed see the whole gamut of these approaches as amenable to being ordered as a function of the emphasis on either of the four causes first outlined by Aristotle (see Chapters 2 and 6). Those who subscribe to the Aristotelian approach stress that it is the joint application of the four causes that enables a full understanding of the phenomenon investigated.
However, different agendas of research and epistemic approaches may place a different emphasis on either of the causes. We argue that the logic of the extrapolation of practices is primarily grounded in an emphasis on the efficient (and the material) cause. Conversely, utopias and to a certain extent ideal-types, paradigms and models embody teleological reasoning and take the final cause as the starting point of the inquiry. Finally, all these approaches are concerned with the formal cause (if you subscribe to the Aristotelian approach); however, we would put forward the tentative claim that the 'teleological' approaches - utopias and to a certain extent paradigms and ideal-types - strive to more directly define the 'nature' of the object they conceive and work out, that is, they are more interested in the formal cause (the essence or nature of the entity), whilst practice-based approaches are more focused on the apparent properties of the entity, with an inherent orientation to disregard issues of 'essence' and 'form' (formal cause) as deemed to be ultimately of limited 'pertinence' and 'usefulness'. We can hence now return to Figure 8.1, where we present utopias, paradigms, ideal-types, models, and practices as a range of conceptual tools which may also be seen in a combined way as a function of, first, the extent to which they take the move from actuality (what exists here and now) or potentiality (what may be brought to exist, but does not exist in actuality: it is not entelechy, in Aristotle's terminology, see Chapter 2); and, second, the relative emphasis on either of the four causes that is placed when utilising these notions. Utopian approaches clearly take the move from the final cause, to then turn to tackling the issue of 'how to get there', that is, the enablers (material cause) and the forces (efficient cause) that may lead the system towards the end-goal (to the extent it is desirable - not dystopian - and taking into account that utopias perform more of an iconoclastic function as critique of the present state of affairs to identify ways forward than as a blueprint). Approaches centred on learning from practices and extrapolating practices from one context for adaptation to another take the move from the efficient cause (what is the mechanism that brings about the effect observed in the practice) and the material cause (what provides the conditions and enables something to happen). The formal cause - what is the nature or essence of the object of investigation - is the starting point in modelling, ideal-typing and conceiving of paradigms, with an emphasis on 'what brings about certain effects' in modelling (efficient and material causes), and at least implicitly an emphasis on what the ultimate goal is (final cause) in ideal-types and paradigms. NOTES 1. The network was later dissolved, and the Bertelsmann Foundation initiative is not to be confused with the homonymous EU programme.
7,165.2
2020-07-24T00:00:00.000
[ "Philosophy" ]
A Novel Magnet-Axis-Shifted Hybrid Permanent Magnet Machine for Electric Vehicle Applications

Abstract: This paper proposes a novel magnet-axis-shifted hybrid permanent magnet (MAS-HPM) machine, which features an asymmetrical magnet arrangement, i.e., low-cost ferrite and high-performance NdFeB magnets are placed on the two sides of a "5"-shaped rotor pole. The proposed magnet-axis-shift (MAS) effect can effectively reduce the difference between the optimum current angles for maximizing the permanent magnet (PM) and reluctance torques, and hence the torque capability of the machine can be further improved. The topology and operating principle of the proposed MAS-HPM machine are introduced, and the machine is compared with the BMW i3 interior permanent magnet (IPM) machine as a benchmark. The electromagnetic characteristics of the two machines are investigated and compared by finite element analysis (FEA), which confirms the effectiveness of the proposed MAS design concept for torque improvement.

Introduction

Due to their high torque/power density, high efficiency and excellent flux-weakening capability, interior permanent magnet (IPM) machines are considered competitive candidates for electric vehicles (EVs) [1]. In order to improve the reluctance torque and reduce the magnet usage, multi-layer IPM machines are widely employed in EV applications, such as the BMW i3 traction machine [2]. However, for conventional IPM machines, the optimum current angles for maximizing the reluctance and permanent magnet (PM) torques differ by essentially a 45° electrical angle, which results in a relatively low utilization ratio of the two torque components. Consequently, in order to deal with this issue, hybrid rotor [3-7], dual rotor [8] and asymmetrical PM-assisted synchronous reluctance machines [9] have recently been developed. The constant power-maintaining capabilities of the hybrid rotor configurations are investigated by adopting the parameter equivalent circuit method, which shows that the hybrid rotor topologies have more degrees of freedom for a given constant-power operating range [10]. Moreover, theoretical analysis demonstrates that the PM usage of synchronous machines can be reduced by about 10% with the reluctance axis shifted by a displacement angle of about 60° [11]. Hybrid synchronous machines with a displaced reluctance axis are comparatively studied with conventional pure PM and electrically excited synchronous machines [12], which demonstrates that the hybrid topologies exhibit higher torque and a wider high-efficiency operating range. In addition, the effects of shifting the PM axis with respect to the reluctance axis in PM machines are investigated [13], showing that the asymmetric salient PM machine exhibits higher torque and a wider constant-power speed range [14]. Nevertheless, the hybrid and dual rotor machines suffer from complicated structures, while the latter asymmetrical one is characterized by shifts of both the magnet and reluctance axes, which require relatively sophisticated computational design efforts. Recently, in order to reduce the use of rare-earth NdFeB magnets, the hybrid PM concept has been proposed and developed in rotor PM [15,16] and stator PM [17-22] configurations. Compared with the structure of conventional spoke-type magnets, the proposed hybrid PM topology exhibits better field-weakening capability and lower total cost [15].
Besides, compared with a double-layer PM structure, the U-shaped configuration has a good irreversible demagnetization withstanding capability [16]. Due to the variable magnetization state of the low-coercive-force AlNiCo magnets, flexible air-gap flux adjustment and a wide operating range with high efficiency can be readily achieved in stator hybrid PM machines [16-22].

A novel magnet-axis-shifted hybrid PM (MAS-HPM) machine combining the asymmetric and hybrid PM concepts is proposed in this paper. The purpose of this paper is to propose an MAS-HPM machine for torque performance improvement. The proposed configuration features an asymmetrical PM arrangement, i.e., low-cost ferrite and high-performance NdFeB magnets, which significantly reduces the difference between the optimum current angles for maximizing the PM and reluctance torques. Hence, the torque capability can be further improved. In order to validate the merits of the magnet-axis-shift (MAS) effect, the IPM machine of a BMW i3 vehicle is used as a benchmark. The basic electromagnetic characteristics of the two machines are comparatively investigated, which confirms the validity of the proposed MAS design concept.

Machine Topologies

The topologies of the benchmark 2016 BMW i3 IPM machine and the proposed MAS-HPM machine are shown in Figure 1a,b, respectively. The main design parameters are tabulated in Table 1. It should be noted that the proposed machine shares the same inverter power ratings, stator structure, active stack length and air-gap length as the BMW i3 IPM machine. Meanwhile, in order to make a fair comparison, the rare-earth PM usages are identical in the two structures. The main difference between the two machines lies in the fact that two kinds of PM, i.e., low-cost ferrite and high-performance NdFeB magnets, are simultaneously employed in the developed MAS-HPM machine to achieve the MAS effect. The total costs of the magnets are given in Table 1. Due to the additional ferrite magnets, the proposed machine has a slightly higher total magnet cost than the BMW i3 IPM counterpart. However, compared with the BMW i3 IPM machine, the ratio of the peak torque to the total magnet cost in the MAS-HPM configuration is increased by about 7.81%, which indicates that the torque capability can be improved by 7.81% at the same magnet cost.

The d- and q-axes' equivalent electrical circuits are illustrated in Figure 2. In the synchronous reference frame, the voltage equations for the PM synchronous machine are expressed in the standard dq form, where R is the stator resistance, ω is the electric frequency, ψ_d and ψ_q are the d- and q-axes' flux linkages, respectively, and i_d and i_q are the d- and q-axes' currents, respectively. By applying Kirchhoff's voltage and current laws to both the d- and q-axes, four equations relating these quantities can be obtained, where R_Fe,d and R_Fe,q are the iron-loss resistances in the d- and q-axes, respectively, i_di and i_qi are the iron-loss currents in the d- and q-axes, respectively, and i_dm and i_qm are the d- and q-axes' magnetization currents, respectively.
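As a sketch, the standard dq-frame voltage equations consistent with the symbols just defined (the original paper's numbered equations may differ in detail) read:

    v_d = R i_d + dψ_d/dt − ω ψ_q,
    v_q = R i_q + dψ_q/dt + ω ψ_d.

In the usual iron-loss equivalent circuit, the terminal currents additionally split into iron-loss and magnetization components, i.e., i_d = i_di + i_dm and i_q = i_qi + i_qm, with i_di and i_qi flowing through R_Fe,d and R_Fe,q; this split is an assumption of the sketch, consistent with the symbol definitions above.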
MAS Principle

The total torque T_total of an IPM machine, including the PM torque T_PM and the reluctance torque T_r, can be expressed as [23]:

    T_total = T_PM + T_r,
    T_PM = (3/2) p_r ψ_f i_s cos β,
    T_r = (3/4) p_r (L_q − L_d) i_s^2 sin 2β,

where p_r, ψ_f, i_s, L_d and L_q are the rotor pole-pair number, the PM flux linkage, the phase current, and the d- and q-axes' inductances, respectively, and β is the current angle, defined as the angle between the phase current and the open-circuit back electro-motive force (EMF) [24]. From Equations (3)-(5), it can be found that the optimum current angle for T_r is theoretically twice that for T_PM. If the difference between the optimum current angles for maximizing the two kinds of torques can be reduced, the torque capability of the machine will be improved. To achieve this goal, this paper proposes an asymmetrical PM arrangement employing the HPM configuration, i.e., low-cost ferrite and high-performance NdFeB magnets, which is termed the MAS effect. In this case, the magnet axis is shifted while the reluctance axis is unchanged, due to the symmetrical rotor configuration. Thus, the difference γ_s between the current angles at which T_PM and T_r reach their maxima can be reduced; it is defined as

    γ_s = β_R − β_PM,

where β_R and β_PM are the optimum current angles for the reluctance and PM torques, respectively.

The flux density distributions of the two machines are calculated by finite element analysis (FEA) and illustrated in Figure 3. It can be seen that the d-axis is shifted by an angle α_s in the proposed machine under the open-circuit condition, as shown in Figure 3a, which confirms the MAS effect. The reluctance d- and q-axes are not changed in the two machines, as shown in Figure 3b, which is mainly attributed to the design of the symmetrical flux barriers in the two rotor configurations. The flux density distributions of the two machines at the peak current load condition are given in Figure 3c. Due to the dual excitation by the armature windings and the PMs, the two machines under the load condition have higher flux densities than at the other operating conditions.

To clearly understand the MAS effect, the open-circuit air-gap flux density waveforms are given in Figure 4. Compared with the d-axis in the BMW i3 IPM machine, a displacement of the actual d-axis occurs in the proposed topology, which means that the magnet and reluctance axes grow closer by using the HPM configuration. Consequently, the resultant current angles for optimizing the reluctance and PM torques are closer, which enables the torque improvement.
Moreover, the fundamental amplitude of the air-gap flux density in the MAS-HPM machine is found to be 53.70% higher than that of the BMW i3 IPM machine, as reflected in Figure 4b. Due to the asymmetrical PM configuration, larger high-order harmonics of the air-gap flux density are observed in the MAS-HPM machine.

Electromagnetic Performance Comparison

In order to validate the MAS effect, the basic electromagnetic characteristics of the proposed MAS-HPM machine are comparatively studied with those of the BMW i3 IPM machine in this section. In order to reduce the computational time, 1:12 scale models are adopted for the two machines. The simulation time is 2.5 h.

Open-Circuit Performance

The back-EMF waveforms of the two investigated machines are shown in Figure 5. Compared with the BMW i3 IPM machine, the proposed configuration exhibits a 53.54% higher back-EMF fundamental amplitude, which indicates that the magnet torque can be effectively improved by using the HPM configuration. In addition, the cogging torque waveforms of the two machines are shown in Figure 6; they exhibit the same periods due to the same numbers of stator slots and rotor poles. Because the air-gap flux density contains larger high-order harmonics, as shown in Figure 3b, the MAS-HPM structure has a higher cogging torque amplitude. The ratios of the cogging torque amplitudes to the corresponding peak torque values in the BMW i3 IPM and MAS-HPM machines are 0.73% and 2.04%, respectively, which are lower than the acceptable value of 2.5%.

Torque Characteristics

The torque versus current angle characteristics of the two machines are illustrated in Figure 7. The PM and reluctance torques are separated by using the frozen permeability method [25]. It can be seen that the γ_s of the proposed MAS-HPM machine is smaller than that of the BMW i3 machine. As a result, a higher torque capability can be obtained in the HPM case, as evidenced in Figure 8. Moreover, due to the MAS effect, the ripple patterns of the PM and reluctance torques of the proposed machine are different, which results in a torque ripple offset effect. Hence the HPM configuration exhibits a 55.99% lower torque ripple than the BMW i3 IPM machine, as shown in Figure 8b. The average torque versus phase current curves of the two machines are shown in Figure 9. It can be observed that the MAS-HPM machine has a higher torque capability regardless of the applied load. As a whole, the feasibility of the proposed MAS-HPM design for torque performance improvement is clearly confirmed.
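To make the role of γ_s concrete, the following minimal Python sketch (all parameter values are illustrative assumptions, not taken from this paper) evaluates the torque expressions above for a symmetric machine and for a hypothetical magnet-axis shift α_s, and reports the optimum angles and the attainable peak torque:

    import numpy as np

    # Illustrative (hypothetical) machine parameters -- not from the paper.
    p_r   = 4                     # rotor pole-pair number
    psi_f = 0.05                  # PM flux linkage [Wb]
    i_s   = 200.0                 # phase current amplitude [A]
    L_d, L_q = 0.2e-3, 0.6e-3     # d/q inductances [H]

    beta = np.radians(np.linspace(0.0, 90.0, 9001))   # current angle sweep

    def torques(alpha_s_deg=0.0):
        """PM, reluctance and total torque vs current angle beta.
        A magnet-axis shift alpha_s moves only the PM-torque peak."""
        a = np.radians(alpha_s_deg)
        t_pm = 1.5 * p_r * psi_f * i_s * np.cos(beta - a)
        t_r  = 0.75 * p_r * (L_q - L_d) * i_s**2 * np.sin(2 * beta)
        return t_pm, t_r, t_pm + t_r

    for shift in (0.0, 15.0):                       # hypothetical 15 deg MAS
        t_pm, t_r, t_tot = torques(shift)
        b_pm = np.degrees(beta[np.argmax(t_pm)])    # optimum angle for T_PM
        b_r  = np.degrees(beta[np.argmax(t_r)])     # optimum angle for T_r
        print(f"alpha_s={shift:4.1f} deg: beta_PM={b_pm:5.1f} beta_R={b_r:5.1f}"
              f" gamma_s={b_r - b_pm:5.1f} peak T={t_tot.max():7.2f} N*m")

With these toy numbers, the 15° shift reduces γ_s from 45° to 30° and raises the attainable peak torque by roughly 8%, which is the qualitative behaviour the MAS concept exploits.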
Torque/Power versus Speed Curves

The torque and power versus speed curves of the two machines are illustrated in Figure 10. It can be seen that the MAS-HPM machine exhibits higher torque and power than the BMW i3 IPM machine over the whole operating range, consequently achieving a better high-speed constant power-maintaining capability.

Irreversible Demagnetization

The flux density distributions of the magnets are illustrated in Figure 11. When the working temperature is set as 100 °C, the knee points of the ferrite and NdFeB magnets are −0.15 and −0.6 T, respectively. It can be observed that irreversible demagnetization of the ferrite and NdFeB magnets does not occur. In order to quantitatively illustrate the flux density variations of the magnets, five typical points are selected in three magnets, as shown in Figure 11. The corresponding flux density variations of the five typical points on the magnets are given in Figure 12. It can be seen that the working points of the ferrite and NdFeB magnets remain above the respective knee points, which indicates that a good demagnetization withstanding capability can be achieved.
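The demagnetization check described above amounts to verifying that the minimum flux density seen at each observation point stays above the magnet's knee point. A minimal Python sketch of that bookkeeping (the point values are invented for illustration; only the two knee points are taken from the text):

    # Knee points at 100 degC, from the text: ferrite -0.15 T, NdFeB -0.6 T.
    KNEE = {"ferrite": -0.15, "NdFeB": -0.60}

    # Hypothetical minimum flux densities over one electrical cycle at the
    # five observation points (values invented for illustration).
    points = [
        ("P1", "ferrite", 0.12),
        ("P2", "ferrite", 0.05),
        ("P3", "NdFeB",   0.45),
        ("P4", "NdFeB",   0.30),
        ("P5", "NdFeB",   0.22),
    ]

    for name, material, b_min in points:
        margin = b_min - KNEE[material]     # > 0 means no irreversible loss
        status = "OK" if margin > 0 else "DEMAGNETIZED"
        print(f"{name} ({material}): B_min={b_min:+.2f} T, "
              f"margin={margin:.2f} T -> {status}")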
Rotor Mechanical Analyses

The rotor mechanical strengths of the two machines are investigated at the maximum speed of 12,000 rpm in this section. The von Mises stress maps are shown in Figure 13. It can be observed that the peak stress of the MAS-HPM machine (268.8 MPa) is slightly lower than that of the BMW i3 IPM machine (282.4 MPa), and both are lower than the threshold yield value (396 MPa). Due to differences in mesh subdivision, a mismatch between the maximal values occurs at the two sides of the symmetrical configurations. However, the difference in stress values between the points of the symmetrical structure is very small and thus negligible, as shown in Figure 13. As a result, it is confirmed that the proposed rotor configuration can withstand the large centrifugal force at the maximum speed of 12,000 rpm.

The iron losses of the two machines at different speeds are given in Figure 14. It can be observed that the stator iron losses dominate the total iron losses in both machines at the rated load. The iron losses of the two machines are very close when the speed is lower than 4000 rpm. However, due to higher harmonics, the HPM structure produces a larger iron loss than the BMW i3 IPM machine when the speed exceeds 4000 rpm. Furthermore, the efficiency maps of the two cases are illustrated in Figure 15. The maximum efficiency of the proposed MAS-HPM machine (95.79%) is slightly higher than that of the BMW i3 IPM (95.57%). Due to the higher iron loss in the high-speed range, the proposed structure shows a relatively lower efficiency when the speed exceeds 10,000 rpm. Nevertheless, the MAS-HPM machine still exhibits a similar operating range where the efficiency is higher than 93%.

Conclusions

A novel MAS-HPM machine is proposed in this paper to achieve a higher torque capability and a wider high-efficiency operating range for EV applications. The basic electromagnetic characteristics of the proposed MAS-HPM machine and the benchmark BMW i3 IPM machine are comprehensively investigated and compared by FEA. Due to the MAS effect, the difference between the optimal current angles maximizing the magnet and reluctance torques is reduced. In addition, it is found that the back-EMF and total torque of the proposed MAS-HPM machine can be effectively improved compared with the conventional BMW i3 IPM machine. Moreover, the proposed machine shows lower peak mechanical stress, better field-weakening capability, higher peak efficiency and a comparable high-efficiency operating range, which confirms the effectiveness of the proposed MAS design concept for performance improvement. However, due to higher harmonics, the proposed MAS-HPM configuration has higher cogging torque and iron losses in the high-speed operating range.
5,412.4
2019-02-16T00:00:00.000
[ "Physics" ]
Ionospheric total electron content of comet 67P/Churyumov-Gerasimenko

We study the evolution of a cometary ionosphere, using approximately two years of plasma measurements by the Mutual Impedance Probe on board the Rosetta spacecraft monitoring comet 67P/Churyumov-Gerasimenko (67P) during August 2014-September 2016. The in situ plasma density measurements are utilized to estimate the altitude-integrated electron number density, or cometary ionospheric total electron content (TEC), of 67P based on the assumption of a radially expanding plasma. The TEC is shown to increase with decreasing heliocentric distance (r_h) of the comet, reaching a peak value of ∼(133 ± 84) × 10^9 cm^-2 averaged around perihelion (r_h < 1.5 au). At large heliocentric distances (r_h > 2.5 au), the TEC decreases by ∼2 orders of magnitude. For the same heliocentric distance, TEC values are found to be significantly larger during the post-perihelion periods compared to the pre-perihelion TEC values. This "ionospheric hysteresis effect" is more prominent in the southern hemisphere of the comet and at large heliocentric distances. A significant hemispheric asymmetry is observed during perihelion, with approximately two times larger TEC values in the northern hemisphere compared to the southern hemisphere. The asymmetry is reversed and stronger during the post-perihelion (r_h > 1.5 au) periods, with approximately three times larger TEC values in the southern hemisphere compared to the northern hemisphere. Hemispheric asymmetry was less prominent during the pre-perihelion intervals. The correlation of the cometary TEC with the incident solar ionizing fluxes is maximum around and slightly after perihelion (1.5 au < r_h < 2 au), while it significantly decreases at larger heliocentric distances (r_h > 2.5 au), where the photo-ionization contribution to the TEC variability decreases. The results are discussed based on cometary ionospheric loss processes.

Introduction

The main aim of this work is to study the evolution of a cometary ionosphere with the heliocentric distance and life cycle of a comet. It is based on the in situ plasma measurements by Rosetta (Glassmeier et al. 2007) around comet 67P/Churyumov-Gerasimenko (hereafter referred to as 67P; Churyumov & Gerasimenko 1972). The Rosetta spacecraft monitored the cometary plasma environment from 2014 August 6 to 2016 September 30. During this time interval, comet 67P moved from a heliocentric distance of ∼3.6 au toward the Sun, attained a perihelion distance of ∼1.2 au from the Sun, and again moved away from the Sun, as far as ∼3.8 au, until the Rosetta operations were terminated. This enabled Rosetta to explore the evolution of the cometary ionosphere from a weak activity state to a highly active state, and again back to a quiet state. The 67P cometary ionosphere studies reported earlier are based on plasma measurements along the Rosetta orbiter spacecraft trajectory, which varied largely between 0 and ∼1500 km from the comet nucleus. However, a cometary ionosphere has a large altitudinal extent, which depends on its activity level. As the plasma density varies significantly depending on the cometocentric distance (e.g., Edberg et al. 2015), in situ spacecraft plasma measurements cannot directly give a complete description of the cometary ionosphere.

Data and method of analyses

The total electron density (N_e) at the spacecraft location was deduced from the mutual impedance spectra obtained from the Mutual Impedance Probe (MIP; Trotignon et al. 2007) of the Rosetta Plasma Consortium (RPC; Carr et al.
2007) on board the Rosetta spacecraft. The RPC-MIP is a linear quadrupolar electrode array consisting of two transmitting electric monopoles and one receiving electric dipole, mounted on a 1 m long carbon fiber-reinforced plastic cylindrical bar with a diameter of 2 cm. Each of the electrodes is 20 cm long with a diameter of 1.1 cm. Each of the transmitting monopoles is at a 40 cm distance from the nearest receiver, and the largest distance between a transmitter and a receiver is 1 m. A mutual impedance spectrum is produced by feeding sinusoidal currents at different frequencies to the transmitters and simultaneously measuring the voltage difference on a receiving dipole, with both electric dipoles embedded in the plasma to be measured. When the plasma Debye length (λ_D) is smaller than the transmitter-receiver distance, the electron plasma frequency f_p can be identified from a resonance in the mutual impedance spectrum. The electron density N_e is estimated from f_p as N_e ≈ (f_p/8.98)^2, where f_p is in kHz and N_e in cm^-3. For cases in which the plasma Debye length is too large (i.e., the plasma density is too small), typically in the range 40 cm < λ_D < 4 m, RPC-MIP could make use of one of the two probes of the LAngmuir Probe (RPC-LAP; Eriksson et al. 2007) instrument as an additional electric transmitter, located 4 m away from the RPC-MIP receiver. The RPC-MIP operational mode that makes use of the RPC-MIP transmitters is known as the short Debye length (SDL) mode, while the mode that makes use of RPC-LAP1 as a transmitter is referred to as the long Debye length (LDL) mode.

Using the above principle, the cometary ionospheric electron density N_e was estimated along the Rosetta trajectories around comet 67P. For this long-term study, we utilized the average N_e over a time window of 320 s when the number of density measurements extracted from the RPC-MIP spectra exceeds 50% of the total number of available spectra during that time window. During the approximately two-year-long period, Rosetta monitored in situ the cometary plasma environment from a varying distance between 0 and ∼1500 km. Vigren & Galand (2013) predict an N_e altitude profile with a peak ionospheric density above the comet surface. From the analysis of the near-surface cometary ionospheric density measurements during the final descent of the Rosetta spacecraft to 67P, a peak in N_e was identified and found to be located at ∼5 km from the cometary nucleus center (Heritier et al. 2017). This result therefore confirmed the previous theoretical expectations of the cometary ionosphere peak density location. This location was shown to depend only on the geometry of the nucleus, and to be independent of solar or other external conditions. Using these results, and the fact that N_e follows an r_c^-1 dependence above the altitude of the peak density (Edberg et al. 2015; Heritier et al. 2017), where r_c is the cometocentric distance, we schematically present the r_c-dependent ionospheric N_e profile of 67P in Fig. 1 (schematic N_e profile of comet 67P as a function of cometocentric distance r_c; N_p denotes the peak plasma density and r_p the corresponding cometocentric distance). Based on this simple schematic consideration, we define the cometary TEC, or altitude-integrated electron number density, as

    TEC = ∫ from r_o to H of N_e(r_c) dr_c,   (1)

where N_e(r_c) follows the schematic profile of Fig. 1. In Equation (1), r_o is the average radius of comet 67P, typically taken as 2 km.
The quantity N_p represents the peak plasma density and r_p is the corresponding cometocentric distance, which is considered to be 5 km. For practical reasons, we take the upper limit of integration H as 500 km. This ensures convergence and is justified by the fact that the cometary plasma density is shown to follow a much steeper variation, closer to r_c^-2, at larger distances (see Behar et al. 2018), associated with the cometary ion pick-up process in the incoming magnetized solar wind flow; this results in an insignificant contribution to the cometary ionospheric TEC at large cometocentric distances. It should be noted that this TEC estimation is based on an assumption of radially expanding plasma. However, a radial expansion at constant plasma velocity may not be expected out to several hundred km, particularly during a low activity period. During such conditions, density contributions from distances above 100 km may be significantly low. This is discussed later in the paper. Using the above relations, TEC is finally expressed as a function of N_e and r_c:

    TEC = 4.9 × 10^5 r_c N_e          for 500 km ≥ r_c > 5 km,
    TEC = 73.6 r_c^-2 × 10^5 N_e      for 2 km < r_c ≤ 5 km,

where N_e and r_c are obtained from Rosetta measurements in cm^-3 and km, respectively, and TEC is obtained in cm^-2. The TEC represents the total number of free thermal electrons contained in a column of unit cross-section along a vertical propagation path from the comet surface to an altitude of 500 km. Because it is an altitude-integrated parameter, TEC is supposed to give a description of the cometary ionosphere that is independent of the Rosetta cometocentric distance r_c at the time of measurement. Therefore TEC is more suitable than a single-point N_e value for studying the global structure of the cometary ionosphere. For example, TEC can give a more complete idea about the altitude extent of the cometary diamagnetic cavity void of magnetic fields (e.g., Goetz et al. 2016a,b; Nemeth et al. 2016; Timar et al. 2017), and about the global impacts on the cometary atmosphere of space weather events like coronal mass ejections (CMEs; e.g., Witasse et al. 2017; Goetz et al. 2019) and corotating interaction regions (CIRs) between solar wind high-speed streams and slow streams (e.g., Edberg et al. 2016; Hajra et al. 2018b).
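As an illustration of the measurement chain just described, a minimal Python sketch converting a MIP plasma-frequency resonance to local density and then to TEC (the coefficients follow the piecewise relation above as printed; the example numbers are invented):

    def density_from_fp(fp_khz):
        """Electron density [cm^-3] from the plasma-frequency resonance
        f_p [kHz] identified in a mutual impedance spectrum."""
        return (fp_khz / 8.98) ** 2

    def tec_from_measurement(n_e, r_c):
        """Cometary TEC [cm^-2] from a single in situ density n_e [cm^-3]
        measured at cometocentric distance r_c [km], assuming the radially
        expanding profile of Fig. 1 (valid for 2 km < r_c <= 500 km)."""
        if 5.0 < r_c <= 500.0:
            return 4.9e5 * r_c * n_e
        if 2.0 < r_c <= 5.0:
            return 73.6e5 * r_c ** -2 * n_e
        raise ValueError("r_c outside the 2-500 km range of the model")

    # Invented example: a 300 kHz resonance observed at r_c = 100 km.
    n_e = density_from_fp(300.0)            # ~1116 cm^-3
    print(f"N_e = {n_e:.0f} cm^-3, "
          f"TEC = {tec_from_measurement(n_e, 100.0):.2e} cm^-2")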
The main neutral species present in the 67P coma are reported to be H_2O, CO_2, and CO (Hässig et al. 2015; Le Roy et al. 2015; Fougere et al. 2016). Their ionization threshold wavelengths are ∼98, 90, and 89 nm, respectively, below which absorption of solar photons can lead to ionization (e.g., Galand et al. 2016). Thus the solar flux dependence of the cometary plasma can be studied by considering the solar extreme ultraviolet (EUV) radiation. As there was no EUV solar flux monitor on board the Rosetta spacecraft, we used the daily average spectral solar fluxes obtained from the Thermosphere Ionosphere Mesosphere Energetics and Dynamics-Solar EUV Experiment (TIMED-SEE; Woods et al. 2005). The fluxes are corrected for the 67P orbit by considering the Earth-Sun-67P angle and an interplanetary solar rotation period of 26 days (Withers & Mendillo 2005). Since the solar flux is proportional to the inverse square of the distance from the Sun, the actual fluxes (EUVc) incident on the comet nucleus are estimated by taking into account the heliocentric distance of comet 67P (once the shift in angle has been considered).

According to analytical ionospheric modeling at the comet (Galand et al. 2016; Vigren et al. 2016), the cometary plasma density N_e^ph(r_c) at a cometocentric distance r_c due to photo-ionization of cometary neutral species l by solar ionizing fluxes is estimated as

    N_e^ph(r_c) = ν^ph n_n(r_c) r_c / u_i,   with
    ν^ph_l = ∫ from λ_min to λ_th of σ^ph_l(λ) F_c(λ) dλ,

where ν^ph_l is the photo-ionization frequency of the cometary outgassing neutral species l, u_i is the ion bulk velocity, n_n(r_c) is the cometary neutral density at r_c, σ^ph_l(λ) is the total photo-ionization cross-section of the neutral species l having ionization threshold and minimum wavelengths λ_th and λ_min, respectively, and F_c(λ) is the un-attenuated solar ionizing flux at the comet ionosphere. We estimated the photo-ionization frequency, the electron density N_e along the Rosetta trajectory, and the TEC due to photo-ionization using the above relationships. The TIMED-SEE spectral solar fluxes extrapolated as described above are used to estimate ν^ph_l, following Heritier et al. (2018). We considered photo-ionization cross-sections separately for the dominating species H_2O, given in Vigren & Galand (2013), and for CO_2, given in Cui et al. (2011). The same procedure was used by Galand et al. (2016) and Heritier et al. (2018) for modeling the 67P in situ ionospheric plasma, although the above-mentioned authors also included electron-impact ionization. The λ_min is taken as 0.1 nm. Accounting for the range of neutral outflow velocities, we estimated a range of N_e and TEC for u_i of 500 m s^-1 and 900 m s^-1 (Gulkis et al. 2015; Lee et al. 2015; Galand et al. 2016; Hansen et al. 2016; Marshall et al. 2017). The cometary neutral density n_n measured by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor (ROSINA/COPS; Balsiger et al. 2007) was used for the present analysis. It may be mentioned that the ROSINA/COPS n_n measurement is sensitive to the neutral composition (Gasc et al. 2017). However, in our model calculation, we took care of this factor in the estimation of the photo-ionization frequencies of the species (see Galand et al. 2016).
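As a rough illustration of these relations, the Python sketch below evaluates ν^ph by wavelength integration and the resulting photo-ionization density; the cross-section and flux arrays are placeholder values, not the Vigren & Galand (2013) or TIMED-SEE data:

    import numpy as np

    # Placeholder spectral grid [nm], flux [photons cm^-2 s^-1 nm^-1] and
    # H2O photo-ionization cross-section [cm^2] -- illustrative values only.
    lam       = np.linspace(0.1, 98.0, 500)       # lambda_min to lambda_th
    flux_c    = 1.0e9 * np.exp(-lam / 40.0)       # made-up EUVc spectrum
    sigma_h2o = 2.0e-17 * (lam / 98.0)            # made-up cross-section

    # Photo-ionization frequency: nu = integral of sigma(lambda)*F_c(lambda).
    nu_ph = np.trapz(sigma_h2o * flux_c, lam)     # [s^-1]

    def n_e_photo(n_n, r_c_km, u_i=700.0):
        """Photo-ionization plasma density [cm^-3] at r_c, for neutral
        density n_n [cm^-3] and an assumed mid-range ion bulk speed
        u_i [m s^-1]."""
        r_c_cm = r_c_km * 1.0e5
        u_i_cm = u_i * 1.0e2
        return nu_ph * n_n * r_c_cm / u_i_cm

    print(f"nu_ph = {nu_ph:.2e} s^-1, "
          f"N_e^ph = {n_e_photo(1.0e7, 100.0):.1f} cm^-3")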
Cometary TEC variation during the entire Rosetta mission: an overview

The variations of the estimated TEC and the in situ electron density N_e along the spacecraft trajectory, along with the heliocentric distance r_h of comet 67P and the cometocentric distance r_c of Rosetta from 67P, are shown in Fig. 2 for the entire mission operation interval. The blue data points in the top two panels correspond to 320 s average RPC-MIP measurements with >50% detection ratio (see Sect. 2), while the red points correspond to their "daily averages". It should be noted that the comet exhibits roughly two rotations per 24 h period, thus the daily average is performed over roughly two cometary rotations. We also excluded the intervals with prominent solar and interplanetary disturbances (Hajra et al. 2018a; Goetz et al. 2019) to study the "quiet-time" ionospheric behavior. The day with the cometary brightness outburst was also excluded. From Fig. 2, discontinuities can be recorded in N_e and TEC during June 2015. It may be noted that there was a change in the Rosetta operation mode on 2015 June 29; LDL was mostly used before and SDL mostly used after. The maximum density retrieved in LDL mode is ∼350 cm^-3, while plasma density below ∼500 cm^-3 cannot be measured in SDL mode.

During the entire Rosetta mission operation period, a maximum TEC of ∼555 × 10^9 cm^-2 was estimated on 2015 September 7, at a heliocentric distance of ∼1.28 au. However, plasma density variations below ∼10% would not be detected because of the finite frequency resolution in the RPC-MIP operational mode used to retrieve the plasma density. This may introduce errors in the lowest TEC values, as TEC is a linear function of the in situ plasma density. For statistical analysis and comparison, the entire Rosetta observation period is divided into five intervals according to the heliocentric distance and orbital position of comet 67P. These consist of perihelion, where r_h < 1.5 au; medium heliocentric distances of 1.5 au < r_h < 2 au before and after perihelion; and large heliocentric distances r_h > 2.5 au before and after perihelion. The intervals are shown in Table 1. The corresponding seasonal information taken from Heritier et al. (2018) is also shown. The statistical characteristics of TEC during these intervals are summarized in Table 2. The TEC exhibits large variability, as evident from the significant standard deviations. As 67P approached the Sun, TEC increased and reached its maximum following perihelion, after which it decreased with increasing heliocentric distance. The average diurnal TEC during perihelion is ∼(133 ± 84) × 10^9 cm^-2, that is, significantly larger than the ∼(1-3) × 10^9 cm^-2 recorded at large heliocentric distances (>2.5 au).

Cometary TEC dependence on heliocentric distance

The variation of TEC as a function of the heliocentric distance r_h of 67P is shown in Fig. 3. The green circles in the top panel show all TEC values during the entire mission (320 s averages), while the blue and red triangles show the daily average TEC during the pre- and post-perihelion periods, respectively. The middle and bottom panels show data separately for the northern and southern hemispheres, respectively. A clear decrease in TEC with increasing r_h can be noted during both the pre- and post-perihelion periods. The blue and red lines (top panel) represent the regression equations obtained from a regression analysis between the logarithm of TEC and r_h during the pre- and post-perihelion periods, respectively. From this analysis, TEC can be expressed as an exponential function of r_h: TEC = A exp(−B r_h), where A and B are constants. The values of the constants and the corresponding correlation coefficients (cc) are given in Table 3. The high values of cc confirm a significant association of TEC with r_h. However, at the same r_h, TEC values during the post-perihelion period are significantly (roughly two to four times) larger than the pre-perihelion TEC values. This "ionospheric hysteresis effect" is more prominent at large heliocentric distances. This result is also consistent with the results shown in Table 2.

Table 3. Cometary TEC (in 10^9 cm^-2) vs. heliocentric distance r_h (in au) during pre- and post-perihelion periods.

    Phase            Relationship                        cc
    Pre-perihelion   TEC = 14.3 × 10^4 exp(−2.5 r_h)     −0.93
    Post-perihelion  TEC = 6.7 × 10^4 exp(−1.8 r_h)      −0.91
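Exponential fits of this kind can be reproduced with an ordinary log-linear least-squares regression; a minimal Python sketch (with invented sample data, not the mission measurements):

    import numpy as np

    # Invented (r_h [au], TEC [10^9 cm^-2]) samples standing in for the data.
    r_h = np.array([1.3, 1.6, 2.0, 2.6, 3.0, 3.4])
    tec = np.array([130.0, 60.0, 25.0, 6.0, 3.0, 1.5])

    # Fit log(TEC) = log(A) - B * r_h, i.e. TEC = A * exp(-B * r_h).
    slope, logA = np.polyfit(r_h, np.log(tec), 1)
    A, B = np.exp(logA), -slope

    cc = np.corrcoef(r_h, np.log(tec))[0, 1]   # cc of the log-linear relation
    print(f"TEC ~ {A:.1f} * exp(-{B:.2f} * r_h), cc = {cc:.2f}")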
Figure 3 (middle and bottom panels) shows a clear hemispheric dependence of the hysteresis effect. In the northern hemisphere, the pre- and post-perihelion TEC values are quite comparable at all heliocentric distances. However, in the southern hemisphere, the post-perihelion TECs are significantly larger than the pre-perihelion values. A more detailed study of the hemispheric asymmetry is presented in Sect. 3.3.

Hemispheric asymmetry of cometary TEC

Previous studies (e.g., Edberg et al. 2016; Galand et al. 2016; Hajra et al. 2018a) reported that the cometary N_e variation exhibits dependences on the cometary sub-spacecraft latitude (λ) and longitude (θ). To verify any λ-θ dependence of TEC, we developed average quiet-time TEC maps for the five intervals shown in Table 1. The maps are shown in Fig. 4. The average TEC values at each λ-θ grid point are shown in the associated color scales. In general, the average TEC values decrease with increasing heliocentric distance before and after perihelion. The TEC is larger in the post-perihelion periods than in the pre-perihelion intervals for the same heliocentric distance range. These results are consistent with those depicted in Fig. 3 and Table 2. In addition, the variation in the hemispheric asymmetry is shown in Fig. 4. During post-perihelion, at both 1.5 au < r_h < 2 au and r_h > 2.5 au, TEC values are prominently larger in the southern hemisphere (∼(50-70) × 10^9 cm^-2 and ∼(6-10) × 10^9 cm^-2, respectively) than in the northern hemisphere (∼(20-30) × 10^9 cm^-2 and ∼(2-3) × 10^9 cm^-2, respectively). Thus, on average, TEC in the southern hemisphere is approximately three times larger than the TEC in the northern hemisphere during the post-perihelion period. During perihelion, TEC estimations are available from ∼70° N latitude to ∼60° S latitude of the comet. Around the low- to mid-latitude zone (∼20-50°), TEC values of ∼(150-200) × 10^9 cm^-2 are recorded in the southern hemisphere, while TEC varies around ∼(350-400) × 10^9 cm^-2 in the northern hemisphere. Thus, during perihelion, TEC in the northern hemisphere is approximately two times larger than the southern hemispheric TEC on average. This clearly shows a reversal of the hemispheric asymmetry between perihelion, with higher TEC in the northern hemisphere, and post-perihelion, with higher TEC in the southern hemisphere. However, such hemispheric asymmetry seems to be less prominent during the pre-perihelion intervals, when overall TEC values were smaller.
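Such latitude-longitude maps are essentially bin-averaged TEC values over a λ-θ grid; a compact way to build them in Python (a sketch with invented sample arrays and an arbitrarily chosen 30° bin width, not the grid used in the paper):

    import numpy as np

    # Invented samples: sub-spacecraft latitude [deg], longitude [deg], TEC.
    lat = np.random.uniform(-90.0, 90.0, 5000)
    lon = np.random.uniform(0.0, 360.0, 5000)
    tec = np.random.lognormal(mean=3.0, sigma=1.0, size=5000)  # [10^9 cm^-2]

    lat_edges = np.arange(-90.0, 91.0, 30.0)
    lon_edges = np.arange(0.0, 361.0, 30.0)

    # Average TEC per cell = (sum of TEC per cell) / (sample count per cell).
    tec_sum, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges],
                                   weights=tec)
    n_obs, _, _   = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
    tec_map = np.divide(tec_sum, n_obs,
                        out=np.full_like(tec_sum, np.nan), where=n_obs > 0)

    print(tec_map.shape)   # (6, 12) cells of 30 deg x 30 deg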
Cometary TEC dependence on solar ionizing fluxes

Figure 5 shows the variation of the diurnal median TEC with the ionizing EUV fluxes incident on the comet 67P ionosphere (EUVc) at different phases of the cometary activity (Table 1). The TEC is found to increase linearly with increasing EUVc. The correlation coefficient cc of the linear regression between TEC and EUVc exhibits an interesting dependence on the heliocentric distance r_h. During both the pre- and post-perihelion periods, cc increases with decreasing heliocentric distance. The best correlation is, however, recorded at distances 1.5 au < r_h < 2 au, and it decreases at larger heliocentric distances r_h > 2.5 au. The relationships are shown in Fig. 5.

Considering the contributions of solar ionizing fluxes to the cometary plasma, TEC values due to photo-ionization are estimated for the entire mission. These are compared with the estimated values (red) from the RPC-MIP observations in Fig. 6. The blue and green points in the top two panels correspond to model values computed using ion bulk velocities of 500 and 900 m s^-1, respectively. In the top panel, the TEC model values are estimated by considering only photo-ionization of H_2O, while the TEC model estimations in the second panel from the top correspond to CO_2 as the only ionized neutral species. It is interesting to note that the estimated electron density due to photo-ionization of CO_2 is larger than that due to photo-ionization of H_2O by ∼20-40%. This is consistent with results shown by Galand et al. (2016). According to their model study for a pre-perihelion period when comet 67P was at a heliocentric distance of ∼3 au, the photo-ionization frequency increased by at most 18% from a pure H_2O coma to a half-CO_2 and half-H_2O mixture (Läuter et al. 2018).

The diurnal average TEC values and standard deviations obtained from the actual observations are shown by the red points and error bars, respectively, in the top two panels of Fig. 6. For 1.5 au < r_h < 2 au, the model values are found to match well with the observed values during both the pre- and post-perihelion periods. This result is consistent with the highest correlation coefficients between TEC and solar ionizing fluxes recorded during these periods, as shown in Fig. 5. At larger heliocentric distances r_h > 2.5 au, the modeled values due to photo-ionization are found to underestimate the actual observations. This indicates the possible domination of another ionization process over photo-ionization. On the other hand, the model overestimates the observations during perihelion. This result is in good agreement with a recent study demonstrating that the standard simplified ionospheric models (e.g., Galand et al. 2016; Heritier et al. 2018) overestimate the observed electron density near perihelion, while the level of agreement improves at larger heliocentric distances.

Dawn-dusk effects on cometary TEC

The Rosetta spacecraft orbited comet 67P in its terminator plane, corresponding to dawn and/or dusk local times. To study the dawn-dusk effects on the cometary TEC, if any, we considered TEC variations during the last four months of the mission, from 2016 June to September, when the comet was between ∼3.1 and ∼3.8 au from the Sun. All TEC measurements during this period were separated according to the local dawn (0400-0800 LT at the sub-spacecraft point) and dusk (1600-2000 LT) time sectors. These are shown in Fig. 7. Dusk TEC values are often larger than the dawn-time values. When the northern and southern hemispheric values are separated, a clear hemispheric dependence can be noted in the dawn-dusk TEC variability. Overall TEC values are smaller in the northern hemisphere than in the southern hemisphere, which is consistent with the results shown in the previous sections. While the dawn and dusk TEC values are comparable (∼1.5 × 10^9 cm^-2) in the northern hemisphere, the dusk-time TEC values (∼10.9 × 10^9 cm^-2) are often significantly larger than the dawn-time TEC values (∼3.2 × 10^9 cm^-2) in the southern hemisphere.

TEC variability

We explored the cometary ionospheric variability during solar/interplanetary quiet intervals. The TEC values depict the high variability of the quiet-time ionosphere of comet 67P. During the entire Rosetta mission, comet 67P exhibited a large TEC variation of ∼2 orders of magnitude on average, with a daily-average peak of ∼(133 ± 84) × 10^9 cm^-2 near perihelion. This may be compared to a neutral outgassing rate variation of ∼3 orders of magnitude when the heliocentric distance varied from ∼1.2 to 3.8 au (Hansen et al. 2016; Heritier et al. 2018).
The present study suggests an exponential decay of TEC with the heliocentric distance. This can be related to the steep evolution of the neutral outgassing rate with heliocentric distance, typically between r_h^-6 and r_h^-7, as reported in previous studies (e.g., Snodgrass et al. 2013; Simon Wedlund et al. 2016; Biver et al. 2019).

There were very few previous attempts to estimate cometary TEC based on remote sensing and/or flyby experiments. Edenhofer et al. (1985) suggested a peak TEC value of ∼10 × 10^12 cm^-2 for comet 1P/Halley (at a heliocentric distance of ∼0.9 au) based on a Doppler simulation during the Giotto spacecraft encounter on 1986 March 14. The ionospheric sounding of the comet by coherent dual-frequency (C-band: 5.8 GHz and L-band: 0.9 GHz) radio waves during the Vega-1 spacecraft flyby on 1986 March 6 revealed a peak cometary TEC of ∼5 × 10^12 cm^-2 (Pätzold et al. 1997). Both of these studies used a radial distribution of the cometary electron density to estimate TEC, as is done in the present work. The Halley TEC values are ∼2 orders of magnitude larger than the peak TEC value at comet 67P obtained in the present work. This is consistent with the fact that the cometary neutral outgassing rate of 67P is significantly lower (by ∼2 orders of magnitude) than that of comet Halley (see, e.g., Mandt et al. 2016; Ksanfomality 2017, and references therein). Compared to the comet 67P TEC, the highly variable terrestrial ionospheric TEC is ∼3 orders of magnitude larger near the dayside maximum (see Browne et al. 1956; Evans 1956; Hargreaves 1992; Mannucci et al. 1998; Tsurutani et al. 2004; Chakraborty & Hajra 2008; Hajra 2012; Hajra et al. 2016, and references therein).

TEC hysteresis

A cometary ionospheric hysteresis effect is revealed for the first time in the present work. At the same heliocentric distance, overall TEC values are larger during the post-perihelion than during the pre-perihelion intervals. When separated into hemispheres, the hysteresis is found to be a dominant feature of the southern hemisphere, while the northern hemispheric TEC values are found to be comparable between the pre- and post-perihelion intervals. It may be noted that Hansen et al. (2016) report a local outgassing rate peak ∼20 days after perihelion, attributed to some plausible hemispheric effects and neutral density variation (see Hansen et al. 2016; Heritier et al. 2018). While the neutral outgassing rate shows a pre- and post-perihelion asymmetry, the TEC hysteresis may not be solely related to the cometary neutral density asymmetry. This is supported by the present work showing lower ionizing solar fluxes incident on the comet ionosphere after perihelion compared to the pre-perihelion fluxes (Fig. 6), confirming earlier results (Heritier et al. 2018). This latter result is related to the fact that the entire mission interval was in the descending part of solar activity cycle 24. An additional cause may be that the inbound equinox was much closer to perihelion than the outbound equinox. The asymmetry may also be triggered by an asymmetry in electron-impact ionization, which is a key ionizing source at large heliocentric distances (Heritier et al. 2018). Other important factors contributing to the TEC variability may be the attenuation of solar ionizing fluxes by the cometary neutrals and the dissociative recombination between electrons and ions (Heritier et al. 2018; Beth et al. 2019). Both these processes can act to reduce TEC values.
However, along the Rosetta trajectory, the impacts of the dissociative recombination and solar absorption were found to be insignificant before and after perihelion (Heritier et al. 2018). Heritier et al. (2018) show that the solar flux attenuation, or photo-absorption effect, may only be significant near the surface of the comet, while the dissociative recombination effect is significant at larger cometocentric distances. As TEC is an altitude-integrated plasma parameter, the TEC variability is supposed to be modulated by both these processes. As mentioned above, the photo-ionization contribution was lower during post-perihelion. Thus, the higher observed TEC values during this period may be related to larger electron-impact ionization rates during post-perihelion. With the decrease of solar activity owing to the descending phase of the solar cycle, the photo-ionization frequency gets smaller, while the electron-impact ionization frequency remains about constant or may even increase during the post-perihelion phase. This raises the question of why there would be more electron-impact ionization and whether the electron acceleration processes could be more efficient during post-perihelion, and in such a case, why. Further study is required to fully understand this behavior.

TEC hemispheric asymmetry

We developed average latitude-longitude maps of TEC during different cometary activity conditions (heliocentric distances). In general, while the pre-perihelion TEC exhibits approximate hemispheric homogeneity on average, significant asymmetry was recorded during the perihelion and post-perihelion orbital periods of comet 67P. The maps can be compared with the outgassing H_2O maps developed by Hansen et al. (2016). Based on both observation and modeling, the H_2O production rate was shown to be larger in the northern hemisphere than in the southern hemisphere during pre-perihelion. This was shown to reverse during and after perihelion. A very asymmetric electron-impact ionization frequency is considered to have compensated for the lower neutral density in the southern hemisphere, which was experiencing winter during pre-perihelion (see Galand et al. 2016). On the other hand, during post-perihelion, the electron-impact asymmetry could not compensate for the larger neutral density over the southern hemisphere, which was experiencing summer (Heritier et al. 2018). During perihelion, the average TEC values were roughly two times larger in the northern hemisphere, which was experiencing autumn, than in the southern hemisphere, where it was spring. However, the neutral outgassing rate was reported to be higher in the southern hemisphere than in the northern hemisphere (Hansen et al. 2016; Biver et al. 2019). The anti-correlation between the hemispheric variations of TEC and of the neutral outgassing rate deserves further study. Engelhardt et al. (2018) report an overall larger population of cold (<0.1 eV) electrons in the southern hemisphere during perihelion. This may suggest less electron-impact ionization in the southern hemisphere than in the northern hemisphere, which is consistent with the observed hemispheric asymmetry in TEC during perihelion. However, near perihelion, electron-impact ionization is reported to be a negligible source of ionization (Heritier et al. 2018). During post-perihelion, the hemispheric asymmetry was reversed and stronger; the southern hemisphere exhibited approximately three times larger TEC values than the northern hemisphere.
This result is correlated with the hemispheric inhomogeneity of the outgassing rate reported by Gasc et al. (2017). The strengthening of the asymmetry (compared to that during perihelion) may be related to additional effects of electron-impact ionization during post-perihelion. However, this has yet to be confirmed quantitatively. The hemispheric asymmetry is also reflected in the local dawn-dusk TEC variability during the last four months of the mission (>3 au). In the southern hemisphere, the dusk-time TECs were significantly higher than the dawn-time TEC values. No such dawn-dusk asymmetry was prominent in the northern hemisphere, where the overall TEC values are lower than in the southern hemisphere. As there is no significant difference in the ionizing solar fluxes between dawn and dusk, the dawn-dusk asymmetry should be directly associated with higher neutral outgassing at dusk because of surface thermal inertia. However, this requires further confirmation.

TEC dependence on photo-ionization

To quantify the photo-ionization contribution to the TEC variability, we estimated the expected TEC values due to photo-ionization of cometary neutrals throughout the Rosetta mission. This process "detrends" the TEC variability with respect to the variability in the neutral outgassing rate. While photo-ionization seems to be the dominating contributor around 1.5 au < r_h < 2 au, it underestimates the actual observations at large heliocentric distances (r_h > 2.5 au). The latter result corroborates the finding of Heritier et al. (2018) that electron-impact ionization is a dominating source of ionization at large heliocentric distances. Around perihelion (r_h < 1.5 au), the modeled TEC due to photo-ionization seems to overestimate the actual TEC observations. This is probably related to an overestimation of the solar ionizing fluxes near perihelion and an underestimation of the loss processes. The ionizing solar fluxes are suggested to suffer from absorption by cometary neutrals (Rees 1989; Beth et al. 2019), although at the location of Rosetta the electron density may not be affected by it (Beth et al. 2019). The solar flux also suffers from scattering and absorption by cometary dust (Johansson et al. 2017), and the plasma suffers loss due to dissociative recombination (Heritier et al. 2018) around perihelion. This may result in a reduction of the actual ionospheric density (TEC) around perihelion. In addition, the model discrepancy may be associated with the possibility that the bulk speed of the plasma is higher than that of the neutrals, or that the plasma is decoupled from the neutrals. In fact, Odelstad et al. (2018) report ion speeds markedly higher than the neutral outgassing velocity. Acceleration along an ambipolar electric field and inefficient coupling to the neutrals could cause such a situation (Vigren & Eriksson 2017).

Summary and conclusions

We have used the in situ cometary plasma measurements by the Rosetta spacecraft to assess and interpret the variability of the cometary ionospheric TEC of comet 67P over the whole comet escort phase for the first time. Because it is an altitude-integrated plasma parameter, TEC gives a more comprehensive description of the cometary ionosphere than plasma measurements along the spacecraft trajectory. The present study covers the entire Rosetta mission operation period of approximately two years in order to explore the cometary ionospheric evolution depending on varying heliocentric distances. The main findings of the present work may be summarized as follows:
1. Cometary TEC exhibits large variability, with a diurnally averaged peak value of ∼(133 ± 84) × 10⁹ cm⁻² reached during perihelion (r_h < 1.5 au). It decreases exponentially with heliocentric distance, attaining values ∼2 orders of magnitude lower at larger heliocentric distances (r_h > 2.5 au).
2. A clear ionospheric hysteresis effect is observed in the heliocentric variation of TEC. At similar heliocentric distances of the comet, TEC values are significantly (roughly two to four times) larger during post-perihelion than during pre-perihelion. The hysteresis effect is more prominent in the southern hemisphere of the comet and at larger heliocentric distances. Our study suggests possible contributions of larger electron-impact ionization (production) during post-perihelion and, to a lesser extent, of larger dissociative recombination (loss) effects during pre-perihelion.
3. On average, significant hemispheric asymmetry is recorded in TEC during the perihelion and post-perihelion periods, while the asymmetry was less pronounced during pre-perihelion. During perihelion (r_h < 1.5 au), the northern hemisphere exhibited roughly two times larger TEC values than the southern hemisphere. The asymmetry reversed and became stronger during post-perihelion, when the average TEC in the southern hemisphere was approximately three times larger than that in the northern hemisphere. Variations in the relative importance of the electron-impact ionization and photo-ionization processes (seasonal variation of outgassing rates) are suggested as the plausible reasons.
4. A hemispheric asymmetry was also observed in the dawn-dusk variations of TEC. While dawn and dusk TEC values are comparable in the northern hemisphere, dusk-time TEC values are more than three times larger than the dawn-time TEC values in the southern hemisphere.
5. While TEC is found to increase with increasing solar ionizing fluxes incident on the comet ionosphere at any heliocentric distance, the strongest association was noted just after perihelion (1.5 au < r_h < 2 au).
6. We estimated the expected TEC values due to photo-ionization of cometary neutrals by solar ionizing fluxes. At moderate heliocentric distances (1.5 au < r_h < 2 au), the photo-ionization contribution matches the actual TEC observations well. At larger heliocentric distances (r_h > 2.5 au), the photo-ionization contribution underestimates the actual observations, implying the importance of the electron-impact ionization process in the cometary ionosphere, as reported previously (Galand et al. 2016; Heritier et al. 2018).

In this paper, we presented a method to estimate the altitude-integrated electron number density (TEC) from the surface of comet 67P to an arbitrarily chosen altitude of 500 km, beyond which the cometary plasma density is expected to have decreased significantly. While TEC can be a suitable parameter to study the behavior of a radially expanding cometary ionosphere, care should be taken in its interpretation during varying cometary and solar activity conditions. For example, at low activity a radial expansion of the plasma at constant velocity may not be expected out to several hundred kilometers. Nevertheless, from the results presented in this work, TEC revealed a clearer picture of the cometary ionosphere than in situ spacecraft plasma measurements, in terms of the ionospheric hysteresis effect, hemispheric asymmetry, and solar activity dependence.
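To make the photo-ionization estimate of point 6 concrete, a minimal sketch follows, assuming the simplified radially expanding ionosphere often used for weakly active comets, n_e(r) = νQ/(4πu²r). All input values (outgassing rate Q, photo-ionization frequency ν, expansion speed u, surface radius r0) are illustrative placeholders rather than the values fitted in this work; integrating to the 500 km limit used here then gives the order of magnitude of the expected TEC.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative placeholder inputs (NOT the values fitted in this work)
Q = 1e28          # neutral outgassing rate near perihelion [1/s]
nu = 5e-7         # H2O photo-ionization frequency [1/s]
u = 700.0         # radial expansion speed of neutrals/plasma [m/s]
r0 = 2.0e3        # comet surface radius [m]
r_max = 500.0e3   # upper integration limit used in this study [m]

def n_e(r):
    # Simplified photochemical profile of a radially expanding ionosphere
    return nu * Q / (4.0 * np.pi * u**2 * r)   # [1/m^3]

tec_m2, _ = quad(n_e, r0, r_max)               # altitude-integrated [1/m^2]
print(f"TEC ~ {tec_m2 * 1e-4 / 1e9:.0f} x 10^9 cm^-2")
```

With these placeholder inputs the sketch yields a few hundred × 10⁹ cm⁻², above the observed diurnally averaged peak, consistent with the overestimation by the photo-ionization model near perihelion noted in the discussion above.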
Further study could address the coupling of the cometary ionosphere to the solar wind using this parameter.
Quantitatively linking morphology and optical response of individual silver nanohedra

The optical response of metal nanoparticles is governed by plasmonic resonances, which are dictated by the particle morphology. A thorough understanding of the link between morphology and optical response requires quantitatively measuring optical and structural properties of the same particle. Here we present such a study, correlating electron tomography and optical micro-spectroscopy. The optical measurements determine the scattering and absorption cross-section spectra in absolute units, and electron tomography determines the 3D morphology. Numerical simulations of the spectra for the individual particle geometry, and the specific optical set-up used, allow for a quantitative comparison including the cross-section magnitude. Silver nanoparticles produced by photochemically driven colloidal synthesis, including decahedra, tetrahedra and bi-tetrahedra, are investigated. A mismatch of measured and simulated spectra is found in some cases when assuming pure silver particles, which is explained by the presence of a few atomic layers of tarnish on the surface, not evident in electron tomography. The presented method tightens the link between particle morphology and optical response, supporting the predictive design of plasmonic nanomaterials.

Introduction

Plasmonic nanoparticles (NPs) have optical properties which are controlled by their morphology. This enables a wide tuneability using a single material, such as silver or gold, just by size and shape control, 1 including chirality and the associated chiro-optical response. 2 The NP optical properties are described in terms of the cross sections for optical scattering (σ_sca) and absorption (σ_abs), which represent the strength of the NP-radiation interaction. 3 While many experimental techniques have been developed to characterize the optical response at the single-NP level, 4,5 only few of these methods are able to quantify both optical cross-sections in absolute units, 6 or equivalently, the complex polarizability of the NP. 7 Previous studies of correlative single-NP optical-electron microscopy using scattering spectra show the complex and sensitive dependence of the optical response on the morphology. 8 Numerical modelling of the optical response based on a 3D reconstruction from electron tomography was shown in ref. 9, using discrete dipole approximation (DDA) simulations of a faceted gold NP, and for large irregular gold NPs simulated scattering spectra were compared with experiments. 10 Furthermore, gold-silver core-shell NPs were investigated, either showing simulations for a given morphology 11 or comparing simulations with measured scattering spectra as a function of shell thickness. 12 However, the above works did not attempt an accurate comparison of measured and simulated cross-sections, and focussed on the spectral features instead. Over the past years, we have developed a measurement and data analysis method to retrieve accurate quantitative cross-section spectra. 13,14 In ref. 15 we combined this method with standard projection transmission electron microscopy (TEM) to investigate silver cubes. The cube geometry results in NPs oriented such that one of the flat sides is attached to the TEM grid, so that the NP geometric parameters can be reasonably extracted from projection images. For more complex shapes, however, conventional TEM is insufficient, and electron tomography is needed.
In the present work, we study faceted silver NPs produced by photochemically driven colloidal synthesis, 16-18 including decahedra, tetrahedra and bi-tetrahedra. Similar to their gold counterparts, 19,20 their response is ruled by localized surface plasmon resonances. The chemical reactivity of silver surfaces makes these systems attractive for catalysis applications, 21,22 but also provides a route to chemical surface modifications which can be difficult to identify in TEM images while significantly modifying the optical response. 23 We find here that an accurate quantitative study of cross-section spectra correlating experiment with simulation can uncover such detail. The presented case study on the one hand assesses the level of accuracy that can be achieved by our cross-section measurement method, and on the other hand exemplifies the kind of fine information that can be extracted from quantitative cross-section spectroscopy. Ultimately, such progress might enable the 3D morphology of metal NPs to be reliably extracted from optical measurements alone.

Materials and methods

Let us present the workflow of the experiment, summarized in Figure 1. Silver decahedra NPs are fabricated with a plasmon-driven method adapting the protocols of Zheng et al. 17 and Pietrobon and Kitaev. 16 As shown in Figure 1a, seeds grown by reduction of AgNO₃ in aqueous solution are thought to aggregate to form decahedra under irradiation by a high-power light-emitting diode (LED) centred at a 447 nm wavelength (violet spectrum in the graph). The formation of decahedra can be monitored via the progressive red-shift of the extinction peak of the NP solution from spherical seeds (dashed line) to decahedra (solid line). Further details of the fabrication process and a kinetic study are reported in the ESI section S.I.

As particle support for the correlative measurements we used a TEM grid (Ted Pella, 21530-10) composed of a 40 nm-thick SiO₂ film (refractive index n = 1.46) supported by a 200 nm-thick Si₃N₄ film with 50 × 50 µm square windows, on a silicon substrate (one such window is the bright frame of Figure 1c). The grid was washed using two repetitions of the sequence deionised water - acetone - anisole - ethanol, and then dried in air. The grid was held by a Teflon-coated stainless steel reverse-action tweezer throughout the functionalisation and washing process. The grid was incubated for 1 hour at 55 °C in 10 mL of etching solution, 500 µL HCl (99%) diluted in 9.5 mL of 30% H₂O₂. The grid was then washed three times in water, followed by three times in ethanol. 200 µL of (3-aminopropyl)triethoxysilane (APTES) (Sigma Aldrich) was centrifuged at 20k RCF for 20 mins to spin down any large debris. 100 µL of this APTES stock was then diluted in 9.9 mL ethanol (absolute, for HPLC, >99.8%, Sigma Aldrich) to obtain a 1% APTES solution, in which the grid was incubated for 1 hour. The grid was then washed three times in ethanol followed by three times in water. The resulting functionalised grid was dried in air at 55 °C for 30 mins and stored at 4 °C for no longer than one month. The decahedra solution (9 µL of 0.25 optical density at 475 nm) was wet-cast (see ref. 15) onto the functionalised grid. The grid was subsequently washed by gently and repeatedly dipping in water, and then dipped in ethanol and dried.
To provide the NPs with a nearly homogeneous optical environment for the cross-section measurements, the TEM grid was sealed in anisole (n = 1.52) between a microscope slide (25 × 75 mm², Menzel Gläser) and a coverslip (#1.5, 25 × 25 mm², Menzel Gläser) using a 0.5 mm-thick adhesive silicone spacer (Grace Bio-Labs 664507), with the TEM grid surface facing the coverslip side. We chose anisole rather than microscope immersion oil as it is volatile and evaporates without leaving residuals, enabling subsequent electron microscopy. This assembly is mounted onto an inverted optical microscope (Nikon, Eclipse Ti-U) with a 40× dry objective (Nikon MRD00405, CFI plan apochromat λ series) of 0.95 numerical aperture (NA), as depicted in Figure 1b.

The procedure for the optical measurements and the quantitative analysis of the optical cross sections is largely the same as we adopted in ref. 15. We therefore limit ourselves here to recapitulating the main steps performed and the parameters used, while we refer the reader to the aforementioned work 15 for an in-depth description. Single-particle micro-spectroscopy is performed by optically relaying the intermediate image plane created by the tube lens of the microscope onto the entrance slit of an imaging spectrometer (Horiba Jobin-Yvon, iHR550) equipped with a ruled plane diffraction grating (Horiba, 51048) of 78 mm square size and 100 lines per mm. Spectra were acquired with a Peltier-cooled back-illuminated charge-coupled device (CCD) sensor (Andor, Newton DU-971N). The spectrometer images the entrance slit onto the sensor, allowing us to use the zeroth order of the grating to provide an image of the sample to select a specific particle for spectroscopy. The entrance slit acts as a spatial filter in the horizontal direction (along the spectral dispersion), whereas in the vertical direction the binning of the CCD sensor itself is used to define a region of interest. Together these define a 1.0 × 1.0 µm square region centred on the NP of interest from which the signal is collected. The corrections required to account for this finite region of detection are described in section S.III of the ESI.

Within the transillumination scheme adopted, we define two imaging modalities based on the angular range of the illumination, as illustrated in Figure 1d. In the first one - a bright-field (BF) scheme - the illumination NA range is set to match the collection range (0-0.95) of the objective. In the second one - a dark-field (DF) scheme - the illumination range 1.06-1.34 NA is used, not overlapping with the collection range, so that only scattering is detected. As a result, scatterers such as NPs are visible as bright diffraction-limited spots on a dark background - see for example Figure 1c (left). The two illumination ranges are defined by two corresponding 3D-printed apertures placed in the back focal plane (BFP) of the condenser lens (Nikon, T-C-HNAO, 1.34 NA oil-immersion) on a slider, which allows the reproducible switching between BF and DF required for an accurate correlation between transmitted and scattered light intensity.
The optical cross sections are defined as the power removed from the exciting beam per excitation intensity: σ = P/I_exc. Thus, a careful referencing to the exciting intensity 24 of the single-particle extinction and scattering spectra enables us to measure accurately the magnitude of the cross sections. Note that the BF extinction signal includes contributions of both absorption and scattering, which have to be unravelled based on the scattering-only DF signal. Such a retrieval procedure is presented in ref. 13, and requires information on the directional properties of the scattering process. In the analysis, this information is reduced to two parameters named η and ζ. η concerns the detection, and is the fraction of the total scattering collected by the objective. We note that η depends on the angular range of the illumination, such that η_BF ≠ η_DF; however, the difference is small for the decahedra, whose response is governed by the same dipolar mode under both BF and DF illumination. ζ concerns the excitation, and is the BF-to-DF ratio of the scattered power; it depends therefore on the relative intensity of the BF-to-DF illumination (which we characterised for our set-up as described in section S.IV of the ESI), as well as on how much the resonant modes of the scatterer are excited under either illumination. In this work, η and ζ are computed numerically for each studied NP as described below. The details of the NP geometry for the cases studied here have a moderate effect, and therefore the values are rather similar for all NPs considered, see the ESI section S.V.

Following the quantitation procedure outlined above, we can measure cross-section spectra in absolute units, such as nm² in Figure 1e. Note that σ_sca(λ) and σ_abs(λ) refer to a given illumination and collection range. Specifically, in this work we measure σ_sca^DF and σ_abs^BF, which differ 24 from the cross sections under plane-wave excitation.

As illustrated by Figure 1c, optical and electron microscopy images can be correlated through the recognition of a specific NP pattern. In the high-angle annular dark-field scanning TEM (HAADF-STEM) overview on the right, white circles highlight the visible NPs, and a distinctive dimer in the middle is shown magnified. We are thereby able to select the NPs characterised optically for HAADF-STEM tomography, wherein the sample is tilted across a wide angular range under the electron beam, as depicted in Figure 1f, and the resulting stack of projection images is used to reconstruct the three-dimensional (3D) morphology of the NP. All electron tomography series were acquired using an FEI Tecnai Osiris electron microscope operated at 200 kV. The series are taken across the largest tilt range allowed by the TEM grid's clearance - typically about ±65° - with a tilt increment of 3°. The 1k × 1k projection images are aligned to match the NP positions across each series using cross-correlation, and are then reconstructed using 15 iterations of the expectation-maximization reconstruction algorithm implemented in the ASTRA toolbox for MATLAB. 25,26 The resulting reconstructions are downsampled by a factor 12 and segmented using the Otsu method to export them as .stl files, such as the one shown in Figure 1g. This geometry is then meshed in COMSOL for numerical simulation purposes with a free tetrahedral volume mesh, displayed in Figure 1h. The influence of variations of this reconstruction procedure on the simulated cross-section spectra is investigated in subsection 3.1.
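For orientation, η can be checked against the simplest analytical limit: a point dipole in the sample plane, in a homogeneous medium, collected by the 0.95 NA objective through the n = 1.52 immersion. The short sketch below (an assumption-laden estimate, not the per-particle simulation actually used) reproduces the value η ≈ 0.148 quoted for this limit in the ESI section S.V.

```python
import numpy as np
from scipy.integrate import dblquad

NA, n_med = 0.95, 1.52                   # objective NA, immersion index (anisole)
theta_max = np.arcsin(NA / n_med)        # half-angle of the collection cone

def pattern(theta, phi):
    # Far-field intensity of a dipole along x (lying in the sample plane)
    return 1.0 - (np.sin(theta) * np.cos(phi))**2

# dblquad integrates the first argument (theta) innermost
collected, _ = dblquad(lambda t, p: pattern(t, p) * np.sin(t),
                       0.0, 2.0 * np.pi, 0.0, theta_max)
total, _ = dblquad(lambda t, p: pattern(t, p) * np.sin(t),
                   0.0, 2.0 * np.pi, 0.0, np.pi)
print(f"eta = {collected / total:.3f}")  # -> 0.148 for an in-plane dipole
```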
The optical response of the particles is computed in the frequency domain using COMSOL Multiphysics®, a commercial software package implementing the finite-element method. In the model, the NP is defined as silver using the permittivity reported in ref. 27, immersed in a homogeneous medium of anisole (n = 1.52). We neglected the small index mismatch between the thin silica window (n = 1.46) and anisole and used a homogeneous medium instead of a multi-layered structure; therefore the model used here is equivalent to the one described in the SI of our previous work, ref. 15, with the slab thickness set to zero (d = 0 nm). This simplification allowed us to automate the importing and alignment of particle geometries from HAADF-STEM tomography into COMSOL. The stationary solution of Maxwell's equations under plane-wave (PW) excitation of given frequency, polarization, and propagation direction computed by COMSOL determines the spatial distribution of the electromagnetic field E.

Let us now discuss how we derive the observables of interest (namely σ_abs, σ_sca, η, ζ) from this solution. Figure 1i shows the spatial distribution of the Joule (resistive) heating, where J_c = ςE is the conduction current in terms of the AC electrical conductivity ς. We integrate the Joule heating Q_J over the NP volume to compute the absorbed power P_abs, and hence σ_abs^PW = P_abs/I_exc, dividing by the excitation intensity I_exc. The near-field solution can be projected to the far field via the far-field transform available in COMSOL, resulting in an angular distribution of the field E_FF(θ, ϕ) such as the one shown in Figure 1j. A dipole-like emission pattern is seen, with the dipole oriented close to the x direction (identified by the polar angle θ = π/2 and the azimuth ϕ = 0, π, 2π) - albeit not precisely along it, due to a tilt of a long axis of the particle, along which its polarizability is maximized. The far-field Poynting vector S_FF (proportional to |E_FF|², shown in Figure 1j) can be integrated over the appropriate solid angle (4π or the objective acceptance) to compute the scattered power P_sca (and hence σ_sca^PW = P_sca/I_exc) and the collected fraction of scattering η.

We emphasize that these values of σ and η are computed under PW excitation, which we have indicated with the PW superscript; in the experiment, instead, we use the incoherent illumination produced by a high-NA condenser, which is composed of a wide range of directions. To reproduce the measured σ_sca^DF and σ_abs^BF we therefore perform and average a large number of PW simulations sampling the directional range of illumination (either BF or DF), each direction being assigned an appropriate weight according to the angular dependence of the illumination intensity in our microscope, which we have characterised. This averaging results in the σ_sca^DF and σ_abs^BF spectra, which are shown in Figure 1k and quantitatively simulate the experimental ones in Figure 1e. From here on we drop the DF and BF superscripts of the cross-sections for simplicity. In the next section we will compare in detail the experimental and simulated cross-sections, focussing on their differences to identify additional aspects of the system, beyond its measured geometry, yet to be included in the model. In this manner, the comparison can bring about additional knowledge of the system - such as the presence of surface layers or variations of the metal permittivity.
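The incoherent directional averaging described here amounts to a weighted mean of plane-wave spectra. A minimal sketch follows; the arrays and the Gaussian roll-off are hypothetical stand-ins for the COMSOL outputs and the measured condenser intensity profile.

```python
import numpy as np

def average_cross_section(sigma_pw, na_of_dir, illum_weight):
    """Incoherent average of plane-wave cross-section spectra over the
    sampled illumination directions, weighted by the illumination
    intensity at each direction's NA."""
    w = illum_weight(na_of_dir)                    # one weight per direction
    return (w[:, None] * sigma_pw).sum(axis=0) / w.sum()

# Hypothetical toy inputs: 5 sampled directions, 3 wavelengths
sigma_pw = np.random.rand(5, 3)                    # stand-in for PW simulations
na_of_dir = np.linspace(0.0, 0.95, 5)              # BF illumination range
illum_weight = lambda na: np.exp(-(na / 1.0)**2)   # assumed intensity roll-off
print(average_cross_section(sigma_pw, na_of_dir, illum_weight))
```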
Results and discussion

Twenty particles were measured in total, which we numbered with increasing volume V. Figure 2 shows the measured and simulated cross-section spectra for six selected particles representing the range of shapes and sizes, along with the top and side views of their 3D reconstructions. The data for the remainder of the particles are shown in the ESI section S.V. Animated 3D renderings of the NP reconstructions are shown in the ESI section S.VI C. The top view shows the particle as seen along the illumination axis, indicating the main plane of excitation polarizations, even though due to the high NA also axial polarization is present, more markedly for the DF illumination. While the fabrication method was developed to produce decahedra (such as particles #20 and #18), other shapes are present, such as tetrahedra (#6 and #7), or a bi-tetrahedron (#19). The particles range in size, as summarised in the ESI Figure 3. The decahedra and tetrahedra show a single pronounced peak in the scattering cross-section, at a wavelength between 500 and 550 nm. The more elongated particles, #19 and #3, show two distinct peaks, which are dipolar modes with polarisations along the longer or shorter axis, centred at longer or shorter wavelengths, respectively. COMSOL simulations of the scattering cross section of particle #19 under normal-incidence plane-wave illumination polarized along the shorter and longer axis (green and orange lines, respectively) confirm this attribution. For most particles we find a reasonable agreement in the lineshape and magnitude of the scattering cross-section peak around the dipolar resonance, though the position shows a systematic blue-shift of the simulated data relative to the measured one. The measured absorption spectra show regions of negative values, which is not expected, as it implies a net power emission by the particle. The absorption is determined as the difference between extinction and scattering, using a range of numerically calculated and experimentally measured parameters, as mentioned in section 2. With the scattering dominating for most particles, the resulting small difference is affected by systematic errors in the measured extinction and scattering. These considerations and the wavelength dependence of the analysis parameters are discussed in more detail in the ESI section S.V.

To correlate the results of experiment and simulations across all particles measured, we compare key spectral features in Figure 3. The position of the dipolar scattering peak (panel a) shows a red-shift with increasing particle index and thus particle volume, and the amplitude of the peak (b) increases with volume; both of these effects are well known and understood in the literature. 18,28 The quantitative comparison between measurements and simulations shows a remarkable agreement, considering that no adjustable parameters have been used. The difference between simulated and measured peak positions can be seen in the inset, and separately in (c). We find good correlation, with most particles showing a red shift of the measurement relative to the simulation by a few tens of nanometres. This finding is reminiscent of the shift observed in experiments with silver cubes. 15
The relative deviation between simulated and measured peak magnitude (see panel d) shows a significant fluctuation, mostly with the simulation being higher, though the deviation decreases for large particles. Generally, the signal-to-noise ratio in the HAADF-STEM projection images is smaller for smaller particles, allowing for a larger relative error. In addition, the finite angular range used for the tomography reconstructions gives rise to a so-called missing-wedge artefact, a result of a lack of information along certain directions. This can lead to systematic errors depending on the particle morphology, which could cause particle-to-particle fluctuations. On the optical measurement side, smaller cross-sections are more affected by noise due to diffuse background scattering. However, the noise level is typically not significant in the present data, as can be seen in the scattering spectra shown in Figure 2. On the other hand, the absorption displays a better agreement for small particles, as can be seen in the ESI section S.V. This is due to the response of large particles being dominated by scattering, and the systematic error in the quantification of the absorption being proportional to the scattering, as previously discussed.

In Figure 3e the particles are shown in a plane spanned by the ratio in amplitude and the difference in peak position between the measured and simulated data, to facilitate identifying and categorizing the possible sources of the discrepancy. The area shaded in red corresponds to both a blue shift and a decrease in amplitude of the simulated scattering dipole peak compared to the experimental one. It is known that rounding the edges of the particle causes a blue shift and a decrease of the magnitude of the plasmonic peaks; for example, this was observed for silver prisms, 29 silver cubes, 15,30 and gold decahedra. 31 We note that the samples were shipped from the optical experiment at Cardiff to the electron tomography at Antwerp in a nitrogen atmosphere in a sealed container at room temperature, providing up to 4 days during which such rounding might have developed. 32 In the area shaded in blue the simulated peak is also blue-shifted, but the simulated amplitude is higher than the experimental one. Based on our previous work 15 this is likely due to a surface layer forming on the particles. An increased damping in the permittivity can also lead to a decrease of the scattering cross section, as we shall discuss in subsection 3.2 below. The green area corresponds to a red shift of the simulated spectra with respect to the experimental ones, with an increase in amplitude. The two particles in this area show rather small deviations, within the accuracy of determining the values.

Below we investigate some of these potential sources of deviation in more detail on two selected particles. As the cross-section simulations taking into account the wide NA range of the microscope illumination are computationally expensive, we increased the sampling step size of the illumination direction from 0.21 NA to 0.3 NA, reducing the simulation time by a factor of two while affecting the cross-section spectra by less than a few percent.
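The peak positions and amplitudes compared in Figure 3 can be extracted from sampled spectra in several ways; the exact procedure is not spelled out here, so the snippet below shows one common choice - a local parabolic fit around the maximum, giving sub-sample precision - purely as an illustration.

```python
import numpy as np

def dipole_peak(wavelength, sigma, half_window=5):
    """Peak position and amplitude of the dipolar resonance from a
    parabolic fit around the sampled maximum (sub-sample precision)."""
    i = int(np.argmax(sigma))
    s = slice(max(i - half_window, 0), i + half_window + 1)
    a, b, c = np.polyfit(wavelength[s], sigma[s], 2)
    lam_peak = -b / (2.0 * a)                 # vertex of the parabola
    return lam_peak, np.polyval([a, b, c], lam_peak)
```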
Geometry reconstruction accuracy

The measured NP morphology dictates the simulated optical cross-sections, and thus should be as accurate as possible. In our analysis pipeline, the reconstruction of the electron tomography depends on analysis parameters which influence the resulting morphology. As mentioned earlier, electron tomography suffers from the missing-wedge artefact, which leads to a lack of information along certain directions, and we found that the resulting morphology slightly depended on the number of iterations in the reconstruction process. One can also include pre-processing of the data, such as smoothing procedures. In addition, to achieve a reasonable simulation time, the NP morphology needs to be meshed with an acceptable number of elements, which depends on the computational power available and the accuracy required. In this section we discuss the influence of these points on the reconstructed morphology and the simulated spectra.

We call R1 the meshed reconstructions used in Figure 2 and Figure 3, which employed 15 iterations of the expectation-maximization reconstruction algorithm and a downsampling factor of N = 12. Downsampling by a factor N bins together the pixels in an N × N × N volume, and so reduces the number of elements defining the NP's surface by a factor of N². The exact number of facets depended on the NP, but in general for R1 the NP's surface geometry consisted of a few thousand faces. In the reconstruction procedure R2, we smoothed the input projection images with a pixel radius of 3 prior to the iterations to improve the signal-to-noise ratio, and reduced N to 4, which increased the number of surface elements to tens of thousands. For reconstruction R3, we furthermore increased the iterations to 100 and reduced N to 1, which increased the number of surface elements to hundreds of thousands.

For the large number of surface elements resulting from R2 and R3, COMSOL was unable to reliably import the geometry and construct usable particle models. To circumvent this problem we reduced the number of surface elements to approximately 1000 before importing. This did not cause a significant loss of accuracy: we observed typically around a 5 nm blue shift and a 1% increase in amplitude (see ESI section S.VI for details about the procedure and the effects). We note that the mesh on which COMSOL solves the scattering problem is usually even coarser. This mesh was determined by investigating the convergence of the simulated scattering cross-section amplitude at the dipole peak versus the mesh size, as described in the ESI of ref. 15 - we choose the size of the NP mesh elements so that the calculated dipole resonance scattering amplitude is within 1% of the converged value, yielding about 500-1000 surface elements on the NP.

The reconstructions R2 and R3 resulted in a slightly altered geometry that was hardly discernible visually on the COMSOL mesh, so we look here at the calculated volume and surface area changes (see ESI Table S1), and at the effect on the cross-section spectra, as shown in Figure 4 (see ESI Figure S14 for more examples). For particle #20, the volume is V = (18.4, 18.2, 17.4) × 10⁴ nm³ for (R1, R2, R3), and the volume-to-surface ratios are V/S = (10.8, 10.6, 10.4) nm. For particle #3 the volumes are V = (4.09, 4.29, 4.10) × 10⁴ nm³ and the volume-to-surface ratios are V/S = (6.65, 6.79, 6.5) nm.
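The segmentation and export step (Otsu threshold, surface extraction, .stl export) can be sketched in Python as below. The actual pipeline runs in MATLAB on ASTRA reconstructions, and the surface-extraction routine is not specified there, so scikit-image's marching cubes and the numpy-stl package are illustrative substitutions; the function and file names are hypothetical.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes
from stl import mesh as stl_mesh   # numpy-stl

def volume_to_stl(volume, voxel_size_nm, out_path):
    """Segment a reconstructed volume with Otsu's threshold and export
    the particle surface as an .stl file."""
    level = threshold_otsu(volume)
    verts, faces, _, _ = marching_cubes(volume, level=level,
                                        spacing=(voxel_size_nm,) * 3)
    surf = stl_mesh.Mesh(np.zeros(faces.shape[0], dtype=stl_mesh.Mesh.dtype))
    surf.vectors[:] = verts[faces]  # (n_faces, 3, 3) triangle vertices
    surf.save(out_path)

# e.g. volume_to_stl(reconstruction, voxel_size_nm, "particle.stl")
```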
For the larger particle (#20, shown in Figure 4a), the reconstructions have little influence on the simulation results. Despite the decreasing volume, we observed a small red-shift and an increase in the scattering cross-section for R2 and R3. Noting that these reconstructions introduce less smoothing of morphological features, the red shift can be related to a sharpening of the geometry. For the smaller particle (#3, shown in Figure 4b), R2 and R3 have different effects. For R2 we observe a small blue shift and a small increase in amplitude. The blue shift could result from remeshing, as mentioned before. The increase of the scattering amplitude is consistent with the increase in the volume. For R3, instead, we observe a red shift, and a slight redistribution of amplitude between the two peaks is seen. We attribute this to a sharpening of morphological features in the missing-wedge region due to the higher number of iterations in the reconstruction algorithm. The slight increase in the splitting of the two peaks also suggests a small increase in aspect ratio. The modified simulated cross-sections result in modified analysis parameters (η, ζ), which in turn modify the measured cross-sections slightly, as shown by the dashed lines.

The results discussed in this section are indicative of the uncertainty originating from the reconstruction. For the following simulations we chose to use R2, which has a slightly improved signal-to-noise ratio compared to R1 due to the additional smoothing of the input projections, but avoids R3, where the high number of iterations may lead to a roughening of the morphology by an overfitting of noise in the expectation-maximization algorithm.

Modification of the permittivity

It is well known that the permittivity of a metal measured by ellipsometry on a planar surface can require a modification for NPs due to the reduced mean free path of the electrons. 33 We accordingly model the effect of additional damping (combining the surface damping, the so-called chemical interface damping, and crystal defects) on the Ag permittivity ε_exp(ω) measured by ellipsometry on a planar surface of polycrystalline Ag films 27 as a function of the angular frequency ω = 2πc/λ, with the speed of light c and the wavelength λ. We first fit ε_exp(ω) in the wavelength range between 400 nm and 700 nm, avoiding the Ag interband transitions at shorter wavelengths, with a Drude model, ε(ω, γ) = ε_∞ − ω_p²/(ω² + iωγ), as detailed in the ESI section S.VII, where ω_p is the plasma frequency and γ is the damping. Then, we increase the damping by the term 33-36 gv_F/R, where v_F is the Fermi velocity, R is the effective radius, and g is a scaling factor. We use the radius R calculated from the particle volume V assuming a spherical shape, R = (3V/4π)^(1/3), resulting in R = 35.2 nm for particle #20 and R = 21.7 nm for particle #3. Finally, we add the permittivity change due to the increased damping to the measured permittivity data set, resulting in the modified permittivity ε_m(ω) = ε_exp(ω) + ε(ω, γ + gv_F/R) − ε(ω, γ) to be used in the simulation.

The effect of the increased damping on the cross-section spectra is shown in Figure 5. The scattering cross-section decreases with increasing g from 0 to 1.5 (a typical range reported previously 36), together with a broadening of the peaks, while the absorption cross-section increases (see ESI Figure S17). The measured cross section does not change notably with g, showing that the analysis parameters (η_BF, η_DF, ζ) are not significantly affected by the additional damping.
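The permittivity modification is a short computation once the Drude fit is in hand. The sketch below uses the fit parameters from the ESI section S.VII, whose powers of ten are reconstructed from context and should be treated as indicative; eps_exp stands for the tabulated ellipsometry data.

```python
import numpy as np

# Drude fit to the Ag ellipsometry data (ESI Sec. S.VII); the powers of
# ten are assumed/reconstructed values, not verbatim from a source
EPS_INF = 3.8575
OMEGA_P = 1.3666e16    # plasma frequency [rad/s]
GAMMA   = 7.7849e13    # damping rate [rad/s]
V_F     = 1.366e6      # Ag Fermi velocity [m/s]
C       = 2.99792458e8 # speed of light [m/s]

def eps_drude(omega, gamma):
    return EPS_INF - OMEGA_P**2 / (omega**2 + 1j * omega * gamma)

def eps_modified(eps_exp, wavelength_m, g, radius_m):
    """eps_m = eps_exp + eps(gamma + g*vF/R) - eps(gamma): only the *change*
    of the Drude model is added, keeping the measured data as baseline."""
    omega = 2.0 * np.pi * C / wavelength_m
    return (eps_exp + eps_drude(omega, GAMMA + g * V_F / radius_m)
            - eps_drude(omega, GAMMA))
```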
Addition of a tarnish layer

While it might be possible that changing both the reconstruction procedure and the damping could produce a σ_sca matching the measurements, we did not find a reconstruction that would consistently move the spectra of all particles enough that a further permittivity change could explain the remaining discrepancy. Therefore we consider here another deviation of the particle description in the model from reality, given by an atomically thin chemical surface modification, which is not expected to be visible in the electron tomography for the imaging settings used. Such a layer, which can form on silver (as opposed to gold) due to its reactivity, is most likely sulfide or oxide. 37,38 Both compounds have a high refractive index and also absorption, causing a red shift and a decrease in scattering magnitude, 15,39 mimicking the observed mismatch between simulation and measurement for the majority of the particles. More discussion and data regarding the possible origin of and experimental evidence for such layers, including energy-dispersive X-ray spectroscopy, is given in the ESI section S.II.

To model such layers in COMSOL we used the following approach: starting from the surface mesh of the particle, we modelled a surface layer by isotropically scaling down the mesh, while fixing its centre of mass, to define a Ag core of volume V_c, with the remaining space in the original volume V providing the shell. The resulting average shell thickness is taken as h = (V − V_c)/S, where V and V_c are the particle volumes before and after the scaling, respectively, and S is the particle surface area. Since sulfur is typically more reactive with Ag than oxygen, the wavelength-dependent permittivity of the shell was set to the one of silver sulfide. 40 For particle #20 we scaled down the mesh by a factor of 0.97, creating a layer of thickness h = 1.0 nm. For particle #3 we used a scaling factor of 0.985, yielding h = 0.3 nm. These shell thicknesses yield a good agreement between simulated and measured scattering cross-section spectra, as shown in Figure 6. For the absorption cross-section spectra, which are increased by the tarnish, some mismatch remains. We show in the ESI section S.VIII that, assuming silver oxide instead of silver sulfide, a similar effect on the cross-sections is found for a slightly larger thickness. Importantly, we note that a tarnish layer can have a much more complex morphology than assumed here, and can also contain a mixture of sulfide, oxide, and even other compounds such as FeS. A residual mismatch is therefore expected considering the simple tarnish model employed. We emphasize that for some of the NPs (e.g. #12, see ESI Figure S10) there is a good agreement between measured and simulated spectra within the expected uncertainty from the shape reconstruction (see ESI Figure S14), without adjustable parameters, indicating that the formation of a tarnish layer varies between particles even within the same preparation and TEM grid.
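The shell thickness follows directly from the isotropic scaling, since the core volume scales with the cube of the factor: h = (V − V_c)/S = (V/S)(1 − s³). A two-line check against the quoted values follows, using the V/S ratios from ESI Table S1 (which reconstruction's ratio applies is assumed here):

```python
def shell_thickness(v_over_s_nm, scale):
    # h = (V - Vc)/S with Vc = scale**3 * V for an isotropically scaled core
    return v_over_s_nm * (1.0 - scale**3)

print(shell_thickness(10.8, 0.970))   # particle #20 -> ~0.94 nm (quoted: 1.0 nm)
print(shell_thickness(6.79, 0.985))   # particle #3  -> ~0.30 nm (quoted: 0.3 nm)
```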
Conclusions

We have used the pipeline for correlative and quantitative optical and structural electron microscopy characterization that we recently developed 15 to study individual silver nanohedra synthesized by photochemistry. Importantly, we extended the method to include electron tomography, to determine the volumetric shape of the particles accurately, and used the resulting morphology and orientation for simulations of the quantitative optical cross-section spectra, for a fitting-parameter-free quantitative comparison with the measured spectra. This is the first study of this type, combining fully quantitative optical cross-section measurements with correlative electron tomography determining quantitatively the 3D particle morphology and orientation, and corresponding quantitative simulations.

While generally a good agreement of simulated and measured cross-sections is found, quantitative differences are revealed: specifically, a red shift of the measurements compared to the simulations by a few percent, mostly for the larger particles, and a difference in magnitude, mostly a reduction for the smaller particles. To understand the origin of the deviations, the influence of three aspects was investigated. (i) The tomographic reconstruction method was examined, showing resulting morphology variations mostly for the smallest particles investigated. (ii) The addition of a realistic surface damping in the permittivity resulted in only slightly modified spectra. (iii) Adding a thin surface layer of tarnish, here modelled as silver sulfide, brought about, for realistic thicknesses in the 1 nm range, a match within the expected systematic errors. Let us emphasize that such conclusions would have been less stringent without the information on the cross-section magnitude. For instance, the red shift of the measured spectra can be explained both in terms of the geometry being sharper, within the reconstruction accuracy, and by the tarnish layer; but only the latter hypothesis is in agreement with the measured cross-section magnitudes.

The accuracy of the method can be improved going forward. For example, one could add polarization-dependent measurements and simulations, using linearly, radially, and azimuthally polarised light, where the latter has the advantage of only in-plane polarized excitation for both BF and DF, thus exciting the same resonances. Furthermore, the slight angular dependence of the objective transmission could be calibrated and taken into account. To avoid the formation of a tarnish layer, a similar study on gold nanohedra could be envisaged, allowing the accuracy of geometry and permittivity to be isolated.

This work and the adoption of the developed methodology pave the way towards an accurate quantitative understanding and verification of the morphology-optical response relation in plasmonic nanoparticles, especially for particles with complex shapes, which are important building blocks for next-generation devices.

S.I. SAMPLE FABRICATION

The fabrication procedure is based on Ref. [S1].
The seeds were fabricated by reduction of silver nitrate (AgNO₃) in aqueous solution. 8 mL of seed solution was prepared by mixing 0.5 µM silver nitrate (Sigma Aldrich), 6.25 µM polyvinylpyrrolidone (PVP) of molecular weight 10k (Sigma Aldrich), 3 mM trisodium citrate (Na₃C₆H₅O₇) (Sigma Aldrich) and 0.65 µM sodium borohydride (NaBH₄) (Sigma Aldrich) with vigorous stirring for 3 mins at room temperature, until the colour of the solution turned light yellow. The seed solution, placed in a glass vial (Fisherbrand, type 1 class A borosilicate glass) and covered with a glass coverslip (Agar Scientific #1.5), was then irradiated for 7 hours at room temperature via a royal-blue (447 nm) LUXEON Rebel ES LED with a measured optical power of 710 mW in a home-built chamber with a cylindrical inner volume of 53 mm diameter and 95 mm height, made of aluminium. The inner surface of the chamber was painted first with a white primer (Starglow Universal Primer, Glowtec, UK) and then with a reflective varnish (Starglow Clear Reflective Paint, Glowtec, UK) to achieve a high diffuse reflectivity (> 95%), improving the intensity and homogeneity of the irradiation. The product solution after irradiation had an orange colour, and was then purified via a 2-step centrifugation to minimise the aggregation in the pellet: a first step at 500 relative centrifugal force (RCF) for 20 min (to remove the small silver crystals) was followed by a second step at 1500 RCF for 20 min. After each step, the supernatant was removed and the pellet was resuspended in 0.1% PVP with 2 mM trisodium citrate solution. The product solution was stable (seen by a stable colour) in the fridge at 4 °C for several months. All nanoparticles (NPs) analysed were synthesised no more than two days before optical imaging. Conventional high-resolution transmission electron microscopy (HRTEM) images of the purified solution were acquired on a JEOL JEM-1011 microscope equipped with a thermionic gun at 100 kV accelerating voltage. Samples were prepared by drop-casting NP suspensions onto carbon film-coated 200 mesh copper grids. Fig. S1 shows the fabricated NPs, which are dominantly decahedra, but also include triangular plates, bipyramids, and other shapes.

A preliminary kinetic study, shown in Fig. S2, was performed to determine the formation times. The observed UV-Vis spectral evolution of the photochemical growth is consistent with data previously published in Ref. [S2]. A progressive rise of a plasmonic peak at 480 nm is observed, which is characteristic of Ag decahedra in aqueous solution.

In the main text we discuss the influence of a thin Ag₂S tarnish layer, leading to a better agreement between simulated and measured optical cross-sections. Such a layer can form due to exposure to trace amounts of sulfur, either in the surrounding atmosphere (H₂S, for example) or on the TEM grid as residuals from the sample preparation.

In order to minimize contamination, the samples were shipped as follows: immediately after the optical measurements, the sample grids were placed in a standard TEM grid holder, and the holder was encapsulated in polypropylene centrifuge tubes filled with nitrogen and rigorously sealed with parafilm. The tubes were shipped from Cardiff to Antwerp in room-temperature packaging via next-day delivery. The samples were then measured within two days of arrival, and opened immediately before loading onto the electron microscope for imaging.
Notably, in a first round of experiments (not included in the results presented in the main text), the SiO₂ film of the TEM grid was cleaned and activated by the standard piranha solution with reduced sulfuric acid (5%). The grid was incubated for 1 hour at 55 °C in 10 mL of etching solution, 500 µL H₂SO₄ (99%) diluted with 9.5 mL of 30% H₂O₂.

Figure S3. High-angle annular dark-field scanning TEM (HAADF-STEM) images of Ag particles on the SiO₂ windows which were cleaned and activated with a protocol involving sulfuric acid in the first round of experiments, after optical characterisation. For most particles, a set of smaller surrounding debris is visible.

Figure S4. Same as Fig. S3, but zooming in on selected Ag particles whose cross-section spectra showed a plasmon resonance peak.

While plasmonic NPs were identified in the optical measurements, subsequent electron microscopy indicated that many NPs in this batch were either completely converted to or were surrounded by debris containing sulfur (most likely Ag₂S), as shown in Fig. S3. These also included NPs whose optical cross-section displayed plasmonic peaks in the preceding measurements (see Fig. S4). Energy-dispersive X-ray (EDX) spectroscopy, shown in Fig. S5, confirmed the sulfur content of the debris, revealed by a characteristic sulfur peak emerging at 2.5 keV and the corresponding decrease of the Ag peaks at 0.3 and 3.0 keV. EDX maps such as the one displayed in Fig. S5 top right were acquired using the Super-X detector of the Tecnai Osiris TEM operated at 200 kV. The maps were generally acquired for 10 min at a current of 150 pA.

These findings in the first round of experiments indicated that the piranha solution might leave some H₂SO₄ on the grid, which might dissolve in anisole. After optical imaging the sample grid was held by a reverse-action tweezer and air-dried at 32 °C, which could allow the residual to deposit on the grid surface and the particles, promoting sulfidisation.

In the second round of experiments, the sulfuric acid in the piranha solution was substituted with hydrochloric acid (see Sec. 2 of the main paper). We found that this change in the protocol still granted NP immobilization, but avoided the formation of the obvious debris around the particles. Since there was no structure observed in HAADF-STEM indicating a surface layer such as Ag₂S in the second round, EDX measurements were done only on a few particles. An example for particle #14 is shown in Fig. S6. The summed spectrum of the whole area contains no clear S peak, and the S map shows only uncompensated background signal. These results show that, within the limited signal-to-noise ratio of EDX, no S could be detected. Compared to the first round of experiments (Figs. S3-S5), a possible Ag₂S surface layer must be very thin. As the grid is made of silica, the oxygen signal does not allow a possible Ag₂O layer to be located.

It should be noted that, due to the low signal-to-noise ratio in EDX measurements, detecting thin (0.3-1.6 nm) sulfide layers as introduced in the main text is a challenge. Achieving sufficient sensitivity in EDX maps requires long acquisition times at higher beam currents than HAADF-STEM imaging, which induces beam damage. As a result, the spatial distribution of the element maps would be questionable, not only due to particle reshaping but also due to the possibility that the chemical modification could have occurred during the EDX measurements by material released from the support.
Regarding the possible origin of the sulfur, we revisited the protocol used. A conceivable origin could be the stainless steel reverse-action tweezers used to hold the grid during the cleaning and grid functionalisation process. They might contain small amounts of iron sulfide, which could react with the etchant used in a reaction FeS + 2 HCl → FeCl₂ + H₂S. The silica surface on the grid was functionalized with reactive amine groups, which might react with the H₂S [S3] and carry the sulfide to the next step. The sample Ag NPs were in contact with the silica + amine surface in both polar and apolar solvents, and also under light irradiation during the measurement, which can be important for the reactivity of silver. Oxidation of silver can also be promoted by UV light. Generally speaking, it is well known that tarnish commonly forms on silverware when left in the atmosphere for extended periods of time. This by itself indicates that the atmospheric sulfur or oxygen content is enough to promote tarnishing of exposed silver surfaces, with no need to be fostered by reactions specific to our protocol. Even more so in the case of silver in NP form, whose reactivity is increased by the high surface-to-volume ratio [S4, S5].

S.III. CORRECTION FOR FINITE REGION OF DETECTION

In our micro-spectroscopy experiments the imaged area is delimited along the dispersive direction by the input slit of the spectrometer, which has a width of 80 µm. The slit is imaged with the same size onto the sensor, where it is matched along the orthogonal direction by the on-chip binning (we read out a bin of 5 pixels of 16 µm pitch). Considering the magnification of about 80× from sample to sensor (characterized experimentally by a controlled displacement of the sample stage), the 80 × 80 µm region of interest on the sensor corresponds to a square imaged area of lateral size 1.0 µm on the sample. This value was chosen to accommodate a particle image - which for sufficiently small particles corresponds to the point spread function (PSF) of the imaging system - while leaving some margin for possible lateral drifts over the acquisition time (a few tens of seconds; the typical thermal drift of the imaged position is about 100 nm/min).

However, the mathematical PSF of a point source extends infinitely in space, albeit in practice only a few rings (if any) are typically visible above the background noise level. This means that only a fraction (< 1) of the particle signal is detected, as the tails of the PSF are cropped by the spatial filtering of the image. Note that this fraction differs between the two imaging modalities, since the PSFs in bright-field (BF) and dark-field (DF) images differ due to the different angular range of excitation and the different contrast mechanism. Specifically, in DF the scattered intensity is measured, with a PSF determined by the objective numerical aperture (NA) and particle focus (within an approximated scalar diffraction theory neglecting the polarization dependence), whereas in BF the transmitted power is measured, which results from the interference between the incident and scattered fields, leading to a partially coherent imaging. Matching condenser and objective NA, as we do in our experiment, the PSF in BF is of similar size to the one in DF.
In our quantitative analysis, the reduction of the excitation and scattering signal due to the finite area of detection is accounted for by rescaling the extinction and scattering spectra by the corresponding detected fractions for BF and DF, respectively. We determine these fractions for our set-up with the following procedure. Wide-field images are acquired with a low-noise scientific sCMOS camera (PCO Edge 5.5). Illumination is provided by a 100 W halogen lamp (Nikon V2-A LL 100 W), filtered using bandpass filters (Thorlabs FKB-VIS-40) with centre wavelengths of [450; 500; 550; 600] nm, so as to address the wavelength dependence of the fractions. The illumination and detection NA ranges for BF and DF are the same as in the experiment (namely, we use the same condenser and set of 3D-printed apertures, and the same objective) to ensure we characterize the same PSF. We also use the same silver nanohedra sample, although ideally the PSF is the same for any isotropic subwavelength object.

We analysed the acquired transmission and scattering images using Extinction Suite, a plug-in for the image processing programme ImageJ which we have been developing within our group - see https://langsrv.astro.cf.ac.uk/Crosssection/Crosssection.html and the publications referenced therein. An analysis routine determines the particle position via a Gaussian fit of its transmission or scattering image; the extinction or scattering magnitude is quantified by integrating over a circular region of interest of radius r_i centred around the particle position. Fig. S7a shows a measurement of the extinction as a function of r_i, after subtraction of the local background measured over a surrounding area. The extinction saturates at about r_i = 3λ/NA ≈ 1.7 µm, above which fluctuations of the local background dominate. The scattering magnitude shown in Fig. S7b exhibits a similar behaviour with a slightly slower saturation (at about 2 µm). We associate the value 1 to the saturation magnitude - indicated by the horizontal lines in Fig. S7a,b - and normalize the extinction or scattering to it.

We estimate the BF and DF detected fractions in our micro-spectroscopy experiments at the equivalent radius of 564 nm (vertical dashed line), which encloses the same area as the square region detected in micro-spectroscopy; the resulting values are reported in Fig. S8 for the four colour channels used. The experimental data are fitted with the phenomenological function of Eq. (S1), with parameters (4091 nm, 1.18) for BF and (71908 nm, 0.353) for DF. The resulting wavelength-dependent fractions are used to correct the measured cross-section magnitudes according to Eq. (S2) below. The decreasing trend for longer wavelengths observed in Fig. S8 is explained by the scaling of the PSF with λ; for instance, the Airy function (which describes the focal spot created by a perfect lens with a circular aperture in the paraxial approximation) has a first dark ring of diameter 1.22λ/NA. Note that this scaling is consistent with the limiting behaviour of the fitting function of Eq. (S1), which tends to 1 for λ → 0. We estimate that the error in the determined factors is about 5 to 10%, mostly due to the determination of the saturation value for large r_i, which is affected by fluctuations in the background value that increase for larger integration areas.
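For orientation, the detected fraction can be estimated for an ideal PSF: the encircled-energy fraction of an Airy pattern has the closed form 1 − J0²(v) − J1²(v), with v = 2πNA·r/λ. The sketch below evaluates this at the 564 nm equivalent radius; the actual PSFs (aberrations, partially coherent BF contrast) differ, which is why the fractions are calibrated experimentally instead.

```python
import numpy as np
from scipy.special import j0, j1

def airy_encircled_fraction(r_nm, wavelength_nm, na):
    """Energy fraction of an ideal (scalar, paraxial) Airy PSF inside a
    circle of radius r: 1 - J0(v)**2 - J1(v)**2, v = 2*pi*NA*r/lambda."""
    v = 2.0 * np.pi * na * r_nm / wavelength_nm
    return 1.0 - j0(v)**2 - j1(v)**2

# Equivalent radius of the 1.0 x 1.0 um detection square: 1000/sqrt(pi) nm
print(airy_encircled_fraction(564.0, 550.0, 0.95))   # ~0.9 for the ideal case
```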
S.IV. CHARACTERIZATION OF THE BF-TO-DF ILLUMINATION INTENSITY

Another crucial parameter of the experimental set-up for quantifying the cross-section magnitude is the BF-to-DF ratio of the illumination intensity. The need for this parameter arises because in DF the excitation intensity cannot be directly measured, and it therefore has to be retrieved from the BF background through a proportionality factor. This parameter acts as a scaling factor for the magnitude of σ_sca - see Eq. (S2) below. It is governed by the amount of light blocked by the BF and DF apertures in the back focal plane (BFP). It is therefore possible to derive a simple analytical expression for it (Eq. (3) in Ref. [S6]) assuming an aplanatic behaviour of the condenser lens. However, in most microscopy set-ups the illumination is not homogeneous over the BFP of the condenser; moreover, the condenser transmittance drops towards the edges of its aperture. These effects add up to give a strong decrease of the illumination intensity at large NA values, which effectively lowers the DF illumination. Such angular efficiency of the excitation path can be characterized experimentally and used to correct the ratio, as described in § S.VI.B of Ref. [S7].

In this work (similar to what we did already in Ref. [S6] for the polystyrene beads) we have instead measured the ratio directly, obtaining the value 2.04 for the specific BF and DF 3D-printed apertures used in the experiment, with the following procedure. Using an excitation path replicating the micro-spectroscopy experiments, a 1.45 NA objective is used in the detection path to collect all exciting light also in the DF illumination configuration, which has a maximum of 1.34 NA. A clean glass slide is used in place of the sample (to hold the immersion oil of objective and condenser) and Köhler illumination is adjusted by focussing the field aperture. To minimize the effect of chromatic aberrations and reproduce the experimental focussing conditions, a colour filter centred at 550 nm with a width of 40 nm (Thorlabs FBH550-40) is used, which is the spectral region of the plasmonic resonance of the decahedra. Wide-field images are then acquired with a scientific sCMOS camera (PCO Edge 5.5) using the BF and DF 3D-printed apertures. The illumination intensity is proportional to the mean value of the camera readout over a region of interest in the centre of the field of view. Taking the ratio of the BF to DF readout (after subtracting from both the dark offset of the camera digitizer) yields the BF-to-DF illumination ratio. Note that this procedure neglects the angular dependence of the collection efficiency; however, for the high-quality objective used (Nikon MRD01905, 100× 1.45 NA PlanApo Lambda series) we expect this dependence to be weak over the range up to 1.34 NA of the transmitted illumination, considering the significant margin to the objective NA.

S.V. SPECTROSCOPY OF ALL PARTICLES

Let us report here for convenience the formulas derived in Ref. [S6]
that we use to quantify the cross-section magnitude, slightly adapted to match the notation of this work; in Eq. (S2), the detected scattering/extinction spectra (after subtraction of the dark offset of the CCD digitizer) are recorded under DF/BF illumination, imaging either the nanoparticle (NP subscript) or the background (bg subscript) in an empty area nearby. The set-up parameters discussed in Sec. S.III and Sec. S.IV (the detected signal fractions and the BF-to-DF illumination ratio) are specific to the experimental set-up and settings only (they have the same value for all measured particles). Conversely, the parameters η and ζ are specific to each particle and, as mentioned in the article, encode the directional properties of scattering with respect to detection and excitation, respectively.

Fig. 2 of the article shows the measured and simulated quantitative optical cross-section spectra σ(λ) of six representative particles. The data for all twenty particles investigated in this work are shown in Figs. S9-S12; a transmission electron microscopy (TEM) tomographic reconstruction of each particle is included as an inset. The study we presented in the article focuses on σ_sca(λ), which dominates the response of the investigated particles. The untreated absorption spectra (not shown here) display, for most particles, negative values over a large spectral range at the dipolar plasmonic resonance governing the scattering spectra. To pinpoint the origin of such a non-physical result (which would imply a net power emission from the particle) it is useful to look at the structure of Eq. (S2a). The first term is the measured σ_ext and the second term is the portion of the BF scattering not collected by the objective, which is subtracted from the total extinction to isolate the absorptive contribution. Therefore σ_abs < 0 can result from either underestimating σ_ext or overestimating the subtracted BF scattering. The former can be corrected by decreasing the corresponding detected fraction, the latter by adjusting η or ζ. Each correction corresponds to a different aspect of uncertainty in the experiment; specifically, the detected fractions refer to spatial filtering (e.g. the particle drifts away from the centre of the imaged area during the acquisition, and hence the fraction is lowered) and ζ to directional filtering (e.g. the accuracy of the fabrication and positioning of the 3D-printed apertures in the BFP of the condenser). For the spectra shown in Figs. S9-S12 we decreased one of these parameters by 20% to avoid negative values of the spectral average of the experimental absorption. Note that for these particles the opposite contributions of extinction and scattering have similar magnitude, so that they approximately balance each other; this implies that a similar result can be obtained by a 20% correction of either of the other parameters. An even more accurate measurement of the set-up parameters seems required to avoid such an adjustment. We emphasize that the same correction was used for all spectra shown.

Let us now turn our attention to the scattering parameters η(λ) and ζ(λ), which are computed for each particle and shown together with σ(λ) in Figs. S9-S12 (bottom panels). It is instructive to compare these numerical simulations - which take into account the complexity of the particle shape - with our previous analytical calculations performed in the electrostatic approximation - see section S.V. of Ref. [S6].
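The structure of the absorption retrieval described above can be written schematically as below. This is a placeholder reading of the definitions of η and ζ given in the article (detected DF scattering scaled up to the total DF power, converted to BF power via ζ, with the uncollected portion subtracted from the extinction), not the literal Eq. (S2) of Ref. [S6], which should be consulted for the exact prefactors.

```python
def absorption_schematic(sigma_ext, sigma_sca_df_detected,
                         eta_bf, eta_df, zeta):
    """Schematic absorption retrieval (placeholder algebra, not Eq. (S2)):
    subtract from the measured extinction the BF-scattered power that
    misses the objective."""
    sigma_sca_df_total = sigma_sca_df_detected / eta_df  # undo collection
    sigma_sca_bf_total = zeta * sigma_sca_df_total       # BF-to-DF ratio
    return sigma_ext - (1.0 - eta_bf) * sigma_sca_bf_total
```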
Let us now turn our attention to the scattering parameters, which are computed for each particle and correlated to σ(λ) in Figs. S9-S12 (bottom panels). It is instructive to compare these numerical simulations, which take into account the complexity of the particle shape, to our previous analytical calculations performed in the electrostatic approximation (see section S.V. of Ref. [S6]).

Let us start with the fraction of the scattering power collected by the objective. Given the excitation and detection NA ranges of the experiment and assuming a homogeneous immersion medium, one finds equal BF and DF values of 0.148 for a polarisability perpendicular to the optical axis, and 0.136 (BF) and 0.111 (DF) for an isotropic polarisability. The simulated collection fraction compares well with this estimate, being closer to the spherical value for most particles. Note that this fraction is determined by the orientations of the electric dipoles excited in the particle: the larger the angle formed with the optical axis of the objective, the lower the fraction of emission collected, resulting in the largest value for a dipole lying flat on the substrate. Looking at the simulations below, these considerations also explain why the BF value exceeds the DF one (as more inclined dipoles are excited in DF) and why for most particles the collected fraction decreases for λ < 450 nm, where multipolar resonances are excited.

As for the BF-to-DF ratio of the total scattered power, no straightforward comparison to the values in the dipole limit (1.22 for an in-plane polarisability and 0.897 for an isotropic polarisability) can be made. This is because in this set of measurements the NA range of the DF illumination reaches the edge of the condenser aperture (1.34 NA), where the illumination intensity is significantly reduced (see Fig. S7 of Ref. [S7]). The numerical modelling used in this work takes into account the angular dependence of the illumination intensity based on our experimental characterization of the performance of the condenser (see Ref. [S7]). Conversely, the analytical calculations assume a homogeneous filling of the back aperture of the condenser, thereby overestimating the DF illumination and scattered power, which leads to a lower ratio. Along the same line of reasoning, the dip of this ratio for λ < 450 nm implies that the multipolar modes in that region are comparatively better excited by the tilted illumination of DF.

As described in the main text, we investigated the dependence of our results on the geometry reconstruction method, and evaluated three procedures (R1 to R3). The different reconstruction methods were applied to a selection of particles, and the resulting volumes and volume-to-surface ratios are given in Table S1, along with a summary of the parameters used for each algorithm. Generally we find that the volumes vary by some 5% to 10%, with R1 resulting in the lower and R2 in the higher volumes. The volume-to-surface ratios also vary by some 5% to 10%, but there is no clear trend across the particles for the different reconstructions.

While with the R1 procedure the resulting mesh can be directly imported and re-meshed by C , R2 and R3 sizeably increase the number of surface elements defining the particle, which could not be imported, processed, and meshed reliably with C . We therefore reduced the number of surface elements using the free software MeshLab and a procedure illustrated in Fig. S13 for two exemplary particles of different appearance. First the number of faces was reduced to 1000 using the option 'quadratic edge collapse', then the result was turned into a pure triangular mesh, and finally the errors in the geometry (such as holes or crossing mesh elements) were repaired with the option 'remove non manifold edges by removing faces'. The resulting mesh was then imported into C .
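The face-reduction procedure just described can be scripted with pymeshlab, the Python bindings of MeshLab; a minimal sketch follows. The filter names correspond to recent pymeshlab releases and should be treated as assumptions, as should the file names.

```python
import pymeshlab  # Python bindings of MeshLab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("particle_R2.ply")          # placeholder file name
# 'quadratic edge collapse' decimation down to 1000 faces; the output of
# this filter is already a pure triangular mesh
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=1000)
# repair holes / crossing elements ('remove non manifold edges' in the GUI)
ms.meshing_repair_non_manifold_edges()
ms.save_current_mesh("particle_R2_reduced.ply")
```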
Table S1. Volume and volume-to-surface ratio of particles reconstructed with different procedures. The parameters identifying the reconstruction procedures R1, R2, R3 are given with the following abbreviations: It: iterations, N: factor of downsampling, Sm: smoothing, Rm: remeshing.

B. Cross-section spectra

Fig. S14 shows the scattering cross-section spectra of the six particles studied in the article obtained using the different TEM tomography reconstruction procedures R1 to R3. There is no clear common trend in the effect of the different reconstructions. We typically see variations of some 10 to 20 nm in peak position and in peak splitting, and of some 5% to 20% in peak amplitude.

For particles #3 and #6, C could process the R2 geometries without remeshing; we can therefore use these two particles to investigate the effects of the remeshing. In Fig. S15 we show the simulated cross-section spectra for these two particles. We find a small blue-shift of approximately 5 nm due to the remeshing for both particles, and an increase in amplitude below 1%. The effects should be even smaller for larger particles, which are less sensitive to the small surface changes caused by the meshing. All other simulations shown in the supplement or in the article use the outlined remeshing steps for R2 and R3, and the 'rm' label is dropped. We provide as online material animations of the reconstructed geometries of all reported NPs.

S.VII. SURFACE AND INTERFACE DRUDE DAMPING

In this section we investigate the effect of increasing the Drude damping in the Ag permittivity, to model the increased surface or defect scattering in the particles compared to the permittivity datasets [S8] measured by ellipsometry on thin films. Such an increase is expected due to the particle size being smaller than the crystallite sizes in the measured films, and to the additional crystal defects which can be created in the colloidal growth [S2, S9]. We fit the dataset [S8] with the Drude model ε(ω, γ) = ε_∞ − ω_p²/(ω² + iγω) in the range 400 to 700 nm. The fit parameters are ε_∞ = 3.8575, ω_p = 1.3666 × 10^16 s⁻¹ and γ = 7.7849 × 10^13 s⁻¹. The resulting analytical permittivity is shown in Fig. S16 along with the fitted experimental dataset ε_exp(ω) of Ref. [S8]. We note from panel b that the imaginary part of the permittivity has some deviations from the Drude model in this range, and it has been shown [S10] that additional poles are needed for an accurate fit. However, since we are here only interested in modelling the change of the permittivity with increasing Drude damping, the simpler model suffices. Following [S11], we then add a damping Δγ = g·v_F/R, where v_F = 1.36 × 10^6 m/s is the Fermi velocity and the equivalent radius R is calculated from the volume V of a spherical particle, R = (3V/4π)^(1/3), and replace γ with γ_g = γ + Δγ in the modified permittivity ε(ω, γ_g) = ε_∞ − ω_p²/(ω² + iγ_g ω). We vary the damping parameter g, and the resulting change to the imaginary part of the permittivity ε(ω, γ_g) is shown in Fig. S16b,c. For particle #20 the permittivity changes less compared to #3 due to its larger R. The real part of the permittivity is changed by less than 0.1%, so this change is neglected here. We take the change of the Drude permittivity Δε(ω, g) = ε(ω, γ_g) − ε(ω, γ), and add it to the measured data ε_exp(ω), resulting in the modified permittivity ε_m(ω, g) = ε_exp(ω) + Δε(ω, g) used in the simulation.
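A compact numerical version of this permittivity modification is sketched below; the equivalent radius is a placeholder value, and the iγω convention in the Drude denominator is an assumption chosen to give Im ε > 0.

```python
import numpy as np

# Fit values quoted above (angular frequencies in rad/s)
EPS_INF, W_P, GAMMA0 = 3.8575, 1.3666e16, 7.7849e13
V_F = 1.36e6        # Fermi velocity of Ag [m/s]
R_EQ = 33e-9        # placeholder equivalent radius [m]

def eps_drude(w, gamma):
    """Drude permittivity eps(w, gamma) = eps_inf - w_p^2 / (w^2 + i*gamma*w)."""
    return EPS_INF - W_P**2 / (w**2 + 1j * gamma * w)

def eps_modified(w, eps_exp, g):
    """eps_m = eps_exp + [eps(w, gamma0 + g*vF/R) - eps(w, gamma0)]."""
    gamma_g = GAMMA0 + g * V_F / R_EQ
    return eps_exp + eps_drude(w, gamma_g) - eps_drude(w, GAMMA0)

# example: modified permittivity at 550 nm for g = 1; the eps_exp value
# below is only an illustrative number for silver in this range
w550 = 2 * np.pi * 2.998e8 / 550e-9
print(eps_modified(w550, -11.9 + 0.33j, g=1.0))
```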
In the article we show in Fig. 4 the effect of the permittivity change on the scattering cross-section, while Fig. S17 shows the effect on the absorption cross-section. The simulated absorption increases for stronger damping, as expected. The experimental absorption, like already seen for the scattering, is unaffected, because the particle-specific parameters entering the retrieval are only weakly affected by the increased damping (in the electrostatic limit, they are dispersionless and depend only on the particle geometry, not on its material properties).

S.VIII. CROSS-SECTION SPECTRA FOR SULFIDE OR OXIDE TARNISHING

We have already discussed in Sec. S.II that the chemical composition of the tarnish layer on the NP surface is uncertain, although silver sulfide (Ag2S) seems the most likely candidate based on previous reports in the literature. In this section, we investigate a possible different composition of such a layer, namely silver oxide (Ag2O), comparing the simulated spectra to those obtained with an Ag2S layer, which was used in the main text for the NPs #3 and #20 (see Fig. 6). The permittivity spectra used as material properties were taken from Ref. [S12] for Ag2S, as in the main text, and from Ref. [S13] for Ag2O. To compare the effect of the two materials, it is sufficient to evaluate the cross-section spectra for normal-incidence illumination, given that the cross-section spectra quantitatively simulating the measurements, which use a range of illumination directions, are reported for Ag2S tarnish layers in the main text. The layers are modelled as described in Sec. 3.3.

The resulting cross-section spectra are shown in Fig. S18. For the Ag2S tarnish layer, the same thicknesses as in the main text are used, while for the Ag2O tarnish layer the thicknesses are chosen to provide a similar change of the cross-sections as for the Ag2S tarnish layer. For particle #20, the scaling factor for the Ag2S layer is 0.97, yielding a thickness of approximately 1 nm. The layer redshifts the dipolar peak and decreases its amplitude, as discussed in the main text. For the Ag2O layer, a scaling factor of 0.95 was used, yielding a thickness of about 1.6 nm. We find that the Ag2O layer results in a slightly larger amplitude reduction for a given shift. This would slightly increase the deviation from the experiment seen in Fig. 6 for the Ag2S layer. For particle #3, the scaling factor for the Ag2S layer is 0.985, yielding a thickness of approximately 0.3 nm. For the Ag2O layer, a scaling factor of 0.98 was used, yielding a sub-nanometre thickness; here the shift for the oxide layer would slightly decrease the deviation from the experiment. The actual morphology of the tarnish is likely more complex than the thin layer of homogeneous thickness used here for modelling; for example, one could expect a higher reactivity of corners. Therefore, the observed small differences between Ag2S and Ag2O layers are not conclusive. Notably, Ag2O could not be detected in the EDX results (see Sec. S.II) due to the presence of oxygen in the SiO2 support, so that even the Ag2O thickness of 1.6 nm used for particle #20 would not be easily visible in EDX.
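For orientation, the quoted scaling factors and thicknesses are mutually consistent with a simple geometric picture in which the metal core is shrunk by the scaling factor s and the resulting shell of thickness h ≈ (1 − s)·R_eq is filled with tarnish. The equivalent radii below are back-of-envelope values chosen to reproduce the quoted thicknesses, not measured numbers.

```python
# h ~ (1 - s) * R_eq: assumed geometric relation, with illustrative radii [nm]
cases = [("#20, Ag2S", 0.970, 33.0),   # -> ~1.0 nm
         ("#20, Ag2O", 0.950, 33.0),   # -> ~1.6 nm
         ("#3,  Ag2S", 0.985, 20.0)]   # -> ~0.3 nm
for label, s, r_eq_nm in cases:
    print(f"{label}: h ~ {(1.0 - s) * r_eq_nm:.1f} nm")
```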
Fig. 1 Schematic workflow as described in the text. a) Photochemical formation of decahedra using blue LED illumination, monitored via the redshift of the extinction from spherical seeds (dashed line) to decahedra (solid line). b) Deposition of decahedra onto a TEM grid with SiO2 windows, index-matched by anisole immersion, and encapsulated by a glass slide and a coverslip. c,d) Optical micro-spectroscopy in dark-field and bright-field configurations. BFP and FFP indicate, respectively, the back and front focal plane of the objective (obj) and condenser (cond) lens. e) Measured single-decahedra scattering and absorption cross-section spectra in absolute units. f) Correlative HAADF-STEM tomography through recognition of NP patterns as exemplified in c). g) 3D shape reconstruction from tomography. h) Tetrahedral volume mesh used in numerical simulations. i) Calculated spatial distribution of the Joule (resistive) heating. j) Calculated far-field distribution of the scattering intensity. k) Numerical simulations of cross-section spectra. Panels e-k refer to the exemplary particle #20.

Fig. 2 Measured (dashed lines) and simulated (solid lines) scattering (blue) and absorption (red) cross-section spectra of 6 selected particles as labelled, along with HAADF-STEM tomography surface views from the top and side. The scale bar is 40 nm. For particle #19, we show additionally the simulated scattering cross-section for normal incidence for linear polarizations along (orange line) and across (green line) the long axis of the particle, as well as their average (black line).

Fig. 3 Comparison of measured and simulated properties of the dipole peak in the scattering cross-section spectra for all investigated particles. a) Position of the peak λ^D_sca. For particles with multiple peaks, such as #19 or #3, the longer-wavelength peak is shown. The symbols are indicative of the particle shape (see insets in Figure 2 and ESI section S.V): #6 & #7 are tetrahedra, #8 & #10 are half spheres, #19 is a bitetrahedron, #3 is not well defined, the rest are decahedra. The inset shows simulated versus measured positions. b) Amplitude of the peak. The inset shows simulated versus measured amplitudes. c) Difference between the simulated and experimental peak position. d) Ratio between simulated and experimental peak amplitude. e) Peak amplitude ratio versus position difference.

Fig. 4 Simulated and measured scattering cross-section spectra for particle #20 (a) and #3 (b) using different tomography reconstruction settings R1 to R3 as labelled (see text).

Fig. 5 Same as Figure 4, but for increasing surface scattering g·v_F/R in the Drude damping of the Ag permittivity.

Fig. 6 Same as Figure 4, but for the addition of a silver sulfide (Ag2S) tarnish layer of thickness h, and additionally showing the absorption cross-section.

Figure S5. EDX elemental analysis. Top left: HAADF-STEM image of Ag particles as in Fig. S3. Two areas are indicated, whose EDX spectra are shown on the bottom. Top right: EDX map (smoothed with 3 pixels width) of the Ag peak net counts (red, 2.9-3.1 keV) and the S peak net counts (green, 2.2-2.4 keV).
Figure S7. Fraction of (a) extinction and (b) scattering detected in imaging as a function of the integration radius, normalized to its saturation value at large radii, indicated by the horizontal guideline at 1. The vertical dashed lines correspond to the equivalent radius of the imaged sample region in our microspectroscopy experiments.

Figure S13. Comparison of three geometry reconstruction procedures (R1 to R3) for two different particles (#3 and #6) viewed from the top. For R1 the 3D reconstruction is directly imported into C and then meshed, while for R2 and R3 an intermediate step is introduced to reduce the number of faces defining the geometry.

Figure S15. Simulated cross-section spectra of particles constructed via the R2 algorithm, with (labeled rm.) and without remeshing before importing into C .

Figure S16. Fit of the experimental permittivity dataset of [S8] with the Drude model and additional damping. a) Real part, data (circles) and model (line). b) Imaginary part, data (circles) and model (lines) for g = 0 (black), as well as with the added damping using g = 0.5 (red), 1.0 (green), and 1.5 (blue), for particle #3. c) Same as b) but for particle #20.

Figure S18. Simulated cross-section spectra of particle #20 (top) and #3 (bottom) for a silver sulfide (red) and a silver oxide (blue) tarnish layer of thickness h as given, covering the particle, for normal-incidence illumination.
15,799
2022-04-23T00:00:00.000
[ "Physics", "Materials Science" ]
(Dys)Zphilia or a custodial breaking Higgs at the LHC Electroweak precision measurements established that custodial symmetry is preserved to a good accuracy in the gauge sector after electroweak symmetry breaking. However, recent LHC results might be interpreted as pointing towards Higgs couplings that do not respect such symmetry. Motivated by this possibility, we reconsider the presence of an explicitly custodial breaking coupling in a generic Higgs parameterization. After briefly commenting on the large UV sensitivity of the T parameter to such a coupling, we perform a fit to results of Higgs searches at LHC and Tevatron, and find that the apparent enhancement of the ZZ channel with respect to WW can be accommodated. Two degenerate best-fit points are present, which we label `Zphilic' and `dysZphilic' depending on the sign of the hZZ coupling. Finally we highlight some measurements at future linear colliders that may remove such degeneracy. Introduction The main goal of the LHC is to shed light on the mechanism of ElectroWeak Symmetry Breaking (EWSB). The recent excesses observed in searches for the Higgs boson at ATLAS and CMS, supplemented by some hints from the Tevatron, can be seen as the starting point in this direction. Even though they are far from being conclusive, the experimental results point to a resonance with mass around 125 GeV and, broadly speaking, Higgs-like behavior. If such hints really correspond to the first manifestation of a new degree of freedom, then the measurement and study of its properties will be crucial to unveil EWSB. This is even more true in the absence of any direct evidence of physics beyond the Standard Model (SM) so far. The EWSB sector has been indirectly probed by the LEP precision tests, which represent a primary source of information: one of the most important outcomes of precision measurements is that the gauge sector after EWSB must approximately respect an SU (2) c custodial symmetry. Such requirement is satisfied by the SM description of EWSB. On the other hand, at the moment experimental excesses at the LHC may be interpreted as pointing to non-SM Higgs couplings, especially in the gauge sector. In fact, not only is there a trend of underproduction in the W W channel and of overproduction in the γγ channel (for the latter, the excess is stronger in the vector boson fusion subchannel), but an enhancement of the ZZ signal with respect to W W is observed, whereas custodial symmetry implies that the two have the same strength (when normalized to their SM values). Clearly such hints could be just due to statistical fluctuations, or to issues with the modeling of complex backgrounds (for example, in the h → W W channel). Nevertheless, it is interesting to ask what would be the implications if the current pattern of excesses were to be confirmed with more data. In this spirit, we relax the assumption of custodial invariance in the couplings of the Higgs resonance and perform a fit to the results of Higgs searches by employing a parameterization where explicit custodial breaking is allowed. Our model-independent approach is similar in spirit to other recent analyses of the Higgs experimental results, see Refs. [1][2][3][4][5] 1 . We also analyze the effects on the electroweak parameter T , finding that if the couplings hW W and hZZ do not respect custodial symmetry, then T receives quadratically divergent corrections. 
In a concrete model, new degrees of freedom below the cutoff must therefore conspire to make the total contribution to T compatible with electroweak precision tests (EWPT). Not surprisingly, the fit to the results of Higgs searches points to a Higgs coupling more strongly to ZZ than to W W . Two exactly degenerate best-fit points appear, which we label 'Zphilic' and 'dysZphilic' depending on the sign of the hZZ coupling. Such a sign, although unobservable in current Higgs searches, is physical in processes involving interference. We therefore discuss some future measurements at colliders that may be used to resolve the degeneracy. We remark that many proposals for physics beyond the SM exist in the literature where the custodial symmetry is not respected: for example, models where the Higgs sector is extended with scalar triplets that get a non-vanishing vacuum expectation value, generic two Higgs doublet models, as well as theories where the Higgs arises as the pseudo-Goldstone boson of a coset G/H where H does not contain SO(4) ∼ SU (2) × SU (2), such as SU (3)/(SU (2) × U (1)), fall in this class. Our paper is structured as follows: we start by introducing our parameterization and discussing the fit to LHC data in Section 2, where we also briefly comment on the effect of explicit custodial breaking on the electroweak T parameter. In the light of our results, we discuss in Section 3 some implications for future precision measurements of Higgs properties. Finally, we conclude in Section 4.

Lagrangian, T parameter and fit to LHC data

We employ the usual parameterization of interactions of SM fields with a generic Higgs boson by considering an EW chiral Lagrangian coupled to a scalar resonance h. The Goldstone bosons corresponding to the longitudinal polarizations of the W and Z are introduced through the chiral field Σ, with v ≈ 246 GeV. The Lagrangian mass terms are then those of Eq. (2.2). We omit for simplicity lepton masses, which could be introduced in the same way as for quarks. Notice that this Lagrangian is approximately invariant under a global SU (2) L × SU (2) R , under which Σ transforms as Σ → U_L Σ U_R†, with U_{L,R} ∈ SU (2)_{L,R}. This invariance is broken in the vacuum to the diagonal SU (2) c (the 'custodial symmetry'), which guarantees that the ρ parameter, defined as ρ ≡ m_W²/(m_Z² cos²θ_W), satisfies the tree-level relation ρ = 1, as experimentally verified to good accuracy. In principle the Lagrangian (2.2) could contain an additional term v² [Tr(Σ† D_µ Σ σ³/2)]² (2.6) that is gauge invariant, but explicitly breaks SU (2) L × SU (2) R and therefore the custodial symmetry. To prevent large deviations from ρ = 1 and thus tensions with precision tests, its coefficient has to be very small, O(10⁻³), so the term (2.6) is usually neglected. As is well known, the description (2.2) leads to amplitudes for longitudinal gauge boson scattering that grow with energy, and as a consequence to a loss of perturbative unitarity at a scale ∼ 4πv ∼ 3 TeV. To moderate the growth of the amplitudes and therefore postpone the perturbative unitarity breakdown, a scalar resonance transforming as a singlet under the custodial symmetry can be introduced. We can thus add to Eq. (2.2) all possible interactions with the scalar resonance up to second order, obtaining the Lagrangian of Eq. (2.7) [16] (see also Refs. [17,18] for an introduction), where a, b, c, c 2 are free parameters (the SM is retrieved by choosing a = b = c = 1, c 2 = 0 and vanishing terms of higher order in h). We do not write explicitly the scalar self-interactions contained in V (h), as they will not be relevant in our discussion.
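The display equation of Eq. (2.7) appears to have been lost in transcription. Based on the standard parameterization of Refs. [16-18], it presumably takes the form below, with the fermion couplings written schematically for one quark doublet; this is a hedged reconstruction, not the verbatim original.

```latex
\mathcal{L}_h = \frac{1}{2}\left(\partial_\mu h\right)^2 - V(h)
 + \frac{v^2}{4}\,\mathrm{Tr}\!\left(D_\mu \Sigma^\dagger D^\mu \Sigma\right)
   \left(1 + 2a\,\frac{h}{v} + b\,\frac{h^2}{v^2}\right)
 - \frac{v}{\sqrt{2}}\,\bar{q}_L\,\Sigma
   \left(1 + c\,\frac{h}{v} + c_2\,\frac{h^2}{v^2}\right)\lambda_q\, q_R
 + \mathrm{h.c.}
```

The SM point a = b = c = 1, c 2 = 0 then reproduces the couplings quoted in the text.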
Since we are interested in custodial breaking effects, we add to the Lagrangian the custodial-breaking terms of Eq. (2.8), where t cb and a cb are free parameters and the overall normalization has been chosen for later convenience. As we already mentioned, t cb contributes to T at tree level, T̂ = −t cb . On the other hand, the consequences of the coupling a cb can be seen by going to the unitary gauge, Σ = 1: the interactions of the Higgs with vector bosons are modified accordingly, and clearly the ratio between the two couplings differs from the usual SM value g hW W /g hZZ = cos² θ W . In a SILH Lagrangian [19], where the SM gauge symmetries are linearly realized in the strong sector, we can consider the relevant operators (including O T ), where H is the (composite) Higgs doublet emerging as a pseudo-Goldstone boson from the strong sector. We find that such operators generate a cb ; however, in addition a contribution t cb = −c T (v²/f²) is generated, or equivalently a correction T̂ = c T (v²/f²). Therefore in this case the coefficients t cb and a cb in Eq. (2.8) are of the same order. We recall that c H is in general positive definite [20], implying the generic expectation a < 1 in composite Higgs models. However, in the following we will not restrict ourselves to this range. For a discussion of how a > 1 could arise, see Ref. [21].

T parameter

It is well known that when a ≠ 1 in Eq. (2.7), a logarithmically divergent contribution to T (as well as to S) arises. Such a contribution is due to the diagrams in Fig. 1(a), and is computable within the low-energy theory, see Ref. [22]. However, in the present case we also need to consider the effects of explicit custodial breaking contained in Eq. (2.8). Even if we set t cb = 0, a quadratic UV sensitivity appears in T , due to the diagrams involving the Higgs shown in Fig. 1(b). This quadratic divergence is proportional to Λ², where Λ is the cutoff: setting Λ = 4πv, we obtain a contribution of tree-level size. In a concrete model, new degrees of freedom below the cutoff will need to conspire to make the total contribution to T compatible with EW precision data. This will require in general a certain amount of tuning, which we quantify in Fig. 2 by showing isocontours of the required cancellation; here the reference tree-level contribution is the one that arises when the full gauge invariant operator O T is considered. We see that the level of tuning is roughly similar in the two cases. A full computation of T requires choosing a complete model, see Refs. [24][25][26] and references therein for examples.

Recent LHC results

In this section we will perform a fit to the results of experimental searches for the Higgs at LHC and at Tevatron. We are going to use the full set of data released in March by ATLAS [27,28], CMS [29,30] and Tevatron [31], as reported in Fig. 3. Experimental results are given in terms of the signal strength µ, the ratio of the measured rate to the SM expectation. In the presence of a signal, a best fit for this quantity is given along with errors. Several comments are in order about the dependence of µ on (a, a cb , c) for the different channels:

• The pp → hjj → γγjj sample at CMS is assumed to be produced through Vector Boson Fusion (VBF) with a small contamination coming from gluon fusion [32], so that µ scales as in Eq. (2.16), where 0.93 is the ratio between ZZ and W W fusion production in the SM (at LHC, 7 TeV) [33], σ gg is the gluon fusion production cross section, and σ V BF /σ gg ≈ 0.079.

• We include the ATLAS results from fermiophobic (FP) Higgs searches in pp → hX → γγX. Following Ref.
[4] we take the production to be dominated by VBF with a sizable contamination from gluon fusion. Taking into account the possibility of V being either a W or a Z, the associated-production piece involves R V h , the ratio of Zh to W h production in the SM, equal to 0.55 at LHC and to 0.61 at Tevatron.

• All the other channels are assumed to come from inclusive production. In this case for LHC the signal strength involves σ V h /σ gg ≈ 0.058, and the last approximate equality holds because the main production mechanism is gluon fusion. We have checked that considering inclusive W W and ZZ production as coming only from gluon fusion and VBF, as done in Ref. [3], does not significantly affect our results. An equation completely analogous to (2.19) holds for inclusive production at Tevatron.

• The partial width for h → γγ, which arises both from W and from heavy fermion (top, bottom and tau) loops, gets rescaled accordingly (as a function of a and c) for m h = 125 GeV.

After computing production cross sections and BRs we construct a χ² function, where μ̂ i is the experimental central value and δµ i is the total error. The latter is obtained by summing in quadrature the experimental error (symmetrized by means of an average in quadrature) to the theoretical error. The theoretical error comes from the uncertainties on cross sections, and is relevant only when two or more production mechanisms are summed over. We simply propagate the errors, taking their values for the single production mechanisms from Ref. [34]. Since we are interested in the gauge sector, and in particular in custodial breaking effects, we treat c as a nuisance parameter. Thus a χ² restricted to (a, a cb ) can be computed by marginalizing over c, and it can be used to perform a minimum-χ² procedure.

Figure 3: Summary table of the experimental results that we included in our analysis. The signal strengths for all CMS and Tevatron channels, as well as for the ATLAS W W and γγ FP channels, are taken at m h = 125 GeV. On the other hand, for the ATLAS ZZ and γγ channels we use the peak signal strength. We report the leading scaling with the parameters (a, a cb , c) both for the production cross section and the partial decay width in the various channels. The predictions of the best fit points are also shown in orange.

The result of the fit is summarized in Fig. 4 left, where we also show for completeness the results without marginalization (fixing c = 1). The best fit points are respectively (a, a cb ) = (0.93, 0.25) and (0.93, −2.11), both corresponding to χ² = 9.2 with 13 d.o.f. As expected, the best fit points are 'Zphilic' (or equivalently, W phobic): µ ZZ /µ W W = (cos² θ W g hZZ /g hW W )² = (a + a cb )²/a² ≈ 1.6. Notice that all the observables involved in Higgs searches are insensitive to the sign of a + a cb (as such a combination always appears squared), implying the symmetry of the contours under (a, a cb ) → (a, −(2a + a cb )). In the best-fit region where a + a cb < 0, the Higgs is actually 'dysZphilic', since the sign of the hZZ coupling is opposite with respect to the standard case. We will discuss in Section 3 some future measurements that may lift the degeneracy between a Zphilic and a dysZphilic Higgs. As we have already mentioned in Section 2.1, new light degrees of freedom are required in order to make a sizeable a cb compatible with EWPT. In the absence of a symmetry a significant tuning is generically needed, as shown in the right panel of Fig. 4. In principle, such new light degrees of freedom could affect the Higgs couplings, and therefore alter the interpretation of the results of Higgs searches.
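To make the procedure concrete, here is a toy numerical sketch. The channel list, central values, errors and scaling weights are all illustrative placeholders, not the data of Fig. 3; only the structure (a χ² over signal strengths, minimized over the nuisance parameter c, i.e. profiled, which may differ in detail from the marginalization above) mirrors the fit just described.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# (mu_hat, delta_mu, decay): placeholder "measurements", NOT the data of Fig. 3
CHANNELS = [(0.8, 0.3, "WW"), (1.3, 0.4, "ZZ"), (1.6, 0.5, "gaga")]

def mu_model(a, acb, c, decay):
    """Toy signal strength for inclusive gluon-fusion production."""
    partial = {"WW": a**2,
               "ZZ": (a + acb)**2,
               # illustrative W-loop / top-loop interference weights
               "gaga": (1.28 * a - 0.28 * c)**2}[decay]
    total = 0.75 * c**2 + 0.11 * a**2 + 0.14 * (a + acb)**2  # toy total width
    return c**2 * partial / total

def chi2(a, acb, c):
    return sum(((mu_model(a, acb, c, d) - m) / dm) ** 2 for m, dm, d in CHANNELS)

def chi2_restricted(a, acb):
    """chi^2 restricted to (a, a_cb), minimized over the nuisance c."""
    return minimize_scalar(lambda c: chi2(a, acb, c),
                           bounds=(0.1, 2.0), method="bounded").fun

print(chi2_restricted(0.93, 0.25))
```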
An obvious consequence of a cb ≠ 0 is that the ratio µ ZZ /µ W W differs from unity. This is shown in Fig. 5, where we plot for each value of a the range of µ ZZ /µ W W obtained by varying a cb within the 68% CL region of the LHC fit (colored region). We see that within the LHC preferred region the wide range 0.3 ≲ µ ZZ /µ W W ≲ 3.5 is obtained, with the possibility of a severe Zphilia (although Zphobia cannot be totally excluded at the moment). Another channel that can be effectively enhanced is γγ, due to both c and a cb . For example, if µ γγjj /µ ZZ is considered, dramatic effects are possible even within the LHC 68% C.L. region, as can be seen in Fig. 5.

Signal strength ratios at the LHC

We have to stress that the (a, c) and (a, a cb , c) parameterizations are different, and in principle it is possible to distinguish between them. The best way is to look at ratios between well-measured µ i , as most of the QCD production uncertainties are thus cancelled (especially if the production channel is the same), as well as the dependence on the total width. See Refs. [3,35] for a discussion of how to break degeneracies in similar fits by using ratios of signal strengths. To show how it can be possible to distinguish between the different cases, we choose the ratios (µ γγ /µ ZZ , µ bb /µ ZZ ): in Fig. 6 we show isocurves of such ratios in the (a, c) and (a, a cb ) planes respectively, superimposing them on the LHC best-fit regions. To simplify the comparison, in the right panel of Fig. 6 we have set c = 1. We see that in the (a, c) case the range allowed for the ratios is significantly smaller than it is in the custodial breaking case.

Future implications

We are left with the issue of determining the sign of a cb (or of a + a cb if you prefer). Not an easy quest, as the sign is physically relevant only in the presence of interference. One readily available choice would be to look again at precision tests, in this case corrections to the Zbb vertex. However, the ratio between the main 1-loop Higgs contributions and the one of interest for us goes as m²_t/m²_b, and so we expect the latter to be negligible. Thus we have to turn our attention to other, not yet measured, processes. We are going to briefly discuss four possible experimental signatures that are, or can in principle be, sensitive to the sign of the hZZ coupling.

Figure 6: Isocurves of µ γγ /µ ZZ (solid) and of µ bb /µ ZZ (dashed) in the (a, c) plane (left panel) and in the (a, a cb ) plane (right panel). In both plots the LHC best-fit regions are also shown; in the right panel, c = 1 has been set to facilitate the comparison with the custodial-preserving case.

Before moving to a discussion of the single channels, a few comments are in order. We are interested in processes where diagrams both with and without the hZZ vertex interfere, and we need such interference to be non-negligible in order to distinguish between the two signs. Let us stress that the separation has to be bigger than both experimental and theoretical uncertainties. Concerning the latter, a precise knowledge of the absolute value of the coupling constants (a and a + a cb in particular) is required. Thus we are going to focus on possible scenarios at e+e− Linear Colliders (LC), for which it is reasonable to assume a measurement of g hZZ and g hW W at the level of ∼ 1%, corresponding to |δa|, |δ(a + a cb )| ∼ 1% (3.1). Such precision is expected both at ILC [36] and CLIC [37] with reference values m h = 120 GeV, √s = 500 GeV and with 500 fb⁻¹ of integrated luminosity.
In the following we fix c = 1 in order to highlight the main points under study.

h → ZZ decay width

The first channel we investigate is the width of h → ZZ → 4l. Here the interference occurs between tree level and higher orders, the former being sensitive to the sign flip a + a cb → −(a + a cb ). On the contrary we assume, in order to maximize the separation, that most of the radiative corrections arise from loops not directly involving the hZZ vertex (see the diagrams in Fig. 7). In this approximation the two cases a + a cb ≷ 0 have a different relative sign between LO and NLO. Thus we can write the width in the two cases as (the superscript corresponds to the sign of a + a cb ) Γ±_ZZ ≈ Γ0_ZZ (1 ± δ), with δ ≈ 1% for SM couplings [38]. Assuming departures from the leading approximation a + a cb = ±1 to have negligible effects, we quantify the relative separation with ∆ ≈ 2δ ≈ 2%. It is clear that a very high precision is required to resolve the two cases. In fact, even considering perfect knowledge of the coupling constants, the experimental uncertainties should be at least of the same size as, or smaller than, ∆. We conclude that the measurement under study is not realistic.

htt associated production

We now focus on a case where the interference arises between different LO contributions. In Higgs boson associated production with tops (heavy fermions in general) the process is essentially e+e− → Z → tt̄ with a scalar emitted either by the Z or by one of the tops (as shown in Fig. 8). We can write the total cross section for the two cases a + a cb = ±1 as σ± = σ t + σ Z ± σ int , where the index refers to the particle the Higgs boson is emitted from. We have σ int /(σ t + σ Z ) ≈ 1 − 4%, leading to a separation ∆ that needs to be compared to the experimental resolution. It has been shown [39] that from e+e− → tt̄h the coupling g tth could be measured up to 6% precision, which directly translates into a precision of around 10 − 12% on the cross section, at least 3 or 4 times larger than ∆. So even this case seems unlikely to be able to resolve the different signs.

Zh associated production

The third channel we examine is the Higgs-strahlung process e+e− → Zh, see Fig. 9. As in the first case above, we are interested in the change in sign of NLO corrections with respect to the tree-level amplitude. Following detailed analyses present in the literature [40,41] we can divide the main electroweak corrections into three different terms, as follows:

• Initial State Radiation (δ ISR ): its amplitude clearly has the same sign as the tree-level one;

• Fermionic contributions (δ F ): they are mainly due to self-energy corrections to the Z propagator. Thus, in first approximation, we expect them to have the same sign as the LO amplitude;

• Bosonic contributions (δ B ): they are due to box diagrams usually involving W bosons. It is reasonable to assume that most of these would not involve the hZZ vertex, and so to assume that δ B does not present a sign flip for a dysZphilic Higgs.

It is then possible to write, for a + a cb ≷ 0, σ± ≈ σ0 (1 + δ ISR + δ F ± δ B ) as a rough estimate of the effect. Referring to a center-of-mass energy of 1 TeV, the expected magnitudes for such corrections are δ ISR ≈ 20%, δ F ≈ 10% and δ B ≈ −20%. Thus σ+ ≈ 1.1 σ0 , σ− ≈ 1.5 σ0 , and we are able to quantify the separation ∆ between the two cases for the simple choices a + a cb = ±1. A comparison with the expected experimental sensitivity [43], which is of ∼ 3 − 5%, shows that this measurement would indeed be able to resolve the sign.
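The numbers quoted for the Higgs-strahlung channel can be checked with one line of arithmetic; the definition of ∆ below (a symmetrized relative difference) is our assumption, since the original expression is not reproduced above.

```python
# sigma_pm = sigma_0 * (1 + d_ISR + d_F +/- d_B), with the 1 TeV values above
d_isr, d_f, d_b = 0.20, 0.10, -0.20
sigma_plus = 1 + d_isr + d_f + d_b      # a + a_cb = +1  ->  1.1 * sigma_0
sigma_minus = 1 + d_isr + d_f - d_b     # a + a_cb = -1  ->  1.5 * sigma_0
delta = 2 * (sigma_minus - sigma_plus) / (sigma_minus + sigma_plus)
print(f"sigma+ = {sigma_plus:.1f} sigma0, sigma- = {sigma_minus:.1f} sigma0, "
      f"Delta ~ {delta:.0%}")   # ~31%, well above the 3-5% resolution
```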
Zhh production

Another process where interference is present at leading order is e+e− → Z → Zhh. In this case there are three distinct contributions: the diagram with two subsequent Higgs-strahlungs, the diagram involving the hhZZ vertex, and a third one involving the Higgs self-coupling (see Fig. 10), the last being the only one that changes sign under (a + a cb ) → −(a + a cb ). The cross section for a + a cb = ±1 can then be written accordingly, and for √s = 500 GeV (which is the best choice for the process e+e− → Zhh) we find σ+ ≈ 0.28 fb, σ− ≈ 0.09 fb. The resulting separation needs to be compared to the experimental resolution. For an integrated luminosity of 2000 fb⁻¹ and SM couplings, this can be as low as 10% [44]. In the case of a flipped hZZ coupling, by taking into account the reduced statistics we estimate the resolution to be still less than 20%, i.e. more than two times smaller than ∆. So this case is promising. However, we warn the reader that in the previous discussion we have made stronger assumptions than for the other precision measurements we presented. First, when setting the Higgs self-coupling λ hhh to its SM value, we assumed it to be known to a good accuracy, even though the measurement of such a coupling at the LHC would be a difficult task, and the best channel to measure the trilinear at a LC with moderate √s would be e+e− → Zhh itself (an independent measurement of λ hhh could come from the W W fusion process e+e− → ννhh at √s ∼ 1 TeV). Second, we assumed the hhZZ coupling to have its SM value, although its measurement is challenging even at a LC, and despite the fact that in a theory with (a, a cb ) ≠ (1, 0) we should in general expect deviations from the standard values also in the couplings hhZZ and hhW W . As a consequence, one should take the estimate in Eq. (3.8) with some caution.

Conclusions

Motivated by recent results of experimental searches for the Higgs boson, we relaxed the assumption of custodial invariance in its couplings to the W and the Z. We described custodial breaking through an additional parameter a cb and we showed how it can accommodate the current pattern of observed excesses, which mildly point to a Zphilic (or W phobic) Higgs. Should such hints be confirmed by more data, they would be evidence for custodial breaking in Higgs couplings. Such breaking implies that the electroweak T parameter receives quadratically divergent corrections. New light degrees of freedom would then be expected to play a role in mimicking the approximate custodial invariance observed in EW data, generically at the price of a sizable tuning. We also noticed that Higgs searches are insensitive to the sign of the hZZ coupling, that is to say they do not allow one to tell a Zphilic Higgs from its dysZphilic counterpart. However, the sign of such a coupling is physical, and processes in which interference is present can remove the degeneracy. We presented some measurements at future linear colliders that could be used for this purpose.
6,181
2012-04-30T00:00:00.000
[ "Physics" ]
Hot perturbative QCD in a very strong magnetic background We compute the pressure, chiral condensate and strange quark number susceptibility from first principles within perturbative QCD at finite temperature and very high magnetic fields up to next-to-leading order and physical quark masses. The region of validity for our framework is given by $m_s \ll T \ll \sqrt{eB}$, where $m_s$ is the strange quark mass, $e$ is the fundamental electric charge, $T$ is the temperature, and $B$ is the magnetic field strength. We study the convergence of the perturbative series for the pressure for different choices of renormalization scale in the running coupling, $\alpha_s (T,B)$. Our results for the chiral condensate and strange quark number susceptibility can be directly compared to recent lattice QCD data away from the chiral transition. Even though current lattice results do not overlap with the region of validity above, perturbative results seem to be in the same ballpark. I. INTRODUCTION The understanding of the phase structure of hadronic matter under the influence of different control parameters, such as temperature, baryon chemical potential and electromagnetic fields, must ultimately be derived from in-medium quantum chromodynamics (QCD), its fundamental theory. The case of magnetic QCD, where one of the control parameters is an external magnetic field, is phenomenologically relevant in different scenarios. In the astrophysics of compact stars, magnetars can exhibit very large fields, of the order of 10 15 Gauss [1][2][3], which corresponds to ∼ 20 MeV 2 . In non-central, high-energy heavy ion collisions one can reach much larger values, ∼ 10 19 Gauss ∼ 10m 2 π [4][5][6][7][8][9][10]. In the early universe, primordial magnetic fields could be a few orders of magnitude higher [11][12][13]. From the theoretical perspective, the case of thermal magnetic QCD, where the control parameters are the temperature T and external magnetic field B, is particularly attractive. Since it does not suffer from the Sign Problem [14], it can be tackled by Monte Carlo simulations, and lattice QCD has produced a variety of relevant results in the last decade, including a great portion of the phase diagram [15][16][17][18][19][20][21][22][23][24]. It can also be addressed analytically within limits of the fundamental theory: in perturbation theory [25][26][27], for large values of T and B; in hard thermal loop perturbation theory [28][29][30][31][32]; for a large number of colors N c [33]; in the low-energy sector, via chiral perturbation theory [34][35][36]. Of course, hot hadronic matter in the presence of external magnetic fields can also be described within effective models. For a detailed discussion and list of references, see Refs. [37][38][39][40]. In this paper we investigate the behavior of the pressure, chiral condensate and strange quark number susceptibility from first principles within perturbative QCD at finite temperature and very high magnetic fields up to two-loop (2L) for 3 flavors with physical quark masses. For the pressure we show that the exchange contribution increases with the magnetic field, but nevertheless corresponds to a correction of less than 20% at intermediate temperatures (T ∼ 300 MeV) even for extremely large magnetic fields. 
In order to compare our perturbative results to the benchmark provided by lattice QCD simulations, we need very large magnetic fields on the lattice, so that the domain of validity of our calculation, given by m s ≪ T ≪ √eB, where m s is the strange quark mass and e is the fundamental electric charge, can be reached. A few years ago, Endrödi [23], in a pioneering tour de force, was able to reach magnetic fields of the order of eB = 3.25 GeV² in his simulations. The expectation, then, using extrapolations of the available lattice data combined with an effective description of QCD, was that for magnetic fields eB ∼ 10 GeV² the crossover in the temperature-magnetic field phase diagram would become a true first-order phase transition. Recently D'Elia et al. [41,42] have extended thermal magnetic QCD on the lattice to magnetic fields as large as eB = 9 GeV², providing numerical evidence that the onset of a first-order line happens within the range eB = 4 − 9 GeV².

This work is organized as follows. In Section II we present the perturbative setup and a few details on the calculation of the pressure and chiral condensate to 2L, as well as the running of the coupling and strange quark masses. In Section III we discuss our results and compare some of them to what has been obtained recently on the lattice. Section IV contains our summary and outlook.

II. PRESSURE AND CHIRAL CONDENSATE

In this section we compute the pressure and chiral condensate in the lowest Landau level approximation up to 2L in perturbative QCD. We assume that the system is embedded in a uniform, very large magnetic field B = Bẑ, where the field strength B is much larger than the temperature and all masses.

A. One-loop contribution to the pressure

Let us start with the one-loop (free) contribution to the pressure of thermal QCD in the presence of high magnetic fields. The 1L contribution coming from the quark sector is given by the renormalized expression of Eq. (1) (subtracting the pure vacuum term) [37][38][39][40], where T = 1/β is the temperature, µ is the quark chemical potential, N c is the number of colors, f labels quark flavors, q f is the quark electric charge, and n = 0, 1, 2, · · · stands for the Landau levels. In this expression, Matsubara sums have already been performed in the medium contribution. One should notice that there is an inherent arbitrariness in the renormalization procedure (see Refs. [43][44][45][46][47][48][49][50] for a discussion). In Eq. (1), all mass-independent terms were neglected and the pure magnetic term goes to zero in the limit m → 0. There are renormalization procedures where other terms survive and the pure magnetic expression diverges as m → 0. This discrepancy in the renormalized expression leads to differences in some physical quantities, e.g. the magnetization [46]. However, it turns out that the two different pure magnetic terms have the same derivative with respect to the mass, so that quantities such as the condensate and the self-energy must in principle coincide in both approaches. Taking the limit of very high magnetic fields (m s ≪ T ≪ √eB), one ends up with the lowest Landau level (LLL) expression of Eq. (2). The 1L contribution from the gluons has the usual Stefan-Boltzmann form [51], P G free = 2(N c² − 1) π²T⁴/90.

B. Two-loop contribution to the pressure

The 2L contribution from the quark sector to the pressure of thermal QCD in the presence of high magnetic fields was computed in Ref. [25].
For numerical purposes, however, it is convenient to recast the result found in that reference in a different fashion. Let us start with the 2L (exchange) pressure in the LLL approximation extracted from Ref. [25], written in terms of the loop momenta k L = (iω B ℓ , k z ), p L = (iω F n1 , p z ), and q L = (iω F n2 , q z ). Here we restrict our discussion to the case where µ = 0. At this point one can follow two different paths:

• First evaluate the Matsubara sums and then the momentum integrations. This was the path followed in Ref. [25]. It has the advantage of producing an expression that is also valid in the case where µ ≠ 0. However, the resulting integrals are quite involved numerically due to intertwined divergences.

• First evaluate the momentum integrals and then carry out the Matsubara sums numerically at µ = 0. This produces a term that depends on temperature and magnetic field which cannot be separated into vacuum and medium contributions. This is the path we will follow in this work.

Using the Dirac delta and the Kronecker delta in Eq. (5), one obtains a reduced double sum, where ω ℓ = 2πℓT and ω n2 = (2n 2 + 1)πT. One can then first compute the momentum integrals; afterwards, in Eq. (4), for each value of the Matsubara frequencies, one must perform the integrals in k. Using polar coordinates, the final expression for the exchange pressure takes the form of Eq. (8). This expression has the advantage of being numerically simple. Its downside, however, is that it only holds for µ = 0 and cannot be used for cold and dense QCD. Eq. (8) is numerically equivalent to that of Ref. [25]. From a simple analysis of Eq. (8), one can check that, for m f → 0, the exchange contribution to the pressure vanishes. This was also reported in Ref. [25]. Another important advantage of Eq. (8) is that it is easy to check that the IR domain for the momenta is regulated by the fermionic mass and the Matsubara frequencies. Taking into account the 2L contribution from the gluons, given by the well-known formula of Ref. [51], the total 2L pressure can be written as the sum of the free and exchange contributions.

C. Chiral condensate and strange quark number susceptibility

The chiral condensate is a very relevant observable in the investigation of the phase diagram for strong interactions. For massless quarks it is the true order parameter of the chiral transition. When one includes light quark masses, however, this is no longer true, but its behavior near the transition (or crossover) still exhibits a "memory" of this feature, with the condensate varying appreciably but not sharply, so that it can be considered a pseudo order parameter for the chiral transition in this case. Of course, our perturbative analysis is reliable only for very large temperatures and even larger magnetic fields, so that it cannot bring information on the region near the phase transition or crossover. Nevertheless, since there are lattice results for high temperatures and magnetic fields, the comparison of these two first-principle calculations in this region is certainly relevant. The condensate is obtained from the pressure as a derivative with respect to the quark mass, so that the f-flavor condensate follows from the expressions obtained in the previous section straightforwardly, with n F the Fermi-Dirac distribution. On the lattice, one computes the f-flavor renormalized condensate, which eliminates additive and multiplicative divergences. Here, m π = 135 MeV, f π = 86 MeV, and m f = 5 MeV for the light quarks.
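As a minimal numerical companion to the free LLL expressions and the mass-derivative definition of the condensate, the sketch below evaluates the medium part of the one-loop LLL quark pressure at µ = 0 and obtains the condensate by a central finite difference. The Landau-level degeneracy |q_f|B/(2π) per unit transverse area, a single LLL spin projection, and a factor 2 for quarks plus antiquarks are assumed conventions; the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

NC = 3
T, EB = 0.3, 9.0    # temperature [GeV], magnetic field eB [GeV^2]: illustrative

def p_lll_free(qf, mf):
    """Medium part of the 1L LLL pressure of one quark flavor at mu = 0."""
    dens = NC * abs(qf) * EB / (2 * np.pi)   # colors x Landau degeneracy
    f = lambda pz: T * np.log1p(np.exp(-np.sqrt(pz**2 + mf**2) / T))
    # 2 (quarks + antiquarks) x 2 (pz > 0 and pz < 0) / (2 pi)
    return dens * (2.0 / np.pi) * quad(f, 0.0, np.inf)[0]

def condensate(qf, mf, h=1e-4):
    """<psibar psi>_f = -dP/dm_f via central finite difference (1L, medium)."""
    return -(p_lll_free(qf, mf + h) - p_lll_free(qf, mf - h)) / (2 * h)

print(p_lll_free(-1/3, 0.1))    # strange-quark pressure contribution [GeV^4]
print(condensate(-1/3, 0.1))    # strange condensate, medium part [GeV^3]
```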
To obtain the vacuum condensate, one can not simply take the zero-field limit since we assumed very large fields from the outset [25]. One usually utilizes the renormalized light quark chiral condensate, built from the sum of the up and down quark contributions, to locate the (pseudo-)critical temperature [42]. Since we assume very high magnetic fields and temperatures, with the scale hierarchy given by m s ≪ T ≪ √ eB, this subtraction is negligible in our perturbative calculation. However a direct comparison to the renormalized lattice results must happen in scales not so favorable to pQCD, so that deviations are expected. One should also have in mind that, since the perturbative approach can only capture the behavior of the condensate for large temperatures, it is completely insensitive to features related to the crossover or possible first-order phase transition at high magnetic fields. A different observable that can also be computed and directly compared to available lattice data is the strange quark number susceptibility which has previously been computed using hard thermal loop resummation at one-loop order [32]. Given the presence of a derivative with respect to the chemical potential, pure vacuum terms are excluded. This presents an advantage when comparing lattice results to pQCD, even if the temperature range in the simulations is still far from optimal for this purpose [23,42]. D. Running coupling and strange quark mass The pressure and chiral condensate to 2L for 3 flavors with physical quark masses depend not only on the temperature and magnetic field, but also on the renormalization subtraction pointΛ, an additional mass scale generated by the perturbative expansion. This comes about via the scale dependence of both the strong coupling α s (Λ) and strange quark masses m s (Λ). The running of both α s and m s are known to four-loop order in the MS scheme [52]. Since we have determined the pressure and chiral condensate only to first order in α s , we use for the coupling [53] where Since α s depends on N f , fixing the massive quark at some energy scale also depends on the number of flavors. For the strange quark mass, we have in Eq. (17). As usual, there is arbitrariness in the way one should connect the renormalization scaleΛ to a physical mass scale of the system under consideration [51]. In thermal QCD where, besides quark masses, the only scale is given by the temperature, and T ≫ m f , the usual choice is the Matsubara frequency 2πT with a band around it, i.e. πT <Λ < 4πT . In the present case, where the magnetic field also provides a relevant mass scale given by √ eB, the choice becomes more ambiguous. Therefore, in the literature of thermal magnetic QCD, one can find a few different assumptions for the form of the running coupling. Since this issue has induced some debate, we show results for a few representative choices and discuss their implications for our observables. Although we have our preference for the most physical choice, we believe that, ultimately, this will be settled by direct comparison to lattice QCD simulations. Since this problem will also arise in a realm of parameter space still unreachable by Monte Carlo methods, due to the Sign Problem, understanding this in thermal magnetic QCD becomes even more relevant. In what follows, we show results for the following cases: (i) A fixed value of α s = 0.336. This corresponds, essentially, to ignoring all the effects from the renormalization group running. (ii) The running form proposed in Ref. 
[56], in which α s (Λ²) corresponds to the usual MS one-loop running coupling, evaluated here with Λ = 1.5 GeV. The main motivation in this reference has been to try to provide an understanding of the phenomenon of inverse magnetic catalysis (for a review, cf. Ref. [57]). As will be clear below, however, this form for the running coupling displays an odd behavior as one plays with the magnetic field strength. (iv) α s given by Eq. (16) and Λ = 2πT. This corresponds to the usual thermal QCD choice, and ignores the possible effect of the magnetic field on the scale Λ. (v) Same as the previous one, but with Λ = √((2πT)² + eB). This is, in our view, the most natural and physical choice, which is an extension of what is done in finite-temperature field theory [51]. The running of the strange quark mass will, obviously, be affected by the choice for the running of α s .

In Fig. 1 we show the running of α s as a function of temperature for two different (large) values of the magnetic field, and as a function of the magnetic field strength for two different temperatures. Temperatures are chosen to be large, since we are using perturbative QCD, but within the region of validity for the use of the lowest Landau level approximation, as discussed previously. We also include the case without running (α s = 0.336, case (i)), which provides a scale for comparison. One can verify that cases (ii) and (iii) display a possibly unphysical behavior with increasing magnetic field, since α s simply grows while the energy density is also increasing. First, this renders perturbative calculations meaningless for high magnetic fields. Second, it seems incompatible with the expected asymptotic freedom property of strong interactions. In cases (iv) and (v), α s exhibits the same qualitative (usual) behavior. The quantitative difference comes about because in case (v) the magnetic field contributes to the running scale on an equal footing with the temperature.

In Fig. 2 we show the running of the strange quark mass, m s , as a function of temperature for two different (large) values of the magnetic field, and as a function of the magnetic field strength for two different temperatures. Temperatures are again chosen to be large, since we are using perturbative QCD, but within the region of validity for the use of the lowest Landau level approximation. We include a black continuous line for m s = T as a reminder that one has the constraint m s ≪ T. The behavior of the different running cases is analogous to what has been discussed for Fig. 1. The fact that the quark mass increases with magnetic field is probably related to the original motivation of running choices like cases (ii) and (iii), namely, trying to encode magnetic catalysis and inverse magnetic catalysis in the properties of the running of the strong coupling [27,56]. From the discussion above, we believe that only cases (iv) or (v) could be regarded as providing a physical description of the running coupling and running quark mass. Nevertheless, since this can also be tested by direct comparison to lattice data, we will keep all cases in our results for the pressure, chiral condensate and strange quark number susceptibility.

III. RESULTS

We can now discuss our perturbative results for the pressure, chiral condensate and strange quark number susceptibility to 2L for very large magnetic fields. We show results for the different running schemes discussed in the literature, and compare them to what has been obtained recently on the lattice.
A. Pressure

In what follows, we present results for the pressure as a function of the temperature for the highest value of the magnetic field attained in present lattice simulations (eB = 9 GeV²), Fig. 3, and for an even larger field (eB = 50 GeV²), Fig. 4. We also present results for the pressure as a function of the magnetic field for T = 0.6 GeV (Fig. 5). In these figures, we show a panel with the free pressure, P s free , the exchange diagram contribution, P s exch , the ratio P s exch /P s free , and the full strange pressure, P s . We show results for the contribution from the strange quark because mass effects are more relevant in this case. The ratio P s exch /P s free provides a certain measure of the reliability of perturbation theory, since it seems to be better behaved than in the case without a large magnetic field [25]. Finally, for the sake of completeness, we show how the pressure behaves for huge values of the magnetic field, eB = 10³ GeV² (Fig. 6). For such high fields, one should definitely take into account anisotropy effects [20,58], which we fully neglect for simplicity. Results shown here would correspond to the longitudinal pressure in an anisotropic description [22]. For phenomenological applications, one usually has to take into account effects from anisotropy.

In Figs. 3 and 4 we can observe how the behavior of the pressure is modified for the different choices of the running of the strong coupling. For the cases (ii) and (iii) discussed in the previous section, one finds a much poorer convergence, which becomes worse as one increases the magnetic field. This is compatible with the somewhat unphysical behavior observed in the running of α s and m s for these choices of renormalization scale. Cases (iv) and (v), on the other hand, seem to be well behaved. One should notice that a future comparison to lattice results will have to take into account the different vacuum subtraction schemes adopted in lattice simulations, pQCD calculations and effective models [43,44,46,47].

In Fig. 5 we display the same cases as before, but as a function of the magnetic field. We also include bands corresponding to increasing/decreasing the central renormalization running scale by a factor of 2. As usual, the size of these bands corresponds to a rough measure of the theoretical uncertainty of the perturbative series, since it represents the residual renormalization scale dependence [51]. Notice that case (ii) has no band by construction, since Λ is fixed. From the first and last panels it is clear that the quark pressure is dominated by the free gas contribution. In Fig. 6, as we increase the possible values of the external magnetic field dramatically, we see a clear separation between the behavior of cases (ii) and (iii), which are essentially ill defined perturbatively, and cases (iv) and (v), which behave well. Case (i) is trivial, since there is no running. One should also notice that P LLL exch changes sign depending on the values of the temperature and magnetic field. This behavior is exhibited in Fig. 7 in the √eB − T plane. One should emphasize that this behavior is not sensitive to the value of any other parameter.

B. Chiral condensate and strange quark number susceptibility

Now we present our results for the chiral condensate and the strange quark number susceptibility as a function of the temperature in the presence of high magnetic fields. We show results for the highest magnetic fields attained in lattice QCD simulations so far.
Of course, the perturbative approach has the caveat of being reliable only for large temperatures, so that we will not be able to describe nontrivial features of the condensate, such as its behavior near the transition (or crossover). In any case, perturbation theory would not be sensitive to such effects. In Fig. 9 we show the renormalized light quark chiral condensate as a function of the temperature for eB = 4 GeV 2 and eB = 9 GeV 2 computed using perturbative QCD. We also show points obtained via lattice simulations for comparison [42]. In Fig. 10 we do the same for the strange quark number susceptibility, and, In Fig. 11 we show the strange quark number susceptibility for a lower value of magnetic field [23]. Unfortunately, in both cases the temperature range for lattice results is well below the ideal for a fair comparison to perturbative QCD. Nevertheless, one can see that perturbative results are in the right ballpark for the upper end of temperatures. It is still unclear from the available lattice data whether our calculations capture the qualitative trend at high temperatures. Lattice results for higher temperatures, and even higher magnetic fields, would be necessary for this purpose. As argued previously, the strange quark number susceptibility represents a better observable for our comparison, since in our approach the vacuum contribution is neglected, even though it might still be relevant for the chiral condensate at the temperatures currently accessible to lattice simulations. From the figures one sees that the comparison of pQCD results to lattice data on the strange quark number susceptibility seems to display a more promising trend for temperatures above the ones currently simulated. It is important to note that our framework here is valid only if the hierarchy of scales m s ≪ T ≪ √ eB is satisfied. In this sense, it is not a problem if the perturbative results deviate from lattice data for very high temperatures at fixed eB. It is also clear from the plots, moreover, that for such high fields loop corrections to the free case become almost irrelevant, as was already remarked in Ref. [25] in the context of the pressure in the chiral limit. Moreover, the analysis of the different possibilities for the running scale choice show that the width of the band for case (iii) basically diverges (not shown in the figures), case (iv) has a wide band that also diverges at some point for the susceptibility, and case (v) is always well behaved. This behavior is, of course, compatible with what has been observed for the pressure. IV. SUMMARY AND OUTLOOK In this paper we computed the pressure, chiral condensate and strange quark number susceptibility within perturbative QCD at finite temperature and very high magnetic fields up to two-loop and physical quark masses. Since we adopt the lowest-Landau level approximation in order to obtain analytic results and more control on qualitative aspects, the region of validity for our framework is restricted to m s ≪ T ≪ √ eB, where m s is the strange quark mass, e is the fundamental electric charge, T is the temperature, and B is the magnetic field strength. Since the literature in the field exhibits several possibilities for the running scheme, we study the convergence of the perturbative series 1 for the pressure using the most commonly adopted choices for the scale and functional form of the running coupling, α s (T, B). 
Our findings seem to indicate that cases (ii) and (iii) are inconsistent from the point of view of the convergence of the perturbative series, while cases (iv) and (v) pass this criterion, case (v) being the most well behaved. Currently, there are essentially two completely opposite scenarios for the way a magnetic field background affects the QCD interactions: either an enhancement of the strong coupling that renders perturbative calculations not applicable even for physically achievable magnetic fields; or a coupling that is strongly suppressed as the energy density grows, in accordance with usual expectations from asymptotic freedom. It would be desirable to have lattice results that help clarifying this issue. Moreover, the difficulty of choosing a running scale in a setting in which more than one relevant control parameter exists will also be present in the description of systems at finite density. In particular, the physics of magnetars could be sensitive to this choice [1][2][3]. Our results for the chiral condensate and for the strange quark number susceptibility were directly compared to recent lattice QCD data away from the chiral transition. Even though, as discussed previously, current lattice results do not overlap with the region of validity for our approximations, perturbative results seem to be in the same ballpark, which is encouraging. The window of applicability is still narrow, but our results are obtained from a clean firstprinciple calculation that can be systematically improved. Furthermore, as argued previously in Ref. [25] for a fixed strong coupling α s , medium loop corrections seem to become essentially negligible as compared to the free term for very high magnetic fields for physical choices of the renormalization running scale.
6,139
2023-03-21T00:00:00.000
[ "Physics" ]
Fabrication of Metallic Nano-Ring Structures by Soft Stamping with the Thermal Uplifting Method : In this study, the unconventional microfabrication method by the combined processes of the chemical soft stamping technique with the thermal uplifting technique to fabricate metal nanoarrays on a glass plate is proposed and their feasibility verified. The gold micro-ring arrays on a quartz glass plate are realized by utilizing a chemical template with the thermal uplifting method. Their optical properties are studied experimentally. First, a plastic mold is made of a Biaxially Oriented Polyethylene Terephthalate (BOPET) via the hot embossing method. Then, the Methanal micropatterns are transferred onto an etched surface of a substrate via a soft stamping process with a BOPET mold. The gold thin film is coated onto the methanol patterned glass plate via the Ar+ sputter coating process. Finally, the metallic micro-ring structures are aggregated on a glass plate via the thermal uplifting technique. The LSPR optical properties as the extinction spectrums of the gold micro-ring structure arrays are investigated experimentally. It is confirmed that this method was able to fabricate plasmonic micro-ring arrays with low cost and high throughput. this study, a quartz glass plate was etched using an Argon sputtering machine. The ions of argon gas are more efficient in breaking off pieces of a glass plate. An electrical discharge is generated in a vacuum by employing a voltage on a glass plate. The ions of argon are pulled onto a plate and therefore onto a glass plate with electron volts power the same as the setting voltage of 0.8 kV. Then, the argon ions are generated by creating an arc that travels across the of the target. The arc strikes lead to the ejection of H 2 O and SiO 2 atoms onto a glass plate. Since an adhering between H 2 O and a glass plate atom is cracked by the Argon gas etching, glass atoms are stimulated, then the surface energy on a glass plate is raised. It is revealed that a glass plate’s surface energy is increased by the argon ion spatter etching technique. Figure shows the layer of gold directly adhered to an etched area on a substrate. It is confirmed that throughout a thermal uplifting process, the adhering strength between a glass plate and a gold layer is higher than the uplifting force of vibrating hot bath processes. Therefore, a gold thin layer in the etched area perfectly adheres to a glass plate. the as the of Then, the argon ions are by the of the The arc to the Since an and a by the then the surface energy on a glass is It is that a glass plate’s surface energy is by the argon ion spatter technique. shows the layer of gold to an on a It is confirmed throughout a uplifting the adhering strength between a glass plate and a gold layer is higher Introduction To the best of our knowledge, the noble metallic element at the nanoscale introduces extraordinary optical characteristics, the localized surface plasmon resonance (LSPR) [1]. This LSPR happens when electrons in the nanoparticles are stimulated by a particular wavelength of light that falls on the metal nanoparticles. LSPR properties rely on the form, size, and placement of nanostructures [2]. In particular, the spectral extinction is contingent on the refractive index of the surrounding medium. Some small changes in the local dielectric environment, such as molecular adsorption on the nanostructure surface, affect the spectral extinction characteristics [3]. 
The few behavioral changes are demonstrated as changes in the number of scattered, transmitted, and absorbed light at dissimilar wavelengths and can be analyzed with high spectral resolution in an ordinary optical transmission or reflection configuration. Moreover, metallic nanostructures have been developed for plasmonic biosensors. The nanofabrication of novel metallic nanostructures, for instance, nano-ring, suggests its applications for a plasmonic biosensor [4][5][6][7]. Gold micro-/nano-ring structures are expected to provide basic implemented feasible structures for plasmonic biosensing with excellent detection performance [8]. Due to the need for biosensing applications, an efficient nanofabrication method to produce the ordered gold micro-rings with tunable LSPR properties is demanded. Novel metal nanostructures are generally produced via conventional nanofabrication approaches, for instance, extreme UV Lithography [9,10], Focus Gallium Ion Beam Milling [11,12], and E-Beam Exposure Lithography [13,14]. While these techniques can produce nanostructures with a well-controlled size, shape, and alignment, they need costly facilities, stringent processes, and cannot address the problem of low throughput. Nanoimprinting with Thermal/UV Lithography (NIL) is a preferred conventional approach, and it shows the advantage of high resolution. However, the nanopattern mold necessary in the NIL process is usually created via conventional lithography techniques. Hence, NIL cannot overcome the disadvantages of conventional lithography processes [15][16][17][18][19][20]. Another technique for the nanofabrication of metallic nanoarrays is the annealing technique of a metal layer on a glass plate [21,22]. An annealing process enables us to produce metal nanopatterns with an easy annealing technique. Nevertheless, it is hard to control the size and alignment of the metal nanoarrays. To address these problems and supplement the limitations of the current fabrication methods, the authors propose the efficient nanofabrication process to fabricate the gold micro-ring arrays on a glass plate by a combination of soft stamping and the thermal uplifting method. The objective of this study is to verify the solution possible with the combination of fabrication techniques and demonstrate its capability to achieve gold microrings with a huge pattern and low nanofabrication costs. Plastic Film Mold A thermoformable biaxially oriented polyethylene terephthalate film (Polyplex Thailand, BOPET, SH140) of 50 µm thickness was utilized to be a soft stamping mold. The hot embossing process is employed for the fabrication of a film mold. The nanohole pattern on the master mold on a silicon wafer was transferred onto a BOPET film via the hot stamping process. Figure 1 demonstrates the atomic force microscopy figures and a line profile of the height of a BOPET mold. Figure 1a represents an image of micropillar patterns, and Figure 1b is the line of height profile of the pillar patterns. The result shows that the mean diameter of the pillars was about 2.1 µm, and the mean peak was 900 nm. Figure 1c was a 3-dimensional topography of micropillar structures. The BOPET film mold was utilized for the soft stamping process. Subsequently, the word "microdot" was utilized to express a chemical templated by the micropillar patterns of a BOPET stamp. Figure 2 shows the microfabrication processes of gold micro-ring structures on quart glass plate for this study. 
A quartz glass plate was 1 mm in thickness, 12 × 12 mm and was washed in an acetone bath with an ultrasonic cleaner machine for 10 min. Th a glass plate was dried at room temperature. A dried glass plate was etched by etchi with Argon for 1 min to reduce the impurities of the glass plate. (i) The methanol w dropped onto a BOPET mold. (ii) The BOPET mold with the methanol was manually i pressed onto an etched surface of a glass plate. (iii) Then, a stamped glass plate was coa with a gold layer in the sputtering machine. The spatter gas was Argon, with a pressu of about 15 Pa. The vertical gap between a glass plate and a gold target was about 35 m The voltage was provided at 0.8 kV, and the coating current was maintained at 10 m throughout the coating operation. The thickness of a gold thin film was controlled by a justing the coating time. (iv) A micro-ring aggregation process uses a thermal uplifti technique inspired by the vibrating hot bath processes. A glass plate was dipped int hot water bath. The magnetic stirrer with a heating plate was utilized to control the te perature of water to 100 degrees Celsius. The protuberance of the gold micro-ring patte on a glass plate was analyzed with an Atomic Force Microscope, CoreAFM, Nanosurf. Figure 2 shows the microfabrication processes of gold micro-ring structures on a quart glass plate for this study. A quartz glass plate was 1 mm in thickness, 12 × 12 mm 2 , and was washed in an acetone bath with an ultrasonic cleaner machine for 10 min. Then, a glass plate was dried at room temperature. A dried glass plate was etched by etching with Argon for 1 min to reduce the impurities of the glass plate. (i) The methanol was dropped onto a BOPET mold. (ii) The BOPET mold with the methanol was manually impressed onto an etched surface of a glass plate. (iii) Then, a stamped glass plate was coated with a gold layer in the sputtering machine. The spatter gas was Argon, with a pressure of about 15 Pa. The vertical gap between a glass plate and a gold target was about 35 mm. The voltage was provided at 0.8 kV, and the coating current was maintained at 10 mA throughout the coating operation. The thickness of a gold thin film was controlled by adjusting the coating time. (iv) A micro-ring aggregation process uses a thermal uplifting technique inspired by the vibrating hot bath processes. A glass plate was dipped into a hot water bath. The magnetic stirrer with a heating plate was utilized to control the temperature of water to 100 degrees Celsius. The protuberance of the gold micro-ring patterns on a glass plate was analyzed with an Atomic Force Microscope, CoreAFM, Nanosurf. The Soft Stamping Technique with Gold Thin Film Sputtering Process Figure 3a demonstrates a topography figure using the atomic force microscope, and Figure 3b shows the peak profiles of a gold layer coated onto a methanol templated glass plate. The gold layer was about 30 nm in thickness. It was revealed that the microdots had appeared on a gold thin layer. Figure 3c illustrates a 3D atomic force microscope image topography. It was revealed that a peak of gold layer on the methanol stamped area was raised up. The mean diameter of the microdots was 1000 nm, with a mean peak of 140 nm. Crystals 2022, 12, x FOR PEER REVIEW 5 of 11 Figure 3a demonstrates a topography figure using the atomic force microscope, and Figure 3b shows the peak profiles of a gold layer coated onto a methanol templated glass plate. The gold layer was about 30 nm in thickness. 
It was revealed that the microdots had appeared on a gold thin layer. Figure 3c illustrates a 3D atomic force microscope image topography. It was revealed that a peak of gold layer on the methanol stamped area was raised up. The mean diameter of the microdots was 1000 nm, with a mean peak of 140 nm. The Thermal Uplifting Process Figure 4a demonstrates a topography figure using the atomic force microscope, Figure 4b shows the peaks, and Figure 4c demonstrates a 3D atomic force microscopy figure topography of the gold micro-rings aggregated on a glass plate after a thermal uplifting technique in a hot water bath of 100 • C for 10 min. The result showed that the mean diameter of the stamped template was broadened to 1100 nm. The difference between the height of the edge and the center of the gold micro-ring array was 50 nm. It was also revealed that the gold thin films had agglomerated into the micro-ring structures on a methanol stamped glass plate. Figure 4c demonstrates a 3D atomic force microscopy figure topography of the gold micro-rings aggregated on a glass plate after a thermal uplifting technique in a hot water bath of 100 °C for 10 min. The result showed that the mean diameter of the stamped template was broadened to 1100 nm. The difference between the height of the edge and the center of the gold micro-ring array was 50 nm. It was also revealed that the gold thin films had agglomerated into the micro-ring structures on a methanol stamped glass plate. The Spectral Absorbance Properties The optical characteristics of the gold layer on a glass plate were assessed via extinction spectrum. Figure 5 demonstrates the spectral absorbance of the gold arrays originating on a glass plate with the different structures. Consecutively, (i) a blue line illustrates the absorbance spectrum of the gold micro-rings on a glass plate. A peak of spectral absorbance was discovered at a wavelength of 500 nm. (ii) A red line shows the absorbance spectra of a gold microdots on a methanol stamped substrate. The peak of spectral absorbance was discovered at a wavelength of 560 nm, and the height of the peak is lower than the absorbance of gold micro-rings. (iii) A black line shows the absorbance spectra of a bare gold layer on a glass plate. The absorbance graph is relatively flat and has no peaks. It was revealed that the agglomeration and alignment of the micro-ring arrays are the important variables for the spectral absorbance properties. It was also revealed that the good uniformity of micro-ring structures led to higher improvement, while an exposed gold layer caused a reduction in the height and an expansion of the spectral absorbance. It is conceivable that the enhanced spectral absorbance can be controlled by the gold micro-ring structures on a quartz glass plate [23,24]. The optical characteristics of the gold layer on a glass plate were assessed via extinction spectrum. Figure 5 demonstrates the spectral absorbance of the gold arrays originating on a glass plate with the different structures. Consecutively, (i) a blue line illustrates the absorbance spectrum of the gold micro-rings on a glass plate. A peak of spectral absorbance was discovered at a wavelength of 500 nm. (ii) A red line shows the absorbance spectra of a gold microdots on a methanol stamped substrate. The peak of spectral absorbance was discovered at a wavelength of 560 nm, and the height of the peak is lower than the absorbance of gold micro-rings. (iii) A black line shows the absorbance spectra of a bare gold layer on a glass plate. 
The absorbance graph is relatively flat and has no peaks. It was revealed that the agglomeration and alignment of the micro-ring arrays are the important variables for the spectral absorbance properties. It was also revealed that the good uniformity of micro-ring structures led to higher improvement, while an exposed gold layer caused a reduction in the height and an expansion of the spectral absorbance. It is conceivable that the enhanced spectral absorbance can be controlled by the gold micro-ring structures on a quartz glass plate [23,24]. Discussion It has been established that an absorption behavior is a surface-based physicochemical process that creates a layer of an adsorbate on the surface of the glass plate. According to this study, a quartz glass plate was etched using an Argon sputtering machine. The ions of argon gas are more efficient in breaking off pieces of a glass plate. An electrical discharge is generated in a vacuum chamber by employing a voltage on a glass plate. The Discussion It has been established that an absorption behavior is a surface-based physicochemical process that creates a layer of an adsorbate on the surface of the glass plate. According to this study, a quartz glass plate was etched using an Argon sputtering machine. The ions of argon gas are more efficient in breaking off pieces of a glass plate. An electrical discharge is generated in a vacuum chamber by employing a voltage on a glass plate. The ions of argon are pulled onto a plate and therefore onto a glass plate with electron volts power the same as the setting voltage of 0.8 kV. Then, the argon ions are generated by creating an arc that travels across the surface of the target. The arc strikes lead to the ejection of H 2 O and SiO 2 atoms onto a glass plate. Since an adhering between H 2 O and a glass plate atom is cracked by the Argon gas etching, glass atoms are stimulated, then the surface energy on a glass plate is raised. It is revealed that a glass plate's surface energy is increased by the argon ion spatter etching technique. Figure 6a shows the layer of gold directly adhered to an etched area on a substrate. It is confirmed that throughout a thermal uplifting process, the adhering strength between a glass plate and a gold layer is higher than the uplifting force of vibrating hot bath processes. Therefore, a gold thin layer in the etched area perfectly adheres to a glass plate. ions of argon are pulled onto a plate and therefore onto a glass plate with electron volts power the same as the setting voltage of 0.8 kV. Then, the argon ions are generated by creating an arc that travels across the surface of the target. The arc strikes lead to the ejection of H2O and SiO2 atoms onto a glass plate. Since an adhering between H2O and a glass plate atom is cracked by the Argon gas etching, glass atoms are stimulated, then the surface energy on a glass plate is raised. It is revealed that a glass plate's surface energy is increased by the argon ion spatter etching technique. Figure 6a shows the layer of gold directly adhered to an etched area on a substrate. It is confirmed that throughout a thermal uplifting process, the adhering strength between a glass plate and a gold layer is higher than the uplifting force of vibrating hot bath processes. Therefore, a gold thin layer in the etched area perfectly adheres to a glass plate. However, the surface energy between a gold layer and the etched glass is the most significant parameter for the aggregation of micro-ring structures. 
Figure 6b shows a layer of methanol that was attached to a glass substrate. Methanol is absorbed by this mechanism on a porous glass plate [25]. The adhering of SiO2 with a methanol carbonyl group on a glass atom affects the glass plate surface energy. It is confirmed that a glass plate's surface energy is decreased by a methanol layer between a gold layer and a glass plate. The adhesion strength between a gold layer and a glass plate surface is decreased. Then, a gold layer on a methanol-stamped template is uplifted as the micro-ring arrays via the thermal uplifting technique. It was preferable to cause a large difference in the surface energy between an etched area and the methanol templated area to enhance the arrangement of the gold micro-ring aggregation process. Fourier transform infrared spectroscopy (FTIR Spectroscopy, Thermo Scientific, Thailand, Nicolet iS5) was used to observe the chemical properties of material and tools in the soft stamping process. Figure 7 illustrates the FTIR spectra. (i) The green curve shows the spectrum of a BOPET film mold. The spectrum of the BOPET film has peaks between 2800 and 3000 cm −1 . (ii) The red curve illustrates the spectra of the dried methanol on a quartz glass plate. It has a sharp peak at 1000 cm −1 . (iii) The blue curve shows the spectrum of a quartz glass specimen that was stamped with a BOPET film mold, which was spotted with methanol. It has a sharp peak at 1000 cm −1 and is higher than (ii). Results show that the difference between (ii) and (iii) is not very large. However, the absorbance However, the surface energy between a gold layer and the etched glass is the most significant parameter for the aggregation of micro-ring structures. Figure 6b shows a layer of methanol that was attached to a glass substrate. Methanol is absorbed by this mechanism on a porous glass plate [25]. The adhering of SiO 2 with a methanol carbonyl group on a glass atom affects the glass plate surface energy. It is confirmed that a glass plate's surface energy is decreased by a methanol layer between a gold layer and a glass plate. The adhesion strength between a gold layer and a glass plate surface is decreased. Then, a gold layer on a methanol-stamped template is uplifted as the micro-ring arrays via the thermal uplifting technique. It was preferable to cause a large difference in the surface energy between an etched area and the methanol templated area to enhance the arrangement of the gold micro-ring aggregation process. Fourier transform infrared spectroscopy (FTIR Spectroscopy, Thermo Scientific, Thailand, Nicolet iS5) was used to observe the chemical properties of material and tools in the soft stamping process. Figure 7 illustrates the FTIR spectra. (i) The green curve shows the spectrum of a BOPET film mold. The spectrum of the BOPET film has peaks between 2800 and 3000 cm −1 . (ii) The red curve illustrates the spectra of the dried methanol on a quartz glass plate. It has a sharp peak at 1000 cm −1 . (iii) The blue curve shows the spectrum of a quartz glass specimen that was stamped with a BOPET film mold, which was spotted with methanol. It has a sharp peak at 1000 cm −1 and is higher than (ii). Results show that the difference between (ii) and (iii) is not very large. However, the absorbance spectra of (i) are very different from those of (ii) and (iii). It was also confirmed that the BOPET molecules were not transferred to the quartz glass via the chemical stamping process. 
Therefore, the methanol molecule did not dissolve the BOPET film mold and did not affect the pattern transfer on the substrate. Moreover, this study does not utilize stringency in the processes. Crystals 2022, 12, x FOR PEER REVIEW 9 of 11 spectra of (i) are very different from those of (ii) and (iii). It was also confirmed that the BOPET molecules were not transferred to the quartz glass via the chemical stamping process. Therefore, the methanol molecule did not dissolve the BOPET film mold and did not affect the pattern transfer on the substrate. Moreover, this study does not utilize stringency in the processes. To overcome the issues of low productivity associated with the top-down fabrication method, bottom-up fabrication techniques have recently been given attention as an alternative manufacturing strategy. This method involves fabrication techniques via self-organization methods. Nanosphere lithography (NSL) is a widespread bottom-up fabrication technique to pattern solid surfaces with sub-micrometer and nanometer-scale features. NSL is a simple and very high throughput system. However, the limitation is the geometrical constraint of the nanosphere template. Another approach is the porous anodic alumina technique. This technique shows the advantage that it can be used to fabricate large-area nanostructures with a very high aspect ratio. However, the limitation is the geometrical constraint of the template. Another self-organization is the thermal dewetting method. This process offers the advantage of being a simple fabrication process. However, thermal dewetting suffers from the disadvantage of random distribution of the dot dimensions. In conclusion, bottom-up methods are often utilized to fabricate nanostructures with high throughput. However, it is difficult to control the size and distribution of nanostructures by self-assembly methods. In this study, the advantage of the soft stamping technique of the BOPET mold is that it can be reused as a stamped mold. The micropillar pattern was fabricated onto a BOPET film via the embossing method. The BOPET mold can be very low-cost and easy to utilize. The A4 dimension of the marketable BOPET was produced to form a 10 × 10 mm 2 for about 300 copies. Most significantly, a BOPET mold was used as a soft stamping tool to produce the metallic micro-ring arrays on a glass plate without lithography equipment. The comparison between each micro/nanofabrication method is summarized as shown in Table 1. To overcome the issues of low productivity associated with the top-down fabrication method, bottom-up fabrication techniques have recently been given attention as an alternative manufacturing strategy. This method involves fabrication techniques via self-organization methods. Nanosphere lithography (NSL) is a widespread bottom-up fabrication technique to pattern solid surfaces with sub-micrometer and nanometer-scale features. NSL is a simple and very high throughput system. However, the limitation is the geometrical constraint of the nanosphere template. Another approach is the porous anodic alumina technique. This technique shows the advantage that it can be used to fabricate large-area nanostructures with a very high aspect ratio. However, the limitation is the geometrical constraint of the template. Another self-organization is the thermal dewetting method. This process offers the advantage of being a simple fabrication process. However, thermal dewetting suffers from the disadvantage of random distribution of the dot dimensions. 
In conclusion, bottom-up methods are often utilized to fabricate nanostructures with high throughput. However, it is difficult to control the size and distribution of nanostructures by self-assembly methods. In this study, the advantage of the soft stamping technique of the BOPET mold is that it can be reused as a stamped mold. The micropillar pattern was fabricated onto a BOPET film via the embossing method. The BOPET mold can be very low-cost and easy to utilize. The A4 dimension of the marketable BOPET was produced to form a 10 × 10 mm 2 for about 300 copies. Most significantly, a BOPET mold was used as a soft stamping tool to produce the metallic micro-ring arrays on a glass plate without lithography equipment. The comparison between each micro/nanofabrication method is summarized as shown in Table 1. Conclusions In this study, an efficient microfabrication method of gold micro-ring structures using soft stamping with a thermal uplifting technique was proposed. A BOPET mold was utilized as a soft stamping mold to make the chemical templates on a quartz glass plate. The thermal uplifting process of a gold layer on a glass plate was studied experimentally. The proposed technique is effective in producing gold micro-ring arrays on a glass plate. A BOPET mold is used for methanol stamped on an etched glass plate. The round shape templates of methanol can be adhered to and dried on a glass plate. Then, a gold layer was coated onto a stamped glass plate. The thermal uplifting technique of the vibrating hot bath process was used for the aggregation of gold micro-ring structures. It was confirmed that inhibition of adhering between the gold layer and a glass plate via a methanol soft stamping process affected the thermal uplifting of the gold layer. Due to an Argon etching, the surface energy on a glass plate was raised, and it caused strong adhesion between a gold cluster layer and a glass plate. At the same time, the surface energy of the methanol stamped region was decreased by a methanol atom templated by a BOPET mold, and a gold layer in this region was self-uplifted by the vibrating hot bath processes. It was also revealed that the preference of this proposed method is that it does not require a costly machine other than a sputtering machine. The spectral absorbance of the micro-ring arrays on a glass plate shows a peak in the visible light region. The spectral absorbance height intensity was raised when the gold micro-rings were arranged along with the templated thermal uplifting. It was revealed that a multiformity in the spectral absorbance characteristic has intimately corresponded to the aggregation of gold micro-ring structures on a quart glass plate.
6,300
2022-05-06T00:00:00.000
[ "Engineering", "Materials Science" ]
A Convex Constraint Variational Method for Restoring Blurred Images in the Presence of Alpha-Stable Noises Blurred image restoration poses a great challenge under the non-Gaussian noise environments in various communication systems. In order to restore images from blur and alpha-stable noise while also preserving their edges, this paper proposes a variational method to restore the blurred images with alpha-stable noises based on the property of the meridian distribution and the total variation (TV). Since the variational model is non-convex, it cannot guarantee a global optimal solution. To overcome this drawback, we also incorporate an additional penalty term into the deblurring and denoising model and propose a strictly convex variational method. Due to the convexity of our model, the primal-dual algorithm is adopted to solve this convex variational problem. Our simulation results validate the proposed method. Introduction Noise interferences often occur in many systems such as wireless communications [1] and social networks [2,3]. Hence, images are inevitably corrupted by both blur and noise during the acquisition and transmission. Hence, the restoration of clean images from blurred and noisy observations is a fundamental task in the image processing community. A wide range of approaches has been proposed to remove additive Gaussian noise [4][5][6]. However, many other noises, such as impulse noise [7][8][9][10][11][12], multiplicative noise [13,14], Poisson noise [15][16][17], Cauchy noise [18,19], and Rician noise [20], commonly appear in the real world and thus are studied by many researchers. Another impulsive noise is often caused by alpha-stable noise, which normally appears in many applications, such as wireless communication systems, synthetic aperture radar (SAR) images, biomedical images, and medical ultrasound images [21,22]. Mathematically, the image restoration problem can be expressed as where u ∈ R mn is obtained from a two-dimensional pixel-array with dimension m × n and defined on a connected bounded domain Ω ⊂ R 2 with compact Lipschitz boundary, K ∈ R mn×mn denotes a known linear and continuous blurring operator, η is the noise obeys certain distribution (for example alpha-stable noise is the noise which obeying alpha-stable distribution), and f ∈ R mn is the blurred image with the additive noise. In particular, when f is corrupted only by noise, it is then given by f = u + η. It is well known that restoring u from f is normally an ill-conditioned problem. Variational methods are proposed to handle this ill-posed inverse imaging problems. These methods are usually summarized as convex and non-convex methods, respectively. The total variation (TV) regularization method [23] plays a significant role in convex variational-based image processing, since it can preserve sharp edges in images due to the piecewise smooth property of the TV norm. The ROF (Rudin Osher and Fatemi) denoising model is one of the most famous total variational models for restoring images with additive Guassian noise, which was proposed by Rudin et al. [6], as given by where Ω |Du| is the TV regularization term, BV is the space of the functions of bounded variation, Ω (u − f ) 2 dx is the data fidelity term, and λ > 0 is the regularization parameter, which represents the trade-off between the data fidelity term and the TV regularization term. It is possible to modify the ROF denoising model to incorporate a linear blurring operator K [6]. 
The ROF deblurring and denoising model is then given as follows: Although the ROF deblurring and denoising model is a very useful deblurring and denoising approach with additive Gaussian noise, it does not achieve good performance in the scenario of non-Guassian environments. As a result, many kinds of variational models based on TV have been proposed for restoring clean images from blurred and non-Guassian noise distribution, such as that of impulse noise [7][8][9][10][11][12], multiplicative noise [13,14], Poisson noise [15], Cauchy noise [18,19], and Rician noise [20]. Based on different noise distributions, and data fidelity terms, one can obtain appropriate variational models for image denoising and deblurring in the presence of different noises. For example, Ω |Ku − f |dx is the data fidelity term of TVL1 deblurring and denoising model with impulse noise [11], and Ω log γ 2 + (Ku − f ) 2 dx is the data fidelity term of Cauchy deblurring and denoising model with Cauchy noise [18]. Recently, some methods have been considered to mitigate alpha-stable noise. For example, Zozor et al. [24] employed a parametric approach for suboptimal signal detection. They dealt with the detection of a known signal embedded in alpha-stable noise and discussed the robustness of the detector against the signal amplitude and the stability index. Sadreazami et al. [25] modeled the contourlet coefficients of noise-free images with the alpha-stable distribution. They have also presented a new approach for despeckling SAR images and a multiplicative watermark detection in the contourlet domain using the alpha-stable distribution [26,27]. Yang et al. [28] proposed a total variational method to restore images that are degraded by alpha-stable noise based on the property of meridian distributed. Until now, to the best of our knowledge, there is no paper reporting on a variational method for blurred image restoration in the presence of alpha-stable noise. In order to restore images from blur and alpha-stable noise while also preserving their edges, this paper proposes a novel variational method based on the statistical property of meridian distribution and the TV, and our numerical experiments demonstrate that it performs better than many standard deblurring and denoising method in impulsive noisy environments (with small α values, i.e., α ∈ (0, 1.5)), while providing comparable or better performance in less demanding, light-tailed environments (with high α values, i.e., α ∈ (1.5, 2)). The main contributions of this paper are summarized as follows. (i) Based on the statistical properties of meridian distribution and the TV, we propose a new variational method for restoring blurred images with alpha-stable noise and then analyze the existence of the solution for the variational model. (ii) By adding a penalty term, we propose a strictly convex variational method and prove the existence and uniqueness of the solution for the convex variational model. (iii) The primal-dual algorithm is employed to solve the novel convex variational problem, with its convergence being analyzed. (iv) We compare our proposed method to state-of-the-art methods such as the TVL1 model [11], the Cauchy model [18], and the meridian filter [29] and show the effectiveness of our proposed method. The rest of this paper is organized as follows. In Section 2, we describe the alpha-stable and the meridian distributions. 
In Section 3, we propose a variational method for simultaneous deblurring and denoising, and study the existence of the solution for the proposed model. We also propose a convex variational method to restore blurred images with alpha-stable noise, and analyze the existence and uniqueness of the solution for the convex variational model. The primal-dual algorithm for solving the proposed convex restoration problems is given in Section 4. Section 5 presents extensive numerical results to evaluate the performance of the proposed method in comparison with well-known methods. Finally, concluding remarks are provided in Section 6. A Brief Review of the Alpha-Stable and Meridian Distributions The alpha-stable noise which obeys alpha-stable distribution is often found in radar-and sonar-related applications. The heaviness of the alpha-stable distribution tails is controlled by the parameter α ∈ (0, 2), namely, the tails grow thicker as α values becomes smaller. Hence, alpha-stable noise can be seen as a type of impulsive noise with small α values (α ∈ (0, 1.5)) [21]. The alpha-stable distributions are closed under additions, i.e., the sum of two alpha-stable random variables is still an alpha-stable random variable. Moreover, the alpha-stable random variables obey the generalized central limit theorem [21]. However, this class of alpha-stable distribution random variables has no closed-form expressions for densities and distribution functions (except for Gaussian distribution, Cauchy distribution, and Levy distribution). The distribution with α = 2 corresponds to the well-known Gaussian distribution, and the one with α = 1 corresponds to the Cauchy distribution. Figure 1 shows the probability density functions (PDFs) of alpha-stable distributions S (α, 0, 1, 0) with different values of α. We can see that the distributions of this class are all bell-shaped, with increasing density on the left and decreasing on the right. In addition, the tail of the bells becomes heavier as the value of α decreases. The meridian distribution is a member of the generalized Cauchy distributions (GCD) family [30], and it combines the advantages of the GCD and alpha-stable distributions. Moreover, an estimator derived from the meridian distribution is robust to the impulsive noise [30]. The probability density function (PDF) of the meridian distribution is given by where γ > 0 is the scale parameter, and θ is the localization parameter. Without loss of generality, we consider θ = 0 in our paper. A careful inspection of the meridian distribution shows that its PDF tail decays slower than the Cauchy case, resulting in a heavier-tailed PDF, that is, the meridian PDF exhibits tails heavier than that of the Cauchy PDF [29]. Moreover, by examining the well-established statistical relation between the Laplacian and meridian distributions, we can find that the ratio of two independent Laplacian distributed random variables is a meridian distribution [29]. The influence function of the meridian distribution is given by where sgn(·) is the sign function. The influence function determines the effect of contamination. The rejection point of the meridian is smaller than that of the Cauchy distribution as it has a higher influence function decay rate. This indicates that a signed detection algorithm in the presence of the impulsive noise with the meridian distribution is more robust than that in the Cauchy distributed noise [29]. 
The Proposed Variational Model In this section, we propose a new variational model for restoring blurred images under the alpha-stable noise environments. Motivated by existing work [6,13,18,29], we propose a variational model by applying the Bayes rule and the maximum a posteriori (MAP) estimator to restore the blurred images with alpha-stable noise based on the property of the meridian distribution and the TV. First, we focus only on the denoising scenario. Given a known image f , as in [6,13], by using the Bayes rule as well as the MAP estimation, we havê In obtaining Equation (6), we have omitted log (P ( f )) since it is a constant respect to u. As the image is corrupted by alpha-stable noise, for each pixel x ∈ Ω, we have where γ > 0 stands for the scale parameter. Therefore, Inspired by the idea of Aubert et al. [13], u is assumed to follow a Gibbs prior distribution. Therefore, we can obtain the TV regularization of u as follows: where β > 0 is a parameter, and R is the normalization factor. Hence, solving Equation (6) is equivalent to find the minimization of the following logarithmic probability. That is, Here, please note that the log 2 + log γ + log R is omitted since the three terms are all constants with respect to u. Therefore, our pure denoising with alpha-stable noise is given by where λ = 2 β > 0 is a regularization parameter. As one can see, we keep the same regularization term as in the ROF denoising model (Equation (2)) since the TV regularization term is useful for preserving edges, but we adapt the data fidelity term to the alpha-stable noise, introducing one that is suitable for such noise. We emphasize that the proposed model can be extended to other modern regularization terms such as framelets, sharelets, rank surrogates, dictionary learning, or the tight-frame approach. These regularization terms are effective for the restoration of blurred and noisy images. Thus, we start to prove the existence of the solution for Equation (11). (11) has a solution u * ∈ BV (Ω) satisfying: This leads to E (u) being lower-bounded, and we can find a minimal sequence {u n } ⊂ BV (Ω). In addition, for any fixed we can assume that 0 < a ≤ u n ≤ b, which implies that u n is bounded in L 1 (Ω). According to the definition of {u n }, E (u n ) is bounded. In addition, it is proved that u n is bounded in BV (Ω) since Ω |Du n | is bounded [31]. Hence, there is a subsequence that converges strongly in L 1 (Ω) and weakly in BV (Ω) to some u * ∈ BV (Ω). Furthermore, given 0 < a ≤ u * ≤ b, the lower semicontinuity of the TV, and the Fatou's Lemma, the solution to Equation (11) is obtained as u * . We then extend Equation (11) to the simultaneous deblurring and denoising scenarios. The restoration is conducted by solving the following optimization model: It is worth mentioning that Equation (12) is also a non-convex problem, as in the scenario of the pure denoising Equation (11). Since Equations (11) and (12) are both nonconvex, they cannot guarantee a global optimal solution. To overcome this drawback, we incorporate an additional penalty term into Equations (11) and (12) to obtain novel convex variational models in the following section. This penalty term is based on the median-filtered result of the noise image. In the following section, we propose a convex variational model for deblurring and denoising images, which is corrupted by both blur and alpha-stable noise. We first also focus on a convex variational model for denoising only. 
By introducing a penalty term into Equation (11), we obtain a convex variational model as follows: where g = medfilt2 ( f ) (g is the median filter function of f ) [18], λ > 0 and µ > 0 are the regularization parameters, respectively. As a result, three theorems are provided to confirm that the above model is strictly convex under certain conditions, and there is a unique solution to Equation (13). Proof. For each fixed x ∈ Ω, let the real function h on R + ∪ {0} be defined as We can easily compute the first and second order derivatives of h, as given by Since , h is convex. Furthermore, the function h has only one minimizer, so h is strictly convex when µγ 2 ≥ 1. Since the total variation regularization is convex, we can also conclude that the objective function in Equation (13) is strictly convex for µγ 2 ≥ 1. Based on Lemma 1, we can now prove the existence and uniqueness of the solution to Equation (13). Lastly, we also extend our convex variational model for the following simultaneous deblurring and denoising case: Since the blurring operator K is linear and nonnegative, we can conclude that the model in Equation (14) is convex when µγ 2 ≥ 1. In the following theorem, we state the existence and uniqueness of its solution. Proof. Let {u n } ∈ BV (Ω) be a minimizing sequence for Equation (14). Since the objective function in (14) is bounded, we know that Ω |Du n | is bounded [13,18]. As in the proof of Theorem 2 of [18], we can verify that u n − m Ω (u n ) 2 and u n − m Ω (u n ) 1 are bounded for each n (where m Ω (u n ) = 1 |Ω| Ω u n dx, |Ω| denotes the measure of Ω). Due to the continuity of the operator K ∈ L L 1 (Ω) , L 2 (Ω) , we know that the sequence {K (u n − m Ω (u n ))} is bounded in L 2 (Ω) and in L 1 (Ω). Moreover, for each n, the objective function in Equation (14) is bounded, hence (Ku n − g) 2 is bounded in L 1 (Ω). Thus, Ku n − g 1 is bounded as well, and hence Ku n 1 is bounded. One can easily find that |m Ω (u n )| K1 1 is bounded from Equation (15). Since (Ω) and in L 1 (Ω). Since BV (Ω) is closed and convex, {u n } is also bounded in BV (Ω). As a consequence, there is a possible subsequence u n k , which converges in L 1 (Ω) to some u * ∈ BV (Ω), and Du n k converges slightly as a measure to Du * . Since the linear operator K is continuous, Ku n k converges to Ku * in L 2 (Ω). Thus, u * is a solution of Equation (14) according to the lower semicontinuity of TV and Fatou's lemma. Primal-Dual Algorithm In this section, we employ the primal-dual algorithm [32,33] to solve the minimization problem in (14) since it is easy to implement and its convergence is guaranteed [32]. Due to the convexity of Equation (14), there are many algorithms that can be employed to solve the proposed image deblurring and denoising model such as the alternating direction method of multipliers (ADMM) [5,34,35] and the split-Bregman algorithm [36]. We address the general deblurring and denoising case, since the pure denoising case can be considered special when K is an invariant parameter. At first, the discrete version of our proposed image deblurring and denoising Equation (14) is derived, and the corresponding numerical solution is then given. Suppose that the noisy image f ∈ R mn is obtained from a two-dimensional pixel-array with dimension m × n, and K ∈ R mn×mn is the discretization of the continuous blurring operator. 
Now we introduce the discrete version of Equation (14): where G : R mn → R is defined as The first term of Equation (16) denotes the discrete total variation of the image u, and it is defined as where the discrete gradient ∇ ∈ R 2mn×mn is given by ∇u = ∇ x u ∇ y u . The first term on the right side of Equation (17) is a robust distance metric, which can be defined as the meridian norm. The meridian norm tends to behave like the L 1 norm for points within the unitary L 1 ball and gives the same penalization to large sparse deviations as to small clustered deviations [30]. As in [32], we introduce new variables v ∈ R 2mn and w ∈ R mn , and Equation (16) is then clearly equivalent to the following constrained optimization problem: To employ the primal-dual algorithm, we study the following optimization problem: where p ∈ R 2mn and q ∈ R mn are the dual variables, X is a real vector space R mn , and i+mn . Now we apply the primal-dual algorithm to the optimization problem of Equation (20). The primal-dual algorithm is defined through the following iterations: In the following, we provide details on how to solve them. Since the objective functions of Equations (21)-(23) are quadratic, the update of p, q, and u can be computed efficiently by where the divergence operator div = −∇ T . The update in Equation (24) can be obtained by applying the soft thresholding operator as where t k = v k − τ p k+1 . The optimality condition for (25) is given by that is We remark that, if K is the identity operator, i.e. the degraded image f is not blurred but is only corrupted by noise, there is no need to introduce the primal variable w and the dual variable q, and the algorithm can be simplified accordingly. The primal-dual algorithm above to solve the optimization problem of Equation (20) can be summarized in the following table. The termination condition in Algorithm 1 will be discussed in Section 5. In the rest of this section, we study the existence of the solution to Equation (20) and the convergence of Algorithm 1. Proposition 1. The saddle-point set of Equation (35) is nonempty. Proof. The proof of the above proposition is the same as that for Proposition 2 of [37]. We remark that we can easily verify that the required conditions in [38] are satisfied for the proposed primal-dual formulation: (H1): X and Y are nonempty closed convex sets; (H2): The objective function (denote Φ (x, y) ) of (35) is convex-concave on X × Y in the following sense: for each y ∈ Y, the function Φ (·, y) is convex, for each x ∈ X, the function Φ (x, ·) is concave; (H3): X is bounded, or y 0 ∈ Y such that Φ (x, y 0 ) → +∞ when x → +∞; (H4): Y is bounded, or x 0 ∈ Y such that Φ (x 0 , y) → +∞ when y → +∞; Thus, there exists a nonempty convex compact set of saddle-points on X × Y of Equation (35). The following proposition shows the convergence of Algorithm 1. Proposition 2. Let A 2 be the operator 2-norm of A , and the iteration of x k , y k be defined by Algorithm 1. If στ A 2 2 < 1, then x k , y k converges to a saddle point(x * , y * ) of primal-dual problem in Equation (35). Proof. The proposition can be seen as a special case of Theorem 1 in [32]. The conclusion (a) of Theorem 1 in [32] establishes that x k , y k is a bounded sequence, so that some subsequence x k l , y k l converges to some limit (x * , y * ). Observe that the conclusion (b) of Theorem 1 in [32] implies that lim k→∞ x k − x k−1 = lim k→∞ y k − y k−1 = 0, and x k l −1 and y k l −1 in particular converge, respectively, to x * and y * . 
It follows that the limit (x * , y * ) is a fixed point of the iterations of Algorithm 1, hence a saddle-point of our problem. Experimental Results and Analysis In this section, numerical results are obtained by applying our proposed models to blurred images corrupted by alpha-stable noise. We also compare our models with other existing and well-known models. We take six images-Cameraman (256 × 256), Peppers (256 × 256), Lena (256 × 256), Phantom (256 × 256), Boat (256 × 256), and Fruits (256 × 256)-for experiment and comparison. For further comparison, four objective image quality metrics-the peak signal noise ratio (PSNR) in dB, the measure of structural similarity index (SSIM) [40], the multiscale SSIM (MS-SSIM) [41], and the feature similarity index (FSIM) [42]-are used to measure the performance of the proposed models for the test images. Each of the same experiments is repeated 10 times, so the PSNR, SSIM, MS-SSIM and FSIM values are the averaged results of 10 experiments. The PSNR and SSIM are respectively defined as follows: whereû is the restored image, u is the original image, µû and µ u are their respective mean, σ 2 u and σ 2 u are their respective variances, σû u is the covariance of them, and c 1 , c 2 > 0 are constants. PSNR, SSIM, MS-SSIM, and FSIM are all measures of the performance of an image. A higher PSNR indicates that the better restored image will be picked up, and the SSIM, MS-SSIM, and FSIM values are closer to 1. The characteristic of the restored image is more similar to the original image. In our numerical simulations, we terminate the algorithm when the relative change of the objective function between two consecutive iterations becomes small enough, i.e., where E(·) denotes the objective function of the proposed Equation (14), and ε > 0 is a tolerance. For Algorithm 1, we have found that smaller tolerance values (e.g., ε = 10 −4 ) do not consistently improve the relative error as the runtimes increase, so we set ε = 10 −3 in our numerical experiments. Since γ depends on the noise level, we take the same value of the parameter found in [30], that (where f (c) denotes the cth quantile of f ). We chose σ = τ = 0.3 and µγ 2 = 1. In addition, the regularization parameter λ balances the trade-off between the TV regularization term and the data fidelity term. We manually tune it in order to obtain the highest PSNR values of the restored image. We would first like to illustrate the different effects of Gaussian noise, impulse noise, and alpha-stable noise. Figure 2a shows the original Cameraman image, and Figure 2b-d represent, respectively, the images degraded by Gaussion noise, impulse noise, and alpha-stable noise (with It is clear from Figure 2 that the image corrupted by Gaussian noise looks different from the images corrupted by impulse noise and alpha-stable noise (with α = 0.5), while to some extent the alpha-stable noise and impulse noise are close to each other. For example, some pixels are degraded to white or black with the impulse noise and the alpha-stable noise (with α = 0.5), while the image corrupted by Gaussian noise is uniformly modified and all the pixels are corrupted by noise (see Figure 2f). Although the alpha-stable noise is similar to the impulse noise, there are also some very important differences, for instance, in the impulse noise, some pixels are noise-free (see Figure 2g), while in the alpha-stable noise, the noise free pixels are very rare (see Figure 2h). 
Image Denoising

In this subsection, we first focus on the pure denoising case. The noisy image f is generated as f = u + η = u + ξρ, where ρ follows the alpha-stable distribution and ξ > 0 gives the noise level. We compare the proposed image denoising model with the Cauchy model [18], the TVL1 model [11], and the meridian filter [29]; these models are all efficient for recovering images corrupted by impulsive noise. The proposed image denoising model is applied to the Cameraman image in the presence of alpha-stable noise at different tail parameters α (with ξ = 0.04 and ρ following the alpha-stable distribution S(α, 0, 0.2, 0)). In order to evaluate the performance of the proposed image denoising model quantitatively, two objective criteria, PSNR and SSIM, are computed and provided in Figure 3. The Cauchy and TVL1 models perform similarly for denoising, so we only provide the results of the Cauchy model in Figure 3.

Figure 3 gives the PSNR and SSIM of the noisy Cameraman image and of the images recovered by the proposed image denoising model, the Cauchy model, and the meridian filter at different tail parameters α. As the tail parameter α increases, the PSNR and SSIM values become higher for all of these methods; as the tail parameter α decreases, the superiority of the proposed method becomes obvious. Moreover, our proposed image denoising model outperforms the Cauchy model and the meridian filter in terms of PSNR and SSIM at the same tail parameter. In all, the proposed model significantly outperforms the commonly employed image denoising models in impulsive noise environments (with small α values) while providing comparable performance in less demanding, light-tailed environments (with high α values). In particular, the PSNR values of our proposed model are all above 30 dB for tail parameters α ≥ 1, values generally considered to indicate excellent recovery; in the remainder of this part we therefore let ρ follow the alpha-stable distribution S(1, 0, 0.2, 0).

For a quantitative comparison of performance, the PSNR in dB and the SSIM are used to measure the performance of the different models on the three noisy test images Cameraman, Peppers, and Lena. The PSNR and SSIM values for the noisy images (ξ = 0.04 and ρ obeying S(1, 0, 0.2, 0)) and for the images recovered by the different methods are listed in Table 1. Table 1 gives the PSNR and SSIM values for the three test images and for the recovered results produced by our proposed image denoising model, the Cauchy model, the TVL1 model, and the meridian filter, respectively. Obviously, our proposed image denoising model outperforms the TVL1 model, the Cauchy model, and the meridian filter in terms of PSNR and SSIM at the same noise level (ξ = 0.04 and ρ following S(1, 0, 0.2, 0)). Taking the noisy Cameraman image as an example, with our method we increase the PSNR of the recovered image by 2.836 dB at the same noise level and obtain the largest SSIM values.

Image Deblurring and Denoising

In the following subsection, we focus on the joint deblurring and denoising case. Here, we consider the recovery of images degraded by both Gaussian blur (a window size of 9 × 9 and standard deviation of 1) and alpha-stable noise (ξ = 0.04).
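As an illustration of this degradation model, the sketch below generates a blurred and noisy observation with SciPy. The helper name is hypothetical, `scipy.ndimage.gaussian_filter` stands in for the paper's 9 × 9 blur kernel, and `levy_stable`'s (α, β, loc, scale) parametrization is assumed to match the S(α, 0, 0.2, 0) notation used here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import levy_stable

def degrade(u, alpha=1.0, scale=0.2, xi=0.04, blur_sigma=1.0, seed=0):
    """Apply Gaussian blur, then add scaled alpha-stable noise: f = K u + xi * rho."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(u, sigma=blur_sigma)  # stand-in for the 9x9 kernel K
    rho = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                          size=u.shape, random_state=rng)
    return blurred + xi * rho
```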
As in the previous subsection, we compare our proposed deblurring and denoising model with other well-known image deblurring and denoising methods for impulsive noise, such as the TVL1 model [11] and the Cauchy model [18]. The proposed image deblurring and denoising model is applied to the blurred and noisy Cameraman image at different tail parameters α. The PSNR and SSIM are computed and provided in Figure 4. Figure 4 provides the quantitative results of our proposed image deblurring and denoising model, the TVL1 model, and the Cauchy model. It is clear that all of these methods perform well. As the alpha values increase, the PSNR and SSIM values become higher for all of these methods; as the alpha values decrease, the superiority of our proposed model becomes obvious. Hence, our proposed model performs better at the same tail parameter α than the TVL1 model and the Cauchy model. Since the PSNR and SSIM performances depend on the tail parameter, it is necessary to choose an appropriate tail parameter for image deblurring and denoising. In the following tests, the tail parameter is set to α = 1; in practice, we can see from Figure 4 that the recovered results with α = 1 are of good quality for all models.

In order to evaluate the performance of the proposed image deblurring and denoising model quantitatively, we now apply it to recover three different images (Phantom, Boat, and Fruits) degraded by Gaussian blur (a window size of 9 × 9 and standard deviation of 1) at the same noise level (ξ = 0.04 and ρ following S(1, 0, 0.2, 0)). Experimental results on these test images are shown in Figures 5-7, respectively. Figure 5a is the blurred and noisy Phantom image, and Figure 5b-d are the images recovered by our proposed image deblurring and denoising model, the TVL1 model, and the Cauchy model, respectively. Figures 6 and 7 show the analogous results for the Boat and Fruits images, respectively. It is clear from Figures 5-7 that the images recovered by our proposed image deblurring and denoising model retain more detailed information and are much closer to the original test images than the images recovered by the TVL1 model and the Cauchy model. For easier observation, we take the Fruits image as an example and magnify the top-left regions of the restored results of the different algorithms: Figure 8a-d show the magnified top-left regions of Figure 7a-d, respectively. It is clear from Figure 8 that the reconstruction obtained with our proposed method is superior to those of the TVL1 and Cauchy methods. We can also see that the restored result of the proposed method maintains the salient line features of the original image and has clearer outlines with reduced noise and blur effects.

For a further quantitative comparison of the performance of the proposed image deblurring and denoising model, the PSNR in dB and the SSIM were computed with the different models for the three groups of blurred and noisy test images. The PSNR and SSIM values for the blurred and noisy versions of the three test images Cameraman, Peppers, and Lena (Gaussian blur with a 9 × 9 window and standard deviation of 1, ξ = 0.04, and ρ following S(1, 0, 0.2, 0)), and for the images recovered by the different methods, are listed in Table 2.
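Putting the earlier sketches together, an experiment of the kind reported in these tables might be scripted as follows. This driver is hypothetical and assumes the `degrade`, `tv_l1_denoise`, `psnr`, and `ssim_global` helpers sketched above are in scope; a deblurring variant would additionally need the blur operator K inside the solver, which the pure-denoising sketch omits.

```python
import numpy as np

def run_trials(u, alphas=(0.5, 1.0, 1.5), n_trials=10):
    """Average PSNR/SSIM over repeated noise realizations, one row per alpha."""
    for alpha in alphas:
        scores = []
        for seed in range(n_trials):
            f = degrade(u, alpha=alpha, xi=0.04, seed=seed)
            u_hat = tv_l1_denoise(np.clip(f, 0.0, 1.0))
            scores.append((psnr(u_hat, u), ssim_global(u_hat, u)))
        mean_psnr, mean_ssim = np.mean(scores, axis=0)
        print(f"alpha={alpha}: PSNR={mean_psnr:.2f} dB, SSIM={mean_ssim:.3f}")
```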
To further verify the performance of the algorithm, the PSNR, SSIM, MS-SSIM, and FSIM values for the blurred and noisy Phantom image and for the images recovered by the different methods are listed in Table 3. It is obvious from Table 3 that a notable performance improvement is achieved by the proposed image deblurring and denoising model compared with the TVL1 model and the Cauchy model in terms of these four image quality metrics. This is also consistent with the visual results of Figure 5. In addition, we have employed other classical test images to evaluate the deblurring and denoising performance and found that a similar performance gain in terms of PSNR, SSIM, MS-SSIM, and FSIM is achieved by the proposed method.

Conclusions

In order to restore images degraded by blur and alpha-stable noise while preserving their edges, we have proposed a new variational restoration method in this paper. Inspired by the ideas of the ROF model and the Cauchy model of [18], we have obtained a convex model. Theoretical results establish the existence and uniqueness of the solution to our proposed model. In addition, we have employed the primal-dual algorithm [32] to solve the corresponding convex problem involved in our model and have shown that its convergence is guaranteed. Experimental results demonstrate that the proposed method significantly outperforms the commonly employed image deblurring and denoising models in impulsive noise environments (with small α values, i.e., α ∈ (0, 1.5)), while providing comparable or better performance in less demanding, light-tailed environments (with high α values, i.e., α ∈ (1.5, 2)).
7,908.6
2018-04-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Gribov ambiguities at the Landau -- maximal Abelian interpolating gauge

In a previous work, we presented a new method to account for the Gribov ambiguities in non-Abelian gauge theories. The method consists of the introduction of an extra constraint which directly eliminates the infinitesimal Gribov copies without the usual geometric approach. Such a strategy allows one to treat gauges with a non-hermitian Faddeev-Popov operator. In this work, we apply this method to a gauge which interpolates between the Landau and maximal Abelian gauges. The result is a local and power-counting renormalizable action, free of infinitesimal Gribov copies. Moreover, the interpolating tree-level gluon propagator is derived.

Introduction

One of the most important and challenging open problems in theoretical physics is the full comprehension of the non-perturbative features of Yang-Mills theories. Responsible for describing the successful Standard Model at high energies, Yang-Mills theories still lack a completely consistent quantization. As pointed out by V. N. Gribov [1], in the Landau gauge a residual gauge symmetry survives the Faddeev-Popov gauge-fixing procedure [2]. It is a known fact that, to quantize a gauge theory, it is necessary to consistently eliminate the gauge freedom of the Yang-Mills action; see also [3]. The residual gauge symmetry is characterized by the presence of redundant configurations (called Gribov copies) which still contribute to the path integral. A very important remark is that this is not a particular defect of the Landau gauge, but of all covariant gauges, as formally shown by I. M. Singer [4]. Since these configurations represent a redundancy in the theory, their elimination is an unavoidable requirement. Also in [1], Gribov showed that copies which are related by infinitesimal gauge transformations are associated with the zero-modes of the Faddeev-Popov operator (or, equivalently, with the poles of the ghost propagator) in the Landau gauge. In fact, this is true at least for all gauges that depend exclusively on the gauge field; see [5]. Moreover, Gribov proposed the elimination of these infinitesimal copies by restricting the path integral to a region which is free of infinitesimal copies. This region is known as the first Gribov region or, simply, the Gribov region. Essentially, it is defined as the region where the Faddeev-Popov operator is positive-definite, a property that ensures that no infinitesimal copies are present. A very important feature is that all gauge orbits actually cross the Gribov region [6]. Then, since all physical configurations have at least one representative inside the Gribov region, the restriction is a consistent improvement of the Faddeev-Popov trick.

The restriction of the path integral to the Gribov region implies a dramatic modification of the gluon and ghost propagators. On the one hand, the gluon propagator is suppressed in the infrared regime and acquires imaginary poles; on the other hand, the ghost propagator is enhanced, with an infrared behaviour of the type 1/k^4. These properties show, in an explicit way, that the elimination of the infinitesimal copies is of great importance for a consistent quantization, deeply modifying the theory and providing evidence of confinement. The solution proposed by Gribov works nicely when the Faddeev-Popov operator is hermitian. The reason is that hermiticity ensures that the spectrum of the Faddeev-Popov operator is real and, therefore, that it is possible to establish an order relation between the eigenvalues of this operator.
Hence, it is possible to define a region where the Faddeev-Popov operator is positive-definite, and the restriction of the path integral to this region ensures the absence of infinitesimal copies. However, if we wish to work with non-hermitian Faddeev-Popov operators, performing such a restriction is not a clear procedure, because the order relation and, therefore, the definition of a region no longer make sense. In this sense, hermiticity of the Faddeev-Popov operator plays a fundamental role in the elimination of copies à la Gribov. The method developed in [5], by contrast, eliminates the infinitesimal copies directly through the introduction of an extra constraint. Therefore, all infinitesimal Gribov copies are eliminated at the classical level. In a certain sense, this elimination is direct and does not require the construction of a geometric region to restrict the path integral. The only requirement is to avoid all zero-modes, which characterize the infinitesimal copies. Since the identification of Gribov copies with the zero-modes of the Faddeev-Popov operator is independent of the hermiticity of this operator, this method should also be employable to treat gauges with non-hermitian Faddeev-Popov operators. Thus, this method brings a new perspective on the elimination of copies. It is important to recall that the method developed in [5] requires exclusively A-dependent gauge conditions, although no particular choice is imposed. Therefore, in principle, there is a large class of gauges for which the method is applicable. In particular, also in [5], consistency tests were made by applying the method to the Landau and maximal Abelian gauges. It is worth mentioning that the method described here is not the only alternative to the Gribov and Zwanziger techniques; there are other techniques to deal with the Gribov ambiguities, see for instance [20,21].

In the present work, we apply this new method [5] to a gauge with a non-hermitian Faddeev-Popov operator: the Landau -- maximal Abelian interpolating gauge (LMAIG) [22,23,24]. This gauge has at least three advantages motivating the present investigation. The first, as already mentioned, is that the traditional approaches are not able, in principle, to deal with this gauge or any other gauge with a non-hermitian Faddeev-Popov operator. Second, it is a gauge that links the two gauges in which the Gribov problem can be handled. Thus, it is possible to verify the consistency of the results by interpolating between both limits of the LMAIG. Third, this gauge can be defined through a minimizing functional given by where η is the interpolating parameter and the indices refer to the non-Abelian and Abelian sectors of the SU(N) group (see Sec. 3 for the conventions). The gauge conditions of the LMAIG can be obtained by minimizing the operator (1) with respect to gauge transformations. This means that the LMAIG could, in principle, be implemented on the lattice. This is a very welcome feature, because it can work as a test for the application of the method.

This paper is organized as follows: in Sect. 2, a brief review of the method developed in [5] is given. In Sect. 3, we review the decomposition of algebra-valued quantities into diagonal and off-diagonal components, present the maximal Abelian gauge, and carry out the explicit decomposition of the Landau gauge. After this, we introduce the LMAIG and discuss its features relevant for the analysis of Gribov copies. Then, in Sect. 4, we apply the method to the LMAIG and construct an action free of infinitesimal copies.
In Sect. 5 we calculate the diagonal and off-diagonal gluon propagators and show how they can be deformed into the Landau and maximal Abelian gauge propagators. In Sect. 6 we make some comments about the gap equation in this method. Finally, in Sect. 7, we present our conclusions. Many algebraic details are left to the appendices to avoid long interruptions of the text.

A brief review of the method

The elimination method proposed in [5] is based on the introduction of an extra constraint that ruins the Gribov copies equation. In this section, we provide a brief review of the method in order to apply it to the interpolating LMAIG [23,24]. It is not our intent to be rigorous here; for formal details we refer to [5]. Let us consider Yang-Mills theory for a given semi-simple Lie group G. We choose a gauge condition ∆^A that depends exclusively on the gauge field, i.e., ∆^A = ∆^A(A), where the group indices run as A, B, C, ... ∈ {1, 2, ..., dim G}. As pointed out by Gribov [1], the Faddeev-Popov gauge-fixing procedure does not completely remove the gauge symmetry. Thus, some redundant configurations, which are connected by gauge transformations, are still taken into account in the path integral. The existence of these copies depends on the existence of solutions of the Gribov copies equation, which is obtained by requiring gauge invariance of the gauge condition, i.e., where U ∈ G and g is the coupling parameter. Although we do not know much about the elimination of Gribov copies generated by large gauge transformations, we understand reasonably well how to handle the copies generated by infinitesimal transformations. For this reason, we restrict ourselves to this case. The infinitesimal gauge transformation is then given by where ζ^B is the infinitesimal gauge parameter. Thus, the copies equation (2) becomes where D^{AB}_μ ≡ δ^{AB}∂_μ − g f^{ABC} A^C_μ is the covariant derivative and f^{ABC} are the structure constants. At first order in ζ, Eq. (4) can be written as where ∇^{AB} is the Faddeev-Popov operator. Summarizing, Eq. (5) is obtained by requiring infinitesimal gauge invariance of ∆(A).

The BRST transformation, defined through the nilpotent operator s, is given by where c^A is the Faddeev-Popov ghost field, c̄^A is the antighost field, and b^A is the auxiliary Nakanishi-Lautrup field. It is immediate to see that the first equation of (7) has the same form as (3). Of course, we have to understand that these are different transformations: the BRST transformation, in particular, maps a field with vanishing ghost number into a composite field with ghost number +1, while the gauge transformation does not change the ghost number. Nevertheless, it was proved in [5] that these transformations are homotopic. Thus, since they have the same formal structure, we can obtain the copies equation by requiring, not the gauge invariance of the gauge condition, but the BRST invariance of the very same gauge condition. Hence, we can write the copies equation as The key point of the method resides at this stage: since we want a theory free of copies, we have to ruin the copies equation. This can be seen as a new constraint on the theory. Thus, from Eq. (8), we can see that, to ruin this equation, we need to break the BRST invariance of the copies equation. In this sense, we want to write an equation such that where the term Ω^A must prevent the theory from developing infinitesimal copies. Roughly speaking, this is the main idea behind the method.
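Since the display equations are not reproduced in this text, the following is a schematic reconstruction of the first-order copies equation from the definitions above (up to overall sign conventions, which may differ from the original):

```latex
% Copies condition for an A-dependent gauge condition \Delta^A(A) = 0 under
% an infinitesimal transformation \delta A^B_\mu = D^{BC}_\mu \zeta^C:
0 = \delta \Delta^A(x)
  = \int \mathrm{d}^4 y \,
    \frac{\delta \Delta^A(x)}{\delta A^B_\mu(y)} \, D^{BC}_\mu \zeta^C(y)
  \equiv \nabla^{AC} \zeta^C(x) .
% For the Landau gauge, \Delta^A = \partial_\mu A^A_\mu, this yields the
% familiar Faddeev-Popov operator
\nabla^{AB} = \partial_\mu D^{AB}_\mu
            = \partial_\mu \left( \delta^{AB}\partial_\mu
              - g f^{ABC} A^C_\mu \right).
```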
Now, in order to implement Eq. (9) in a gauge theory, we have to be careful to preserve all the well-established features of the perturbative regime. A very important requirement is that the BRST symmetry must be restored in the perturbative regime. This means that the BRST breaking must be soft. Another requirement is that, since we do not want to affect the ghost sector, which is of great importance for the perturbative sector, we must introduce a set of trivial auxiliary fields to mimic Eq. (8). Finally, to ruin the copies equation and impose a consistent equation compatible with (9), a soft BRST breaking term must be introduced. As argued in [5], these goals are achieved by the introduction of two extra terms in the gauge-fixed action, namely S_triv and Ξ. These terms are responsible for implementing a new constraint in the theory, satisfying the requirements mentioned before and reproducing Eq. (9). Thus, we impose the action where As stated before, the term S_triv is introduced to mimic the copies equation. To do so, we introduce a BRST quartet in such a way that It is easy to see that the equation of motion for ϕ produced by S_triv is exactly the copies equation. Moreover, the indices of the auxiliary fields are not arbitrary; they describe the degeneracy of the copies equation. Since our point is precisely to ruin this equation, the term Ξ has the following general form where γ is a mass parameter introduced to fulfill the soft-breaking requirement. With Ξ, we see that the equation of motion for ϕ is modified and represents a "ruined" copies equation, which is the extra constraint that ensures the absence of infinitesimal copies. The action given by Eq. (10) is then an action which satisfies the constraint given by Eq. (15). With this, we eliminate all infinitesimal copies at the classical level. This result qualitatively coincides with the well-established refined Gribov-Zwanziger action; see [14].

It is worth mentioning that the form of the breaking term defined by Eq. (14) has a kind of "freedom". To ruin the copies equation we must add a term which is responsible for breaking the BRST invariance of the gauge condition. This term must depend on ϕ, for the obvious reason that, if it did not, the variation of the action (10) with respect to ϕ would not produce a "ruined" copies equation, as required. Moreover, the derivative of this term with respect to ϕ must depend exclusively on the gauge field A. The reason is that, if it depended on other fields, this term would vanish at their trivial vacua. Requiring the exclusive dependence on A, we ensure that the only copies that could be generated are related to A = 0. However, if they exist, they are necessarily different from zero and, therefore, the constraint will eliminate the copy A = 0 for the appropriate A. The conclusion is that the first term of Eq. (14) is sufficient for our requirements. In this sense, to ruin the copies equation in a minimal way, we could add only (16) to the original action, and it would generate a theory free of infinitesimal copies. In this case, the extra terms can be included via the LCO technique in the usual way [25,26,27,28,29]. On the other hand, once γ is at our disposal, the extra soft terms in (14) are allowed by power counting. What decides whether they are present or not are the Ward identities of the particular gauge chosen. In both cases, the effect is the recovery of the refined Gribov-Zwanziger action [14,15].
Furthermore, there is another possible freedom: each term proportional to the mass parameter γ could be replaced by a different mass parameter. Essentially, this can be obtained by the redefinition m_i = ζ_i γ, which means that the independent character of these coefficients is accounted for by the parameters ζ_i. Let us also comment on the term proportional to ζ_1. As can be seen from Eq. (14), a larger field combination is associated with the parameter ζ_1. The reason is that we can introduce such a combination as a BRST-exact form, for instance γ² s ∫ d⁴x ω^{AB}_μ ϕ^{AB}_μ. We could introduce these mass terms in an independent way, but, following the idea of a minimal breaking of the BRST symmetry, a BRST-exact term fits our purposes better. Finally, it is important to understand that the inclusion of all the extra terms proportional to γ implies a deep modification of the so-called gap equation [5]. Until now, no results are known for this generalized gap equation, and we are not able to decide whether following it is the better choice. However, we will keep these terms as in (14), because they are important to reproduce the refined Gribov-Zwanziger features and also because we will not deal with the gap equation in this work. In fact, if the extra terms are not allowed for any reason, all we have to do is set the corresponding ζ_i to zero.

The Landau and maximal Abelian gauges and their interpolation

From now on, we restrict ourselves to the SU(N) gauge group. In [23,24], a gauge fixing which interpolates among the Landau, Coulomb, and maximal Abelian gauges was studied. In the present work, we analyze the Gribov problem in this gauge; however, we consider only the interpolation between the Landau and maximal Abelian gauges and avoid the Coulomb sector of the gauge. Since the MAG is characterized by imposing different gauge conditions on the diagonal and off-diagonal components of the Lie-algebra-valued fields, we decompose the Landau gauge in order to provide an explicit comparison with the reduction of the interpolating gauge to the Landau case. To fix notation and conventions, we briefly review this kind of decomposition, called the Abelian decomposition [18]. Essentially, the SU(N) group is split into its Abelian and non-Abelian sectors, where the Abelian sector is recognized as the Cartan subgroup. The gauge-field decomposition is taken as where the G^A are the (N² − 1) generators of the SU(N) group, the G^a are the N(N − 1) off-diagonal generators of the gauge group, and the G^i are the (N − 1) Cartan subgroup generators. The indices {a, b, c, ..., h} run from 1 to N(N − 1) and the indices {i, j, k, ...} run from 1 to (N − 1). As a consequence of this decomposition, we can write the decomposed BRST transformations (7) as where the covariant derivative is defined with respect to the Abelian sector and acts on non-Abelian quantities. We can now write the gauge-fixed Yang-Mills action (11) as where F^a_μν and F^i_μν are the components of the field strength, which are, explicitly, and ∆^a(A) and ∆^i(A) are related to the components of the gauge condition.
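The decomposition referred to above is presumably of the standard form (the display equation itself is not reproduced in this text):

```latex
% Abelian decomposition of the gauge field into off-diagonal and Cartan parts:
A_\mu = A^A_\mu G^A = A^a_\mu G^a + A^i_\mu G^i ,
\qquad a = 1, \dots, N(N-1), \qquad i = 1, \dots, N-1 .
```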
To complete the Abelian decomposition, we can write the Jacobi identities as

The maximal Abelian gauge

The maximal Abelian gauge imposes different gauge conditions on the diagonal and off-diagonal sectors of the gauge fields, namely and the corresponding gauge-fixing action is given by where the operator ∇^{ab} is the Faddeev-Popov operator. The gauge conditions (23) can be obtained from the equations of motion of b^a and b^i. If we consider the conditions for the existence of Gribov copies, we can derive the copies equation by requiring gauge/BRST invariance of the gauge condition [5]. Hence, since we have two different gauge conditions in the MAG, it is natural to expect two copies equations. In fact, if we compute the copies equations directly from the gauge conditions (23), we obtain where ζ^a and ζ^i are the off-diagonal and diagonal components of the infinitesimal gauge parameter, respectively. From (26) we see that the first equation involves only the off-diagonal component of the gauge parameter, while the second involves both. Simple manipulations of the second equation provide which shows that, once one has solved the first equation of (26), the second does not contribute any extra information. This redundancy is the reason why only the first equation of (26) is considered as the copies equation for the MAG. A final comment is that the Faddeev-Popov operator is hermitian in this case; see [16,17] and references therein for more details.

The decomposed Landau gauge

The Landau gauge condition does not distinguish between the diagonal and off-diagonal sectors of the gauge connection. However, since we work with decomposed fields, we also write the relevant Landau gauge-fixing expressions in the Abelian decomposition. The result is the decomposed Landau gauge-fixing action, given by It is immediate to obtain the Faddeev-Popov operator from (29). In components, it is given by Unlike the case of the MAG, all the components in (30) contribute to the copies equations. If we again consider the diagonal and off-diagonal components ζ^i and ζ^a of the infinitesimal gauge parameter, we can write the following copies equations In this case, both equations involve all components of the infinitesimal gauge parameter. Hence, we cannot discard either of them, and all components of the Faddeev-Popov operator are essential to the analysis. Another important remark is that the full Faddeev-Popov operator of the Landau gauge is also hermitian; see, for instance, [1,3]. If we adopt a matrix viewpoint, a hermitian operator is such that the elements of its diagonal are hermitian operators and all elements above the diagonal are the hermitian conjugates of the elements below it. If we analyze the mixed components of (30), we can see that (∇^{ai})^{T*} = ∇^{ia}.

Interpolating gauge

In order to provide a gauge fixing which interpolates between the Landau and maximal Abelian gauges [23], we introduce a real interpolating parameter η and write the following gauge conditions Thus, it is clear that the gauge condition for the diagonal component of the gauge field is identical in the Landau and maximal Abelian gauge cases. Moreover, for the first equation of (32), the case η = 1 gives the Landau gauge condition, while for η = 0 the MAG condition is achieved. Consequently, we can write the gauge-fixing term as It is a simple exercise to verify that, for η = 0, Eq. (33) reduces to Eq. (24), and, for η = 1, it reduces to Eq. (29).
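For orientation, the two limits of the elided conditions (32) stated in the text can be summarized as follows (the off-diagonal MAG condition is written in its standard form, which is an assumption here):

```latex
% Limits of the interpolating gauge condition (32):
\eta = 1:\quad \partial_\mu A^a_\mu = 0 , \quad \partial_\mu A^i_\mu = 0
\qquad \text{(Landau gauge)} ,
\eta = 0:\quad D^{ab}_\mu A^b_\mu = 0 , \quad \partial_\mu A^i_\mu = 0
\qquad \text{(maximal Abelian gauge)} .
```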
4 Eliminating the Gribov copies

The Faddeev-Popov operator and Gribov ambiguities

The gauge conditions presented in the last section provide a way to interpolate between the Landau and maximal Abelian gauges. To analyze the Gribov problem in this gauge, it is fundamental to study the Faddeev-Popov operator in order to establish the copies equation and its main properties. The way we do this is completely analogous to the MAG (as shown in Sect. 3.1). Thus, by requiring gauge/BRST invariance of (32), the following operators are obtained These operators act on a gauge-parameter pair (ζ^a, ζ^i), exactly as in (31). First of all, as a consistency check, we must verify whether, for suitable choices of the parameter η, this operator reduces to the previous operators of the Landau and maximal Abelian gauges. Starting with η = 0, the first equation of (34) reduces immediately to (25), the second turns out to be the null operator, and the last remains unaffected. Actually, the last two equations simply define the redundant condition (27) of the MAG. We conclude that the choice η = 0 returns the Faddeev-Popov operator of the MAG, as expected. On the other hand, if we choose η = 1, we can see that, after some simple manipulations, the first equation of (34) reduces to ∇^{ab} = −∂_μ D^{ab}_μ − g f^{acb} A^c_μ ∂_μ, which is exactly the purely off-diagonal component of the Faddeev-Popov operator of the Landau gauge. The second equation of (34) reduces to the second equation of (30). The third equation of (34) does not involve the interpolating parameter η, but once we choose η = 1, the first gauge condition of (32) becomes ∂_μ A^a_μ = 0, which can be substituted into the third equation of (34) to obtain the same result as in the Landau gauge (30). Finally, the fourth equation is unchanged. Summarizing, these are the components of the Faddeev-Popov operator of the Landau gauge. This concludes our consistency checks for now.

Let us make a quick remark about the Faddeev-Popov operator. It is widely known that, in the standard techniques employed to deal with the Gribov problem [1,3], the hermiticity of the Faddeev-Popov operator is essential. Since we can associate Gribov copies with zero-modes of the Faddeev-Popov operator, knowledge of its spectrum is very welcome. A hermitian operator has only real eigenvalues, allowing one to establish an order relation between them. In the Landau gauge, for instance, it is through this analysis that it is possible to construct a region where the Faddeev-Popov operator is positive-definite. For this reason, it is possible to eliminate all infinitesimal Gribov copies from the path integral by restricting the integration to this domain. This technique turns the analysis of the Gribov problem into a geometrical problem. On the other hand, this method cannot be employed for non-hermitian Faddeev-Popov operators: it is then not clear how to generate a region that would restrict the path integral. Nevertheless, the method developed in [5], briefly reviewed in Sect. 2, does not require the definition of a region in which to perform the functional integration. Thus, we can apply it to gauges with non-hermitian Faddeev-Popov operators. In fact, as discussed in [5], in the case of hermitian Faddeev-Popov operators the new method is equivalent to restricting the path integral to a region defined by the zero-modes of the corresponding Faddeev-Popov operator.
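In the block notation of the Abelian decomposition, the hermiticity criterion discussed here can be restated schematically as follows (this is a summary of the matrix viewpoint above, not an equation from the original):

```latex
% Block form of the Faddeev-Popov operator and the hermiticity criterion:
\nabla =
\begin{pmatrix}
  \nabla^{ab} & \nabla^{ai} \\
  \nabla^{ib} & \nabla^{ij}
\end{pmatrix},
\qquad
\nabla^\dagger = \nabla
\iff
(\nabla^{ab})^\dagger = \nabla^{ab},\;
(\nabla^{ij})^\dagger = \nabla^{ij},\;
(\nabla^{ai})^\dagger = \nabla^{ia}.
% For the LMAIG at generic eta the mixed condition fails, so the full
% operator is not hermitian, while the Landau (eta = 1) and MAG (eta = 0)
% limits recover hermitian operators.
```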
Getting back to the LMAIG Faddeev-Popov operator (34), its first decomposed operator can be rewritten as where is the Faddeev-Popov operator of the MAG. As mentioned in Sect. 3.1, the operator defined by Eq. (36) is hermitian. It is possible to show that the difference between these two operators is not hermitian; the details of the proof can be found in App. A. The purely diagonal component of the Faddeev-Popov operator is trivially hermitian. Now only the mixed components are left, and we follow an idea analogous to that presented in Sect. 3.2. The whole idea is based on the fact that we can write the Faddeev-Popov operator in matrix form, with two blocks formed by the purely off-diagonal and purely diagonal components. The terms outside these blocks are the mixed ones, and to analyze their hermiticity we must take their hermitian conjugates. Thus, transposing and taking the complex conjugate of the matrix, it is clear that (∇^{ai})^† ≠ ∇^{ia}. In fact, since one involves the parameter η and the other does not, it is impossible to establish a hermiticity relation between these components. Thus, pictorially, the full Faddeev-Popov operator is not hermitian. Moreover, unlike the MAG, it is not possible to eliminate some components of this operator from the analysis of the Gribov copies; see (31). This, together with the fact that the LMAIG is an exclusively A-dependent gauge, is what allows the direct elimination of the zero-modes within the method developed in [5].

Trivial set of auxiliary fields

According to the method, it is possible to eliminate the Gribov copies directly, by imposing a new constraint on the theory. This constraint is, essentially, the requirement that the copies equation is not obeyed. This is done by introducing a set of auxiliary fields, forming a BRST quartet, through a trivial term and a soft BRST breaking term. From now on we deal exclusively with the interpolating gauge, so when we refer to the Faddeev-Popov operator, we mean the operator defined by Eq. (34). The trivial term is given by where capital Latin indices refer to the complete Lie algebra. The full decomposition of (38) into off-diagonal and diagonal components results in where terms involving the functional derivative of ∇^{ij} with respect to A, and terms involving the functional derivative of ∇^{ia} and ∇^{ai} with respect to A^j, are not present because they vanish. The explicit form of (39) is A consistency check must be performed to verify that this trivial term interpolates between the trivial terms of the Landau and maximal Abelian gauges [5]. To avoid many tedious algebraic steps in the text, we leave this proof to App. B.

Breaking term

We now have to introduce a soft BRST breaking term into the original action S_0, given by The reason is the following: since we can obtain the Gribov copies equation by requiring BRST invariance of the gauge condition, to ruin this equation we must break the BRST invariance. This breaking, however, is not arbitrary. When we consider the perturbative regime of the theory, the BRST invariance must be restored; for this reason, we call this a soft breaking [30,31]. A soft breaking can be obtained by the introduction of a mass parameter γ which makes the dimension of this term lower than the spacetime dimension. See [5] for more details of this construction. The general form of the soft breaking term is where the parameters ξ, θ, χ and ζ_i must be η-dependent in order to permit the interpolation between the breaking terms of the Landau and maximal Abelian gauges.
Such an interpolation is presented in App. C. Comparing Eq. (42) with Eqs. (67) and (68), we obtain with ζ_1 and ζ_2 independent of η. We remark that, depending on the Ward identities, some of these parameters might be zero. Hence, the breaking term Ξ with the appropriately fixed parameters is We remark that this term not only breaks the BRST symmetry in a soft manner, but also ensures that the copies equation is ruined. As discussed in Sect. 2, this breaking term has some sort of "freedom". In order to write an action free of infinitesimal copies, we could introduce only the following term The point here is that the extra terms in Ξ beyond this minimal choice permit a construction very close to the refined Gribov-Zwanziger action [14]. In fact, to make contact between Eq. (44) and the LCO formalism, we should write the mass terms as independent masses m_i = ζ_i γ² and deal with local composite operators and their condensation. An immediate difference between the two approaches lies in the gap equation; see Sect. 6.

Gluon propagator

It is remarkable that the elimination of infinitesimal Gribov copies, an apparent technicality, brings rich effects to the physical properties of non-Abelian gauge theories. Of course, the copies play a fundamental role in a consistent quantization, but their elimination also provides a profound change in the gluon and ghost propagators, especially in the infrared regime. This is a well-known feature of the Landau and maximal Abelian gauges; see, for instance, [1,16]. In fact, the inclusion of dimension-2 condensates brings the analytic results for the propagators into harmony with lattice results [12,13,34,35]. In this section, we compute the off-diagonal and diagonal gluon propagators for the interpolating gauge. As mentioned before, this provides a good way to test the copies-free action presented here, since this gauge can be implemented on the lattice. An interesting feature to study is the deformation of the propagators of the Landau gauge into the propagators of the MAG, a property that could be investigated on the lattice. The full action S is given by For the gluon propagator, only the quadratic action S_q(A) is required, where α and β are gauge parameters. The actual interpolating gauge is obtained in the limit α = β = 0. Taking the Fourier transform of Eq. (47), we obtain the following expression. It is not a difficult task to obtain the diagonal and off-diagonal gluon propagators from (48): one has only to invert the corresponding wave operators in the usual way. The expressions for the diagonal and off-diagonal gluon propagators are, respectively, where the limits α = β = 0 have already been taken. If we rename each mass term which appears with a ζ_i parameter in terms of independent masses, we can write these propagators as Since we are dealing with the interpolating gauge, we have to check the deformation of the propagators (49) and (50) between the Landau and maximal Abelian gauges, for η = 1 and η = 0, respectively. It is obvious that the diagonal gluon propagator is independent of the parameter η, which means that its form does not change for intermediate choices of η. It is known that this propagator is identical in the Landau and maximal Abelian gauges [14,17], and the reason why it has the same form throughout the interpolation is that the gauge-fixing condition for the diagonal components remains invariant as η varies. So we only have to analyze the off-diagonal sector in order to provide an explicit test of its reduction to the known cases.
For η = 0, the propagator (50) reduces to which is exactly the off-diagonal gluon propagator of the maximal Abelian gauge [17]. Choosing η = 1, we have which coincides with (49), as expected. Of course, the new features brought by these calculations reside at values of η other than 0 and 1. Hence, since η ∈ [0, 1], we can choose η in such a way that an explicit continuous deformation of the off-diagonal propagator can be seen. Another remark is that, for an arbitrary value of η different from 0 and 1, we have the propagators of a gauge with a non-hermitian Faddeev-Popov operator. This is a very important feature because, in the usual approaches [1,7,14], the construction of these propagators would be very difficult (if not impossible). We also remark that the expressions (51) and (52) are much closer to the usual results of the refined Gribov-Zwanziger approach, where the mass parameters are independent of the Gribov parameter (at least in a tree-level analysis). (In order to match the usual conventions, we must rename the terms involving ζ_1 with a minus sign, i.e., −γ²ζ_1 = m²_1; this is very important, because the opposite choice would lead to negative values of m²_1. On the other hand, the terms γ²ζ_2 are correctly identified with m²_2.)

The deformation of the off-diagonal propagator is displayed in Fig. 1 (both plots show the same surface, from different viewpoint angles). From this figure, it is possible to see the explicit continuous deformation from the MAG off-diagonal gluon propagator to the Landau off-diagonal gluon propagator. A very well-known feature is that, for k = 0, the gluon propagator does not vanish in either gauge. It is interesting, however, that this value shows a kind of oscillatory behavior for intermediate choices of η, which is explicit in the right plot of Fig. 1. There is an absolute maximum at η = 1/3, while η = 1 is the absolute minimum (obviously, the domain we consider is η ∈ [0, 1]), corresponding to the Landau gauge at zero momentum. As a curious fact, in the case N = 2 a term in the denominator of Eq. (52) vanishes. The consequence is that the curve on the k = 0 plane loses its oscillatory behavior and becomes constant with respect to η, as shown in Fig. 2.

A few words about the gap equation

As discussed in [5], the method applied in this work can provide a generalized gap equation. The reason why it "can" lies in the fact that we have some freedom in the choice of the breaking term, as discussed in Sect. 2. First, if we do not choose the mass terms which give rise to the refined Gribov-Zwanziger action, no modification of the gap equation emerges. We must remember that, if we opt for a minimal breaking of the BRST symmetry, these terms are excluded (at least the A² and c̄c terms, which are not BRST invariant) and can be considered through the LCO formalism [14,15]. On the other hand, we can include these terms for two reasons: (i) they are permitted by power counting and dimensional analysis, and (ii) we recover the refined Gribov-Zwanziger propagators independently of the form of the gap equation. We could include them with an explicit dependence on γ and, since the gap equation is obtained by minimizing the quantum action with respect to γ², all these terms would then contribute to it; see [5]. This is the difference from the usual gap equation, which does not contain these terms.
Alternatively, these terms could also be included with independent mass parameters (i.e., with no explicit dependence on γ), and the gap equation would not be affected. The main importance of these possibilities is that they can be very welcome, since the usual gap equation (in the Landau gauge, for instance) throws the theory right at the horizon, which is precisely the place where infinitesimal copies start to appear. How good an alternative gap equation is will be decided by its physical effects and consistency checks. For obvious reasons, this analysis is left for future investigation.

Conclusions

In this work we applied the method developed in [5] to eliminate infinitesimal Gribov copies from the interpolating Landau -- maximal Abelian gauge. We obtained an action free of copies, given by Eq. (46), which has the same structure as the refined Gribov-Zwanziger-type actions [14,15]. After that, with suitable choices of the interpolating parameter, we extracted the diagonal and off-diagonal gluon propagators and showed that the results reduce to the well-known propagators of the Landau and maximal Abelian gauge fixings. Although the elimination of infinitesimal Gribov copies for the interpolating gauge is important and interesting on its own merits, it also brings some new features to the general problem of dealing with Gribov copies. As discussed throughout Sect. 4.1, this gauge has a non-hermitian Faddeev-Popov operator, which means that no order relation can be established between its eigenvalues. Consequently, the possibility of constructing a well-defined copies-free region in functional space is not evident. In this sense, the elimination performed here through the method developed in [5] opens a new door to the understanding of non-hermitian Faddeev-Popov operators. Moreover, the propagators computed in Sect. 5 are already in a form that can be compared with a possible lattice simulation of the LMAIG.

There are many issues that should be addressed now. All of them deserve investigation; however, each of them is quite extensive and intricate, and therefore beyond the goals of this work. Nevertheless, they are left for future investigation. To cite a few interesting topics, we can start with the renormalizability problem of the action (46). As in the case of the MAG, many complications and lengthy expressions, due to extra quartic ghost interaction terms, are expected. Another important issue is the Abelian and non-Abelian ghost propagators, a task that will also demand a laborious amount of computation (a smart line of attack would be to start with the SU(2) case). A third problem to be studied is the comprehension of what could be, if anything, an analogue of the Gribov region for this gauge, and the interpolation between the known regions of the Landau and maximal Abelian gauges. Finally, as discussed in Sect. 6, the effect of a possible alternative gap equation has to be taken into consideration. This last question opens the possibility of introducing the refined mass parameters directly into the gap equation, as functions of the Gribov parameter, instead of through the condensation of independent local composite operators. Obviously, we would start this study in the Landau gauge, which is the gauge where the Gribov problem and its effects are best understood.

A (non-)Hermiticity of ∇^{ab}

Let us define, using Eq. (35), an operator given by Since the MAG Faddeev-Popov operator is hermitian, if we prove that this difference operator is hermitian, it follows that the full off-diagonal operator ∇ is also hermitian.
To study this possibility, let us consider the following expression Now, we consider the three terms on the right-hand side of Eq. (56) separately. The first term is which proves its hermiticity. The second term is given by from which we can see that it is also hermitian. Finally, the third term is written as Clearly, the third term is not hermitian. Obviously, the reason is the presence of the piece in Eq. (59). Since the sum of hermitian operators is a hermitian operator, we can see from expression (56) that ∇ would be hermitian only if (60) vanished. The conclusion is that the purely off-diagonal component of the Faddeev-Popov operator of the Landau -- MAG interpolating gauge is not hermitian, except in the MAG limit. In the case of the Landau gauge, this operator combines with the other sectors to provide another hermitian Faddeev-Popov operator.

B Trivial terms -- Maximal Abelian Gauge

As discussed in Sect. 4.1, the Faddeev-Popov operator of the interpolating gauge has four different sectors: one purely off-diagonal, one purely diagonal, and two mixed operators. The Landau gauge has this same feature, and the comparison between the trivial terms of these gauges can be done term by term. Since we have seen that the operator given by Eq. (34) reduces to the Faddeev-Popov operator of the Landau gauge for η = 1, the expression for the trivial term of the interpolating gauge at η = 1 must also coincide with the trivial term of the Landau gauge (which must be decomposed for the comparison). This is easy to verify. On the other hand, the usual MAG [16,17] has only off-diagonal components in its Faddeev-Popov operator. In this sense, we have to be careful with the trivial term because, when we choose η = 0 for the interpolating parameter, the mixed and purely diagonal components of the Faddeev-Popov operator do not vanish, and this provides a trivial term different from the usual MAG case. Here, we have to remember that all mixed components of the Faddeev-Popov operator of the usual MAG are eliminated from the very beginning of the analysis of the Gribov problem (due to the redundant condition); see Sect. 3.1. Hence, the trivial term of the MAG must have the form S^MAG_triv = − ∫ d⁴x [ ϕ^{ac}_μ ∇^{ab} ϕ^{bc}_μ − ω^{ac}_μ ∇^{ab} ω^{bc}_μ − ω^{ac}_μ (sA^d_ν) ] (68). Both expressions are consistent with those obtained in [5].
9,653
2014-02-14T00:00:00.000
[ "Mathematics" ]
The Impact of Blended Online and Offline Learning on College Students: The internet is now part of our lives, and online learning resources are becoming more and more available. Sooner or later these learning materials will be used by teachers and students in the classroom, which bodes well for the future of blended learning in all education and training. This paper finds that another benefit of blended learning is that it makes it easier to assess students and to collect meaningful data. With a wide range of experiences and learning materials available online, the use of technology in the classroom can increase student engagement and motivation. Instead of learning in a monotonous, traditional way, students are able to discuss and communicate in innovative and exciting ways; learning becomes fun, students can become more engaged and motivated, and the efficiency and effectiveness of learning can be greatly enhanced.

Introduction

Given today's technological advances, modern education is driven by the need to diversify education to meet the needs of more learners in the future, and the way education is delivered is being reformed. The blended learning model is a new way of learning for students which combines the advantages of different learning styles, using traditional offline learning together with new technologies on online platforms. It allows the teacher to play a leading role in guiding and inspiring the classroom, while fully reflecting the initiative, motivation, and creativity of the students. Blended learning combines the resources of multiple channels on the internet, information technology resources, and traditional classroom teaching resources, and then applies them to learning, which in turn can produce an effective learning model. Blended learning is a disruptive and innovative approach to teaching and learning in the internet age. It differs from traditional textbook lectures in that students learn partly through online multimedia learning materials and partly through face-to-face instruction in the classroom. Whereas the current one-size-fits-all approach is unlikely to work for every student, blended learning empowers students to create their own learning paths and to learn in the mode that best suits them. Students have greater control over when and where they learn, as online materials can be accessed from anywhere with an internet connection. A blended learning approach leads to a student-centred environment where students are able to acquire knowledge through individual effort and more meaningful learning contexts, while any gaps in understanding or areas that are still missing can be addressed in time through face-to-face teacher inspiration, guidance, and question-and-answer sessions. With blended learning, students have more control over their learning, so they have more time in the classroom. Besides, if students have pre-recorded video lectures for homework rather than listening in class, they can have more time to answer questions or review material. Online behavioural tracking and assessment reports can also save students a great deal of time. Blended learning in education can enhance learning opportunities for students and is important for the advancement of education and the enhancement of individual students' expertise and competence. Based on former studies of blended learning, it was found that students can improve their academic performance and professional competence through blended learning [1].
However, the impact of blended learning on college students has not been studied systematically. Therefore, this study examines the positive effects of students' selective and blended learning styles on student learning.

2. The Impact of Blended Learning on Students' Learning Opportunities

Provide Students with a Wide Range of Learning Resources

Due to the rapid development of internet technology, traditional learning methods have become more advanced and sophisticated. One author argued that "there is strong evidence that schools are investing in advanced information technology to expand access to resources and to create more learning opportunities for student learning" [2]. This is not only about gaining greater access to resources but also about giving schools a broader reach. The choice to adopt online learning has been made to meet the challenge of technological innovation while maintaining the benefits of the original traditional learning methods. Today's technology has led to many free and shared resources on internet platforms. In a modern society where everything relies on knowledge, blended-mode learning based on internet platforms is a convenient way for students to access knowledge that is available and accessible to all. One author argued that "with the advent of the internet, access for learners around the world is increasing, creating more opportunities for learners to learn and to explore problems with students from other countries, thus coming up with solutions and improving their skills" [3]. Today's online learning offers a wealth of multimedia educational resources. This is not only a requirement at a particular stage in the development of information technology in schools but also a necessary direction of development in the information age.

Adapt to Students' Diverse Learning Styles

The main objective of the blended learning approach is that a mixed-mode approach to learning can help learners have a quality learning experience and achieve high-quality learning gains, as well as promote personal improvement. In a blended learning model, students can choose to learn face-to-face with a teacher in the traditional classroom model, i.e., offline, or they can choose to interact with their peers and classmates online. In the offline course, as the teacher and students meet face-to-face, the teacher can effectively observe whether the students have gained and grown in the lesson. During offline lessons, students who lack self-control can be better disciplined to make the best of the class. This is because students may feel more secure learning face-to-face with the teacher and experience more direct attention and targeted instruction. In offline teaching, there are usually dozens of students learning together. In this learning environment, individual students develop a sense of imitation and competition with each other, which drives the learning atmosphere in the classroom. One author argues that "in blended learning, students are allowed to develop their personalities holistically. Moreover, it allows students to communicate and share, thus diversifying the learning process" [4]. Students now also have the option of choosing a new online learning mode, a mode that has become commonplace and well known in recent times. Moreover, in the future it could become a regular mode of teaching in higher education, and this is currently being explored.
In this blended learning model, students are thus offered a better choice of learning styles. At the same time, the choice of learning mode depends on the student's learning content and personal goals. The blended learning model accommodates a broader range of students, who can use a greater variety of learning styles, and it fully respects the student's choice.

Increase the Flexibility of the Time and Place of Student Learning

In recent years, the blended learning model has gradually entered the application stage with the rapid development of information technology in education. The research and practice of blended learning models have contributed to the transformation and advancement of education and learning models, which will be essential to educational reform. One author argues that "online learning styles use electronic communication tools (e.g., email, video, topic discussion boards) to allow learners to clock in and out of learning at their convenience" [3]. Synchronous technologies (e.g., webcasts, chat rooms, audio) are similar to face-to-face teaching strategies. No matter where learners are, as long as they are connected to the internet, they can send and receive information, making access to and exchange of information easier and more flexible. Online learning is popular because it can offer more flexible content and access and can prompt learners to submit work and check in on learning tasks at any time. It is well suited to learners who cannot attend traditional face-to-face courses or who have a high volume of tasks that are more easily submitted online.

3. The Impact of Blended Learning on Students' Learning Abilities

Requirements for Independent Learning Ability

As an essential component of learning competency analysis, self-directed learning has become an important research focus. Self-directed learning refers to the learning process in which learners learn and acquire knowledge through independent study. It promotes the reform and progress of learning methods in the information technology environment, improves students' ability to learn independently, and develops students' ability to actively identify problems in learning and solve them. One author found that "in terms of students' performance in learning courses, students' learning in different chapters better reflected students' different learning attitudes" [5]. The more evenly and effectively students studied the online resources across different chapters, the more independent they were and the more serious their attitudes. The study found that students' learning performance in the current course was significantly and positively correlated with their grades in past semester courses, implying that such students were autonomous and possessed specific learning abilities. Another author found that the more the compatibility, layout design, video images, sound quality, and other such aspects of online platforms are enhanced, the more students' independent learning improves [6]. The study shows that students' autonomy and participation in online learning are very high. However, there is also a small amount of passive learning, which is inevitable. This indicates that many learners have a positive attitude toward and perception of online learning and see it as a good way to learn.
The scientific use of blended learning allows students to take control of their own learning and to develop their independence and participation, as well as acquiring better learning skills, thus improving the quality of their learning and their overall development.

Requirements for Self-control of Learning

Self-management skills are formed, developed, and expressed by students in their self-education activities. Motivating students to self-manage, protecting their desire to manage themselves, and developing their awareness of self-management are significant in the learning process. Students should always balance learning and life for themselves when developing self-management skills. In blended learning, students need a proper understanding of self, not only in terms of self-management but also in terms of responsibility toward their peers or group members. Blended learning conditions offer more opportunities to develop self-management skills: learning is not only a matter of acquiring knowledge but also provides solid conditions for personal management and for developing self-education skills. Self-control is a prerequisite for achieving goals [7]. After setting goals, learners may face various temptations that can lead to procrastination. Self-control is essential for learners because it allows them to delay momentary gratification in the pursuit of pleasure. In fact, according to current research, people with high levels of self-control tend to be more aware of the connection between their actions and their goals and are freer from the problems associated with temptation [8][9]. Thus, self-control in blended learning is critical in moderating the impact of future goals on distal learning outcomes, including academic achievement and impact on the school.

Requirement for Multi-task Management Ability

Modern technology has complicated the situation for many learners, as they combine face-to-face contact with email and text messages to deal with tasks through blended learning, i.e. an online learning mode combined with a traditional learning mode. It has become the norm to check personal online messages while working on other tasks. Researchers widely agree that multitasking is very energy intensive. Switching tasks is not only energy intensive; frequent switching also tends to cause a loss of efficiency. Multitasking, however, allows learners to plan by designating primary and secondary tasks and dividing them into stages; the result is achieved by accomplishing small goals. One author concluded that, from the student's perspective, online learning within blended learning facilitates and is valuable for learners with multiple responsibilities and highly organised lives [10]. Online learning can therefore be an effective way for learners to develop their personal skills and help them to complete their other learning tasks. Through blended learning, learners' scheduling systems are optimised: by taking stock of individual behaviour, observing their learning effectiveness, reviewing the rationality of task slicing, specifying and achieving goals, balancing individual competencies with task matching, and scheduling tasks. Blended learning allows learners to develop multi-task management so that they can take on different tasks, split the stimuli, and complete them one by one to achieve the final desired value.
This will be one of the necessary ways to develop multi-task management.

Changes in Academic Assessment Methods

In a blended learning model, there are new requirements for and ways of assessing students. The corresponding assessment methods will be different, possibly online group assessment or online supervision, affecting the individual student's learning outcomes. The traditional learning model relied on offline paper-and-pencil assessments, group task presentations, and so on. However, the rapid growth of the internet today has brought about more online examination methods. For example, completing a questionnaire online means learners can interact and upload information using a handheld terminal, with the information presented in multimedia form. Such testing methods have become more diverse and can be divided into different categories. Different assessment methods affect each student differently: online supervision assesses students based on their grasp of the basics and the questions and monitors their progress, while the online group is a workshop in which students present their views in a discussion format. One author argues that, in online learning, online tests can serve as a necessary means of checking the effectiveness of student learning [6]. This new method of testing provides students with a clear picture of their learning outcomes and allows teachers to monitor the learning dynamics of their students. Furthermore, mixed-mode tests, i.e. combined online and offline assessments, can be used as an essential basis for evaluating student learning. They can help students in their learning process, for example during review, and the results of these assessments can play an essential role in facilitating student learning.

Impact on Student Academic Expectations

Blended learning is an innovative learning system that combines the strengths of the formal classroom with the advantages of modern technology. Blended learning is a space for collaborative learning, where learners have to work hard and maintain the right attitude to learning. Because it encompasses different modes of learning, it is not only about acquiring knowledge but also about making breakthroughs, and it carries certain expectations of academic achievement. Online learning is about gaining a great deal of knowledge and becoming proficient in internet technology. One author concluded that "students not only gain knowledge through blended learning but likewise enhance personal professionalism and self-motivation, self-responsibility for personal achievement" [4]. Offline learning carries its own expectations, as face-to-face interaction helps develop a robust value system. Working together, sharing ideas, expressing emotions, socialising, and the like are more easily developed in traditional learning. Students can learn not only from books but also through interaction with teachers and peers. Students gain academic success, learn respect in small groups with their peers, learn collaborative skills in the gym, and learn social skills during breaks. This is necessary for students to achieve more than just academic success. Compared to the old days, when online or offline alone was not enough to accommodate all students, today's mixed online and offline learning model gives students more opportunities, more learning resources, and more freedom to adapt their learning style, which will change their expectations of the learning experience.
Impact on Student Satisfaction with Academic Performance

Blended learning is an innovative learning model for a new era that brings together the strengths of multiple learning styles in today's educational context in order to optimise students' learning development. It relies on both traditional classroom learning and the newer internet platform of online learning. Blended learning is widely used by university students, but implementations differ in the way the learning experience is designed. Authors have reported that different people have different understandings of blended learning [11][12]. Learners can adapt and respond positively in terms of academic achievement in this blended environment. For example, some students adapt to the online learning model, while others adapt to the offline style. Still other students enjoy the blended learning model itself, which raises both their satisfaction with learning and their achievement. Some people simply prefer traditional learning methods, others are interested in online learning, and others use a combination of traditional and online methods to help them improve. The authors note that the online learning approach is a focus of continued research, but how far it progresses is likely to be determined by the learners who adopt it [13]. Research has found that adapting to new methods while maintaining the original method's benefits positively impacts student performance. One author reports that blended learning positively impacts learning and that post-assessment results are higher than pre-assessment results [1]. This may be because different students experience the course differently.

Conclusion

This study analyses the impact of online and offline blended learning models on the academic performance, professional competence, and learning opportunities of university students. The original meaning of blended learning is mixed or combined learning, i.e. the combination of various learning styles. The concept of blended learning was introduced in recent years in corporate training in developed Western countries. With the development of information technology, more and more attention has been paid to online learning as a learning method, but relying solely on online learning sometimes causes problems, such as low learning efficiency for certain content. As a result, many companies have turned to blended learning solutions. There is no single definition of blended learning, and people from different backgrounds understand it differently. Blended learning is the use of multiple communication media to deliver knowledge and information within a single learning programme in order to optimise the efficiency of learning and the cost of the learning programme. It is concerned with optimising learning outcomes by using the right learning style for the right person at the right time, meeting the learning styles of different people (or learning communities) so that students can acquire the right knowledge and skills; it is a disruptive and innovative approach to teaching and learning that has emerged in the internet age. Students can complete parts of their learning through online multimedia learning materials and other parts through face-to-face tutorials in the classroom. Through blended learning, students can learn at their own pace. It is a mix of structured and unstructured learning.
It is an approach to learning that combines distance learning with traditional learning, offering the positive aspects of each modality and maximising the overall effectiveness of learning. Online learning provides a wealth of learning resources, and offline activities can consolidate and translate the knowledge learned online. Online learning is not an adjunct or an addition to the overall learning activity but a necessary activity for learning. Offline learning is not just a replication of traditional classroom learning but a more in-depth learning exploration that builds on previous online learning. There is no uniform requirement or model for the reform of blended learning, but it has a unifying goal: to make the most of the advantages of online and offline learning, thereby transforming traditional learning for students and reducing the over-reliance on lecturing in the classroom that leads to a lack of initiative and cognitive engagement. The underlying problem is that students' initiative in learning is low, cognitive engagement is inadequate, and learning outcomes vary too much between students. However, the primary disadvantage of blended learning is its potential lack of relevance: when the ultimate purpose of learning is to meet the development needs of a business, the blended mode of teaching, being so broadly applicable, is prone to a lack of practicality. This paper advances academic work on the impact of blended learning on student learning and provides a useful reference on blended learning practices. Therefore, there is a need to continue to conduct in-depth research on blended learning in the future.
Glutathione S-Transferases in the Biosynthesis of Sulfur-Containing Secondary Metabolites in Brassicaceae Plants

Plants in the Brassicaceae family have evolved the capacity to produce numerous unique and structurally diverse sulfur-containing secondary metabolites, including constitutively present thioglucosides, also known as glucosinolates, and indole-type phytoalexins, which are induced upon pathogen recognition. Studies on the glucosinolate and phytoalexin biosynthetic pathways in the model plant Arabidopsis thaliana have shown that glutathione donates the sulfur atoms that are present in these compounds, which further suggests that specialized glutathione S-transferases (GSTs) are involved in the biosynthesis of glucosinolates and sulfur-containing phytoalexins. In addition, experimental evidence has shown that GSTs also participate in glucosinolate catabolism. Several candidate GSTs have been suggested based on co-expression analysis; however, the functions of only a few of these enzymes have been validated by enzymatic assays or with phenotypes of the respective mutant plants. Thus, it remains to be determined whether the biosynthesis of sulfur-containing metabolites in Brassicaceae plants requires specific or nonspecific GSTs.

INTRODUCTION

Glutathione S-transferases (GSTs) constitute a family of multifunctional enzymes that catalyze the nucleophilic attack of the sulfur atom of the tripeptide glutathione (GSH) on electrophilic centers of low-molecular-weight compounds (Labrou et al., 2015). GSTs were identified as stress response proteins that accumulate in response to biotic and abiotic stimuli. Many studies on plant GSTs have focused on their role in xenobiotic detoxification. In addition, some GSTs have been implicated in plant secondary metabolism, particularly in the formation of natural products containing carbon-sulfur bonds, including the sulfur-containing phytochemicals characteristic of Brassicaceae species (Sonderby et al., 2010; Pedras et al., 2011; Bednarek, 2012; Dunbar et al., 2017).

CONJUGATION OF GSH IS REQUIRED FOR THE BIOSYNTHESIS OF GLUCOSINOLATES

Glucosinolates are sulfur-containing secondary metabolites produced by plants of the Brassicales order, and their core structure contains a β-D-thioglucose moiety connected to a sulfonated aldoxime and a variable side chain derived from amino acids, such as tryptophan, tyrosine, and methionine (Halkier and Gershenzon, 2006). The first two steps of glucosinolate biosynthesis are catalyzed by specific isoforms of the CYP79 and CYP83 cytochrome P450 monooxygenases, which convert precursor amino acids to aldoximes and then to aci-nitro compounds. It has been postulated that these intermediates can react with an alkylthiol to form conjugates that can be converted to glucosinolates by the sequential activities of C-S lyase (SUR1), glucosyltransferases (UGTs), and sulfotransferases (SOTs) (Figure 1; Sonderby et al., 2010). Decreased glucosinolate accumulation in the phytoalexin deficient 2 (pad2) mutant, which has a reduced GSH biosynthesis rate, suggested that GSH is the alkylthiol that conjugates with the products of CYP83 activity (Parisy et al., 2007; Schlaeppi et al., 2008). In line with this hypothesis, upon engineering benzyl glucosinolate biosynthesis in Nicotiana benthamiana, it was found that expression of CYP79A2 and CYP83B1 led to an accumulation of S-(phenylacetohydroximoyl)-GSH, the predicted GSH conjugate (Geu-Flores et al., 2009). Introduction of SUR1, UGT74B1, and SOT18 into the engineered N. benthamiana
line led to low-level production of benzyl glucosinolate but did not significantly reduce the S-(phenylacetohydroximoyl)-GSH level, suggesting that this intermediate is not a substrate of SUR1. This was confirmed by additional expression of γ-glutamyl peptidase 1 (GGP1) or GGP3, enzymes that cleave γ-Glu from GSH conjugates, which resulted in depletion of the S-(phenylacetohydroximoyl)-GSH intermediate along with a significant increase in the rate of benzyl glucosinolate production in transgenic N. benthamiana (Geu-Flores et al., 2009). In addition, glucosinolate levels decreased and levels of the corresponding GSH-containing intermediates increased in an Arabidopsis ggp1 ggp3 double mutant (Geu-Flores et al., 2011). Collectively, these findings confirmed that GSH conjugates are glucosinolate biosynthetic intermediates and raised the question of whether conjugation of GSH with the products of CYP83 activity requires a specific enzymatic activity. Candidate GSTs involved in this process have been proposed based on their co-expression with glucosinolate biosynthesis enzymes and on an analysis of metabolic and gene expression profiles of quantitative trait loci (Wentzell et al., 2007; Hirai, 2009). It has been suggested that GSTF11 and GSTU20 are involved in aliphatic glucosinolate (AG) biosynthesis and that GSTF9 and GSTF10 contribute to indolic glucosinolate (IG) formation (Figures 1, 2A). In addition, transcriptome analyses of Arabidopsis myb28 knock-out and MYB28-overexpressing cell cultures showed that GSTF11 and GSTU20 expression is regulated by the MYB28 transcription factor, which controls the AG biosynthetic pathway (Hirai et al., 2007). However, despite these correlations, no GST function in glucosinolate biosynthesis has yet been validated experimentally. For instance, successful engineering of benzyl glucosinolate or glucoraphanin (an AG) biosynthesis in N. benthamiana did not require any Arabidopsis GSTs (Geu-Flores et al., 2009; Mikkelsen et al., 2010). Moreover, introduction of GSTF11 increased the efficiency of glucoraphanin production by only 20% (Mikkelsen et al., 2010). Similarly, expression of Arabidopsis IG biosynthetic genes in yeast (Saccharomyces cerevisiae) showed that GSTF9 and GSTF10 are dispensable for conjugation of the product of CYP83B1 activity with GSH in this microorganism. Additional introduction of GSTF9 into the engineered yeast strain boosted the glucosinolate level by only 25% (Mikkelsen et al., 2012). These results suggest that GSH conjugation in glucosinolate biosynthesis can occur spontaneously without GST activity, or that the tested GSTs are not specific for glucosinolate biosynthesis and can be replaced by GSTs from other organisms. However, overexpression of Arabidopsis enzymes in N. benthamiana or in yeast obscures their native temporal and spatial accumulation patterns. In contrast, fluorophore-tagged glucosinolate biosynthetic enzymes, including the CYP83 monooxygenases that produce the putative GST substrates, localized to specific tissues and cell types when expressed under their native promoters in Arabidopsis (Nintemann et al., 2018). Thus, it is likely that GSTs involved in glucosinolate biosynthesis are not specific with regard to their substrate preference or catalytic properties, but may be specific with regard to their localization, which cannot be observed in glucosinolate-engineered strains.
In addition to the missing functional validation, experiments have suggested that the GSTs proposed to contribute to glucosinolate biosynthesis may have alternative in planta functions. For instance, in a yeast two-hybrid screen, GSTU20 interacted with Far-Red Insensitive 219, a jasmonate-conjugating enzyme linked to phytochrome signaling, and a partial loss of GSTU20 function resulted in hyposensitivity to continuous far-red light. Moreover, under the same condition GSTU20 was differentially expressed in suppressor of phytochrome A-105 1 and constitutive photomorphogenic 1 mutant plants (Chen et al., 2007). To explain these phenotypes, it has been hypothesized that GSTU20 can bind, stabilize, or transport jasmonic acid or its derivatives within the cell. Another yeast two-hybrid screen indicated that GSTF10 interacts with BRI1-Associated Kinase 1 (BAK1), a leucine-rich repeat receptor-like kinase involved in brassinosteroid signaling and plant defense (Ryu et al., 2009). RNA interference (RNAi)-mediated down-regulation of GSTF10 and GSTF9 expression led to a more compact rosette shape, which is similar to the phenotype of weak bak1 mutant alleles. However, plants that underexpressed (via RNAi) or overexpressed GSTF10 showed wild-type (WT)-like growth in the presence of brassinolide (a brassinosteroid) or brassinazole (an inhibitor of brassinosteroid biosynthesis); thus, GSTF10 is probably not involved in brassinosteroid signaling. In addition to the compact rosette phenotype, GSTF10/9 RNAi plants had higher anthocyanin levels and a lower tolerance for NaCl or N-acetylcysteine, a pharmacological reagent that scavenges free radicals (Ryu et al., 2009). Similar to the RNAi line, a gstf9 mutant had a lower tolerance for NaCl and was defective in redox homeostasis (Horváth et al., 2015). It has also been shown that GSTF9 is induced in response to the gravity persistent signal (GPS), and gstf9 mutants displayed defective GPS responses in inflorescence stems, as well as in root skewing, waving, and curvature (Schenck et al., 2013). Collectively, these findings suggest that GSTF9 and GSTF10 contribute to redox homeostasis and responses to environmental stimuli, but it is unclear whether these putative functions depend on glucosinolate biosynthesis.

GSTS ARE IMPORTANT FOR GLUCOSINOLATE METABOLISM

Specific β-thioglucosidases, known as myrosinases, together with glucosinolates constitute a binary defense system against generalist insects and pathogens (Hopkins et al., 2009; Pastorczyk and Bednarek, 2016). Upon tissue damage or in response to environmental stimuli, glucosinolates can be hydrolyzed by myrosinases, leading to the formation of unstable aglycones. Based on their side chain structure and the presence of specifier proteins, these aglycones can rearrange into different end products, including highly chemically reactive and biologically active isothiocyanates (ITCs), which can be harmful to the host plant (Wittstock et al., 2016). It has been shown that exogenous ITC application has negative effects on Arabidopsis growth (Hara et al., 2010; Urbancsok et al., 2017). Notably, Arabidopsis GSH-deficient mutants have been shown to be more susceptible to ITCs than WT plants, suggesting that deactivation of ITCs in planta requires their conjugation with GSH (Urbancsok et al., 2018).
As indicated by experimental evidence, this reaction is spontaneous, but its efficiency can be significantly enhanced by GST-mediated catalysis, and it leads to the formation of dithiocarbamate-type ITC-GSH adducts (Zhang et al., 1995). Enzymatic studies demonstrated that many Arabidopsis GSTs process benzyl-ITC, which is a model ITC used in in vitro enzyme assays (Wagner et al., 2002; Dixon et al., 2009). Moreover, in Arabidopsis, it has been shown that some GST genes are induced in response to external ITC application (Hara et al., 2010; Øverby et al., 2015). Overall, these results indicate that GSTs function in the detoxification of glucosinolate-derived ITCs in Brassicales plants. In addition to its role in ITC detoxification, conjugation with GSH can lead to the formation of novel products with important roles in plant fitness. During the immune response in Arabidopsis, the Penetration 2 myrosinase (PEN2) metabolizes IGs to several end products, including indol-3-ylmethylamine (I3A), raphanusamic acid (RA), and 4-O-β-glucosyl-indol-3-ylformamide (4OGlcI3F) (Figure 1; Bednarek et al., 2009; Lu et al., 2015). The reduced accumulation of these metabolites in GSH-deficient pad2 plants indicates that their formation is GSH dependent (Bednarek et al., 2009; Piślewska-Bednarek et al., 2018). In addition, the structures of I3A and RA suggest that they are derived from a dithiocarbamate-type adduct formed from indol-3-ylmethyl-ITC (I3-ITC), a product of indol-3-ylmethyl glucosinolate (I3G) hydrolysis (Figure 2B). However, in contrast to aliphatic or benzyl ITCs, indolic ITCs are highly unstable, and their spontaneous conjugation with GSH is preceded by the release of a thiocyanate ion, leading to products different from dithiocarbamates (Kim et al., 2008; Agerbirk et al., 2009). Thus, the formation of I3A and RA most likely requires a GST that can efficiently conjugate GSH with the labile I3-ITC formed by the PEN2 myrosinase, and gene co-expression analysis pointed to GSTU13 as a candidate for this function (Piślewska-Bednarek et al., 2018). This selection was additionally supported by in vitro enzymatic assays, which indicated that, among 35 tested Arabidopsis GSTs, GSTU13 together with GSTU4 and GSTU6 had not only the highest activity against benzyl-ITC but also the highest specificity toward this compound compared with the other tested substrates (Wagner et al., 2002; Dixon et al., 2009). The reduced accumulation of I3A, RA, and 4OGlcI3F observed in gstu13 mutant plants confirms that GSTU13 is involved in the biosynthesis of these compounds. Moreover, an analysis of the susceptibility of pen2 and gstu13 single and double mutants to selected fungal pathogens suggested that PEN2 and GSTU13 are part of the same immune pathway (Piślewska-Bednarek et al., 2018). Because PEN2, which localizes to the mitochondrial membranes, is actively delivered with a subpopulation of mitochondria to pathogen contact sites (Fuchs et al., 2016), in addition to its substrate specificity, spatial and temporal localization may also be critical for GSTU13 function.

GSTS CONTRIBUTE TO PHYTOALEXIN BIOSYNTHESIS

Apart from glucosinolates, Brassicaceae plants produce another group of sulfur-containing metabolites known as Brassicaceae phytoalexins. In general, phytoalexins are highly diverse, low-molecular-weight antimicrobial compounds that are produced in plants in response to infection. Phytoalexins produced by Brassicaceae plants are usually composed of an indole core and a side chain with one or two sulfur atoms (Pedras et al., 2011).
Interestingly, it has been shown that the biosynthesis of some indolic phytoalexins, including brassinin, is tightly linked with IG biosynthesis and metabolism (Figure 1). Brassinin is a phytoalexin produced by Brassica species that consists of an indole ring conjugated with an S-methylated dithiocarbamate group (Figure 2B). Upon application of benzyl-ITC to the roots of turnip plants (Brassica campestris ssp. rapa), a benzyl-type structural analog of brassinin was formed, indicating that brassinin and related metabolites can be produced from IGs via the corresponding ITCs (Monde et al., 1994). Similarly, upon application of labeled I3G to the leaves of salt cress (Thellungiella salsuginea), the label was incorporated into wasalexins, which are structurally related to brassinin (Figure 2B; Pedras et al., 2010). These results confirmed that IGs may serve as precursors to some Brassicaceae phytoalexins and raised the question of whether myrosinases are involved in the biosynthesis of these compounds. Transcriptome analysis of Brassica rapa, combined with a comparative genomic approach to eliminate genes with direct orthologs in Arabidopsis, which does not produce brassinin, led to the identification of two Brassinin-Associated β-Glucosidases (BABGs), putative myrosinases that may hydrolyze IGs during the biosynthesis of this phytoalexin (Klein and Sattely, 2017). Engineered expression of the IG pathway enzymes, the identified BABGs, and a dithiocarbamate S-methyltransferase, which catalyzes the last step in brassinin biosynthesis, in N. benthamiana resulted in accumulation of brassinin in transfected leaves, confirming the biosynthetic link between this compound and IGs. This link, combined with the presence of the dithiocarbamate group in the brassinin molecule, suggests that the biosynthesis of this phytoalexin involves the same I3-ITC-GSH adduct proposed as an intermediate in the PEN2 pathway (Figure 2B; Bednarek et al., 2009). This in turn raised the question of whether GSTs are involved in the conjugation of I3-ITC with GSH during brassinin biosynthesis. Brassinin was produced efficiently in transfected N. benthamiana leaves, suggesting that the conjugation step can be catalyzed by BrGSTF9, which was included in the engineered IG pathway, or by nonspecific GSTs from N. benthamiana (Klein and Sattely, 2017). However, a relatively high level of indole-3-carbinol, an I3-ITC degradation product, also accumulated in the engineered N. benthamiana, suggesting that BrGSTF9 or nonspecific GSTs are insufficient to conjugate the unstable I3-ITC with GSH efficiently; thus, a specific GST may be involved in brassinin biosynthesis. Unfortunately, transcriptome analysis did not identify a unique B. rapa GST that was induced upon pathogen inoculation (Klein and Sattely, 2017). The only identified sulfur-containing phytoalexin in Arabidopsis is camalexin, and production of this compound is dependent on sulfate nutritional status (Kruse et al., 2012). Reduced accumulation of camalexin in pad2 mutant plants suggested that GSH is the precursor to the thiazole ring present in its structure (Figure 2C; Parisy et al., 2007). Camalexin shares the first biosynthetic step, conversion of tryptophan to indole-3-acetaldoxime by the CYP79B2/3 enzymes, with IGs.
In the next step, indole-3-acetaldoxime is converted by CYP71A12 and CYP71A13 to indole-3-acetonitrile (IAN), and then a conjugate of GSH and IAN (GS-IAN) is formed, as indicated by the enhanced accumulation of GS-IAN in a double mutant line depleted of GGP1 and GGP3, which cleave γ-Glu from this intermediate (Figure 1; Geu-Flores et al., 2011; Klein et al., 2013; Müller et al., 2015). However, despite the identification of GGPs as the enzymes processing GS-IAN, the nature of the substrate that reacts with GSH to form this conjugate remains obscure. Geu-Flores et al. (2011) suggested that an unknown enzyme activates IAN before conjugation with GSH. In vitro assays showed that the CYP71A12/13 monooxygenases can play this role by further oxidizing IAN to α-hydroxy-IAN and to dehydro-IAN, which can react spontaneously with GSH (Figure 2C; Klein et al., 2013). Although the mechanism of in planta IAN activation remains unclear, it is likely that GSTs are involved in the subsequent biosynthetic step, and the respective enzymes have been sought. It is known that camalexin biosynthesis is activated by a mitogen-activated protein kinase (MAPK) cascade, which includes MAPKK9 (Xu et al., 2008). Proteome analysis of constitutively active MAPKK9 DD transgenic plants showed that GSTF2, GSTF6, and GSTF7 accumulate to high levels during camalexin production (Su et al., 2011). To validate the putative function of these transferases, transgenic lines overexpressing GSTF2, GSTF6, or GSTF7 individually in the MAPKK9 DD background were generated, and a significant increase in camalexin production was observed in the GSTF6/MAPKK9 DD line, indicating that GSTF6 contributes to camalexin biosynthesis. In addition, gstf6 knock-out seedlings showed a slight but significant reduction in camalexin production, suggesting that GSTF6, along with additional GSTs, participates in the biosynthesis of this phytoalexin (Su et al., 2011). A candidate GST involved in camalexin biosynthesis is GSTU4, which is tightly co-expressed with CYP71A13 and PAD3 (Piślewska-Bednarek et al., 2018); however, the function of this enzyme has not yet been evaluated experimentally. In contrast to the conclusions of Su et al. (2011), additional expression of GSTF6 in an engineered N. benthamiana line expressing CYP79B2, CYP71A13, GGP1, and PAD3 did not affect camalexin accumulation, indicating that enzymatic catalysis is not required for GSH conjugation during the biosynthesis of this phytoalexin or that N. benthamiana GSTs can replace GSTF6 (Møldrup et al., 2013). However, similar to the glucosinolate pathway, it is possible that GSTF6 specificity in camalexin biosynthesis results from its spatial and temporal expression pattern rather than from substrate specificity. Despite the reported defect in camalexin accumulation in gstf6 plants, experimental data suggest that GSTF6 plays alternative roles in anthocyanin biosynthesis and drought tolerance. Because GSTF6 transcript levels were highly elevated in transgenic plants overexpressing Production of Anthocyanin Pigment 1 (PAP1/MYB75), GSTF6 expression appears to be regulated by the PAP1 transcription factor, which controls anthocyanin biosynthesis (Tohge et al., 2005). In addition, GSTF6, also known as Early Responsive to Dehydration 11, was identified as a gene that is strongly induced in response to dehydration (Kiyosue et al., 1993), a condition that may induce anthocyanin biosynthesis (Nakabayashi et al., 2014).
These findings suggest that GSTF6 may act redundantly with GSTF12, also known as Transparent Testa 19, which has been postulated to facilitate the transport of anthocyanins and proanthocyanidins from the cytosol into the vacuole (Kitamura et al., 2004). However, in contrast to gstf12, gstf6 mutant plants did not display any defects in anthocyanin accumulation (Wangwattana et al., 2008).

CONCLUSION

Recent experimental evidence has indicated that GSH conjugates are intermediates in the biosynthesis of sulfur-containing secondary metabolites in Brassicaceae plants, and there has therefore been a search for the GSTs responsible for the formation of these intermediates. Several candidate GSTs have been identified based on co-expression with enzymes involved in the corresponding biosynthetic pathways. Of these, so far only GSTF6 and GSTU13 have been shown to be required for the formation of the corresponding end products. Metabolic engineering of the Brassicaceae biosynthetic pathways in other organisms suggests that GSTs from Brassicaceae plants, with the possible exception of those involved in the conjugation of unstable indolic ITCs, are generalists rather than specialists in their catalytic properties and substrate specificity. However, the distinct spatial and temporal distributions of enzymes linked with IG biosynthesis and metabolism suggest that the specificities of GSTs involved in the biosynthesis of sulfur-containing phytochemicals may result from their expression patterns and from their cellular and subcellular localizations. Therefore, investigations of GSTs involved in the production of sulfur-containing phytochemicals in Brassicaceae should also address these aspects in greater detail.

AUTHOR CONTRIBUTIONS

PC drafted the manuscript. PB supervised the writing, revised the manuscript, and prepared its final version.

FUNDING

This work was supported by a National Science Center SONATA BIS grant UMO-2012/07/E/NZ2/04098.
Pharmacological inhibition of PI3K class III enhances the production of pro- and anti-inflammatory cytokines in dendritic cells stimulated by TLR agonists

The phosphatidylinositol 3-kinase (PI3K) pathway is known to down-regulate inflammatory cytokine responses in dendritic cells and macrophages stimulated with TLR agonists. This is due to class I PI3Ks causing the activation of Akt, which in turn inactivates GSK3, a kinase that promotes the transcription of IL-12 and represses that of anti-inflammatory IL-10. Using bone marrow-derived dendritic cells, we find that whereas pharmacological inhibition of Akt or GSK3 has the expected effects on these cytokines, the widely used PI3K inhibitor wortmannin causes a paradoxical increase in the production of IL-10. Wortmannin inhibits all PI3K classes, including PI3K class III, which is involved in endosomal function and autophagy and for which specific inhibitors were until recently not available. Using inhibitors specific for PI3K class III vs class I, we show that whereas inhibition of class I PI3K has the expected opposing effects on IL-10 and IL-12 production, inhibition of class III PI3K enhances the production of both of these, plus further cytokines. This explains the paradoxical inhibition of IL-10 production by wortmannin.

Introduction

The phosphoinositide 3-kinase (PI3K) enzyme family is involved in several central aspects of cell and tissue biology, including cell survival and proliferation, metabolism, autophagy, and inflammation. All PI3Ks are composed of a C2 domain, a helical domain, and a catalytic domain [1]. The PI3K classification depends on the presence of additional protein domains, their interactions with regulatory subunits, and the 3-phosphorylated phosphoinositides that they synthesise. Class I PI3Ks are formed by four different catalytic subunit isoforms, namely PI3Kα, PI3Kβ, PI3Kγ and PI3Kδ, which heterodimerise with different regulatory subunits. There are three isoforms of class II PI3K, namely PI3KC2α, PI3KC2β and PI3KC2γ. Lastly, there is only one catalytic subunit of class III PI3K, called VPS34 (vacuolar protein sorting 34). In short, through inactivating GSK3, the PI3K/Akt pathway prevents excessive inflammatory responses after TLR activation. Regarding the capacity of the pathway to downregulate IL-12, pharmacological evidence agrees with the evidence generated from gene-targeted mice [5,8,9,13-15]. This includes evidence obtained with wortmannin, the most widely used PI3K inhibitor, known to be free of the specificity problems affecting LY294002 in particular [12]. In contrast, for IL-10 upregulation, results obtained with wortmannin [16,17] often clash with the evidence based on genetically modified mice [14,15,18,19]. However, the results generated using a specific inhibitor of the catalytic subunit p110δ do agree with the data from genetically modified mice [14]. Thus, it seems likely that the effects of wortmannin on other targets, including non-class I PI3Ks, could explain these disagreements. Class III PI3K, VPS34, generates phosphatidylinositol 3-phosphate, PI(3)P [1]. VPS34 is active as part of at least two complexes with different cellular localizations and roles [20]. Thus, VPS34 regulates membrane trafficking and autophagy, and it has also been proposed to participate in amino acid sensing upstream of mTORC1 activation [20,21].
Whereas VPS34 is targeted by wortmannin and other pan-PI3K inhibitors such as 3-methyladenine, specific inhibitors for this kinase were described only in the last two years [22][23][24]. In this study, we make use of these new inhibitors to explore the impact of VPS34 inhibition on the cytokine responses of dendritic cells to TLR agonists. Our results help to explain the paradoxical effects of wortmannin on IL-10 production.

Antibodies and reagents

Antibodies against Akt and phosphorylated Akt (S473) were purchased from Cell Signaling Technology. Antibody to α-tubulin was from Santa Cruz Biotechnology. Secondary antibodies, anti-IgG and anti-IgM, both HRP-conjugated, were from Calbiochem and Invitrogen, respectively. Wortmannin was purchased from Sigma, Akt inhibitor VIII (Akt VIII) from Merck-Millipore, and GDC-0941 and SB216763 from ApexBio. SAR405 and VPS34-IN1 were purchased from the Division of Signal Transduction Therapy (DSTT) Unit at the University of Dundee. LPS and Pam3CSK4 were purchased from Sigma and InvivoGen, respectively.

Generation of murine bone-marrow-derived dendritic cells (BMDCs)

BMDCs were obtained by the method of Lutz et al. [25], as described in detail in [26]. Recombinant mouse granulocyte-macrophage colony-stimulating factor (GM-CSF) was from PeproTech. All stimuli were added in medium containing 5 ng/mL GM-CSF.

Immunoblotting

Immunoblotting analysis was performed following standard procedures. BMDCs were lysed in PBS pH 7.2, 0.5% w/v Triton X-100 (Applichem), containing protease and phosphatase inhibitor cocktails from Santa Cruz Biotechnology. Lysates were resolved on SDS-PAGE and transferred onto polyvinylidene fluoride membranes from Merck-Millipore. Membranes were blocked in PBS, 0.1% w/v Tween 20 (Sigma) and 0.5% w/v BSA (Sigma), probed with the corresponding antibodies, and developed with the SuperSignal™ West Pico Chemiluminescent Substrate (ThermoFisher).

Measurement of cytokines

BMDCs were treated with inhibitors 30 min before stimulation with TLR agonists. IL-10, IL-12p70, IL-6 and tumor necrosis factor alpha (TNF-α) were measured in culture supernatants, after 18 h of BMDC stimulation, using ELISA kits from BD Biosciences.

Statistical analyses

Intra-experiment statistical analyses were carried out by one-way analysis of variance (ANOVA) with a Tukey post-test. Inter-experiment statistics (i.e. pooling the results of repeated independent experiments) were carried out by the restricted maximum-likelihood (REML) method [27], also with a Tukey post-test; a minimal sketch of this kind of analysis appears below.

Wortmannin causes a paradoxical increase in IL-10 production in BMDCs stimulated with TLR agonists

Because the PI3K/Akt/GSK-3 sub-pathway is known to regulate the production of IL-10 and IL-12 in response to TLR agonists in myeloid cells [5][6][7][8][9][10][11], we chose to study how the inhibition of each of these kinases affects the production of IL-10 and IL-12 in BMDCs stimulated with LPS (Fig. 1). As expected, a specific inhibitor of GSK-3 (SB216763) increased IL-10 production whereas it decreased IL-12 production (Fig. 1A). Also as expected, the inhibition of Akt (by Akt inhibitor VIII) decreased IL-10 production and increased IL-12 production (Fig. 1B). However, the inhibition of PI3Ks by wortmannin, while increasing IL-12 production as expected, did not decrease IL-10 production; it actually increased it, both after stimulation with LPS and with the TLR2 agonist Pam3CSK4 (Fig. 1C and D).
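As an aside on the statistical methods just described: the sketch below shows, in Python, a one-way ANOVA followed by a Tukey post-test of the kind used here for intra-experiment comparisons. This is our own minimal illustration rather than the analysis script used in the paper; the treatment names and cytokine values are hypothetical placeholders, and the inter-experiment REML pooling would additionally require a mixed-effects model (for instance a statsmodels MixedLM fitted by REML with the experiment as the grouping factor), which is not shown.

# Illustrative one-way ANOVA with a Tukey HSD post-test on triplicate
# cytokine measurements. All values below are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical IL-10 concentrations (pg/mL) from triplicate wells.
groups = {
    "vehicle":    [210.0, 198.0, 225.0],
    "wortmannin": [480.0, 455.0, 470.0],
    "GDC-0941":   [120.0, 135.0, 110.0],
}

# Intra-experiment comparison: one-way ANOVA across treatments.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post-test for pairwise differences between treatments.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))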
This is similar to the increase in IL-10 production induced by wortmannin reported previously in macrophages [19].

VPS34 inhibition enhances the production of both IL-10 and IL-12 in BMDCs stimulated with TLR agonists

Since wortmannin is a pan-PI3K inhibitor, we speculated that the paradoxical increase in IL-10 production caused by this drug may be due to inhibition of VPS34. In order to investigate this possibility, we used two structurally unrelated inhibitors of this kinase, namely SAR405 and VPS34-IN1 [22,23]. We verified that the phosphorylation of Akt (S473) was abrogated by wortmannin, Akt inhibitor VIII and the PI3K class I-specific inhibitor GDC-0941, but not by the new VPS34 inhibitors (Supplementary Fig. 2). The VPS34 inhibitors had only a minor negative effect on Akt phosphorylation; this is unlikely to be a direct effect on PI3K class I, since it has been shown that neither inhibitor significantly affects the activity of PI3K class I at the concentration used in our experiments (1 μM) [22,23]. In contrast to the PI3K class I-specific inhibitor (GDC-0941), which caused the expected decrease in IL-10 production, both VPS34 inhibitors increased IL-10 production in BMDCs stimulated with either LPS or Pam3CSK4 (Fig. 2A). Simultaneous inhibition of PI3K classes I and III (by the combined use of GDC-0941 and SAR405) had, as the net effect, an enhancement of IL-10 production. In other words, the combination of specific PI3K class I and III inhibitors mimicked the effect of wortmannin. Hence the paradoxical effect of wortmannin on IL-10 production is likely explained by the inhibition of VPS34, a kinase whose activity normally restrains production of this cytokine. We also evaluated whether the VPS34 inhibitors affect the production of IL-12. SAR405 and VPS34-IN1 increased IL-12 production, as did GDC-0941 (Fig. 2B). The effect of VPS34 inhibition was weaker than that of PI3K class I inhibition, a difference that may be at least partially explained by the enhanced production of IL-10, which is known to downregulate IL-12 in an autocrine manner [8]. The combination of PI3K class I and class III inhibition induced a large increase in IL-12 production in response to LPS or to Pam3CSK4, suggesting an additive effect of both classes of PI3Ks on the production of this cytokine.

VPS34 inhibition enhances the production of further cytokines in BMDCs stimulated with TLR agonists

Finally, we assessed whether the effects of VPS34 inhibition are specific to IL-10 and IL-12 or extend to further cytokines. For this purpose, we analyzed the production of TNF-α and IL-6 in BMDCs stimulated with LPS and Pam3CSK4 in the presence of the PI3K class-specific inhibitors (Fig. 2C and D). GDC-0941 did not affect the production of TNF-α or IL-6. This differs from the data obtained by Aksoy et al. [15] using BMDCs carrying a kinase-dead version of PI3Kδ, which suggests that different PI3K class I isoforms may influence TNF-α and IL-6 differently. More importantly, both PI3K class III inhibitors significantly increased the production of TNF-α and IL-6 elicited by either TLR agonist tested. We also analyzed the effects of the PI3K inhibitors on the secretion of the low levels of IL-1β elicited by TLR agonists in the absence of inflammasome activators (Supplementary Fig. 3). The VPS34 inhibitors, but not the class I-specific inhibitor, significantly increased the production of IL-1β induced by LPS; a similar enhancement had been previously reported in the presence of 3-methyladenine, which inhibits both PI3K class I and class III [28].
However, the potentiation of IL-1β output by the VPS34 inhibitors was absent when Pam3CSK4 was used as the stimulus, suggesting that the situation for this cytokine differs from that of conventionally secreted cytokines.

Concluding remarks

Taken together, our results show that inhibition of VPS34 increases the production of several conventionally secreted cytokines in BMDCs stimulated with TLR agonists. They also show that this enhancement, which affects both pro- and anti-inflammatory cytokines, becomes superimposed on the expected pro-inflammatory effects of inhibiting PI3K class I when an inhibitor targeting both PI3K class I and class III, such as wortmannin, is used. The mechanism underlying the observed effect of VPS34 inhibition is not obvious. VPS34 is necessary for TLR9 signaling, which starts in endosomes [29], but this cannot explain the enhancement of cytokine responses after VPS34 inhibition, nor the effects in response to TLR family members (TLR2; for pro-inflammatory responses, TLR4) that signal from the cell surface. The mechanisms underlying our observation may well be complex, as VPS34 inhibition can be expected to have profound effects on the basic cellular functions of autophagy and vesicular trafficking [20]. When using 18 h or similarly long endpoints, as is the case in our work and many others, such alteration of housekeeping cellular processes is likely to have effects impacting many cellular functions.

Figure 2. PI3K class I and class III inhibitors both enhance the production of pro-inflammatory cytokines in TLR-stimulated BMDCs, but they have opposite effects on the production of IL-10. BMDCs were pretreated with inhibitors or vehicle (DMSO) for 30 min before stimulation with 10 ng/mL LPS or 200 ng/mL Pam3CSK4 as indicated. Inhibitors tested were GDC-0941 (1 μM, for PI3K class I), VPS34-IN1 (1 μM, for PI3K class III), SAR405 (1 μM, for PI3K class III), or a mixture of GDC-0941 and SAR405 (1 μM each; only in parts (a) and (b)). Eighteen hours later, IL-10 (a), IL-12p70 (b), IL-6 (c) and TNF-α (d) were quantitated by ELISA in the supernatants. No significant levels of cytokines were detected in BMDCs incubated in media without TLR agonist. All data are given as means ± SD of triplicate wells. Results are representative of 3 independent experiments. Statistical significances are expressed as for Fig. 1.

Therefore, our results do not necessarily imply that VPS34 specifically controls the cytokine output of dendritic cells under physiological conditions. However, they do imply that the use of pan-PI3K inhibitors to explore the functionality of the PI3K pathway carries the risk of a confounding general enhancement in the cytokine output of cells as a result of VPS34 inhibition.

Conflict of interest

No conflict of interest declared.
Molecular and Cellular Factors Associated with Racial Disparity in Breast Cancer

Recent studies have demonstrated that racial differences can influence breast cancer incidence and survival rates. African American (AA) women are at two- to three-fold higher risk for breast cancer than other ethnic groups. AA women with aggressive breast cancers show worse prognoses and higher mortality rates relative to Caucasian (CA) women. Over the last few years, effective treatment strategies have reduced mortality from breast cancer. Unfortunately, the breast cancer mortality rate among AA women remains higher compared to their CA counterparts. The focus of this review is to underscore the racial differences and the differential regulation/expression of genetic signatures in CA and AA women with breast cancer. Moreover, immune cell infiltration significantly affects the clinical outcome of breast cancer. Here, we have reviewed recent findings on immune cell recruitment in the tumor microenvironment (TME) and documented its association with breast cancer racial disparity. In addition, we have extensively discussed the role of cytokines, chemokines, and other cell signaling molecules in AA and CA breast cancer patients. Furthermore, we have also reviewed the distinct genetic and epigenetic changes in AA and CA patients. Overall, this review article encompasses various molecular and cellular factors associated with breast cancer disparity that affect mortality and clinical outcome.

Introduction

Breast cancer is the second most common cancer and a leading cause of cancer-related deaths in women around the globe. In 2020, around 276,480 new breast cancer cases are expected to be diagnosed in the United States alone [1]. Breast cancer, like any other type of cancer, is a multifactorial disease, and can be induced by reproductive factors, genetic mutations, biological carcinogens, chemical hazards, environmental factors, and obesity [2][3][4]. Breast cancer is a complex, widely heterogeneous disorder with different molecular and clinical subtypes requiring distinct treatment plans. Breast cancer has five molecular subtypes based on differential gene expression: luminal A (hormone receptor+ and HER2−), luminal B (hormone receptor+ and HER2−/+), basal-like or triple-negative breast cancer (TNBC) (hormone receptor− and HER2−), HER2-enriched (hormone receptor− and HER2+), and normal-like (hormone receptor+ and HER2−) [5] (Figure 1). Emerging data on breast cancer incidence, overall survival and death rates show significant disparities between racial groups [6,7]. For example, the breast cancer death rate is 41% greater in African-American (AA) women than in their Caucasian (CA) counterparts [8]. TNBC is a common molecular subtype among women with BRCA1 mutations; this type of cancer is more common in AA women [9]. Although the incidence rates of breast cancer in CA and AA women are similar, the mortality rate is still much higher in AA women. The higher mortality rate among AA women has led scientists to ponder the role of racial differences between the two groups as an underlying cause of mortality. These differences may be attributed to socio-economic status, access to health care, post-operational care, food habits, biological factors and comorbidity.
In this review, we summarize a number of clinical factors that influence the outcome of racially disparate breast cancer patients. We have scrutinized the racial disparity in terms of tumor microenvironment (TME) composition, genomic aberrations, and cytokine and chemokine secretion. Knowledge of such racially disparate molecules in breast cancer progression will aid in developing novel targeted therapies and improving the clinical outcome of AA breast cancer patients.

Racial Disparity in the Composition of Breast Cancer TME

The TME differs significantly between AA and CA women suffering from breast cancer, contributing to higher mortality rates in the former population [10,11]. The breast cancer TME comprises different cells, including fibroblasts, adipocytes, macrophages, and dendritic cells, and secretes different growth factors and cytokines which regulate the growth and development of tumor cells (Figure 2). A dynamic cross-talk between stromal components and the tumor is indispensable for tumor progression and growth [12]. In AA and CA women with breast cancer, the largest fraction of leukocytes within the TME consists of macrophages and monocytes, including the M0, M1, and M2 subsets [13][14][15]. Importantly, these subsets of tumor-associated macrophages and monocytes not only support the growth of tumors but also help in metastasis. AA women with breast cancer are reported to have a higher number of tumor-associated macrophages (TAMs) in the TME than their CA counterparts [15][16][17]. The TME transforms the infiltrating macrophages and converts their phenotype from M1 to M2; instead of killing cancer cells, these become TAMs, supporting tumor growth. These TAMs are protumorigenic, promoting tumor growth and angiogenesis and further aiding in tumor invasion and metastasis through their secreted factors, thereby contributing to immune evasion of developing tumors [18]. Higher infiltration of TAMs contributes to poor prognoses of breast cancer by supporting uncontrolled cancer progression and widespread metastasis [19].
M2 macrophages proliferate at a higher rate in AA breast cancer patients than in CA patients, driven by different secreted cytokines and chemokines [20]. In addition, resting M0 macrophages and T follicular helper cells are slowly recruited to the tumors and alter the TME in AA breast cancer patients, leading to a decrease in overall survival and disease-free survival (DFS) [14]. On the other hand, CA women with breast cancer show a higher proportion of tumor-supporting M2 macrophages, resting CD4+ memory T cells, and mast cells [15]. Moreover, M2 macrophages are essentially protumorigenic, and memory CD4+ T cells are known to mediate a direct role in enhancing antitumor immunity, thereby augmenting DFS [14]. Additionally, mast cells are known to play both pro- and anti-tumor roles in the TME, depending on the response of stromal cells [14]. Regulatory T cells (Foxp3+ T cells) have been shown to cause immunosuppression, and are reported to promote the enrichment of protumoral proteins. Regulatory T cells increase the expression of surface molecules that prevent the mounting of immune responses against developing tumors [21]. Treg heterogeneity in terms of function and homeostasis makes it hard to reconcile the predictive value of the number of Treg cells in the TME with clinical outcome [22]. The MHC1 metagene displayed higher expression in AA ER+ breast cancer patients compared to the CA ER+ cohort [15]. Additionally, MHC1 genes play an important role in enhancing the regulatory functions of T cells. Tumor-infiltrating lymphocytes (TILs) within the tumor immune microenvironment (TIME) also differ between AA and CA breast cancer patients [23-25]. TILs are predominantly composed of B cells, T cells, and NK cells, and play a crucial role in the antitumor immune response [26]. AA women with the Basal/TNBC subtypes are reported to show more regulatory T cells (Tregs), while more CD8+ lymphocytes are seen only in the luminal subtypes of breast cancer in AA women [27,28]. A higher number of CD8+ T cells is indicative of a favorable response to neoadjuvant therapy in various molecular subtypes of breast cancer [25,29,30]. Furthermore, immunohistochemical staining revealed that a higher percentage of CD8+ T cells is recruited into the tumors of AA breast cancer patients compared to CA women with breast cancer, suggestive of the mounting of a strong immune response [27]. Other groups have also identified differential TILs in TNBC between AA and CA patients [23,24]. In early-stage (I-II) BC, AAs have significantly higher numbers of TILs, but no difference was observed between ethnicities in stage III-IV TNBC patients. Conversely, a few reports have described minor or no differences between the two races in terms of TIL recruitment [15,31,32]. Also, the distribution of lymphocyte-predominant (>50% TIL), lymphocyte-moderate (10-50% TIL), and lymphocyte-poor (<10% TIL) cases is comparable between races [15,31,32]. Myeloid-derived suppressor cells (MDSCs) are a heterogeneous pool of immune cells which are critical for tumor-associated immune suppression [33-35]. The prime function of these cells in the TME is the suppression of T cells in an antigen-specific or non-antigen-specific fashion [36,37]. These cells are critical determinants in tumor-reactive immune cell exhaustion or suppression, and are promising therapeutic targets against various cancers, including breast cancer.
Despite their significant role in tumor-reactive immune cell manipulation, no correlation between racial disparity and MDSC recruitment in the TME has been reported. However, Apolipoprotein E (ApoE), which influences the recruitment of MDSCs, is overexpressed in AA breast cancer patients relative to their CA counterparts [15,38]. The degrees of ApoE signaling and T cell activation are two essential factors for regulating the TME, and are also important distinguishing factors between AA and CA TNBC from a prognostic standpoint [15]. Gene expression profiling of tumor epithelia and stroma of breast cancers from AA and CA patients revealed that genes overexpressed in the AA population were related to biological processes that contribute to chemotaxis and angiogenesis [10]. There are a few reports on differences in gene expression levels of immune cells between AA and CA cohorts of breast cancer patients. Interestingly, in a Nigerian population, the gene signature for cytotoxic cells was low in all subtypes of breast cancer except the basal subtype, while that for fibroblast cells was highest [17]. Biological signaling networks related to chemotaxis in tumor epithelia and neovascularization in the tumor stroma are significantly enriched in the AA group compared to CA women with breast cancer. Overall, the stroma of AA breast cancer patients shows higher inflammation and angiogenesis than that of CA patients. Phosphoserine phosphatase-like (PSPHL), a gene overexpressed in the tumor stroma of AA breast cancer patients, is known to alter the expression of several cytokines and growth factors which play important roles in extracellular matrix (ECM) remodeling [10,39]. PSPHL is overexpressed in breast cancer and plays a direct role in tumor-stroma crosstalk in the TME. Disparate overexpression of PSPHL between AA and CA populations has also been observed in prostate cancer and endometrial cancer, and may occur in other human cancers as well [40,41]. Other stromal genes dysregulated in AA women with breast cancer include Ras association domain-containing protein 1 (RASSF1A), retinoic acid receptor beta (RARβ), spermatogenesis associated 18 (SPATA18), and son of sevenless homolog 1 (SOS1). Microvessel density is another measure of angiogenesis, and serves as a prognostic marker in various cancers. The microvessel density in breast tumor specimens from AA women was higher than that in their CA counterparts. A higher density of macrophages is known to boost angiogenesis; accordingly, the higher infiltration of macrophages in the TME of AA breast cancer patients might underlie their higher microvessel density. A higher microvessel density in AA breast cancer patients also leads to poor clinical outcomes [10,42].

Cytokines and Chemokines

Cytokines released in response to inflammation and immune reaction can function to inhibit or promote cancer. The presence of a differential cytokine response between AA and CA women suffering from breast cancer has been reported in several studies [13,43,44]. Cytokines can alter the TME and provide valuable information to guide therapeutic intervention [45]. Immune cells such as regulatory T cells, NK cells, myeloid cells, and adipose tissue-resident macrophages infiltrate tumors and secrete cytokines that help in building an immunosuppressive TME [45-47]. Depending on the tumor type, immune cells secrete various cytokines and chemokines that aid the growth of cancer cells and help in immune evasion.
The high proportion of TAMs in AA breast cancer patients might be a consequence of increased production of chemotactic chemokines and cytokines in the TME that attract M2 macrophages [10]. Some of the crucial chemotactic factors secreted by immune cells that attract macrophages are resistin, CCL2 (MCP-1), vascular endothelial growth factor (VEGF), and M-CSF-1 [48]. Macrophage-derived resistin in the TME further triggers the infiltration of newer macrophages and other immune cells into the protumor TME, as well as aggravating inflammation [49]. AA women show significantly higher levels of IL-6, resistin, and IFN-γ secretion than CA women. IL-6, secreted from adipocytes into circulation, has been shown to increase breast cancer risk and tumor size [44]. IL-6 is also produced by CD4+ Th2 cells that are predominantly involved in dampening antitumor immune responses [50]. IL-6 regulates insulin, resistin, and estrogen, and thereby directly affects breast cancer development [45,51]. Resistin and IL-6 are the most differentially expressed cytokines between AA and CA breast cancer patients, with relatively higher expression in AA patients [52,53]. Tissue expression of resistin and IL-6 is also positively correlated with their serum levels in breast cancer patients. A number of studies have reported elevated expression of IL-6 in AA breast cancer patients compared to CA patients [44,45,54]. Notably, the CXCL12/CXCR7/CXCR4 axis plays an important role in breast cancer growth and metastasis, but only CXCL12 has been associated with disparate expression. CA breast cancer patients show higher CXCL12 expression than their AA counterparts, and this correlates with poor prognoses [55]. In addition to patient data, the TNBC cell line MDA-MB-468, derived from an AA breast cancer patient, shows higher growth and aggressiveness upon resistin treatment than MDA-MB-231 cells derived from a CA patient. CD44, a marker of stemness, also increases significantly in resistin-treated MDA-MB-468 cells compared to MDA-MB-231 [52]. Resistin promotes the growth and aggressiveness of breast cancer cells through STAT3 activation, indicating a potential role of resistin, IL-6, and STAT3 in breast cancer racial disparity in AA women [45]. VEGF and syndecan are widely recognized as angiogenesis-related signaling molecules and are reported to be overexpressed in AA compared to CA breast cancer patients [10]. Adipocyte-derived pro-inflammatory cytokines such as IL-6, leptin, and TNF-α, along with angiogenic factors, not only help in the development of breast tumors, but also promote more aggressive phenotypes. Higher levels of leptin caused by insulin secretion lead to the creation of an autocrine feedback loop which increases mitogenesis and decreases apoptosis in breast cancer cells [56]. Leptin also induces the secretion of pro-inflammatory cytokines like IL-6, TNF-α, IL-2, and IFN-γ [57]. Studies of serum adiponectin (corrected for BMI) in obese patients showed higher levels of IL-6 and C-reactive protein [58]. IL-6, along with resistin and other cytokines, could be related to the higher aggressiveness of TNBC in AA women. Unchecked growth of adipocytes might cause the release of monocyte chemoattractant protein-1 (MCP-1), which drives macrophage infiltration and the activation of resident macrophages [19,42]. Atypical chemokine receptor 1 (ACKR1) plays a pivotal role in immune regulation. AA women have a higher proportion of tumors that are ACKR1-negative compared with CA women [59].
ACKR1-positive tumors differ from ACKR1-negative tumors in their immune responses. ACKR1 expression in tumors is correlated with higher levels of pro-inflammatory chemokines, i.e., CCL2/MCP-1. ACKR1 alleles specifically expressed in AA women likely drive these correlations, which are associated with improved overall and relapse-free survival of patients whose tumors show higher ACKR1 expression [59]. CCL7 is more elevated in AA women with breast cancer than in CA patients. CCL7 binds to CCR1, CCR2, and CCR3 and activates MAPK signaling, leading to EMT and to TAM recruitment via endothelial leakiness [31,60]. Additionally, CCL7, CCL8, and CCL5 are also elevated in AA patients with TNBC [15,61]. CCL5 promotes breast cancer in a p53-dependent manner through CCR5, and antagonizing the CCL5 receptor inhibits CCR5-mediated angiogenesis [62,63]. Higher expression levels of CCL17 and CCL25 in breast cancer patients have been attributed to poor overall survival in the AA population, but no such correlation has been observed in the CA population. In addition, higher expression of CCL8 decreases overall survival only in CA breast cancer patients. However, while CCL25 has served as an indicator of poor prognosis in AA breast cancer patients, its expression was correlated with improved overall survival in CA breast cancer patients [61]. Higher expression of CCL7, CCL11, and CCL20 in AA breast cancer patients has been shown to correlate with higher overall survival and better prognoses. In summary, high CCL17 and CCL25 decrease overall survival in AA breast cancer patients, while high CCL8 decreases overall survival in CA patients [61,64]. CCL7, CCL17, CCL20, and CCL25 are significantly more elevated in AA breast cancer patients compared to CA patients, and slightly higher expression of CCL8 has also been reported in AA tissues [61,64]. At the gene level, AA women with breast cancer show higher expression of several key cell-cycle-regulating genes, including CCNE2, CCNB1, CCNA1, and CDKN2A, and of other tumor-related genes (CRYBB2, TMPO, AMFR, PSPHL) that directly impact the development and aggressiveness of tumors. Higher expression of interferon-related genes has been observed in AA breast cancer patients, suggesting they may respond better to immunotherapy [15,17]. Tumor stroma also contributes to the expression of chemokines, including CXCL10 and CXCL11, and of the stromal protein PSPHL. The CXCL10 and CXCL11 chemokines are canonical ligands for the CXCR3 receptor [10]. CXCL10, CXCL11, and ISG20 are interferon-γ-regulated genes which affect the expression of a number of other genes in ER(+) and ER(−) breast tumors. In ER(+) tumors, the presence of an interferon gene signature is an indicator of estrogen-mediated host immunity, and is involved in tumor development, growth, survival, and metastasis [65-67]. In ER(−) tumors, the HLA-D family members HLA-DQA1 and HLA-DQB1 were the most differentially expressed at both the mRNA and protein levels [10]. The disparate distribution of different immune cells, cytokines, and chemokines among AA and CA breast cancer patients has been catalogued for future experimental designs and the generation of hypotheses (Tables 1 and 2).

Influence of Genetic and Epigenetic Factors on Breast Cancer Disparity

Several studies have linked the differential expression of genes with downstream biological events that differ by ethnicity/race to poorer prognoses of the disease, but only a few have experimentally validated these correlations [68-70].
Antibody-based detection on a microarray identified several proteins that were differentially expressed between AA and CA breast cancer patients. Noninvasive, race-specific, serum-based biomarkers are helpful in understanding why the burden among AA breast cancer patients is higher than among their CA counterparts. Three race-specific protein markers, i.e., VEGFR2, c-Kit, and retinoblastoma (Rb), distinguish tumors of AA breast cancer patients [71]. The VEGFR2 protein is reported to affect breast cancer metastasis, prognosis, and racial disparity [72,73]. High expression of VEGFR2 could be exploited as a therapeutic target against breast cancer in AA patients. c-Kit is overexpressed in AA breast cancer patients and is involved in the growth and survival of BRCA1-mutated cells [71]. Lower serum levels of the Rb protein were detected in AA breast cancer patients than in other racial groups [71]. Rb acts as a checkpoint molecule at the G1-to-S transition of the cell cycle, preventing the unchecked proliferation of potential tumor cells; lower Rb expression therefore permits unchecked proliferation of tumor cells [74]. In addition, CLCA2, which is modulated by p53 in response to DNA damage stimuli, serves as a prognostic marker for TNBC only in AA patients [75]. A detailed analysis of these factors in terms of their effect on clinical outcome in racially disparate groups is presented below.

Signaling Molecules Associated with Cellular Growth

Cancer is the cumulative effect of a multistep process and multiple mutations rather than a single gene event. The biological networks of microdissected tumor epithelia and tumor stroma have been extensively studied and are believed to affect overall survival in breast cancer. ER-negative, high-grade breast tumors of younger AA women showed increased expression of cyclin E [76]. Similarly, high-grade, advanced tumors showed increased cyclin B expression [77], and breast tumors from AA women demonstrated increased expression of cyclin B [10]. Cyclin B is essential for mitosis and the G2-M transition during the cell cycle. TNBC tumors have irregularities in cell cycle gene expression (high expression of p16, p53, and cyclin E, and low cyclin D1 expression) which might contribute to phenotypic changes in tumors of AA and CA women [43,78,79]. p16 binds CDK4/6 to inhibit cyclin D binding, and its overexpression has been reported in tumors derived from AA breast cancer patients, in contrast to those from CA women [10,78,80]. Lactotransferrin (LTF) has been reported to show disparate expression between AA and CA breast cancer patients, with a more than eight-fold expression difference between the AA and CA cohorts. It is implicated in a variety of functions, including cellular growth, differentiation, inflammation, and the regulation of immune responses [81]. It also affects the MAPK and Akt signaling pathways and induces senescence or growth arrest [82,83]. The C4BPA gene encodes the alpha chain of the C4b-binding protein, which inhibits the complement cascade; it shows greater than six-fold higher expression in AA women [84]. Monoclonal antibodies have the ability to induce the complement system and stimulate complement-dependent cytotoxicity (CDC), resulting in tumor cell clearance [85]. Higher C4BPA expression impedes the complement system and CDC, thereby helping tumor cells escape immune surveillance and survive. The tumor suppressor gene p53 is well known for its role in DNA repair and in inducing apoptosis [86,87].
In most human carcinomas, p53 is mutated, and its mutational status is also an independent prognostic marker of AA breast cancer [88,89]. Levels of the breast cancer mitogen insulin-like growth factor 1 (IGF-1) increase in AA women after multiple pregnancies and promote breast cancer progression [90]. IGF-1 also enhances cell cycle progression in G1/S checkpoint-compromised cells. Metalloproteases such as ADAMTS15 have been reported to inhibit breast cancer cell migration [91], and their reduced expression could accelerate breast cancer progression in AA women. Recently, serum-derived exosomes have gained interest in the context of addressing racial disparity. Exosomes play a variety of important roles in breast cancer, including tumor growth, metastasis, immunosuppression, and drug resistance [92]. Annexin 2 (ANX2) has been associated with angiogenesis and ECM modification [93,94]. Serum-derived exosomes from AA TNBC patients exhibited higher ANX2 expression compared to those from CA women. ANX2 is associated with the aggressiveness of breast cancer and is considered a molecular marker for different breast cancer subtypes [95]. In addition, ANX2 overexpression in the AA cohort was also associated with poor overall survival and poor disease-free survival [96]. Martin et al. (2009) observed more than 400 differentially expressed genes in AA and CA populations with breast tumors [10]. An in silico analysis of breast tumor patient samples revealed that ACTL8 and PGLYRP1 were differentially expressed between the AA and CA groups [97]. In addition, DNAJB8, a member of the heat shock protein (HSP) family that plays a key role in protein folding, showed higher expression only in AA samples [98]. AA breast cancer patients also show increased expression of STAT1 (which acts as a tumor suppressor) in the early stages of tumor initiation [99]. Crystallin beta B2 (CRYβB2), a protein that constitutes a predominant fraction of vertebrate eye lenses, has been highlighted for its correlation with the overall survival of the AA population in a number of cancers. Both CRYβB2 and CRYβB2P1 are aberrantly expressed in AA breast cancer patients, but these genes stimulate tumor progression independently [8]. Together, these genes can be further tested experimentally in order to improve the disparate clinical outcome of AA breast cancer patients.

Gene Mutations

Genetic mutations, whether spontaneous, induced by external factors such as chemical exposure or UV/ionizing radiation, or hereditary, can also increase the risk of breast cancer. Mutations fall into two classes: germline and somatic. A germline variant is a change or mutation in a gene that is inherited from the parents. Somatic mutations occur due to genetic and environmental exposure, and cause different types of cancers. Mutations in various genes contribute to a high risk of breast cancer. Three well-known genes, BRCA1, BRCA2, and PALB2, are often mutated and increase the risk for breast and/or ovarian cancer [100]. Abnormal expression of BRCA1, BRCA2, and PALB2 has been observed in about 10% of breast cancer cases [101]. BRCA1 and BRCA2 act as tumor suppressor proteins, and patients with inherited mutations therein are more likely to develop aggressive breast cancers [102-104]. These proteins assist in DNA repair and therefore play an important role in the fidelity and stability of genetic material. Mutations in BRCA1 or BRCA2 can alter the function of these proteins, hampering the process of DNA damage repair.
Therefore, cells are more likely to develop additional genetic alterations that can lead to the development of cancer. About 5-10% of breast cancers are caused by a gene mutation in BRCA1 or BRCA2 [105]. BRCA1 gene mutations in women accelerate breast cancer onset at a younger age [106]. Individuals who carry mutations in either their BRCA1 or BRCA2 genes pass them on to their progeny with 50% probability. Some inherited BRCA1 and BRCA2 mutations increase breast cancer risk in women. Together, mutations in the BRCA1 and BRCA2 genes contribute to about 20-25% of inherited breast cancers [107]. In addition, genetic testing of breast cancer patients revealed that populations of African or Bahamian descent show higher frequencies of BRCA1 and BRCA2 mutations [108]. AA women carrying mutations in the PALB2, RAD51C, and RAD51D genes are more susceptible to ER-negative breast cancer [107]. Like BRCA1 and BRCA2 mutations, PALB2 mutations are also associated with a high risk of breast cancer. PALB2 is a tumor suppressor protein which interacts with BRCA1 and BRCA2 to repair DNA damage and breaks [109]. Apart from BRCA1 and BRCA2 mutations, ATM, CDH1, CHEK2, PALB2, PTEN, STK11, and TP53 mutations potentially increase breast cancer risk [110]. The top five genes most commonly mutated in AA and CA breast tumors are TP53, PIK3CA, GATA3, CDH1, and MLLT3 (n = 663 CA and n = 105 AA breast tumor samples). However, of these five genes, the mutation frequency of two, namely TP53 and PIK3CA, differs between CA and AA women. A higher percentage of AA women with breast cancer harbor TP53 mutations (42.9% AA vs. 27.6% CA), whereas PIK3CA mutations were less commonly observed in AA than in CA women (20% AA vs. 33.9% CA). Further, the risk of tumor recurrence is also higher in the AA than in the CA population. Such racial disparity in terms of tumor relapse is attributed to the TNBC subtype and the occurrence of TP53 mutations [7]. TP53 gene mutations are observed in about 50% of all breast cancers, and more than 2500 different types of mutations have been documented in p53. p53 mutations are commonly found in AA cohorts of breast cancer patients, and they affect overall survival [111,112]. Shiao and colleagues (1995) observed more G:C to A:T transitions at non-CpG sites in black women than in white women; this phenomenon contributes to poor survival in AA breast cancer patients [111]. p53 status may predict survival independently after adjusting for stage, tumor grade, and subtype, which is useful for identifying AA women in the high-risk category for breast cancer mortality [88]. Other genes are mutated occasionally, including BRIP1, MLH1, MLH2, MRE11A, NBN, PALB2, PTEN, RAD50, RAD51C, and SEC29B [113]. Mutations in the CHEK2, ATM, ERCC3, and FANCC genes are linked to a moderate risk of ER-positive breast cancer. RECQL gene mutations present a moderate risk of all types of cancer [107]. Both CHK1 and CHK2 have important functions in cell cycle regulation, and have been found to be mutated more frequently in AA breast cancer patients [114]. Additionally, breast cancer patients predisposed to mutations in genes like CHEK2, ATM, BARD1, PALB2, RAD51C, TP53, PTEN, MLH1, and MLH2 are also at higher risk of developing breast cancer [108].

Genomic Alterations

Copy number alterations are frequent in breast cancer. The identification of copy number alterations specific to a breast cancer subtype helps define the mechanisms of disease initiation and progression.
Chromosomes 1q and 8q show frequent gain events, while chromosomes 8p, 10q, 11q, 12q, and 16q have a higher frequency of loss events [115]. Loo et al. (2011) observed higher frequencies of gain and loss events in breast tumors of AA women than of CA women [116]. The frequency of copy number gain in the 13q31-13q34 chromosomal region was observed to be twice as high in TNBC from AA women as in TNBC from CA women. Previously, Melchor et al. (2009) reported a close association of 13q31-13q34 chromosomal region amplification with TNBC [117].

DNA Methylation

Epigenetic changes are known to play important roles in various cancers, and racial disparity in molecular epigenetic markers between AA and CA breast cancer patients has also been documented. Epigenetic technological advancements have played a crucial role in improving the clinical outcome of various human cancers. DNA methylation is a very common epigenetic phenomenon. Silencing of tumor suppressor genes such as p16, BRCA1, GSTP1, TIMP-4, and CDH1 contributes to breast cancer progression and growth; the promoter regions of these genes are frequently observed to be methylated [124]. Racial differences are closely associated with altered DNA methylation in breast tumors [125-128]. Although very little is known about the causal factors of hypermethylation, it is hypothesized that these events strongly influence breast cancer progression. A few reports suggest that a low level of folate in breast tissue and increased alcohol consumption can result in hypermethylation of the p16 gene [129]. Mehrotra et al. (2004) [130] reported that ER-negative tumors of AA women below 50 years of age showed higher cyclin D2 promoter methylation than those of CA women. Similarly, RASSF1A, a tumor suppressor gene that controls numerous checkpoints of the cell cycle and apoptotic pathways [131], was methylated more frequently in AA than in CA breast cancer patients [130]. Additionally, the RARβ and HIN-1 genes were also frequently found to be methylated in breast tumors of AA women [130].

Conclusions

In this review article, we have comprehensively discussed the molecular pathways that promote breast cancer incidence and mortality among AA patients. Breast cancer results in higher mortality among AA women than among their CA counterparts. Moreover, AA women are at higher risk of developing more aggressive breast tumors, even at young ages, than white women. Recently, the TME has been shown to be a potential therapeutic target against solid tumors, including breast cancer. Emerging data on the distinct TME composition between AA and CA breast cancer patients warrant further study in order to develop the TME as a novel therapeutic target and thereby improve clinical outcomes, especially among AA breast cancer patients. Furthermore, the differential expression of cytokines and chemokines also significantly affects the clinical outcome of AA breast cancer patients, and could be used as a potential prognostic marker and therapeutic target. In addition, genetic mutations also influence the clinical outcome of AA breast cancer patients. Finally, racially disparate epigenetic modifications have recently been reported in breast cancer, demanding further investigation to improve the therapeutic strategies and clinical outcome of the disease. Nonetheless, available data on the composition of the TME and on molecular/cellular changes in terms of gene and protein alterations in breast cancer patients show a close association with racial differences.
Overall, our detailed analysis will help in designing novel treatment strategies to improve the survival and quality of life of breast cancer patients. Author Contributions: All authors contributed to writing and drafting the manuscript. M.C., D.A. and R.K.G. conceived and reviewed the final version of the manuscript. All authors read and approved the final version of the manuscript. Conflicts of Interest: The authors declare no potential conflict of interest.
SAUNAS. I. Searching for Low Surface Brightness X-Ray Emission with Chandra/ACIS

We present Selective Amplification of Ultra Noisy Astronomical Signal (SAUNAS), a pipeline designed for detecting diffuse X-ray emission in the data obtained with the Advanced CCD Imaging Spectrometer (ACIS) of the Chandra X-ray Observatory. SAUNAS queries the available observations in the Chandra archive and performs photometric calibration, point-spread function modeling and deconvolution, point-source removal, adaptive smoothing, and background correction. This pipeline builds on existing and well-tested software, including CIAO, VorBin, and LIRA. We characterize the performance of SAUNAS through several quality performance tests and demonstrate the broad applications and capabilities of SAUNAS using two galaxies already known to show X-ray-emitting structures. SAUNAS successfully detects the 30 kpc X-ray superwind of NGC 3079 using Chandra/ACIS data sets, matching the spatial distribution detected with more sensitive XMM-Newton observations. The analysis performed by SAUNAS reveals an extended low surface brightness source in the field of UGC 5101 in the 0.3-1.0 keV and 1.0-2.0 keV bands. This source is potentially a background galaxy cluster or a hot gas plume associated with UGC 5101. SAUNAS demonstrates its ability to recover previously undetected structures in archival data, expanding exploration into the low surface brightness X-ray Universe with Chandra/ACIS.

Introduction

The Advanced CCD Imaging Spectrometer on the Chandra X-ray Observatory (Chandra/ACIS; Weisskopf et al. 2000) provides an effective balance between angular resolution and sensitivity for the study of diffuse galactic hot gas emission, with its field of view (FOV) of up to 16.9 × 16.9 arcmin² and 0″.492 spatial resolution. Stacking multiple observations made over Chandra's 25+ yr mission is one of the keys to obtaining the deepest observations of the Universe in X-rays. However, in most cases, the position of the target on the detector changes within the observations, introducing serious challenges to acquiring a meaningful combined image. The point-spread function (PSF) broadens and becomes more ellipse-shaped with increasing off-axis angle, necessitating an elaborate deconvolution scheme and hampering the ability to exploit the full capabilities of the archive. Consequently, Chandra observations are underexplored to date in studies advancing the low X-ray surface brightness domain. Future studies of low X-ray surface brightness emission (∼10⁻⁸-10⁻¹¹ s⁻¹ cm⁻² arcsec⁻² and beyond), enabled by data processed to enhance detection of low-count regions, could advance progress in several currently open questions relevant to galaxy evolution, including the origins of diffuse soft X-ray emission in galaxies and feedback involvement (Henley et al. 2010; Kelly et al. 2021). Lambda cold dark matter cosmology predicts that filaments of diffuse gas from the cosmic web will accrete during their infall onto protogalactic dark matter halos (White & Rees 1978; White & Frenk 1991; Benson & Devereux 2010), where gas is heated to approximately the halo virial temperature (T > 10⁶ K). This plasma, further shaped by energy injection from active galactic nuclei (AGNs; Diehl & Statler 2008), supernovae (SNe), and stellar winds (Hopkins et al. 2012), is detected as diffuse soft X-ray band emission around galaxies (Mulchaey 2000; Sato et al. 2000; O'Sullivan et al. 2001; Aguerri et al.
2017). The origins and evolution of hot gas halos are important open questions in astrophysics, as halos are both the aftermath of and active players in gas feedback processes, which modulate the star formation efficiency in galaxies (Binney 1977; Rees & Ostriker 1977; Silk 1977; White & Rees 1978; White & Frenk 1991). The largely unexplored realm of extreme diffuse gas emission, likely associated with large departures from equilibrium (Strickland et al. 2004), is likely to preserve a unique historical record of these events. Such emission is also likely to be disregarded in studies using standard pipelines that are not optimized for preservation of statistically significant but low surface brightness detections. This project is the first in a series that will study the hot gas halos around galaxies using X-ray observations from Chandra. The first step is to test the pipeline used to reduce the Chandra/ACIS data products, named Selective Amplification of Ultra Noisy Astronomical Signal (SAUNAS). This paper describes the SAUNAS pipeline processing of data from the Chandra Data Archive and benchmarks it against previous works. In particular, we focus on the comparison of results between our analyses and those from other investigations for two well-detected X-ray sources characterized in the literature: NGC 3079 and UGC 5101. The latter has complex and extended X-ray emission, previously unexplored and only revealed by the current work. This paper is organized as follows. The SAUNAS pipeline is described in Section 2. The selection of published results for the SAUNAS performance comparison is discussed in Section 3.1. The benchmark analysis is presented in Sections 3.2 and 3.3. The discussion and conclusions are presented in Sections 4 and 5, respectively. We assume a concordance cosmology (Ω_M = 0.3, Ω_Λ = 0.7, H_0 = 70 km s⁻¹ Mpc⁻¹; see Spergel et al. 2007). All magnitudes are in the AB system (Oke 1971) unless otherwise noted.

Observational Challenges

From an observational perspective, measuring diffuse X-ray halo properties in galaxies involves at least four technical challenges. 1. Detection. The outskirts of X-ray halos are extremely faint (10⁻⁸-10⁻¹¹ s⁻¹ cm⁻² arcsec⁻²). Separating the faint emission associated with sources from that of the X-ray background (Anderson & Bregman 2011) within such low-count regimes is an extraordinarily challenging task. Statistical methods that assume a normal (Gaussian) distribution may not produce accurate results. 2. Deblending. AGNs and X-ray binary stars (XRBs) are typically unresolved point sources that may contribute to the same X-ray bands where the hot gas halos are expected to emit (from ∼0.3-0.5 to 1.2-2 keV). While, in principle, the detection of hot gas halos in nearby galaxies may not require very high spatial resolution observations or spectral capabilities, the separation of such emission from that of point sources does require them. High spatial resolution observations reduce systematic contamination in low surface brightness regimes. 3.
PSF contamination. The distribution of diffuse emission is easily confused with the scattered, extended emission of unresolved bright cores that contaminate the outskirts of the target through the extended wings of the PSF of the detector (Sandin 2014, 2015). Most studies do not correct for this type of scattering effect, although a few works, such as Anderson et al. (2013), have explored the combined stacked hot gas halo emission of 2165 galaxies observed with ROSAT (0.5-2.0 keV), convolving the combined surface brightness profiles with the PSF model to take into account the dispersion of light. 4. Reproducibility and accessibility. The methodologies for calibration, detection, and characterization of X-ray emission have substantial differences between studies. Due to the Poissonian nature of X-ray emission, most studies employ different types of adaptive smoothing in their analysis. These software methods tend to be custom-made and infrequently made publicly available. Likewise, the final data products (final science frames) are seldom offered to the community. The SAUNAS methodology presented in the current paper attempts to address most of these points by (1) correcting the PSF in the images, (2) separating the emission of point sources from that of diffuse extended ones, and (3) providing a quantitative metric to determine whether a detection is real. These points, as implemented in SAUNAS, are the major difference from other existing codes for detection of extended X-ray emission, such as vtpdetect (Ebeling & Wiedenmann 1993) or EXSdetect (Liu et al. 2013), as those codes do not attempt to deconvolve the observations using dedicated PSF models or to separate diffuse emission from point sources.

SAUNAS Pipeline

SAUNAS generates two main products: (a) PSF-deconvolved, adaptively smoothed X-ray surface brightness maps and (b) signal-to-noise ratio (S/N) detection maps. The X-ray adaptively smoothed surface brightness maps provide the flux and luminosity of the hot gas X-ray halos, while the S/N detection maps provide the probability that the flux associated with each region on those maps is statistically higher than the local X-ray background noise. SAUNAS creates these products in four major steps (see Figure 1): (1) preprocessing of the archival Chandra X-ray observations using the Chandra Interactive Analysis of Observations (CIAO) software (Fruscione et al. 2006; see Section 2.2.1), (2) statistical resampling of the X-ray detection events by bootstrapping, (3) PSF deconvolution of the event maps using the Bayesian Markov Chain Monte Carlo Low-counts Image Reconstruction and Analysis package (LIRA; Donath et al. 2022b; see Section 2.2.3), and (4) adaptive smoothing using VorBin (see Section 2.2.4). SAUNAS requires a few user-input parameters, including the location of the target (α, δ), the FOV, and the energy band. The main steps of the pipeline are described in the following subsections.

CIAO Preprocessing

First, the data are preprocessed using CIAO in the following way. 1. All available Chandra/ACIS observations containing the user-supplied sky coordinates are identified using find_chandra_obsid. The data sets and their best available calibration files are automatically downloaded using download_chandra_obsid and download_obsid_caldb. 2. The raw observations are reprocessed using chandra_repro (v4.16). To avoid oversubtraction of both the source and background counts necessary for the statistical analysis, the particle background cleaning subprocess (check_vf_pha) is set to "no."
See the main CIAO manual for more information on this step. 3. All the available ACIS data sets are merged into a single events file (merge_obs). This product serves as the phase 1 (first pass) observation file and is used to identify emission regions and to determine the source spectra needed for PSF construction. 4. The phase 1 merged observation file is used to define the angular extent of detected emission sufficient for basic spectral characterization. The spectral information is used in the step following this one. The VorBin (Cappellari & Copin 2003) library generates a map of Voronoi bins from which a surface brightness profile is constructed. The preliminary detection radius (R_lim,0), defined as the radial limit having a surface brightness equal to 10% of the surface brightness at the central coordinates, is computed. If R_lim,0 is undefined due to a low central surface brightness, the presence of detectable emission is unlikely. For such cases, R_lim,0 is arbitrarily set to one-fourth of the FOV defined by the user. The events inside this detection radius are used to construct a spectrum employed in the next step to define the deconvolution kernel (e.g., PSF) appropriate for this target. The choice of a 10% limit is an optimal compromise based on the analysis of Chandra/ACIS observations: including as much emission as possible from the source enhances the spectra used to generate the PSF. However, including a region that is too large reduces computational efficiency. Note that the spectrum derived in this step serves the sole purpose of informing PSF construction and is not intended for physical characterization of the gas. 5. CIAO's task simulate_psf, in combination with the spectral information provided by the previous step, is used to generate a PSF representative of each observing visit to the target. The PSF modeling depends on the spectra of both the source and the background region, as well as the target position within the detector (off-axis angle); the latter is unique to each visit. The preliminary detection radius defines both the circular (R < R_lim,0) and annular (R_lim,0 < R < 2R_lim,0) apertures used to measure the source and background spectra, respectively (specextract). The aspectblur parameter is set to 0.25 and the number of iterations to 1000 per data set. 6. Finally, the individual event files and PSFs corresponding to each visit are cropped to a cutout, with the preferred energy range selected. The outputs of the preprocessing procedure with CIAO described above are (1) the detected event maps (named obsid_Elow-upper_flt_evt.fits, where low and upper refer to the energy range limits and obsid is the observation ID in the Chandra archive), (2) the exposure time maps (obsid_Elow-upper_flt_expmap.fits), (3) the flux maps (obsid_Elow-upper_flt_flux.fits), and (4) the PSFs (obsid_Elow-upper_flt_psf.fits). This set of intermediate files is used in the remaining steps of the SAUNAS pipeline to generate the final maps.
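To make this archival-query and reprocessing stage concrete, the sketch below drives the CIAO command-line tools named above from Python. It is a minimal sketch under stated assumptions, not the SAUNAS implementation: the tool names (find_chandra_obsid, download_chandra_obsid, download_obsid_caldb, chandra_repro, merge_obs) come from the text, but the exact flags, the output directory layout, and the reprocessed file naming pattern are illustrative, and the CIAO tools are assumed to be installed and on PATH.

```python
import subprocess

def preprocess_visits(target_coords: str, repro_dir: str = "repro"):
    """Sketch of the CIAO preprocessing stage (Section 2.2.1).

    target_coords is, e.g., "150.4908 55.6797"; flags and file layout
    below are assumptions made for illustration.
    """
    # Step 1: query the archive for all obsids covering the coordinates,
    # then download the data sets with their best available calibrations.
    query = subprocess.run(["find_chandra_obsid", *target_coords.split()],
                           capture_output=True, text=True, check=True)
    obsids = [row.split()[0] for row in query.stdout.splitlines()[1:]]
    subprocess.run(["download_chandra_obsid", ",".join(obsids)], check=True)
    subprocess.run(["download_obsid_caldb", ",".join(obsids)], check=True)

    # Step 2: reprocess each visit; check_vf_pha=no keeps the particle
    # background, avoiding oversubtraction in the downstream statistics.
    for obsid in obsids:
        subprocess.run(["chandra_repro", f"indir={obsid}",
                        f"outdir={obsid}/{repro_dir}", "check_vf_pha=no"],
                       check=True)

    # Step 3: merge all visits into the phase 1 events file (the evt2
    # naming pattern here is an assumption).
    evt2 = [f"{o}/{repro_dir}/acisf{int(o):05d}_repro_evt2.fits" for o in obsids]
    subprocess.run(["merge_obs", ",".join(evt2), "merged/"], check=True)
    return obsids
```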
To enhance the robustness of the adaptively smoothed mosaics and to reduce contamination from nonsignificant signal in the background, the X-ray events are resampled with replacement (bootstrapping) as an additional (and user-optional) step before deconvolution. Bootstrapping is especially well suited for inferring the uncertainties associated with an estimand, such as the median surface brightness, in cases for which the Gaussian standard deviation regime does not apply or parametric solutions are too complicated or otherwise unknown. Bootstrapping effectively reduces the leverage that single events or very low-count sources may have on the background of the final mosaics by accounting for the photon-noise uncertainties in the PSF deconvolution and Voronoi binning steps through a nonparametric approach, allowing for a better assessment of the uncertainties in the final simulations. In our application, bootstrapping generates N ∼ 100 (hereafter N_boot) new X-ray event samples from the observed sample, preserving size (flux) and permitting events to be repeated. While the number of bootstrapping simulations is set to 100 by default as a compromise between precision and computational resources, N_boot can be defined by the user in SAUNAS. Each resampled list of events is translated into an image, which is fed into the next step, PSF deconvolution (Section 2.2.3); a minimal sketch of this resampling appears at the end of this subsection.

LIRA PSF Deconvolution

The LIRA (Connors et al. 2011; Donath et al. 2022b) package deconvolves the emission from sources in X-ray data. Through the use of LIRA, SAUNAS removes the contamination from AGNs and XRBs, which can be significantly extended and easily confused with a diffuse halo if the PSF is not accurately corrected. LIRA uses a Bayesian framework to obtain the best-fit PSF-convolved model of the observations, allowing the user to evaluate the probability that a detection is statistically significant. LIRA was designed to provide robust statistics in the low-count Poissonian regimes representative of faint extended halos, the primary science focus of our project. As detailed in Section 2.2.1, the PSF models are generated specifically for each target on a per-visit basis, taking into account their location on the detector and their spectral energy distributions. SAUNAS deconvolves data from individual visits, using these PSF models as input to LIRA. Discrete hard-band emission is produced primarily by point sources, including AGNs (Fabbiano et al. 1989; Fabbiano 2019), young stellar objects, and mass transfer onto the compact stellar object within XRB pairs (Wang 2012). Because these point sources contaminate the soft-band emission, they are excised from the data. They are identified using the Chandra Source Catalog (Evans et al. 2010) and then removed from the event file by deleting events that lie within the cataloged positional uncertainty ellipse of the source.
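As referenced in Section 2.2.2 above, the bootstrap resampling reduces to a few lines of numpy. The sketch below is illustrative rather than the SAUNAS API: it resamples an event list with replacement, preserving the number of events, and bins each resample into an image ready for deconvolution. The function and argument names are assumptions.

```python
import numpy as np

def bootstrap_event_images(x, y, shape, n_boot=100, seed=None):
    """Resample an X-ray event list with replacement (Section 2.2.2).

    x, y  : pixel coordinates of the detected events (illustrative inputs).
    shape : (ny, nx) of the output images.
    Each resample preserves the total number of events (i.e., flux).
    """
    rng = np.random.default_rng(seed)
    n_events = len(x)
    images = np.empty((n_boot, *shape))
    for i in range(n_boot):
        # Draw n_events indices with replacement: events may repeat,
        # which propagates photon-noise uncertainty into later steps.
        idx = rng.integers(0, n_events, size=n_events)
        img, _, _ = np.histogram2d(y[idx], x[idx], bins=shape,
                                   range=((0, shape[0]), (0, shape[1])))
        images[i] = img
    return images
```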
The Python implementation of LIRA is used to deconvolve the X-ray event files, thus minimizing the effects of the off-axis dependency of Chandra's PSF, such that data from different visits can be combined at a later stage. LIRA accepts five input arrays: (a) counts (number of events), (b) flux (in s⁻¹ cm⁻² pixel⁻¹), (c) exposure (s cm²), (d) the PSF, and (e) a first approximation to the background (counts). The first four inputs are generated by the CIAO pipeline (Section 2.2.1), while the initial baseline background is set to 1. The number of LIRA simulations is set to 1000 (n_iter_max), in addition to 100 initial burn-in simulations (num_burn_in). To speed up the process, SAUNAS splits the LIRA simulations into parallel processing blocks (defined by the number of bootstrapping simulations), to be combined after the deconvolution process has finished. While 1000 LIRA simulations are run on each of the N ∼ 100 bootstrapping-resampled images described in Section 2.2.2, only the last LIRA realizations for each resampled image (those produced after the deconvolution process has stabilized; hereafter N_stable, typically ∼100) are used. To save computational resources, N_stable is adapted based on the number of bootstrapping simulations so that the deconvolved data set consists of a maximum of N = N_boot × N_stable = 1000 deconvolved images (posterior samples).

Adaptive Voronoi Smoothing

The deconvolved data cubes, hereafter referred to as "Bootstrapping-LIRA" realizations, serve as a proxy of the probability density distribution of the true X-ray emission on a pixel-per-pixel basis at the Chandra/ACIS spatial resolution (a minimum of 0″.492 pixel⁻¹, depending on the binning set by the user). To facilitate the detection of extended, low surface brightness structures such as hot gas halos, whose apparent sizes are substantially larger than the spatial resolution limit for the galaxies, spatial binning is used to enhance the detection of regions with very low S/N. Voronoi binning (VorBin; Cappellari & Copin 2003) is applied to each of the N posterior samples in the deconvolved data cube. This process generates N Voronoi tessellation maps, each one differing from the others because they were calculated from different Bootstrapping-LIRA realizations. This data set is a Voronoi map data cube representing the probability density distribution of the surface brightness of the target. A consequence of this binning approach is the loss of spatial resolution in the faintest regions of the image (halos, background) compared to the brightest regions (i.e., the galactic cores). This loss is caused by the fact that the Voronoi technique varies the bin size in order to achieve a fixed S/N in the resulting map. As we are primarily interested in mapping the large-scale halo structures, this loss in spatial resolution does not significantly impact our science goals. A surface brightness map is created by calculating the median along one of the axes of the Voronoi data cube. To prevent background emission from contaminating the final image, the scalar background level is determined individually for each realization of the Bootstrapping-LIRA data cube. All sources, both resolved and unresolved, must be meticulously masked prior to measuring the background level to prevent systematically oversubtracting the background in the final mosaics. The source masking and background correction process is conducted iteratively, as enumerated below.
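Before the masking steps, the sketch below ties together the deconvolution and binning stages just described for a single bootstrap-resampled image. It is a minimal sketch under stated assumptions rather than the SAUNAS implementation: the LIRADeconvolver class and the n_iter_max/num_burn_in parameter names follow the published pylira interface quoted above, but the data dictionary keys, the posterior-trace attribute, and the target S/N are assumptions made for illustration.

```python
import numpy as np
from pylira import LIRADeconvolver
from vorbin.voronoi_2d_binning import voronoi_2d_binning

def deconvolve_and_bin(counts, flux, exposure, psf, target_sn=5.0):
    # The five LIRA inputs listed in Section 2.2.3; the flat baseline
    # background (e) is set to 1 count, as described in the text.
    data = {
        "counts": counts,                    # (a) observed events
        "flux": flux,                        # (b) s^-1 cm^-2 pixel^-1
        "exposure": exposure,                # (c) s cm^2
        "psf": psf,                          # (d) per-visit PSF model
        "background": np.ones_like(counts),  # (e) baseline background
    }
    dec = LIRADeconvolver(n_iter_max=1000, num_burn_in=100)
    result = dec.run(data)

    # Keep only the stabilized posterior samples (N_stable) and take the
    # per-pixel median as the deconvolved image; the attribute name
    # "image_trace" is an assumption about the result object.
    posterior = np.asarray(result.image_trace)[-100:]
    median_img = np.median(posterior, axis=0)

    # Voronoi-bin the median image to a fixed S/N per bin. For Poisson
    # counts, sqrt(signal) is a common per-pixel noise approximation.
    ny, nx = median_img.shape
    yy, xx = np.indices((ny, nx))
    signal = median_img.ravel()
    noise = np.sqrt(np.clip(signal, 1e-6, None))
    bin_num, *_ = voronoi_2d_binning(xx.ravel(), yy.ravel(), signal, noise,
                                     target_sn, plot=False, quiet=True)

    # Replace each pixel with the mean of its Voronoi bin.
    binned = np.bincount(bin_num, weights=signal) / np.bincount(bin_num)
    return binned[bin_num].reshape(ny, nx)
```

In SAUNAS this operation is repeated over all N_boot × N_stable posterior samples, producing the Voronoi map data cube from which the median surface brightness map is built.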
1. After the LIRA deconvolution process and before the Voronoi binning is performed, point sources from the Chandra Source Catalog (CSC 2.0; likely XRBs, SNe, and AGNs) that lie in the image footprint are removed from the associated event file. Point-source removal prevents the associated emission from impacting the adaptive Voronoi maps and producing diffuse contamination that could be confused with a gas halo component. 2. A secondary mask is generated using CIAO's routine vtpdetect. This mask identifies the regions with detectable extended X-ray emission that are removed from the maps before measuring the background level. A mask is generated for each CCD of each visit through independent analysis. The masks are then combined into a single master extended-source mask. 3. If a source was detected in the preliminary surface brightness profile generated as a part of the CIAO preprocessing step (see Section 2.2.1, step 4), then those pixels with R < R_lim,0 are also masked before the background assessment. 4. After removing all the masked pixels using the masks from the three previous steps, a first approximation of the background level (B_0) is made by measuring the median value of the unmasked sigma-clipped (σ = 3) pixels. The background value is then subtracted from the Voronoi-binned maps. Once the individual observations have been background-corrected, all the flux maps are combined using mean weighting by the respective exposure times. Finally, a refined background value (B_1) is calculated from the combined observations by repeating the process described above. The noise level is then estimated from the background distribution as the ratio between the median background level and the lower limit of the 1σ error bar (equivalent to the 15.8th percentile). The final background-subtracted, PSF-corrected, and Voronoi-binned surface brightness maps are derived by taking a median of the background-corrected Bootstrapping-LIRA realizations. The final mosaics and the noise level are used to generate three different frames to be stored in the final products: (1) an average adaptive X-ray surface brightness map, (2) a noise level map, and (3) an S/N map.

Quality Tests

This section presents the results of a series of quality tests designed to evaluate specific aspects of the output mosaics generated with SAUNAS: 1. Identify the fraction of false-positive and false-negative detections (Section 2.3.1). 2. Estimate the flux conservation of the deconvolution/Voronoi binning process (Section 2.3.2). 3. Quantify the quality of SAUNAS performance compared to that of other methods (arestore; Section 2.3.3).

False-positive/False-negative Ratio

For quality assessment, SAUNAS is tested using two different models, varying the exposure time to reduce the photon flux and the detectability conditions: 1. a model of an idealized edge-on galaxy with two lobes emerging from a jet (double-jet model), and 2. a shell-like structure with a central bright source (cavity model). The models are created as combinations of 2D Gaussian probability distributions (astropy.convolution.Gaussian2DKernel) with different ellipticities and rotations, as described in Table 1. Following PSF convolution, a synthetic observed events map is generated using a random Poisson distribution (numpy.random.poisson).
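The sketch below illustrates this model-building recipe for the double-jet case: elliptical Gaussian components, PSF convolution, and Poisson sampling. The component sizes, amplitudes, and grid are illustrative stand-ins for the Table 1 values, and scipy's fftconvolve is used here in place of whatever convolution routine SAUNAS employs internally.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel
from scipy.signal import fftconvolve

def synthetic_observation(psf, shape=(256, 256), t_exp=5e5, seed=None):
    """Mock events map: Gaussian components + PSF + Poisson noise.

    psf   : 2D PSF array (e.g., from simulate_psf); normalized below.
    t_exp : equivalent exposure in s cm^2 (5e5 ~ 10 ks at 0.3-1.0 keV).
    Component parameters are illustrative, not the Table 1 values.
    """
    rng = np.random.default_rng(seed)
    ny, nx = shape

    # Rotated elliptical Gaussian for the disk (theta in radians) and a
    # compact core several orders of magnitude brighter, as in the text;
    # the two lobes are omitted here for brevity.
    disk = Gaussian2DKernel(20, 6, theta=np.deg2rad(45),
                            x_size=nx, y_size=ny).array
    core = Gaussian2DKernel(2, x_size=nx, y_size=ny).array
    model = 1e-7 * disk / disk.max() + 1e-3 * core / core.max()
    model += 5e-9  # flat background level, s^-1 cm^-2 arcsec^-2

    # Convolve with the (possibly off-axis, elliptical) PSF, scale by the
    # equivalent exposure to obtain expected counts, and Poisson-sample.
    expected = fftconvolve(model, psf / psf.sum(), mode="same") * t_exp
    return rng.poisson(expected)
```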
The double-jet model includes the emission from three sources: the galactic disk, a bright core, and the lobes. The range of surface brightnesses is ∼10⁻⁶-10⁻⁸ s⁻¹ cm⁻² arcsec⁻², excluding the considerably brighter (by 3-5 orders of magnitude) peak surface brightness of the core. Its morphology mimics the predominant structure observed in double-jet radio galaxies such as Centaurus A (Hardcastle et al. 2007). The other test simulation, the cavity model, contains a hollow shell with a central bright source. This model provides an important pipeline test for the reconstruction of cavities found in the intergalactic medium. The detection of cavity rims seen in projection against the diffuse emission from the hot intracluster and/or intergalactic medium is challenging. These large bubbles potentially provide a useful record of interactions between AGNs and the intergalactic medium, in which the expansion of the associated radio lobes excavates the surrounding medium (Pandge et al. 2021). Our test model is designed to be particularly challenging: an X-ray cavity with a dominant central source representing an AGN (Blanton et al. 2001; Dunn et al. 2010). The surface brightness background level of both models is fixed at 5 × 10⁻⁹ s⁻¹ cm⁻² arcsec⁻², and the equivalent exposure time is assumed to be flat, varying from t_exp = 10⁸ to t_exp = 10⁴ s cm². For reference, t_exp = 5 × 10⁵ s cm² equals ∼10 ks in the 0.3-1.0 keV band. The synthetic data are generated using the real PSF associated with the Chandra/ACIS data sets of NGC 3656 (Arp 155, PID: 10610775, PI: G. Fabbiano; Smith et al. 2012). This PSF, which displays the characteristic ellipsoidal pattern of off-axis ACIS observations, is selected as a worst-case scenario, given its extreme ellipticity due to its off-axis position in the detector array. The readout streak is visible as a spike departing from the center of the PSF at a position angle of approximately −70° (north = 0°, positive counterclockwise). The simulated observed events are passed to the SAUNAS pipeline for processing, followed by a comparison between the detected (3σ) maps and the truth models. The quantitative quality test includes identification of the fraction of pixels that were incorrectly identified as false negatives and false positives. Figure 2 demonstrates the deconvolution and smoothing process for a mock galaxy with t_exp = 5 × 10⁶ s cm² having both diffuse X-ray emission and an extended PSF. The position angle selected for the model galaxy (Table 1; columns: name, component, size, surface brightness, eccentricity, position angle) is chosen specifically to offer a nontrivial test for the PSF deconvolution method. By using a position angle of 45°, the resulting convolved image displays two elongated features of apparently similar intensity (center left panel in Figure 2; PSF-convolved source): one real and one created by the PSF. If the elongated PSF feature is removed in the final images, we can conclude that the image reconstruction was successful. After Poisson sampling (see the simulated observation panel in Figure 2), the resulting events map is equivalent to the processed CIAO event files. The events map shows broad emission from the core of the galaxy model, in which the disk is indistinguishable. The two lobes are still present but considerably blended with the emission from the inner regions. The events are then processed using SAUNAS (LIRA deconvolution, bootstrapping, and VorBin steps).
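Given a binary truth mask and the 3σ detection map, the false-positive/false-negative bookkeeping used in this comparison reduces to simple mask arithmetic. A minimal sketch follows; the function and argument names are illustrative, not the SAUNAS API.

```python
import numpy as np

def fp_fn_fractions(detected_3sigma, truth_mask):
    """Pixel-wise false-positive/false-negative fractions (Section 2.3.1).

    detected_3sigma : boolean map of pixels detected at >= 3 sigma.
    truth_mask      : boolean map of pixels belonging to the truth model.
    Definitions follow the text: false negatives are source pixels the
    pipeline misses; false positives are background pixels it flags.
    """
    truth = np.asarray(truth_mask, dtype=bool)
    det = np.asarray(detected_3sigma, dtype=bool)
    false_neg = np.sum(truth & ~det) / np.sum(truth)
    false_pos = np.sum(~truth & det) / np.sum(~truth)
    return false_pos, false_neg
```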
The results of the PSF deconvolution (LIRA-deconvolved panel in Figure 2) show removal of most of the PSF emission, recovering the signal from the disk of the galaxy and removing the PSF spike emission. However, a significant amount of noise is still visible, and the background level is difficult to estimate (lower left panel of Figure 2). After applying the bootstrapping and Voronoi binning methods, the resulting final corrected mosaic (final smoothed mosaic panel, Figure 2) clearly shows the signal from the X-ray lobes, the disk, and the central bright core over the background. The 2σ and 3σ contours show the features detected following the calibration procedures described in Section 2, demonstrating complete removal of the PSF streak in the final mosaics (at a 99.7% confidence level). The original shape and orientation of the disk are recovered, with the flux correctly deconvolved into the bright core of the model galaxy. Due to its dim brightness, the jet that connects the lobes with the main disk is notably distorted in the final mosaic but still visible at a 2σ confidence level. For this test, the fraction of pixels unrecovered by the pipeline that were part of the model sources (false negatives) is 3.2%. On the other hand, the fraction of misidentified pixels that were part of the background (false positives) is 4.0%. The maps of false positives and false negatives for this test are available in Appendix B. The test for the cavity model is repeated, sampling different equivalent exposure times; the results are shown in Appendix B. Figure 3 presents a comparison of the false-positive and false-negative fractions as a function of the equivalent exposure time and model. For equivalent exposure times higher than t_exp = 10⁶ s cm², the false-positive and false-negative fractions are lower than 5%-10%. These fractions increase toward shorter exposures, as expected, showing a notable increase to 20% false negatives (true source emission that is unrecovered by SAUNAS) at approximately t_exp = 5 × 10⁵ s cm². The reason for this increase is the lack of detection of the dimmer outer regions in contrast with the brighter core (the lobes in the case of the double-jet model and the outer shell in the cavity model). Interestingly, the fraction of false positives does not increase substantially even at extremely low equivalent exposure times, remaining stable at ∼10% down to t_exp < 10⁴ s cm². This result demonstrates that even in cases of extremely short exposure times, SAUNAS is not expected to generate false-positive detections, which is a critical requirement for our study.
Flux Conservation

In an ideal scenario, the total flux of the events processed by SAUNAS should be equal to the total flux in the frames preprocessed by CIAO. In practice, the baseline model assumptions made during the deconvolution process may affect the total flux in the resulting frames. LIRA assumes a flat background model that, combined with the counts in the source, tries to fit all the events in the image. However, deviations from this ideal scenario (nonuniform background, regions with different exposure times) generate differences between the input and output flux. In order to understand the impact of flux conservation in LIRA-deconvolved images, we must (1) analyze the relative difference in flux before and after deconvolution and (2) determine whether the residuals of the deconvolution process generate any systematic artificial structure (i.e., photons may be preferentially lost around bright sources, generating holes in the image or erasing the dim signal from halos).

Total flux conservation is tested by measuring the ratio between the total flux in the input frames (those obtained at the end of the CIAO preprocessing; see Section 2.2.1) and the total flux in the final, SAUNAS-processed frames. We perform this test on real (UGC 5101; see Section 3.3) and synthetic observations (Section 2.3.1). The results are shown in Figure 4. A total flux loss of ∼5% is detected in the SAUNAS-processed frames when compared with the event maps preprocessed by CIAO. The results are consistent between real observations (recovered flux ratio of 95.0% ± 1.7%) and synthetic observations (95.4 +2.7/−2.4 %). Using different simulations, we determined that this small flux loss is independent of the size of the FOV (in pixels), remaining stable at ∼5%. For the total area of the images analyzed, 5% of lost flux is negligible and well within the stochastic uncertainty of typical photometry (see the error bars in the profiles described in Figure 5). We consider a flux conservation ratio lower than 100% (i.e., 90%-99%) as erring on the side of caution from a statistical perspective: the bias of LIRA toward losing flux implies that SAUNAS will not generate false-positive detections of hot gas halos.

PSF Deconvolution Quality Test

While Section 2.3.2 reported on the conservation of total flux in the image as a whole, this section discusses whether SAUNAS introduces unwanted artificial structures (fake halos or oversubtracted regions) in the processed maps. For this test, two additional types of test sources are used: (1) a point source and (2) a circular extended source. Both of these sources have been previously convolved with a Chandra/ACIS PSF. To provide context, the results of LIRA are compared with those from CIAO/arestore. The results are displayed in Figure 10 (point source) and Figure 11 (circular extended source) and detailed in Appendixes A and B.
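For reference, the flux-conservation check described above reduces to a single ratio. A minimal sketch follows, assuming the input and output images are NumPy arrays; the function name is illustrative.

```python
import numpy as np

def flux_conservation_ratio(ciao_image, saunas_image):
    """Total recovered flux divided by total input flux (1.0 = perfect conservation)."""
    return np.sum(saunas_image) / np.sum(ciao_image)

# A return value of ~0.95 corresponds to the ~5% flux loss reported above.
```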
To quantify the quality of the different deconvolution methods, radial surface brightness profiles of the truth (nonconvolved) model, the convolved simulated observations, and the resulting deconvolved maps are constructed. The profiles show that arestore tends to oversubtract the PSF, generating regions of negative flux around the simulated source. In the point-source scenario, arestore oversubtracts the background by more than 5 × 10^−8 s^−1 cm^−2 pixel^−1, while LIRA recovers the background level with 5 times smaller residuals. The superiority of LIRA over arestore in recovering diffuse structures is even more obvious in the extended-source scenario (Figure 11): arestore shows a clear ringlike oversubtraction region around the source, depressing the background level to 10^−7.8 s^−1 cm^−2 pixel^−1 as compared to the real (truth model) level of 10^−7 s^−1 cm^−2 pixel^−1. LIRA fits the background level significantly more faithfully, at a level of ∼10^−7.2 s^−1 cm^−2 pixel^−1.

We conclude that LIRA deconvolution results are better suited to the detection of diffuse X-ray emission, such as extended hot gas halos, than other PSF correction techniques, such as CIAOʼs arestore. Despite the model limitations described in Section 2.3.2, SAUNAS suppresses false-positive extended emission detections without overfitting the PSF while recovering the true morphologies of X-ray hot gas distributions. Thanks to the modularity of SAUNAS, future updates of the LIRA deconvolution software will be automatically incorporated into our pipeline, improving the quality of the processed frames.

Sample Selection

We identified two astrophysical targets of interest for testing the pipeline: 1. NGC 3079, a highly inclined barred spiral galaxy with a prominent Fermi bubble (Hodges-Kluck et al. 2020; the primary benchmarking target; see Section 3.2), and 2. UGC 5101, an ultraluminous IR galaxy that is undergoing a galactic merger (Sanders et al. 1988; Imanishi et al. 2001; the secondary benchmarking target; see Section 3.3). The targets used to demonstrate SAUNAS's capabilities were selected because they were known a priori to have extended soft X-ray emission detected by telescopes other than Chandra (NGC 3079), and because the characterization of the extended emission was documented with a methodology detailed enough to be replicated. Insisting that the reference data come from a different platform provides a truth model independent of the systematic effects inherently associated with Chandra. Finally, these specific targets were selected in order to test SAUNAS against both simple and complex emission structures associated with different morphologies (a disk galaxy and an interacting system).

NGC 3079

Large-scale bipolar winds and Fermi and radio bubbles are examples of extended structures observed around the center of the Milky Way in multiwavelength observations, including radio (MeerKAT, S-PASS), microwave (Wilkinson Microwave Anisotropy Probe), mid-infrared (Midcourse Space Experiment), UV (XMM), X-rays (Chandra, XMM-Newton, ROSAT), and gamma rays (Fermi Large Area Telescope) (Sofue 1995; Bland-Hawthorn & Cohen 2003; Finkbeiner 2004; Su et al. 2010; Carretti et al. 2013; Heywood et al. 2019). While the presence of these structures is well known in our own galaxy, Li et al. (2019) reported the first nonthermal hard X-ray detection of a Fermi bubble in an external galaxy, NGC 3079 (α = 150°.491, δ = +55°.680, D = 18.68 ± 1.32 Mpc, ≈5.4 kpc arcmin^−1; Springob et al.
2005), using Chandra observations. Further works in the X-ray and UV using XMM-Newton and the Galaxy Evolution Explorer (GALEX) revealed a 30 kpc long X-ray galactic wind cone in NGC 3079 (up to 60 kpc in the far-UV; Hodges-Kluck et al. 2020), potentially associated with material that has been shocked by Type II SNe. The length of the X-ray wind cone of NGC 3079 (R ∼ 3′, 16.3 kpc) contrasts with that of the bubble found by Li et al. (2019) using Chandra observations (R ∼ 0.75′, 4.1 kpc). Hodges-Kluck et al. (2020) argued that the sensitivity of the longest Chandra observations in the soft X-ray band (E < 1 keV) is affected by the molecular contaminant buildup on the detector window, and as a consequence, these Chandra/ACIS observations were only used for point-source identification on NGC 3079 and subsequent masking for XMM-Newton. Additionally, the available Chandra observations were much shallower (124.2 ks, with only 26.6 ks of usable exposure time due to contamination) than those of XMM (300.6 ks). Despite Figure 6 in Hodges-Kluck et al. (2020) showing signs of faint extended emission in the Chandra/ACIS data sets, the authors did not attempt to characterize it. Because ancillary X-ray observations from XMM-Newton are available for this object, NGC 3079 is an ideal case for benchmarking the low surface brightness recovery capabilities of the SAUNAS pipeline.

To detect the X-ray galactic wind in NGC 3079, the same bandpass (0.3-2.0 keV) as in Hodges-Kluck et al. (2020) is used. The available Chandra/ACIS observations of NGC 3079 are detailed in Table 2. Each visit was reprocessed with independent PSF deconvolution, and then the visits were combined for Voronoi binning. Observations 19307 and 20947 were processed but discarded due to the presence of very large-scale gradients and unusually high background levels in the detectors where the main emission from NGC 3079 is located. After processing the remaining observations (2038 and 7851) with SAUNAS, the extended emission observed by Chandra is compared to the results from XMM-Newton. The PSFs of the 2038 and 7851 observations and their unprocessed events are available in Figures 16 and 18 in Appendixes C and D, respectively. Following the results from Figure 2 in Hodges-Kluck et al. (2020), four angular cone regions display diffuse emission: northeast (θ = 40°), southeast (θ = 110°), southwest (θ = −140°), and northwest (θ = −60°), where θ is measured counterclockwise and north corresponds to 0° (see Figure 5). Mimicking the methodology in the original article, an amplitude of ±20° is set for all the cones around their central axes. Surface brightness profiles are generated from the reprocessed Chandra observations, providing a direct comparison with previous results.

The results show that the extended X-ray wind emission is detectable using Chandra observations (out to 4.1 kpc in the northeast filament) at a confidence level of 95% (2σ). The filament in the southwest of the galaxy is the shortest, at R ∼ 16-20 kpc. Interestingly, the XMM observations reveal a slightly larger extent of the X-ray emission on the west side (40 kpc) compared to the east side (30-35 kpc), according to Hodges-Kluck et al. (2020). The average limiting surface brightness (95% confidence level) is μ = 1.66 +0.5/−0.5 × 10^−10 s^−1 cm^−2 arcsec^−2. The limiting surface brightness reaches its lowest value when combining all the filaments, suggesting that the observations are limited by noise and not by systematic effects (if dominated by systematic gradients, a lower S/N would result from combining all the regions).

UGC 5101

UGC 5101 (z = 0.039, D = 161.8 Mpc, 0.784 kpc arcsec^−1; Rothberg & Joseph 2006) is an irregular galaxy that is undergoing a potential major merger. This object has previously been identified as a Seyfert 1.5 (Sanders et al. 1988), a low-ionization nuclear emission-line region galaxy (Veilleux et al. 1995), and a Seyfert 2 galaxy (Yuan et al. 2010). UGC 5101 has a very extended optical tidal tail (∼40 kpc) to the west of the nucleus, with a second semicircular tidal tail that surrounds the bright core of the galaxy with a radius of 17 kpc (Surace et al. 2000). Radio (Lonsdale et al. 2003), IR (Genzel et al. 1998; Soifer et al. 2000; Imanishi et al. 2001; Armus et al. 2007), and X-ray observations with Chandra and XMM-Newton (Ptak et al. 2003; González-Martín et al. 2009) suggest the presence of a heavily dust-obscured AGN in the nucleus of this galaxy.

The total exposure time and other information relevant to the Chandra/ACIS observations of UGC 5101 are provided in Table 2. The diffuse X-ray emission of UGC 5101 has been previously analyzed in the literature. Huo et al. (2004) found evidence for an inner hot gas halo of 8.7 kpc (10″.4) and an outer halo of 14.3 kpc (17″.0). Grimes et al.
(2005) found that 95% of the 0.3-1.0 keV emission is enclosed within the inner 8.75 kpc galactocentric radius (10″.5). Smith et al. (2018, 2019) analyzed the Chandra/ACIS observations, finding that the 0.3-1.0 keV emission has a size of 24″.0 × 14″.2 (∼19.1 × 11.3 kpc, position angle of 90°) and a total X-ray luminosity of log L_X = 41.6 erg s^−1. Given these known robust detections, we employ SAUNAS in the characterization of the low surface brightness emission from UGC 5101. Three bandpasses are used to ensure a direct comparison to the analyses by Smith et al. (2019): soft (0.3-1.0 keV), medium (1.0-2.0 keV), and hard (2.0-8.0 keV). The flux conservation ratio after PSF deconvolution in this exposure is 96.0% ± 0.02% in the three bands. The processed X-ray emission maps are presented in Figure 6, in comparison with the optical/near-IR observations from the Hubble Space Telescope (HST), as well as ancillary radio observations for reference. The PSFs and unprocessed events of the UGC 5101 observations in the three bands analyzed are available in Figures 17 and 19 in Appendixes C and D, respectively.

The results are summarized in Figure 6. The analysis of the Chandra/ACIS observations with SAUNAS reveals that even after PSF deconvolution, the soft X-ray emission of UGC 5101 still shows extended emission around its core. The 0.3-1.0 and 1.0-2.0 keV bands present X-ray emission with an elongated morphology, with a characteristic bright plumelike structure in the core, oriented in the north-south direction (μ_soft = 1-2 × 10^−8 s^−1 cm^−2 arcsec^−2), very similar to the results of Smith et al. (2018). In contrast, the hard band only shows a bright core in the center, compatible with an unresolved source. In the soft band, the diffuse X-ray emission is detectable down to the limiting levels shown in Figure 7. Both the soft- and medium-band emissions are centered on the main core of UGC 5101, showing the same orientation as observed by Smith et al. (2019). The soft-band emission extends up to 25″ (20 kpc) to the north and 17″ (13.5 kpc) to the south (3σ).

The spatial distribution of X-ray emission around UGC 5101 is generally comparable to that detected in previous works (Smith et al. 2019). However, at approximately 40″-60″ radius to the northeast (α, δ = 143°.980, +61°.363), the SAUNAS map reveals a diffuse bridge connecting with UGC 5101 at a ∼2σ level (μ_soft ∼ 6.2 × 10^−10 s^−1 cm^−2 arcsec^−2 in the soft band). For clarity, we will refer to this extended emission as X1. Figure 7 displays the surface brightness profile analysis and the associated comparisons with X1. The emission of X1 is detectable at a 3σ confidence level, with an angular area comparable to that of UGC 5101 but with a maximum surface brightness 20-30 times lower than the main object (see Figure 7). Figure 27 in Smith et al. (2019) shows a hint of what might be emission jutting to the northeast of UGC 5101 where we see X1, but at a considerably lower detectability. The X1 feature has been discussed previously in the literature not as part of the UGC 5101 system but rather as a potential higher-z galaxy cluster (Clerc et al. 2012; Koulouridis et al. 2021) in need of spectroscopic confirmation.

Observations from the Giant Metrewave Radio Telescope (GMRT) 150 MHz all-sky radio survey (Intema et al. 2017; see bottom left panel in Figure 6) confirm the detection of an adjacent source centered on the recovered X-ray emission, with a surface brightness of μ = 10^−4 Jy arcsec^−2. The GMRT flux maps are shown as contours in Figure 6, revealing a peak of radio emission over the center of X1 in addition to UGC 5101. GALEX UV observations provide a near-ultraviolet flux of 5.14 ± 0.15 × 10^−6 Jy (Seibert et al.
2012) but only upper limits in the far-ultraviolet band (9.8 × 10^−6 Jy). Recent JWST observations (GO 1717; PI: Vivian U.; MIRI) of UGC 5101 were inspected for this work, but they suffer from extreme saturation of the bright core of the galaxy, and the outer X-ray-emitting region lies outside the footprint, so they were discarded for this study. While investigating the nature of this extended X-ray emission is beyond the scope of this paper, which is focused on the presentation of the SAUNAS pipeline, we briefly discuss the main hypotheses (hot gas plume or high-z galaxy cluster) in Section 4.

Limitations

We have demonstrated the SAUNAS methodology to be successful in recovering dim, extended surface brightness X-ray features under low-S/N conditions through performance tests using both synthetic (Section 2.3) and real (Section 3) X-ray data sets. There are, however, several limitations of SAUNAS in its current form that will be addressed in future versions of the pipeline. Among them, SAUNAS does not attempt to provide a quantitative separation between extended sources, such as a segmentation map. Deblending of extended X-ray sources is one of the main objectives of a complementary code, EXSdetect (Liu et al. 2013), which uses a friends-of-friends algorithm. Other specialized pipelines for X-ray observations, such as CADET, based on machine-learning algorithms, allow for the identification of specific source morphologies, such as X-ray cavities (Plšek et al. 2024). The potential combination of SAUNAS, for generating low surface brightness detection maps, with existing morphological identification and segmentation software will be explored in the future.

Another limitation of the SAUNAS pipeline is the precision of the PSF. The generation of the Chandra/ACIS PSFs depends on multiple factors, including, but not limited to, the position of the source on the detector, the spectral energy distribution of the source, and the specific parameters fed into the MARX simulation software (such as the aspect blur). Furthermore, the LIRA deconvolution software only accepts one PSF for the whole image; as a consequence, the shapes of sources at large distances from the center of the image might be inaccurate. This phenomenon can cause residuals if observations present bright sources at large angular distances from the center of the source, since the deconvolution will be based on the PSF at the center of the observation but not at the location of the secondary contaminating source. As an attempt to quantify this effect, we estimate in Figure 8 the variation of the PSF size (R_90%, the radius that contains 90% of the flux of a point source) versus the angular separation from the source using CIAO psfsize_srcs, based on the Chandra/ACIS observations of UGC 5101. The results show that the PSF size increases by a factor of ∼2 at ∼2′ off-axis (∼10 at ∼10′). In our science cases, no bright object was observed in the environment of the main sources (NGC 3079, UGC 5101), so the main contributors to the scattered light are the sources for which the PSF was calculated. However, observers must be wary of strong residual PSF wings from nearby sources at separations of ∼2′ and beyond. While a complete analysis of the uncertainties of the PSF in Chandra is outside the scope of the current paper, we refer to the Appendix in Ma et al. (2023) for a review of the field.
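As a rough rule of thumb, the two growth factors quoted above can be interpolated to estimate the PSF degradation at intermediate off-axis angles. This is an illustrative approximation only: the anchor values come from the psfsize_srcs measurement described above for this particular data set, not from a general Chandra calibration.

```python
import numpy as np

# R90 growth relative to the on-axis PSF, anchored to the scalings above:
# ~x2 at ~2 arcmin off-axis and ~x10 at ~10 arcmin.
theta_arcmin = np.array([0.0, 2.0, 10.0])
r90_factor = np.array([1.0, 2.0, 10.0])
print(np.interp(5.0, theta_arcmin, r90_factor))  # ~5x at 5 arcmin (linear interpolation)
```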
NGC 3079

The analysis of the Chandra/ACIS observations in the field of NGC 3079 revealed signs of an X-ray wind out to galactocentric distances of R ∼ 30 kpc, compatible with previous observations using XMM-Newton (Hodges-Kluck et al. 2020). While XMM-Newton is able to trace the extended X-ray emission out to larger distances (∼40 kpc) in some directions, some considerations must be made in order to compare the XMM-Newton results with the benchmark study provided here. 1. The XMM-Newton observations of NGC 3079 combine an ∼11 times longer exposure time (300.6 ks) than the usable time in the Chandra/ACIS observations (26.6 ks). 2. XMM-Newton has a larger effective area (4650 cm^2 at 1 keV) than Chandra (555 cm^2), at the expense of a lower spatial resolution (XMM-Newton/FWHM = 6″ versus Chandra/FWHM = 0″.2). While the aperture is smaller, the proper masking of point sources improves the detectability of dim structures by reducing the background noise. 3. The analysis of the X-ray emission by Hodges-Kluck et al. (2020) is based on the inspection of quadrant-stacked images with a certain signal and radial threshold (see their Figure 4, center panel). The methodology they use to calculate the limiting radius of the diffuse X-ray emission is not clearly stated in their analysis, making a direct and accurate comparison of results difficult. Despite the differences between the detection methods, we conclude that SAUNAS is able to recover extended, low surface brightness X-ray emission using Chandra/ACIS X-ray observations of NGC 3079, in excellent agreement with the deeper exposure taken by XMM-Newton.

UGC 5101

SAUNAS reveals an extended, low surface brightness feature in the 0.3-1.0 keV band located in the northeast of the UGC 5101 merging galaxy. X1 has been previously detected in X-rays by Smith et al. (2019), but its emission was not discussed or treated as part of UGC 5101ʼs outskirts. Other works (Clerc et al. 2012; Koulouridis et al. 2021) tentatively classified X1 as a potential background galaxy cluster, but this feature remains unconfirmed, as spectroscopic observations are unavailable. X1 is also detected in the GMRT 150 MHz observations as a secondary source adjacent to UGC 5101, confirming the existence of a feature at this location. The two main hypotheses regarding the nature of X1 are that 1. X1 is part of the extended X-ray-emitting envelope of UGC 5101, and 2. X1 is a background source, potentially the extended envelope of a higher-z object, such as a massive early-type galaxy or a cluster.

Although the X-ray emission in the soft and medium bands of X1 is adjacent to that of UGC 5101, and both objects have dominant emission in the soft band compared to the medium and hard bands (see Figures 6 and 7), the emission could still be part of a hot gas halo at higher z. In fact, the center of the Chandra/ACIS X-ray emission overlaps remarkably well with that of a background galaxy. Figure 9 shows the HST Advanced Camera for Surveys (ACS) imaging (F435W and F814W bands) centered on X1, with the soft-band X-ray emission contours overlaid for reference. The peak of X-ray emission is coincident with the position of a background galaxy (WISE J093555.43+612148.0). Unfortunately, WISE J093555.43+612148.0 does not have spectroscopic or photometric redshifts available.
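Returning briefly to the NGC 3079 comparison: points 1 and 2 above can be folded into a single rough figure of merit, the product of exposure time and effective area. The back-of-the-envelope arithmetic below uses only the values quoted above.

```python
# Photon-collecting power = exposure time (s) x effective area at 1 keV (cm^2)
xmm = 300.6e3 * 4650      # ~1.4e9 s cm^2
chandra = 26.6e3 * 555    # ~1.5e7 s cm^2
print(xmm / chandra)      # ~95: XMM-Newton collected ~2 dex more 1 keV photons
```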
While resolving the nature of X1 is beyond the scope of this paper, we conclude that the test performed with the Chandra/ACIS observations of UGC 5101 using SAUNAS demonstrates the pipeline's capability to produce adaptively smoothed, PSF-deconvolved X-ray images in different bands. The image reduction process presented here allows for a better calibration of the background, recovering details at both high resolution and high surface brightness (the inner core structure of the merging galaxy) as well as extended, ultralow surface brightness regions, such as the previously unknown extended emission around UGC 5101.

Conclusions

In this paper, we have presented SAUNAS, a pipeline to detect extended, low surface brightness structures in Chandra X-ray observations. SAUNAS automatically queries the Chandra archive, reduces the observations through the CIAO pipeline, generates PSF models, deconvolves the images, identifies and masks point sources, and generates adaptively smoothed surface brightness and detection S/N maps for the sources in the final mosaics. We have demonstrated through tests on simulated data and comparisons to published results that the SAUNAS pipeline distinguishes itself from other existing X-ray pipelines by meeting the following main objectives: 1. generating X-ray detection maps for extended sources in a consistent, statistically reproducible way and 2. providing a modular framework for the reduction of Chandra/ACIS observations focused on the detection of faint extended sources, simplifying access to X-ray archival observations for multiwavelength studies.

Our approach to meeting these objectives is to assess the statistical probability that the signal in low-count areas is real. This strategy can both produce detections of previously overlooked diffuse emission and minimize false-positive detections of extended hot gas emission. In Section 3, we compare SAUNAS-processed archival Chandra/ACIS data to published results. This section demonstrates that the proposed methodology succeeds in recovering the extended emission detected in a selection of local Universe targets. While the CIAO pipeline provides a canonical and highly efficient procedure to reduce Chandra observations, the secondary analysis of the resulting event files is usually performed independently by the observers. Such a situation results in two suboptimal consequences: (1) most X-ray studies are focused on single objects or very small samples (three or four objects), and (2) most studies develop their own procedures to correct the PSF effects (if considered at all), generate smoothed maps, and determine the significance of emission over the background. Planned future work includes an analysis of the extended emission of nearby galaxies using Chandra/ACIS archival data and releasing the tools to the astronomical community. In this first article, we have made the processed maps available to the community through the Zenodo open repository.

A benefit of the automated functionality provided by this tool is that it offers straightforward access to high-level archival Chandra products and facilitates their use in multiwavelength studies. In future works of this series (A. S. Borlaff et al.
2024, in preparation), we will explore the X-ray emission of a sample of targets using the SAUNAS pipeline, focusing on the evolution of lenticular galaxies based on Chandra/ACIS data in combination with HST and Spitzer observations. The serendipitous discovery presented in this work in one of the galaxies studied, UGC 5101, an ongoing merger, demonstrates that the combination of multiwavelength legacy archives, such as those of Chandra, GMRT, and HST, may already hold the information needed to disentangle the impact of the different evolutionary drivers in galaxies.

Appendix A demonstrated arestoreʼs PSF oversubtraction for point sources. Given that the main aim of SAUNAS is the detection of extended sources, we extend the analysis from Appendix A to SAUNAS processing of an extended source model. Figure 11 shows the result of this analysis. A simulated source with a central surface brightness of μ = 10^−3 s^−1 pixel^−1 and a background level of μ = 10^−7 s^−1 pixel^−1 is convolved with the same PSF used in the point-source tests described in Appendix A. The resulting event file of convolved data is then processed by SAUNAS and deconvolved by a standard application of CIAO/arestore. A comparison of the associated surface brightness profiles provides both quantitative and qualitative assessments of the different light reconstruction methods.

The top right panel of Figure 11 shows that the methodology adopted in SAUNAS produces a result that is more closely aligned with our science-driven requirements. Proper treatment of the fainter regions surrounding objects is a critical factor in the detection of faint extended emission, such as hot gas X-ray halos around galaxies. While SAUNAS produces a well-behaved profile that smoothly transitions to the background level at large radii, CIAO/arestore produces an oversubtracted background region surrounding the object, similar to its treatment of point sources (Appendix A).
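The profile comparisons in Figures 10 and 11 rest on a standard azimuthal averaging step. A minimal sketch of such a radial profile extraction follows; this is a generic illustration with hypothetical names, not the SAUNAS implementation (bins containing no pixels yield NaN).

```python
import numpy as np

def radial_profile(image, center, n_bins=30):
    """Azimuthally averaged surface brightness profile around `center` (x, y)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), edges) - 1   # bin index of each pixel
    values = image.ravel()
    profile = np.array([values[which == i].mean() for i in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), profile  # bin centers, mean brightness
```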
Figures 12 and 13 show the results of the false-positive/false-negative quality test described in Section 2.3.1 for the double-jet model. In Figures 14 and 15, the equivalent results are shown for the cavity model. Each row represents a different equivalent exposure time, from t = 10^7 to t = 5 × 10^4 s cm^2. We refer to the captions of the figures for details.

Figure 1. SAUNAS pipeline flowchart. From left to right: SAUNAS precalibrates the Chandra observations by first using Chandra X-ray Center (CXC)/CIAO, which generates the event files, extended source masks, and PSFs. The events in each individual visit are first resampled via bootstrapping and then deconvolved using LIRA. Voronoi binning is applied to each deconvolved observation and merged into a single flux map after sky background correction.

Figure 2. SAUNAS analysis test on a synthetic data set. Top left: underlying distribution of the simulated test source. Top right: PSF of the simulated observation. Center left: simulated underlying distribution of the test source convolved by the PSF. Center right: simulated observed events based on the PSF-convolved distribution. Bottom left: LIRA PSF-deconvolved average posterior image. Bottom right: adaptively smoothed final mosaic. Dashed contours represent the 3σ and dotted contours the 2σ detection level of X-ray emission. The equivalent exposure time for this test is t = 5 × 10^6 s cm^2.

Figure 3.
Fraction of false positives and false negatives in the SAUNAS detection maps derived from the two truth models as a function of the equivalent exposure time (cm^2 s). Blue symbols and lines represent the fraction of false negatives, while red represents the fraction of false-positive detections in the mock maps. Cross symbols correspond to the double-jet model, and filled circles represent the cavity model (see Table 1). Vertical dashed lines indicate the median equivalent exposure times for the analyzed real observations in their respective bands.

Figure 4. Flux conservation in SAUNAS frames. The histogram represents the probability distribution of the ratio between the recovered flux after SAUNAS processing and the total flux of the input, preprocessed frames.

Figure 5. Extended X-ray wind cones in NGC 3079, recovered in the Chandra/ACIS observations using SAUNAS. (a) Broadband (Chandra: 0.3-2.0 keV) surface brightness profiles of the four filaments identified by Hodges-Kluck et al. (2020) using XMM-Newton and GALEX observations. Top to bottom: all filaments, northeast, southeast, southwest, and northwest. Radial detection limits are given in the panels (95% confidence level). (b) SAUNAS-processed image showing 2σ contours (black; shown in white in panel (c)) with filament sectors in yellow. The radial detection limit indicated in panel (a) for each of the four filaments is shown as solid yellow sectors, while that of "all filaments" is shown as dashed yellow, following the methodology found in Hodges-Kluck et al. (2020). The thick dark red circle in (b) shows the maximum detection limit found with XMM-Newton, compatible with our results. (c) Comparison of the optical morphology (Pan-STARRS gri) of NGC 3079 with the extended X-ray emission.
Figure 6. Diffuse X-ray emission of UGC 5101 as detected with SAUNAS/Chandra in the 0.3-1.0 keV band (top), 1.0-2.0 keV band (center), and 2.0-8.0 keV band (bottom). Left: HST/ACS color image (red: F814W; green: F435W+F814W; blue: F435W). Right: SAUNAS map of the diffuse X-ray emission, corrected for PSF effects, point sources, and background. Solid contours represent 3σ detections and dotted contours the 2σ detection level of X-ray emission, represented in white (left panels) and black (right panels) for contrast. Solid red contours show GMRT 150 MHz data. The white dashed ellipse represents the previous detection limits of UGC 5101 reported by Smith et al. (2019) in the same band.

Figure 7. Surface brightness profiles of the diffuse X-ray emission of UGC 5101 and the extended diffuse northeast source (X1) detected with SAUNAS/Chandra in the 0.3-1.0 and 1.0-2.0 keV bands. Radially averaged surface brightness profiles (blue upward triangles: 0.3-1.0 keV band; purple downward triangles: 1.0-2.0 keV band). Shaded areas represent the 1σ and 2σ error bars. Solid blue and dashed purple vertical lines represent the 2σ detection limits for the 0.3-1.0 keV and 1.0-2.0 keV bands. Blue and purple stars show the average surface brightness of the northeast extended emission X1, represented at the measured galactocentric distance from UGC 5101.

Figure 8. Variation of the Chandra/ACIS PSF size as a function of the angular separation from the center of the FOV. Vertical axis: radius enclosing 90% of the flux of the PSF at 1.0 keV, based on the observations of UGC 5101. Horizontal axis: angular separation from the center of the source, approximately the optical axis. The horizontal dotted lines mark the PSF sizes that correspond to ×2, ×5, ×10, and ×20 the PSF size at its center (×1).

Figure 11. Extended source PSF deconvolution test. Top left panel: emission from a circular source object with a central surface brightness of μ = 10^−3 s^−1 pixel^−1 and a background level of μ = 10^−7 s^−1 pixel^−1, convolved with a reference Chandra/ACIS PSF (NGC 3862, α = 176°.2709, δ = +19°.6063; ObsID: 514). Top right panel: surface brightness profiles of the ground-truth (non-PSF-convolved) test source (red solid line), PSF-convolved source (gray dashed-dotted line), SAUNAS-deconvolved image (blue dashed line), and CIAO/arestore-deconvolved image (orange dotted line). The horizontal dotted line represents the sky background of the model, and the vertical dotted line represents the radial limit of the circular test source (R = 15 pixels). Bottom left panel: SAUNAS-deconvolved image. Bottom right panel: CIAO/arestore-deconvolved image. Note that the convolved image (events map) and the CIAO/arestore-deconvolved image have been processed using Voronoi binning for visualization of the surface brightness. See the legend and color bar in the figure.

Figure 12. SAUNAS processing test using the double-jet model as a function of the equivalent exposure time (see Section 2.3.1 and Table 1). The top row shows the simulated event images. The middle row shows the final recovered surface brightness maps after processing with SAUNAS. Dashed contours represent the 3σ and dotted contours the 2σ detection level of X-ray emission. The bottom row represents the false-positive (red) and false-negative (blue) detection maps for each simulation (see Section 2.3.1). In the columns from left to right, the equivalent exposure times are t = 10^7, 5 × 10^6, and 10^6 s cm^2.
Figure 13. (Continuation of Figure 12.) SAUNAS processing test using the double-jet model as a function of the equivalent exposure time (see Section 2.3.1 and Table 1). The top row shows the simulated event images. The middle row shows the final recovered surface brightness maps after processing with SAUNAS. Dashed contours represent the 3σ and dotted contours the 2σ detection level of X-ray emission. The bottom row represents the false-positive (red) and false-negative (blue) detection maps for each simulation (see Section 2.3.1). In the columns from left to right, the equivalent exposure times are t = 5 × 10^5, 10^5, and 5 × 10^4 s cm^2.

Figure 14. SAUNAS processing test using the cavity model as a function of the equivalent exposure time (see Section 2.3.1 and Table 1). The top row shows the simulated event images. The middle row shows the final recovered surface brightness maps after processing with SAUNAS. Dashed contours represent the 3σ and dotted contours the 2σ detection level of X-ray emission. The bottom row represents the false-positive (red) and false-negative (blue) detection maps for each simulation (see Section 2.3.1). In the columns from left to right, the equivalent exposure times are t = 10^7, 5 × 10^6, and 10^6 s cm^2.

Figure 19. Event maps of the UGC 5101 observations before processing with SAUNAS. Left to right: UGC 5101 in the 0.3-1.0 keV band, the 1.0-2.0 keV band, and the 2.0-8.0 keV band. The binning (pixel scale) for the UGC 5101 images is 1 × 1 (0″.492 pixel^−1). Solid black contours represent the 3σ and dashed contours the 2σ detection level of X-ray emission.

Table 1. Photometric and Structural Properties of the Synthetic Test Models. Note. Columns: (1) name, (2) component, (3) size, (4) surface brightness, (5) eccentricity, (6) position angle.
Aphanius arakensis, a new species of tooth-carp (Actinopterygii, Cyprinodontidae) from the endorheic Namak Lake basin in Iran Abstract A new species of tooth-carp, Aphanius arakensis sp. n., is described from the Namak Lake basin in Iran. The new species is distinguished by the congeners distributed in Iran by the following combination of characters: 10–12 anal fin rays, 28–32 lateral line scales, 10–13 caudal peduncle scales, 8–10 gill rakers, 12–19, commonly 15–16, clearly defined flank bars in males, a more prominent pigmentation along the flank added by relatively big blotches in the middle and posterior flank segments in females, a short but high antirostrum of the otolith that has a wide excisura, and a ventral rim with some small, drop-like processes, and 19 molecular apomorphies (17 transitions, two transversions) in the cytochrome b gene. It was suggested based on the phylogenetic analysis that the new species is sister to Aphanius sophiae from the Kor River and that Aphanius farsicus from the Maharlu Lake basin is sister to Aphanius arakensis plus Aphanius sophiae. A noticeable feature of the Aphanius diversity in Iran is the conservatism of the external morphology as well as morphometric and meristic characters, while distinctive differences are present in genetic characters, otolith morphology, and male color pattern. Transformation of the latter was probably driven by sexual selection. Introduction Aphanius is the only representative of the Cyprinodontidae (Teleostei, Cyprinodontiformes) in Eurasia. The genus occurs in coastal (brackish) and landlocked (freshwater to saline) water bodies in the Mediterranean and Persian Gulf basins from Iberian Peninsula as far eastwards as Iran and Pakistan (Wildekamp 1993). Aphanius species diversity is highest in the endorheic basins of the mountainous regions of central Anatolia and the Iranian plateau (Coad 2000;Hrbek and Meyer 2003, Hrbek et al. 2006, Esmaeili et al. 2012. Though central Anatolia is believed to represent the center of Aphanius speciation (Wildekamp et al. 1999), a high number of Aphanius species also occurs in Iran. Apart from the widely distributed A. dispar (Rüppell, 1829), seven endemic Aphanius species have been described from Iran to date, namely A. ginaonis (Holly, 1929) from the Genow hot spring near the Persian Gulf; A. isfahanensis Hrbek, Keivany & Coad, 2006 from the endorheic Esfahan basin; A. farsicus Teimori, Esmaeili and Reichenbacher, 2011 from the endorheic Maharlu Lake basin [A. farsicus is a replacement name for the previous A. persicus (Jenkins, 1910) because this name has been recognized as a homonym of the fossil A. persicus (Priem, 1908) (Gaudant 2011, Teimori et al. 2011; A. sophiae (Heckel, 1849) from the endorheic Kor River Basin; A. vladykovi Coad, 1988 from the upper reaches of the Karoun basin; A. mesopotamicus Coad, 2009 from the Tigris-Euphrates drainage; and the recently re-established A. pluristriatus (Jenkins, 1910) from the Mond River drainage. In addition to the species listed above, Lebias punctatus and Lebias crystallodon were originally described from the Nemek Deria near Shiraz by Heckel (1846Heckel ( -1849. Berg (1949) and Coad (1996) considered L. punctatus to be a synonym of A. sophiae but at that time most of now valid species distributed in Iran were thought to be synonyms of the widely distributed A. sophiae. Coad (1996) strongly suggested that the type locality of L. 
punctatus is not the Lake Maharlu but some other lake nearby as a name Nemek Deria is a very common name in Farsi for a salt lake. However, later, the Kotschy's itinerary in southern Iran in 1841 and 1842 was studied in detail based on botanical labels and it was clearly shown that collections by Kotschy studied by Heckel indeed came from a lake now called Maharlu (Edmondson and Lack 2006). This aspect is not in the focus of this very paper; we tentatively consider L. punctatus to be a synonym of A. sophiae until a proper examination of the extant syntypes of Lebias punctatus is done. A number of isolated Aphanius populations that might deserve species status have been reported from endorheic drainages in Iran, but have not yet been investigated in detail (Coad and Abdoli 2000;Hrbek et al. 2006;Esmaeili et al. 2010). They were commonly identified as A. sophiae (Heckel, 1849) (Coad and Abdoli 2000;Kamal et al. 2009); however, it was shown that the true A. sophiae is restricted to the endorheic Kor River basin near Shiraz (Fars Province) (Coad 2009;Esmaeili et al. 2012). This study describes a newly discovered Aphanius population from the Namak Lake basin in northern central Iran (Fig. 1). The specimens were collected in 2007 because they appeared to be different from other Iranian Aphanius species by a specific coloration. Here it is shown that the population from the Namak Lake basin in fact represents a new species, Aphanius arakensis. Our study is based on a total-evidence approach including morphometric and meristic characters, otolith morphology, and molecular data. Material and methods Institutional acronyms: ZM-CBSU, Zoological Museum of Shiraz University, Collection of Biology Department; ZSM, Zoological State Collection, Munich. Morphological analysis Based on the morphometric schemes introduced in Holcik et al. (1989) and Doadrio et al. (2002), 18 morphometric parameters were measured using a Vernier calliper and recorded to the nearest 0.5 mm. The standard length was measured from the most anterior part of the snout to the base of the caudal fin rays. In total, 21 relative variables were calculated from the measurements (Table 1). Scales removed from the left side of each fish, from the 3rd or 4th row below the dorsal fin, were mounted between microscope slides, and length and width of scales were measured to the nearest 0.1 mm by using a scale reader (Xerox 320). For each individual, scale length and scale width measurements were averaged to obtain a single length value and a single width value per individual and relative width and length of scales were calculated following Esmaeili (2001). The meristic characters were counted under a stereomicroscope and consist of the numbers of (i) dorsal (ii) pectoral (iii) pelvic and (vi) anal fin rays, (v) lateral line series scales, (vi) caudal peduncle scales (the numbers of scales along the caudal peduncle, i.e. from the base of the last anal fin ray to the base of the caudal fin rays in a direct line), (vii) gill rakers and (viii) flank bars of males. Two posteriormost rays in dorsal and anal fins were calculated as one ray. For examination of otolith morphology fish skulls were opened ventrally in order to remove the right and left otoliths. Otoliths were cleaned from tissue remains in 1% potassium hydroxide solution for 3-6 h, washed several times and finally rinsed in distilled water for 12 h. Otolith morphology was analyzed under a stereo microscope. 
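Looking ahead to the statistical analyses described in the next paragraphs (ANOVA and canonical discriminant analysis), the workflow is straightforward to reproduce. Below is a minimal, hypothetical Python sketch: X and y are placeholders for the matrix of relative morphometric variables and the species labels, scipy's one-way ANOVA stands in for the ANOVA with Duncan's post hoc test, and scikit-learn's linear discriminant analysis is used as a close stand-in for CDA.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: n_fish x n_variables matrix of relative morphometric variables,
# y: species labels; both are hypothetical placeholders.
def anova_pvalues(X, y):
    groups = [X[y == g] for g in np.unique(y)]
    # One-way ANOVA per character (Duncan's post hoc test not included here)
    return [f_oneway(*(g[:, j] for g in groups)).pvalue for j in range(X.shape[1])]

def classification_success(X, y):
    # LDA as a stand-in for canonical discriminant analysis (CDA)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    return lda.score(X, y)  # fraction of correctly classified individuals
```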
In addition, five or six otoliths from each population were examined with a scanning electron microscope (SEM; LEO 1430 VP) at ZSM. Univariate analysis of variance (ANOVA, with Duncan's post hoc test, p < 0.05) was used to test the significance of phenotypic differences among species and also between sexes. Canonical discriminant analysis (CDA) was used for the multivariate analyses in order to document the classification success of the groups. The statistical analyses were carried out using PASW 19.00 (SPSS Inc 2011) and PAST (Hammer et al. 2001: PAlaeontological STatistics, version 1.81). Laboratory protocols and molecular analyses Total genomic DNA was extracted according to phenol/chloroform procedures (Sambrook et al. 1989). A 900 base pair (bp) fragment of the cytochrome b gene was successfully amplified via PCR using the primers (forward: Glu-F, 5'-AACCACCGTTGTATTCAACTACAA-3'; reverse: ThrR, 5'-CCTCCGATCTTCGGATTACAAGACCG-3') (Machordom and Doadrio 2001). Amplification was performed in a thermal cycler programmed as follows: an initial 94°C for 3 min; 35 cycles of 94°C for 50 s, 56°C for 45 s, and 72°C for 1 min; followed by a final extension at 72°C for 5 min. Sequencing was performed by the Macrogen company, South Korea. Cytochrome b nucleotide sequences were edited with BioEdit and aligned with Geneious pro v5.4 (Drummond et al. 2011). Additional Aphanius sequences were obtained from the NCBI GenBank (http://www.ncbi.nlm.nih.gov) and included in the analyses (see above). The cytb sequences obtained for the Aphanius populations studied here were deposited in GenBank under accession numbers JX154880-JX154898. Maximum likelihood-based phylogenetic relationships were estimated using the program SeaView version 4 (Gouy et al. 2010). The best-fit model of nucleotide substitution was obtained using the program jModelTest 0.1.1 (Posada 2008). Accordingly, the GTR + I + G model (= General Time Reversible model + proportion of Invariable sites + Gamma-shaped distribution of rates across sites) was chosen. Maximum parsimony-based phylogenetic relationships were estimated using the program SeaView version 4 (Gouy et al. 2010) with 100 heuristic searches using random additions of sequences and implementing the Close-Neighbor-Interchange (CNI) on random tree algorithm. To test this phylogeny, a bootstrap analysis with 2000 replicates was used. To document the degree of homoplasy and the degree to which potential synapomorphies are exhibited on the tree, the Consistency Index (CI) and the Retention Index (RI) were calculated using the parsimony model within the Mesquite system for phylogenetic computing (Maddison and Maddison 2011). The Neighbor Joining (NJ) distance-based phylogenetic relationships were estimated using the computer program Geneious pro v5.4 (Drummond et al. 2011). The HKY85 model (Hasegawa et al. 1985) of molecular evolution was used with gamma-distributed among-site rate variation. There were a total of 771 positions in the final dataset. Results Aphanius arakensis sp. n. urn:lsid:zoobank.org:act:D9995F4C-AF0A-4791-9D80-D759EFEDA569 http://species-id.net/wiki/Aphanius_arakensis Figure 2A Diagnosis.
The new species is distinguished by the congeners distributed in Iran by the following combination of characters: 10-12 anal fin rays, 28-32 lateral line scales, 10-13 caudal peduncle scales, 8-10 gill rakers, 12-19, commonly 11-13, clearly defined flank bars in males, a more prominent pigmentation along the flank added by relatively big blotches in the middle and posterior flank segments in females, a short but high antirostrum of the otolith that has a wide excisura, and a ventral rim with some small, drop-like processes and 19 molecular apomorphies (17 transitions, two transversions) in the cytochrome b gene. Description of the holotype. The males of the new species reach approximately 32 mm SL and have 12-19 flank bars, the females are usually larger than the males and reach approximately 34 mm SL. The morphometric characters are summarized in Table 1. Compared to the other examined Aphanius species, A. arakensis sp. n. shows higher mean values of the minimum body depth, width and length of scales, distances between the pectoral and pelvic fins and the interorbital distance, but significantly lower mean values for the eye diameter and the caudal peduncle length (differences are statistically significant, p < 0.05). The meristic characters are summarized in Table 2. The dorsal fin is characterized by a somewhat curved superior border, and has 11-14 rays; the anal fin shows a round superior border and includes 10-12 rays; the pectoral fin is rounded and consists of 14-18 rays; the pelvic fin is relatively short, positioned just anteriorly to the anal fin and comprises 6-8 rays. The caudal fin is rounded; the caudal peduncle possesses 10-13 scales. The number of lateral line series scales is 27-32. However, the ANOVA analysis reveals that only the numbers of lateral line series scales and caudal peduncle scales (in males and females), as well as the numbers of flank bars (in males), significantly differ from the values obtained for the other examined species. Moreover, there is a significant correlation between SL and numbers of flank bars (Pearson Correlation r = 0.455, p < 0.05*). The otolith is rounded-trapezoid and characterized by a very wide excisura, a medium-sized and pointed rostrum, and a quite short antirostrum. The ventral and dorsal rims are slightly curved; the ventral rim may bear small irregular processes; the dorsal rim may show a fine crenulation; the posterior rim is steep (Fig. 3W-Aa). The flank bars in males (Fig. 2a) are narrow and the interspaces are broader than the bars. The first bar is located above the operculum, while the posteriormost bar is located at the base of the caudal fin; the interspaces are wider at the caudal peduncle than in the anterior body part. Dorsally, the head is gray and the body is dark due to a strong melanophore pigmentation. The ventral body portion does not usually show any dark pigmentation. The dorsal, anal and caudal fins have white margins; the first rays of the dorsal fin are dark. The pectoral fins are somewhat yellowish. The pelvic fin is yellowish. Most specimens are characterized by dark blotches at the base of the dorsal and anal fins. Females (Fig. 2b) are characterized by a grayish pigmentation of the back. The lateral flanks of the body are covered by dark pigmentations; series of blotches are present from the middle of the body to the caudal peduncle. The ventral part of the head and belly are light. The chin and sides of the head are speckled with melanophores. 
Below the eye there is a line of relatively dark melanophores. All fins are white. Distribution and habitat. The species has been collected from a small natural shallow pond (Fig. 4) in the Namak Lake basin, 5 km south east of the city of Arak (Fig. 1). This pond, which is about 6 x 4 m in size, is fed by the drainage of a nearby natural spring. During sampling, the water body was almost stagnant and water temperature was 23°C. There was no vegetation in the pond, but the surrounding area was covered with Juncus sp. and Typha sp. The bottom of the pond was generally muddy with small gravels. The habitat was in a bad condition due to anthropogenic pollution. Around collection time, the new Aphanius species was the only fish observed living in the pond. In addition, the new species can be found in several springs located in close proximity to the type locality (Fig. 5). Etymology. The species name refers to the city of Arak, which is located in close proximity to the type locality. Arak is the capital of the Markazi province in northcentral Iran. A proposed common name is Arak tooth-carp. Farsi name is Kapour-edandandar-e-Arak. . Natural shallow pond and type locality of Aphanius arakensis sp. n., in the Namak Lake Basin, 5 km SE of Arak city, Iran (see Fig. 1). Phylogenetic relationships The parameters for the maximum likelihood are ln(L) = -85.11.91237, gamma shape parameter of 1.000, proportion of invariant sites of 0.097 and parsimony = 1556. The maximum parsimony phylogeny has a CI of 0.462 and RI of 0.747. The initial tree for the maximum likelihood analysis was obtained by the BIONJ algorithm. The trees of the maximum likelihood and maximum parsimony phylogenies (Fig. 6) are not significantly different in topology (Templeton test, P > 0.05). They support the hypothesis that Aphanius arakensis diverged from the clade leading to the present-day A. sophiae and is sister to this species. Moreover, A. farsicus is sister to A. arakensis + A. sophiae; sister to these taxa is A. isfahanensis, and sister to all previously mentioned species is A. vladykovi. The same topology (Templeton test, P > 0.05) is observed for the tree of the Neighbor Joining (NJ) distance-based analysis. Table 4 shows the estimation of evolutionary divergence between the sequences of the new species and its relatives. Probable reasons for morphological similarities between endemic Aphanius species Several endemic Aphanius species are known that are soundly circumscribed by genetic differentiation and specific otolith morphology (see below), whereas they differ only weakly (or only in multivariate space) with regard to morphometry and meristics. Examples are A. isfahanensis from central Iran, A. sophiae and A. farsicus from southern Iran (Hrbek et al. 2006, this study); another example from the Mediterranean area is A. baeticus from Spain (Doadrio et al. 2002). Aphanius arakensis sp. n., from the Namak Lake basin represents yet another example for a species that is difficult to distinguish from its relatives based on external characters (with the exception of the features mentioned above). It is likely that the overall morphological similarity between these taxa are a result of the similar habitats, in which the various endemic Aphanius species are thriving. Thus, common environmental variables may have acted as a stabilizing selection on morphological characters (see also Hrbek et al. 2006). 
This offers an explanation as to why speciation events in Aphanius have affected genetic characters, rather than morphology, and why rapid genetic diversification can occur with little morphological change in this taxon (see also Adams et al. 2009). Probable reasons for otolith differences between endemic Aphanius species Otolith morphology is known to support the distinctive taxonomic state of several Aphanius species (Reichenbacher et al. 2007(Reichenbacher et al. , 2009a and A. sophiae (Fig. 3) to show that these species are clearly different with regard to otolith morphology. Also A. arakensis shows clear divergence of its otolith morphology in comparison to the other inland Aphanius species, in particular with regard to the weakly pronounced antirostrum (Fig. 3W-Aa). Notably, the otoliths of A. vladykovi are most distinctive in comparison to those of the other studied species as they are characterized by a long ventral part, angular overall shape and long rostrum (Fig. 3R-V). This uniqueness of the A. vladykovi otoliths corresponds well to our and previous phylogenetic analyses, which have established A. vladykovi as being sister to all other Iranian inland species that diverged approximately 10 Ma ago (Hrbek et al. 2003). As a result, Aphanius likely has a higher rate of divergence in otolith morphology than in overall morphology. This difference in divergence rate may be related to the function of the otoliths as parts of the inner ear. In general, otoliths provide a mechanism for measuring motion and position of the head relative to gravity (Manley et al. 2004). However, it is quite important for a fish to know from where a sound is coming, so as to be able to distinguish between different sounds and pick out the biologically most relevant sounds (Popper et al. 2005). In addition, differences in otolith morphology are related to the balance and orientation of a fish (Popper et al. 2005). This means that differences in otolith morphology can reflect changes in intraspecific communication and behavior in fishes, that may have acted as evolutionary pressures. Role of coloration pattern (flank bar numbers) in Aphanius diversification Coloration and flank bar numbers are significant characters for the identification of Aphanius species, in particular for the identification of male individuals. Among the allopatric Iranian Aphanius species, males of A. arakensis have the largest number of flank bars, and flank bars are non-overlapping, whereas the number of flank bars is lowest in A. sophiae. Also the flank bars of the central Anatolian Aphanius species vary in thickness and number between species (Hrbek et al. 2002). However, the mechanisms underlying male flank bar variation have not been studied. We hypothesize that flank bar patterns play an important role in sexual selection, and thus represent important factors in the evolutionary history and speciation of Aphanius. Sexual selection has long been believed to promote species divergence among groups of animals (see Kraaijeveld et al. 2010 for a review). Sexual selection may facilitate speciation because it can cause rapid evolutionary diversification of male mating signals and female preferences (Boughman 2001). Divergence in these traits may then contribute to reproductive isolation. Several studies indicate that fishes can adapt to variation in underwater light environments by changing their colour, most likely as a result of a more effective intraspecific communication (Boughman 2001(Boughman , 2002Fuller 2002;Seehausen et al. 
Additional support for this interpretation is provided by studies on cichlids from Lake Victoria (Seehausen et al. 2008) and African elephant fishes (Leal and Losos 2010). These studies indicate that variation in male nuptial coloration due to specific light conditions in different environments can result in ecological, phenotypic, genetic and behavioral differentiation. Additionally, color contrast with the visual background was found to be more important for effective intraspecific communication than color brightness (Fuller 2002). Thus, our conclusion is that the specific male flank bar patterns in different Aphanius species may have evolved as a response to the different light regimes prevalent in their respective habitats, increasing contrast and optimizing intraspecific communication. It can therefore be suggested that sensory-driven speciation might have played a prominent role in Aphanius speciation. Conclusion The noticeable features of the present-day diversity of the endemic Aphanius species in Iran include high genetic divergence and clear differences in otolith morphology, but only weak differences in general external morphology, morphometry and meristics. These patterns are probably caused by different rates of evolution in these characters, which may be linked to the similarity of the individual environments, intraspecific communication, and vicariance events. It is likely that additional Aphanius species are present in remote areas of Iran, especially in the Zagros and Alburz Mountains.
4,825.8
2012-08-17T00:00:00.000
[ "Biology" ]
Effect of Angiotensin II on Bone Erosion and Systemic Bone Loss in Mice with Tumor Necrosis Factor-Mediated Arthritis Angiotensin II (Ang II) is the main effector peptide of the renin-angiotensin system (RAS), which regulates the cardiovascular system. The RAS is reportedly also involved in bone metabolism. The upregulation of RAS components has been shown in arthritic synovial tissues, suggesting the potential involvement of Ang II in arthritis. Accordingly, in the present study, we investigated the role of Ang II in bone erosion and systemic bone loss in arthritis. Ang II was infused by osmotic pumps in tumor necrosis factor-transgenic (TNFtg) mice. Ang II infusion did not significantly affect the severity of clinical and histological inflammation, whereas bone erosion in the inflamed joints was significantly augmented. Ang II administration did not affect the bone mass of the tibia or vertebra. To suppress endogenous Ang II, Ang II type 1 receptor (AT1R)-deficient mice were crossed with TNFtg mice. Genetic deletion of AT1R did not significantly affect inflammation, bone erosion, or systemic bone loss. These results suggest that excessive systemic activation of the RAS can be a risk factor for progressive joint destruction. Our findings have important implications for the pathogenesis of inflammatory bone destruction and for the clinical use of RAS inhibitors in patients with rheumatoid arthritis. Introduction Rheumatoid arthritis is a chronic inflammatory disorder that can cause painful swelling and bone erosion in the inflamed joints [1]. The accumulation of joint damage results in long-lasting pain and deformity of the affected joints [2]. Persistent systemic inflammation in rheumatoid arthritis can also cause tissue damage in organs such as the lungs, heart, eyes, and bone [3]. Increased inflammatory cytokines affect bone metabolism throughout the body and decrease bone mass and strength, leading to increased risks of osteoporosis and fracture [4]. Joint deformities impair activities of daily life and thus exacerbate osteoporosis in patients with rheumatoid arthritis. This highlights the importance of resolving joint damage and systemic bone loss in these patients. No Significant Effect of Ang II Administration on the Severity of Inflammatory Cell Infiltration in TNFtg Mice To assess the effect of Ang II on arthritis, exogenous Ang II (1.44 mg/kg/day) or water (H2O) was administered by osmotic pumps to the WT and TNFtg mice for 4 weeks. The treatment with Ang II did not significantly alter body weight (Figure 1B), but did induce hypertension (Figure A2). We monitored the severity of paw swelling in each limb during the experimental period. We found that the TNFtg mice exhibited severe swelling of the paws and that the severity of clinical arthritis was not affected by the Ang II infusion. The arthritis score and number of arthritic limbs at the age of 16 weeks are presented in Figure 1C,D. To analyze the inflamed joints histologically, we performed hematoxylin and eosin (H&E) and Safranin O staining to determine the inflammatory cell infiltration and cartilage damage. In WT mice, Ang II administration did not cause any detectable histological changes (Figure 1E,F).
TNFtg mice exhibited massive inflammatory cell infiltration, and Ang II administration did not affect the severity of inflammation in these mice, which is consistent with the arthritis score results (Figure 1C). Additionally, the severity of cartilage damage, represented by decreased staining of the cartilage matrix, was not affected by the administration of Ang II (Figure 1E,G). Exacerbation of Bone Erosion by Ang II Administration in TNFtg Mice We then examined the impact of Ang II on the erosive bone changes of the ankle. Bone erosion around the talus was quantified using micro-computed tomography (CT) and 3D image analysis software. The micro-CT analysis revealed that the destructive bone change was significantly more severe in the Ang II-infused TNFtg mice than in the H2O-infused TNFtg mice (Figure 2A). This aggravated bone erosion was revealed by the following quantitative analyses: the bone volume (BV) of the talus, the reduction rate of BV, and the eroded volume per repaired volume (Ev/Rpv) of the talus (Figure 2B-D). Tartrate-resistant acid phosphatase (TRAP)-stained images showed slightly increased osteoclast formation in the joints of the Ang II-infused TNFtg mice compared to those in the H2O-infused TNFtg mice (Figure 2E). Quantitative histological analyses also revealed increased bone erosion and slightly enhanced osteoclast formation around the talus (Figure 2F,G). These findings suggest that Ang II, imported to the joints from circulation, served as an osteoclast-activating factor in the arthritic joint, resulting in enhanced bone erosion without affecting the clinical severity of arthritis. No Detectable Changes in the Trabecular and Cortical Bone Parameters with Ang II Administration Since both systemic inflammation and excess of Ang II have been reported to decrease the mass of systemic bones [4,11], we examined the bone properties of the tibia and vertebra and determined whether Ang II could synergistically enhance inflammation-mediated bone loss in the Ang II-administered arthritic mice. We assessed the tibia trabecular bone (secondary spongiosa, Figure 3A), the tibia cortical bone (midshaft of the tibia, Figure 3B), and the trabecular bone of the spine (fifth lumbar vertebra, Figure 3C) using micro-CT. The tibia trabecular bone tended to be decreased in the arthritic mice compared to that in the WT mice, although the difference was not statistically significant in the current set of experiments. The bone reduction rates with Ang II infusion were comparable between WT and TNFtg mice at approximately 30% (Figure 3D), indicating that the synergistic effect of inflammation and Ang II on osteopenia was not noticeable. In the tibia cortical bone, the presence of arthritis significantly increased bone loss, but the reduction rates with Ang II infusion were comparable between WT and TNFtg mice (Figure 3E). In the vertebral trabecular bone, the presence of arthritis did not significantly affect the bone volume (Figure 3F). Ang II administration tended to decrease bone volume by approximately 10%, but there was no significant difference in the reduction rates between WT and TNFtg mice (Figure 3F). The other analyzed parameters of the trabecular and cortical bones also indicated no significant effect of Ang II administration on bone properties (Figure A3). Collectively, these findings suggest that both inflammation and Ang II tended to decrease bone mass, but there was no apparent synergistic effect on the osteopenic phenotype.
Figure 3. Micro-CT assessment of the tibia trabecular bone (secondary spongiosa) (A), the tibia cortical bone (midshaft) (B), and the trabecular bone of the spine (the fifth lumbar vertebra) (C). (D) Bone volume per total volume (BV/TV) and reduction rate of the tibia trabecular bone. (E) Cortical thickness (Ct.Th) and reduction rate of the tibia midshaft. (F) Bone volume per total volume (BV/TV) and reduction rate of the fifth lumbar vertebral trabecular bone. Values are the mean ± SEM. n.s., not significant. *, p < 0.05. Effect of AT1R Deficiency on the Severity of Inflammatory Cell Infiltration in TNFtg Mice Since an excess of exogenous Ang II accelerated inflammatory bone destruction (Figure 2A), we investigated whether endogenous Ang II could play a role in bone destruction using AT1R-deficient arthritic mice generated by crossing TNFtg mice with AT1R-knockout (AT1R−/−) mice. AT1R deficiency did not significantly alter body weight (Figure 4A).
We found that the severity of clinical arthritis (Figure 4B,C) and the extent of inflammatory cell infiltration (Figure 4D,E) were comparable between TNFtg and TNFtg/AT1R−/− mice. In addition, the severity of cartilage damage was not affected by AT1R deficiency (Figure 4D,F). Influence of AT1R Depletion on Bone Erosion in TNFtg Mice We next examined whether the deletion of AT1R could reduce bone destruction in the arthritic mice. Micro-CT analysis of the ankle joints revealed that TNFtg/AT1R−/− mice exhibited the same extent of severe bone loss as TNFtg mice (Figure 5A,B). Additionally, the BV reduction rate and the erosive volume (Ev/Rpv) of the talus in the TNFtg/AT1R−/− mice were comparable to those in the TNFtg mice (Figure 5C,D). These findings indicate that AT1R deficiency did not alleviate the destructive bone changes in the inflammatory joints of the arthritic mice. Histological analyses revealed that the extents of bone erosion and osteoclast formation were comparable between TNFtg and TNFtg/AT1R−/− mice (Figure 5E-G). Effect of AT1R Deficiency on Bone Properties of the Trabecular and Cortical Bones in TNFtg Mice AT1R−/− mice were previously reported to exhibit an increased trabecular BV and increased trabecular number and connectivity [16]. To examine the effect of AT1R deficiency on the bone volume of systemic bones in the arthritic condition, we analyzed the bone properties of the tibia and vertebra of the TNFtg arthritic mice using micro-CT (Figure 6A-C). The TNFtg mice exhibited a significant reduction in BV/TV of the tibia, and AT1R deficiency modestly alleviated the bone loss caused by arthritis, even though the difference between TNFtg and TNFtg/AT1R−/− mice was not statistically significant (Figure 6D). A similar non-significant tendency was observed in the vertebral trabecular bone (Figure 6F). In the tibia cortical bone, AT1R deficiency did not show any protective effect on bone loss (Figure 6E). The other analyzed parameters of the trabecular and cortical bones also indicated no significant effect of AT1R deficiency on bone properties (Figure A4). These findings suggest that the inhibition of endogenous Ang II has a limited protective effect on bone loss in arthritic mice.
Discussion In this study, we sought to clarify the impact of excessive Ang II and inhibition of the endogenous RAS on bone erosion and systemic bone loss in a TNF-mediated arthritic condition. We found that the administration of Ang II enhanced destructive bone changes in inflammatory joints without affecting the severity of inflammation. There was no noticeable synergistic effect of Ang II administration and inflammation on osteopenia of the tibia and vertebra in mice. Further, we found that AT1R deficiency had a minimal protective effect on bone erosion and systemic bone loss in the arthritis model. Interestingly, we observed that the administration of Ang II aggravated joint destruction in the arthritic mice. Ang II has been reported to enhance systemic bone loss in murine osteoporosis models [11,17]. Ang II induces RANKL expression in osteoblasts and subsequently enhances osteoclastogenesis, resulting in systemic bone loss [6,11]. However, no previous studies have explored the role of the RAS in the development of bone erosion in an arthritis model. In rheumatoid arthritis, inflammatory cytokines such as TNF increase RANKL expression in synoviocytes and subsequently promote osteoclastic differentiation and activation, resulting in erosive bone changes in joints [2]. Our results demonstrate that excessive Ang II could exacerbate TNF-induced inflammatory joint destruction, in association with increased osteoclast formation. The current study has important clinical implications for the management of rheumatoid arthritis. Our findings suggest that in patients in whom the local effect of Ang II is upregulated via an increased supply of circulating Ang II, joint destruction can be promoted as a consequence of systemic RAS activation. Systemic activation of the RAS can be observed in several pathological conditions, such as renal artery stenosis, congestive heart failure, cardiac hypertrophy, chronic kidney disease, and obesity [18,19]. Such pathological conditions could be risk factors for progressive joint destruction in inflammatory arthropathies. Although Ang II appeared to promote bone erosion in inflamed joints, its effects on systemic bones, represented by the tibia and vertebra, were found to be very limited. There are several possible explanations for this. Firstly, in the arthritic joints of mice, other inflammatory cytokines such as IL-1 and IL-6 are highly produced [2,20]. In addition to TNF, these other osteoclast-activating factors might play important synergistic roles in the Ang II-promoted bone erosion in joints. Secondly, the expression of AT1R was significantly increased in the arthritic joints (Figure 1A). This could contribute to hyper-responsiveness to Ang II, resulting in increased osteoclastic bone destruction in the joints. Thirdly, the exposure period to Ang II (4 weeks in this study) might be too short for this effector to exert an osteopenic effect on systemic bones. Indeed, a previous study showed a significant osteopenic effect of excessive RAS activation in 6-month-old Tsukuba hypertensive mice that were continuously exposed to excessive Ang II via transgenes encoding human renin and human angiotensinogen [17]. Since the expression of AT1R was increased in the arthritic joints of the TNFtg mice (Figure 1A), we assumed that AT1R deficiency would ameliorate bone erosion in this arthritis model. Contrary to our expectation, AT1R deficiency did not significantly improve the erosive bone changes in the TNFtg mice.
These data indicate a limited role of the local RAS during the process of joint destruction in this arthritic model. The RAS might modulate bone mass only under pathological conditions with excessive systemic activation. Analyses of AT1R-deficient arthritic mice with excessive Ang II would be needed to verify this concept. A limitation of our study is that the precise mechanisms through which Ang II enhances bone erosion remain unclear. We tested the effect of Ang II on osteoclast differentiation in murine primary bone marrow-derived macrophage cultures. Ang II stimulation did not promote osteoclast formation in the mono-culture of bone marrow-derived macrophages (Figure A5A,B), whereas Ang II enhanced osteoclast formation in the co-culture system with osteoblasts (Figure A5C,D). These data suggest that Ang II promotes osteoclast formation indirectly via stromal cells. In support of this notion, Ang II has previously been reported to induce RANKL expression in stromal cells [17]. Various cell types in arthritic joints, including synovial cells, osteoblasts, and osteocytes, can express RANKL and might thus contribute to the Ang II-mediated bone erosion. Other possibilities are that Ang II regulates angiogenesis in arthritic joints or that Ang II modulates cellular functions via the Ang II type 2 receptor, which reportedly regulates inflammation in the arthritic synovium [21]. Further research will be required to clarify the underlying mechanisms. Another possible limitation of this study is the relatively small sample sizes, which may provide insufficient statistical power to detect small differences in some comparisons. For instance, a statistically significant difference was not detected in the trabecular BV/TV of the tibia between H2O-treated WT (n = 6) and H2O-treated TNFtg (n = 4) mice (Figure 3D), although there was a statistically significant difference between WT (n = 9) and TNFtg (n = 12) mice in the AT1R−/− experiment (Figure 6D). Post-hoc power analyses showed that a larger sample size would be needed to detect a substantial difference in Figure 3D. Therefore, future studies with larger sample sizes would be necessary to detect small but significant differences. We previously reported that the RAS is involved in vascular damage and that AT1R blockers have potent vascular protective effects in an arthritis model [22]. Therefore, in patients with rheumatoid arthritis complicated by RAS-dependent hypertension, blockade of the RAS might be beneficial not only to reduce blood pressure and vascular damage but also to prevent bone erosion. In conclusion, this study provides novel insights into the pathophysiological function of Ang II in the regulation of inflammatory bone destruction. In patients with rheumatoid arthritis, a systemically activated RAS in concurrent pathological conditions could be involved in the progression of joint destruction, in conjunction with increased local expression of AT1R. The effects of pharmacological inhibition of the Ang II-mediated pathway on bone erosion remain unclear but warrant further clinical examination. Mice Human TNFtg mice (C57BL/6 background) were obtained (#1006; Taconic Biosciences, Hudson, NY, USA). The TNFtg heterozygous mice spontaneously develop arthritis in the fore and hind paws at approximately 8 weeks of age, and the arthritis progresses with age [23].
AT1R-knockout mice (AT1R−/−; C57BL/6 background) were obtained (#002682; The Jackson Laboratory, Bar Harbor, ME, USA) [24] and crossed with TNFtg mice to generate AT1R-deficient arthritic mice. Age- and sex-matched littermates were used as control mice. All mutant mice were maintained in the animal facility of Kawasaki Medical School (Okayama, Japan), housed in groups (2-5 mice per cage), and kept at 22 °C under 12 h light/12 h dark cycles with free access to water and standard laboratory food. All animal experiments were approved by the Institutional Safety Committee for Recombinant DNA Experiments (Nos. 14-40, 14-41, and 19-27, approved on 3/13/2015, 3/13/2015, and 10/17/2019, respectively) and the Institutional Animal Care and Use Committee of Kawasaki Medical School (Nos. 17-129, 18-057, and 18-130, approved on 2/1/2018, 4/1/2018, and 2/1/2019, respectively). All experimental procedures were conducted in accordance with institutional and NIH guidelines for the humane use of animals. Ang II Infusion Model Twelve-week-old WT and TNFtg male mice were randomly divided into two groups that were infused with either water (H2O) or Ang II dissolved in H2O. Ang II was administered by osmotic pumps to WT (n = 6) and TNFtg mice (n = 7) from 12 to 16 weeks of age. H2O was administered by osmotic pumps to WT (n = 6) and TNFtg mice (n = 4) as controls. The mice were anesthetized, and an osmotic pump containing 100 µL of either H2O or Ang II (Sigma-Aldrich, St. Louis, MO, USA) was implanted subcutaneously as previously described [22,25]. Ang II was continuously infused at a dose of 1.44 mg/kg/day from 12 to 16 weeks of age. Arterial blood pressure was measured by the tail-cuff method with a pulse transducer (BP98-A; Softron, Tokyo, Japan), as reported [26]. Mice were monitored for signs of arthritis in a blinded manner, and each limb was individually scored on a scale of 0-4. Scores were assigned based on the extent of erythema or swelling present in each limb, giving a maximum score of 16 per mouse, as described previously [27,28]. Mice were monitored until the age of 16 weeks, and then serum, hind limb, and spine (fifth lumbar vertebra) samples were collected. Micro-Computed Tomography (CT) Analysis Bone samples were fixed in 4% paraformaldehyde (PFA) in phosphate-buffered saline for 2 days, and the PFA-fixed bone samples were immersed in 70% ethanol. The three-dimensional microarchitecture of the talus, tibia, and spine was evaluated using a micro-CT system (Ele Scan mini; Nittetsu Elex, Tokyo, Japan) with an X-ray energy of 45 kVp (145 µA), as described previously [29,30]. The voxel resolution of all bone images was 15 µm. The bone properties of the tibia and the fifth lumbar vertebra, and bone erosion of the ankle (talus), were analyzed using analysis software (TRI/3D-BON; Ratoc System Engineering Co. Ltd., Tokyo, Japan). The analyzed region of the tibia trabecular bone comprised 67 slices of secondary spongiosa adjacent to the primary spongiosa (starting 0.5 mm from the distal border of the growth plate), that of the vertebra comprised the entire fifth lumbar vertebral body area (approximately 140 slices), and that of the tibia cortical bone comprised 33 slices of the midshaft (a 1 mm region proximal to the tibiofibular junction). The micro-CT parameters of the tibia and spine were described according to international guidelines [31].
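As a quick sanity check on the region definitions above, the quoted slice counts correspond to the stated physical extents at the 15 µm voxel size. A small illustrative helper (the function and names are ours, not from the paper):

```python
# Voxel size quoted in the text: 15 um isotropic.
VOXEL_MM = 0.015

def n_slices(region_mm: float) -> int:
    """Number of CT slices spanning a region of the given thickness."""
    return round(region_mm / VOXEL_MM)

print(n_slices(1.0))  # -> 67 slices, i.e. the ~1.0 mm secondary-spongiosa region
print(n_slices(0.5))  # -> 33 slices, i.e. a ~0.5 mm cortical midshaft region
```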
The talus bones were evaluated using BV and Ev/Rpv as quantitative measurements of bone erosion [32]. Ev/Rpv of the whole talus was calculated automatically according to the software program (TRI/3D-BON). We set the concave-surface search range to 0.15 mm and the absorption-surface extraction radius of curvature to 960 µm or less, as described previously [33]. Histological Analysis The hind limbs were decalcified in 10% EDTA (pH 7.2) at 4 °C for 4 weeks and subsequently embedded in paraffin. Sections (3 µm) were stained with hematoxylin and eosin (H&E) and Safranin O. The severity of inflammation and cartilage damage around the talus was scored on a scale of 0-4 under blinded conditions, as described previously [27,28]. TRAP staining was performed to visualize osteoclast formation, and the sections were counterstained with methyl green. Histological analyses were performed using a BZ-X analyzer (Keyence, Osaka, Japan). The eroded surface per bone surface (ES/BS) and the number of osteoclasts per bone surface (N.Oc/BS) around the taluses were determined. Real-Time Quantitative Polymerase Chain Reaction (qPCR) qPCR was performed as described previously [30,34]. Total RNA was extracted from the right ankle joint using RNAiso Plus (Takara Bio, Shiga, Japan) and solubilized in ribonuclease (RNase)-free water. Complementary DNA (cDNA) was synthesized using the PrimeScript RT reagent kit (Takara Bio). qPCR reactions were performed using SYBR Green PCR Master Mix (Takara Bio) with the StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA). Gene expression levels relative to Gapdh were calculated by the ΔΔCt method and normalized to control samples obtained from the WT mice. The qPCR analysis used the following primers: 5′-taccagctctgcggctct-3′ and 5′-gccagccattttataccaatct-3′ for Agtr1a (AT1R); 5′-atcaagaaggtggtgaagca-3′ and 5′-gacaacctggtcctcagtgt-3′ for Gapdh. All qPCR reactions yielded products with single-peak dissociation curves. Statistical Analysis All values are given as the mean ± standard error of the mean (SEM). A two-tailed unpaired Student's t-test was used to compare two groups, and a one-way analysis of variance (ANOVA) followed by Tukey's post-hoc test was used to compare three or more groups, using GraphPad Prism 5 (GraphPad Software, San Diego, CA, USA). p values lower than 0.05 were considered statistically significant. Supplementary Methods Additional Supporting Information can be found online in the Supplementary Materials tab for this article. Figure A1 (caption fragment): "... in the left ankle joint specimens was determined. Original magnification ×40. A tissue specimen processed without the primary antibody is presented as a negative control. Detailed methods are described in the Supporting Information." Figure A2. Systolic blood pressure in wild-type (WT) and tumor necrosis factor-transgenic (TNFtg) mice administered angiotensin II (Ang II) and water (H2O). Ang II was administered by osmotic pumps to WT and TNFtg mice from 12 to 16 weeks of age. Values are the mean ± SEM. *, p < 0.05 (WT (Ang II) vs. WT (H2O)). n.s., not significant (TNFtg (Ang II) vs. TNFtg (H2O)).
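The relative-expression step above is the standard ΔΔCt arithmetic. A minimal sketch, under the usual assumption of near-100% amplification efficiency and with hypothetical Ct values (the paper reports only the method, not raw Ct data):

```python
def ddct_fold_change(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Fold change of a target gene by the delta-delta-Ct method,
    normalized first to Gapdh and then to the control (WT) samples."""
    d_ct = ct_gene - ct_gapdh                  # delta Ct of the sample
    d_ct_ctrl = ct_gene_ctrl - ct_gapdh_ctrl   # delta Ct of the control
    return 2.0 ** (-(d_ct - d_ct_ctrl))        # 2^(-delta-delta-Ct)

# Hypothetical values: Agtr1a in a TNFtg ankle joint vs. the WT mean.
print(ddct_fold_change(24.1, 18.0, 25.6, 18.2))  # ~2.5-fold upregulation
```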
5,943
2020-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Self-motion facilitates echo-acoustic orientation in humans The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory-motor interactions, and on possible optimization strategies, underlying echolocation in humans. Introduction Some animals, like bats and toothed whales, are known to use echolocation for orientation and navigation purposes. They actively emit precisely timed acoustic signals and analyse the resulting echoes to extract spatial information about their environments. This allows them to compensate for a lack of visual stimuli due to nocturnal darkness or murky waters in their habitats [1,2]. Some blind humans also use echoes from self-generated sounds to represent their spatial environment with high precision (for reviews, see [3,4]). It has been shown that, using echolocation, humans can detect obstacles [5][6][7], discriminate between objects of different texture or size [8], localize a sound-reflecting surface [9] and estimate its distance [8,10]. Subjects Eight sighted subjects participated in the study (23.5 ± 2.2 years of age (mean ± s.d.), one female).
All subjects had hearing thresholds of less than 10 dB hearing level in both ears for all tested frequencies (250-8000 Hz in octave steps). To provide a proof of concept for our VEAS presentation and to compare our sighted subjects' performance with that of blind echolocation experts, we also tested two blind professional echolocation teachers. Both had been blind since at least infancy, taught themselves to echolocate during childhood, and have been using echolocation on a daily basis ever since. Stimuli and apparatus Subjects gathered spatial information about their environment for orientation by listening to echoes of their own vocalizations. All experiments were conducted in VEAS using the BRIRs of a real corridor with a constant width of 2.5 m, a length of 27 m and a height of 4 m (cf. figure 1a). Figure 1. (a) Illustration of the virtual corridor. Echo-acoustic orientation performance was tested at two positions on the midline of the corridor (positions M1 and M2 at rear wall distances of 75 and 700 cm, respectively) and two positions 75 cm from the left lateral wall (positions L1 and L2 at rear wall distances of 75 and 700 cm, respectively). (b) Illustration of Experiment 1. In a 2AIFC paradigm, subjects were asked to discriminate between a leftward and a rightward deviation from the virtual corridor's longitudinal axis (0°). The walls and the ceiling of the corridor were made of concrete and the flooring consisted of PVC. The recording of the BRIRs is described later. During the experiments, subjects were seated in a sound-attenuated anechoic chamber with a size of 2.0 × 2.0 × 2.2 m (Industrial Acoustics Company GmbH, Niederkrüchten, Germany). The walls of the chamber were lined with 20-cm acoustic wedges, which decreased the level of echoes by at least 40 dB for frequencies higher than 500 Hz. To present a specific position and orientation in VEAS, the subjects' vocalizations were recorded anechoically with a headset microphone (Sennheiser HS2-EW, Wedemark, Germany), convolved in real time with the respective BRIR, and then presented via headphones (K701, AKG Acoustics GmbH, Vienna, Austria). Parts of the headphones' ear cups were removed, which allowed for undisturbed perception of the direct sound from the mouth to the ears via the free field and, at the same time, of the echoes via the headphones. The BRIRs presented to the subjects had a length of 2.7 s and were all derived from the BRIRs recorded in the real corridor, while compensating for the frequency response characteristics of the microphone and the modified headphones. The headphones and the microphone were connected to a personal computer (PC with WINDOWS 7) with an external soundcard (MOTU Audio 24I/O, Cambridge, MA, USA) via wireless transmitter and receiver systems (Sennheiser EW 172 G3 and EW 300 IEM G3, Wedemark, Germany). On the PC, a real-time convolution kernel (SoundMexPro, Oldenburg, Germany) was run under MATLAB. The overall delay of the convolution engine was 3.3 ms. In order to guarantee the correct time delays for the echoes, the first 3.3 ms of the BRIRs were cut. Since the direct sound from the mouth to the ears was not simulated and all virtual reflecting surfaces were at a distance of at least 75 cm from the subject (corresponding to an echo delay of at least 4.4 ms), no important information was lost by cutting the first 3.3 ms of the BRIRs.
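A simplified, offline sketch of the BRIR handling described above: the stored BRIR is shortened by the engine's 3.3 ms processing delay so that, at playback, the echoes still arrive at their physically correct times. numpy/scipy here stand in for the real-time SoundMexPro engine; the sampling rate and the toy signals are assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

FS = 44100                 # assumed sampling rate (Hz)
ENGINE_DELAY_S = 0.0033    # measured convolution-engine latency (3.3 ms)

def render_binaural(vocalization, brir_stereo):
    """Trim the BRIR by the engine delay, then convolve the (mono)
    vocalization with each ear's impulse response. The engine's own
    3.3 ms latency restores the correct overall echo delay."""
    cut = int(round(ENGINE_DELAY_S * FS))
    trimmed = brir_stereo[cut:, :]
    return np.stack([fftconvolve(vocalization, trimmed[:, ch])
                     for ch in (0, 1)], axis=1)

# Toy example: a 5 ms click and a dummy 2.7 s two-channel BRIR with a
# single reflection at 4.4 ms (a wall at 75 cm, as in the experiments).
click = np.random.randn(int(0.005 * FS))
brir = np.zeros((int(2.7 * FS), 2))
brir[int(0.0044 * FS), :] = 1.0
binaural = render_binaural(click, brir)   # shape: (samples, 2)
```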
The authentic reproduction of the corridor's acoustics was verified by measuring the BRIRs of the VEAS using the same recording set-up and procedure as for the original BRIR acquisition. Additionally, two blind echolocation experts validated the VEAS presentation perceptually: they took part in the experiments in VEAS and afterwards went to the original real corridor to compare the acoustic impressions. Both blind experts successfully solved the echo-acoustic tasks in VEAS without any specific training, and they confirmed that the acoustic impressions in VEAS and in the real corridor were highly consistent. Acoustic recordings For the BRIR recordings, a head-and-torso simulator (HATS, B&K 4128C, Brüel & Kjaer Instruments, Naerum, Denmark) was used as the core part of a custom-built mobile recording set-up. The HATS was attached to a computer-controlled turntable which was mounted on a small wooden cupboard with wheels. The recording was controlled via a notebook computer connected to an external soundcard (ProFire 610, M-Audio, Willich, Germany). The corridor was acoustically excited with a 10-s logarithmic sine sweep with a frequency range from 200 to 20 000 Hz. The sweep was created with MATLAB (The MathWorks, Inc., Natick, MA, USA), amplified (Amplifier A-109, Pioneer Electronics, Willich, Germany) and transmitted to the loudspeaker in the mouth of the HATS. The emitted signal and its reflections were then recorded via the microphones in the ear canals of the HATS and amplified with a Brüel & Kjaer Nexus conditioning amplifier. Playback and recording were implemented with SoundMexPro (HörTech GmbH, Oldenburg, Germany). The loudspeaker in the mouth of the HATS was calibrated by filtering the playback signal with a corresponding compensation impulse response. This guaranteed that the frequency response of the loudspeaker had a flat spectrum, while the sound emission characteristics of the HATS (as a function of frequency, azimuth and elevation) were preserved. With an overall height of 180 cm, the recording set-up was appropriate to simulate a human adult in an upright position who actively vocalized with his mouth and perceived his own vocalizations both directly from the mouth to the ears (direct sound) and from the mouth via reflections to the ears (echo). To extract the BRIRs, the emission and the binaural recording were cross-correlated and afterwards filtered to compensate for the logarithmic sweep. This procedure was used to acquire the BRIRs for two positions on the midline of the corridor (positions M1 and M2 at rear wall distances of 75 and 700 cm, respectively; cf. figure 1a) and two positions in close proximity (75 cm) to the left lateral wall (positions L1 and L2 at rear wall distances of 75 and 700 cm, respectively). At each position, measurements were conducted for 90 orientations with an angular resolution of 4°. Finally, the BRIRs were interpolated (by linear interpolation of the linear magnitude spectra and unwrapped phase spectra) to increase the angular resolution from 4° to 0.2°. Procedure for Experiment 1 In a two-alternative, two-interval, forced-choice (2AIFC) paradigm, subjects were trained to discriminate echo-acoustically between two orientations in VEAS which were symmetrically centred around the virtual corridor's longitudinal axis (cf. figure 1b). Each trial started with a 50-ms, 1-kHz tone pip to indicate the beginning of a 2-s exploration interval. During the exploration interval, a BRIR was presented as described above, i.e.
the subjects vocalized and listened to the computer-generated echoes of their own vocalizations. The ending of the exploration interval was signalled by a 2-kHz tone pip. After a 500-ms pause, the second exploration interval was presented in the same way, but with a different BRIR. Subsequently, the subjects had to indicate (using a joystick) whether the first or the second exploration interval contained the orientation towards the right-hand side of the corridor's longitudinal axis. Subjects were given audio feedback by a 250-ms frequency chirp which was upward modulated for positive feedback and downward modulated for negative feedback. The azimuthal separation of the two presented orientations (illustrated in figure 1b as the angle ϕ) was adapted with a three-down-one-up procedure: it was decreased after three correct responses and increased after one incorrect response, which yields threshold estimates at the 79.4% correct level [31]. Until the third reversal of the adaptive track, ϕ was changed by a factor of 2. It was changed by a factor of 1.2 for reversals four and five, and by 1.1 from the sixth reversal on. The experimental run was stopped at the 11th reversal, and the just noticeable difference (JND) was calculated as the geometric mean of the azimuthal separations in degrees at the last six reversals of the run. All subjects were trained until their performance stabilized over runs. The criterion for stable performance was fulfilled when the standard deviation across the last three runs was less than 25% of the mean across these runs. Procedure for Experiment 2 In Experiment 2, subjects were asked to align themselves to be as parallel as possible to the virtual corridor's longitudinal axis (which is the 0° direction in figure 1b), starting from a random orientation. Depending on the specific condition, rotations were conducted via rotation of the virtual corridor relative to the subject's fixed body and head (Experiment 2.1), via rotation of the subject's whole body in a rotating chair relative to the fixed virtual corridor (Experiment 2.2) or via independent rotation of both the subject's head and body relative to the fixed virtual corridor (Experiment 2.3). Data acquisition was randomly interleaved to balance residual training effects. In Experiment 2.1, subjects used a joystick connected to the PC to vary the angular velocity as a function of the joystick's deflection. The top speed was limited to 30°/s for rotation. In Experiment 2.2, subjects used the joystick to control the relative motion of themselves and the VEAS in the same way as in Experiment 2.1, but the VEAS was fixed in world coordinates and the chair the subject was sitting on was rotated via a computer-controlled driving mechanism. The current orientation was assessed via a tracking system (Intersense motion tracker, Billerica, MA, USA) that acquired the chair's orientation 10 times per second. The BRIRs were updated with the same frequency of 10 Hz according to the tracked orientation. In Experiment 2.3, free head movements were allowed in addition. The current orientation was assessed by tracking the orientation of the subject's head 10 times per second. The orientation of the rotating chair was also tracked in order to document the relative orientation of head and body. Before each experimental run, the sighted subjects were blindfolded and the light was turned off. Subjects then started the experiment by pressing a button on the joystick.
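The adaptive track for Experiment 1, described above, can be summarized in a short simulation. The observer model below is a hypothetical stand-in; the track logic (three-down-one-up, step factors 2/1.2/1.1, stop at the 11th reversal, geometric-mean JND over the last six reversals) follows the text:

```python
import numpy as np

def simulate_track(p_correct, phi0=16.0, seed=0):
    """Three-down/one-up staircase on the angular separation phi (deg)."""
    rng = np.random.default_rng(seed)
    phi, reversals, streak, direction = phi0, [], 0, None
    while len(reversals) < 11:
        # step factor as a function of how many reversals have occurred
        n = len(reversals)
        factor = 2.0 if n < 3 else (1.2 if n < 5 else 1.1)
        if rng.random() < p_correct(phi):
            streak += 1
            if streak == 3:                    # three correct: make it harder
                streak = 0
                if direction == 'up':
                    reversals.append(phi)      # downward reversal
                direction, phi = 'down', phi / factor
        else:                                  # one error: make it easier
            streak = 0
            if direction == 'down':
                reversals.append(phi)          # upward reversal
            direction, phi = 'up', phi * factor
    return float(np.exp(np.mean(np.log(reversals[-6:]))))  # geometric mean

# Hypothetical logistic observer; the JND converges near its 79.4% point.
jnd = simulate_track(lambda phi: 1.0 / (1.0 + np.exp(-(phi - 5.0))))
print(f"simulated JND: {jnd:.1f} deg")
```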
Subsequently, a short tone pip (50 ms, 1 kHz) indicated that VEAS presentation was switched on. Subjects could explore VEAS via echolocation and rotation without time limit. When they felt confident that they had found the required orientation, they pressed another button on the joystick, which stopped the experimental run. Another tone pip (50 ms, 2 kHz) confirmed that the run was stopped. In each experimental run, the subjects' orientation in VEAS was written to a file and saved to hard disk once per second. All subjects were trained in each condition until their performance stabilized over runs. The stability criterion was that the standard deviation across the last three runs was smaller than 5°. The overall performance of a subject was defined as the mean value of the performance in the last three runs in degrees. Control experiment A control experiment was conducted in order to investigate the influence of rotation speed, which was limited in Experiments 2.1 and 2.2, but not in Experiment 2.3. The control experiment was identical to Experiment 2.1, except that the speed limit for rotation of the VEAS was increased to 360°/s, which is faster than the highest speed observed in Experiment 2.3. Results All subjects were successfully trained to perform the echo-acoustic orientation tasks in VEAS. The analysed results are shown in figures 2 and 3. In addition, all raw data necessary to conduct the analyses and draw the figures presented in the paper are provided in the electronic supplementary material. In order to ensonify their virtual echo-acoustic environment, all subjects produced short broadband tongue clicks. Typically, the clicks had a duration of 4-11 ms and a sound pressure level of 60-90 dB SPL as measured at the headset microphone. The peak frequencies of the clicks ranged from 3 to 9 kHz. In the 2AIFC experiment, subjects typically produced two to seven clicks per 2-s interval. Experiment 1: static 2AIFC Experiment 1 aimed to formally quantify the ability of humans to use the echoes of self-generated vocalizations for orientation. In a 2AIFC paradigm, eight naive sighted subjects were successfully trained to discriminate a leftward deviation from a rightward deviation from the required orientation (cf. figure 1b). The two blind echolocation experts both managed to solve the task without any training, which is a proof of concept for our VEAS implementation. Their average JNDs were 7.4° in the middle of the corridor and 16.1° near a lateral wall. For all positions, the performance of the experts was slightly better than that of the sighted subjects. However, the trends, namely a significant effect of lateral wall distance and no significant effect of rear wall distance, were the same. These results show that naive sighted subjects can be effectively trained to use echolocation for orientation. Figure 3. Orientation errors from Experiment 2 in terms of absolute deviation from the required orientation (0°). Bars and error bars represent averages and standard deviations across all sighted subjects, respectively. The asterisks highlight significant differences between pairs of experimental conditions (Wilcoxon rank-sum test, *p < 0.05, **p < 0.01). When a lateral wall was nearby (positions L1, L2), performance was significantly better in Experiments 2.2 and 2.3, where self-motion was allowed, than in Experiment 2.1, where subjects were stationary. This shows that subjects profited from self-motion in the echo-acoustic orientation task.
Experiment 2: free rotation Performance for Experiment 2 was quantified in terms of the 'orientation bias', defined as the mean deviation from the required orientation (shown in figure 2), and in terms of the 'orientation error', defined as the mean absolute deviation (shown in figure 3). In Experiment 2.1, orientation adjustment was realized by rotating the VEAS relative to the fixed subject. The average orientation error was 4.9° for symmetric lateral wall distances and 13.5° when one lateral wall was nearby (cf. figure 3). There was a significant effect of lateral wall distance (F1,28 = 81.87, p < 0.001), but no significant effect of rear wall distance (F1,28 = 2.69, p = 0.11). These data confirm the detrimental effect imposed by a nearby lateral wall, as observed in Experiment 1. Moreover, the dynamic orientation paradigm revealed that this detrimental effect was due to a systematic error: for all subjects, there was a systematic orientation bias away from the nearby wall, with an average value of 13.5° (cf. figure 2b). Although the performance measures used in Experiment 1 (thresholds via 2AIFC with feedback) and Experiment 2.1 (echolocation without time limit and without feedback) are not directly comparable, results from both experiments consistently show that subjects performed better in the middle of the corridor than near a lateral wall. This indicates that in Experiment 2.1, the additional dynamic cues due to VEAS rotation did not help subjects to overcome the detrimental effect imposed by a nearby lateral wall. In Experiment 2.2, the subject's whole body was rotated via a computer-controlled rotating chair, whereas the virtual corridor was fixed in world coordinates. Head rotations were not allowed. Here, subjects could exploit vestibular cues for orientation in addition to the dynamic echo-acoustic cues. For symmetric lateral wall distances, the average orientation error was 4.1°, which is not significantly different from that for Experiment 2.1 (Wilcoxon rank-sum test, W16,16 = 292, p = 0.30; cf. figure 3). However, here the detrimental effect of a nearby lateral wall was much less pronounced than in the previous experiments: for asymmetric lateral wall distances, the average orientation error was 4.5°, which is significantly lower than for Experiment 2.1 (W16,16 = 392, p < 0.001; cf. figure 3). There was no significant main effect of lateral or rear wall distance with respect to orientation bias or orientation error (cf. figure 2c). Hence, subjects must have exploited the additional vestibular cues to overcome the systematic bias. This shows that self-motion did indeed facilitate echo-acoustic orientation. In Experiment 2.3, free head movements were allowed in addition to the whole-body rotation (cf. figures 2 and 3). However, figure 2 shows that, over the last 10 runs, the number of front-back confusions at position M2 was significantly lower in Experiment 2.3 than in Experiment 2.1 (χ2 = 20.95, p < 0.001) and Experiment 2.2 (χ2 = 12.33, p < 0.001). Analysis of the subjects' movements in VEAS revealed that subjects changed their strategy in Experiment 2.3. In Experiments 2.1 and 2.2, rotation speed was relatively slow but constant, with periodic interruptions during which subjects produced echolocation calls. Between two adjacent echolocation calls, subjects hardly ever covered a range of more than 5°-10°. This indicates that subjects tried to orient by continuously scanning the acoustic properties of the virtual corridor. Constant echo-acoustic feedback was needed as a reference to assess the VEAS.
In Experiment 2.3, however, subjects moved their head back and forth between certain orientations several times quite fast. This indicates that they tried to compare remote orientations directly, either to use them as landmarks or to disambiguate orientations with similar acoustic properties, like the 0° and the 180° orientations at position M2. This behaviour seems to have helped subjects to avoid front-back confusions in Experiment 2.3. During head rotations, subjects covered angles of up to 180° without interruption by intermediate echolocation calls while they kept the orientation of the rotating chair fixed. This indicates that the additional proprioceptive cues due to head motion allowed subjects to effectively assess echo-acoustic space referenced against the body orientation. As for Experiment 1, both blind echolocation experts solved the tasks in Experiment 2 without any training, which again confirms the validity of our VEAS implementation. Both experts fulfilled the stability criterion after only three to four runs. The orientation errors were as low as 2° for all conditions, except positions L1 and L2 in Experiment 2.1. Here, orientation was biased by about 8° away from the nearby lateral wall. Results from the control experiment A control experiment to Experiment 2.1 was conducted in order to investigate the influence of rotation speed. The results showed that subjects did not use the full range of speeds available in the control experiment, but employed roughly the same average and top speeds as in Experiment 2.1. There were no significant differences in performance between Experiment 2.1 and the control experiment. Hence, high rotation speed alone does not facilitate echo-acoustic orientation. This indicates that the vestibular and proprioceptive components of self-motion are crucial for calibrating the perception of the angle that is covered during rotation. Discussion The experiments show that sighted subjects can be successfully trained to use echo-acoustic cues for orientation in VEAS. The authenticity of our VEAS implementation was verified both physically and psychophysically, namely by measuring and comparing impulse responses for real and for virtual space, and by tests with two blind echolocation experts. In Experiment 1, subjects had to discriminate between two discrete acoustic snapshots of a virtual corridor. The method of VEAS presentation in combination with a strict 2AIFC paradigm guaranteed that subjects could not exploit any cues other than the intended echo-acoustic ones. They performed quite well as long as the lateral walls were symmetrically arranged. Such a configuration facilitates the exploitation of basic binaural-difference cues, in that subjects simply had to judge whether the overall echo was stronger in their left or right ear. The obtained thresholds are consistent with previous studies measuring echolocation acuity in the horizontal plane [9,11]. However, performance deteriorated significantly in close proximity to a lateral wall, although it has been shown that monaural echo-acoustic information due to sound reflections from a wall is most useful at a distance of approximately 1 m or less [32]. This indicates that for the current task, the basic binaural-difference cues were more helpful than monaural absolute-loudness cues.
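For concreteness, the two Experiment 2 performance measures discussed above (orientation bias, the mean signed deviation, and orientation error, the mean absolute deviation) reduce to a few lines of code. The final orientations below are made-up values, not data from the study:

```python
import numpy as np

def bias_and_error(final_orientations_deg):
    """Mean signed and mean absolute deviation from the required 0 deg
    heading, with deviations wrapped to the range [-180, 180)."""
    dev = (np.asarray(final_orientations_deg, float) + 180.0) % 360.0 - 180.0
    return dev.mean(), np.abs(dev).mean()

bias, error = bias_and_error([12.0, 15.5, -3.0, 14.0])
print(f"bias = {bias:.1f} deg, error = {error:.1f} deg")
```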
In Experiment 2, subjects could voluntarily control their orientation in the virtual corridor via rotation of the corridor around their body and head (Experiment 2.1), via rotation of their whole body in a rotating chair relative to the virtual corridor (Experiment 2.2) and via independent rotation of both their head and body relative to the virtual corridor (Experiment 2.3). Results from Experiment 2.1 confirmed the detrimental effect of a nearby lateral wall and showed that it was due to a systematic bias, which is consistent with the results of Dufour et al. [33], who reported a biasing effect of nearby walls on sound localization. This indicates that the prominent early reflections from the closer lateral wall may have partially masked later reflections from the opposite lateral wall and thus systematically biased the subjects' estimate of their orientation in the virtual corridor. This systematic error was absent in Experiment 2.2, indicating that subjects had benefited from the whole-body rotation. However, they still made systematic front-back confusion errors. These were reduced in Experiment 2.3, when additional head rotations were allowed. The detrimental effect of a nearby lateral wall that we observed in Experiments 1 and 2.1 seems to contradict the results of Shinn-Cunningham & Ram [34]. They investigated how well human listeners can exploit changes in reverberation to identify their location and orientation in a room, and found that listeners reliably hear and exploit monaural spectral cues from prominent, early echoes. However, listeners were relatively insensitive to the exact timing and arrival direction of echoes, especially regarding the pattern of late-arriving echoes. In a second experiment, they showed that binaural echo-acoustic cues even impeded the perception of monaural intensity cues. Yovel et al. [21] described a similar conflict between absolute echo intensity and time-varying echo cues in echolocating bats. They trained Egyptian fruit bats to find a spherical target, to fly towards it and to land on it. During their experiments, Yovel et al. measured the directional aim of the bats' sonar clicks. The authors observed two different phases of spatial localization behaviour: for target detection, the bats first maximized the echo intensity by orienting the peak of their echolocation call directly on the target. For fine-tuning the localization, they then pointed the peak of their calls to the left and to the right of the target in an oscillatory manner. In this way, not the peak but the slope of the call was pointed at the target. Therefore, the intensity of the echoes was not maximal (because the peak of the call was not pointed directly at the target), but time-varying echo cues were emphasized. The authors proposed a trade-off between maximal echo intensity for detection and time-varying cues for localization. This trade-off may also explain the differences between the current results and those of Shinn-Cunningham & Ram [34]. In the latter study, subjects had to discriminate between four different positions in a room, which differed in the presence or the absence of nearby, sound-reflecting walls. Thus, they used a detection task, for which maximal echo intensity is known to be optimal. In our study, subjects had to fine-tune their orientation relative to the surrounding, sound-reflecting walls. This was a localization task, for which time-varying cues may be more helpful. 
Both in the static 2AIFC Experiment 1 and in the dynamic Experiment 2.1 with VEAS rotation, subjects performed better in the middle of the corridor than near a lateral wall, i.e. the detrimental effect of a nearby lateral wall was observed in both experiments. With respect to this detrimental effect, subjects did not profit from the dynamic cues due to rotation of the VEAS around themselves. The results suggest that subjects did not adapt their strategy to exploit dynamic cues, but still compared acoustic snapshots to find the target orientation, just like in the experiment with static cues. This is consistent with the results of Ashmead & Wall [17], who found no significant effect of listener movement on echo-acoustic object detection when they simulated linear approaching motion in virtual space with prerecorded stimuli, i.e. without vestibular stimulation. In Experiments 2.2 and 2.3, additional vestibular cues induced by self-motion were available to the subjects. Here, performance was significantly better than in the experiments where subjects were stationary. Hence, self-motion facilitates echo-acoustic orientation, and the vestibular component of self-motion is essential for benefiting from movement during echolocation. These results are consistent with the findings of Rosenblum et al. [16], who reported a subtle advantage of subject motion in an echo-acoustic target-ranging experiment with vestibular cues available. The findings of Kondo et al. [15] may shed light on the nature of the observed enhancement due to self-motion. They demonstrated that self-motion facilitates the perceptual organization of auditory streams by providing time-varying binaural cues. Our subjects may have profited from the same effect in Experiments 2.2 and 2.3, where self-motion was allowed. It is possible that self-motion helped them to perceptually segregate the reflections of the two lateral walls into individual streams, as predicted by Kondo et al. [15]. This might have helped them to overcome the bias imposed by a nearby lateral wall. In Experiments 2.1 and 2.2, subjects made systematic front-back errors, which were reduced in Experiment 2.3, when head motion was allowed. The occurrence of such confusions and the influence of head motion are well documented for sound source localization with non-individualized head-related transfer functions (HRTFs) [35]. However, in the current experiments, front-back confusion only occurred at position M2. This shows that subjects were not confused by the non-individualized HRTFs, but by the acoustic similarity of opposite orientations at this specific position. When head motion was allowed, subjects moved back and forth between certain orientations several times quite fast, i.e. they overcame the problem of acoustic similarity by comparing remote orientations directly, using a high angular speed of head motion. The psychophysical results confirm that this strategy was more effective than the slow scanning strategy subjects employed in all other conditions. However, subjects went back to slow scanning in the control experiment (where we increased the speed limit of the virtual corridor rotation). So, here, subjects did not exploit the increased speed limit, in contrast to the fast head rotations they used in Experiment 2.3. This shows that the vestibular and proprioceptive components of self-motion were crucial to calibrate the perception of the angle that was covered during rotation and to create a cognitive map of the virtual corridor. Kolarik et al.
[36] have shown that blindfolded sighted subjects are able to use echoic spatial information from a sensory substitution device (SSD), in combination with body-scaled information, for accurate motor adjustments of their shoulder position when passing through an aperture. However, the authors point out that human echolocation with self-generated sounds critically differs from using spatial information obtained with an SSD, since SSDs produce ultrasound and have built-in signal processing, whereas human echolocation involves comparing sound emission and echo. They conclude that it remains to be determined whether echolocation with self-generated sounds can be used for tailoring precise motor adjustments. A recent review of the literature on human echolocation [37] affirmed the need for further work with respect to locomotive guidance through echolocation. The current results show that human subjects can use echo-acoustic information from self-produced vocalizations to turn their body and/or head to a desired heading with high accuracy. This shows that active human echolocation gives rise to internal representations which allow for precise locomotor adjustments and thus safe navigation through one's environment. Our findings constitute an important link between previous studies investigating echo-acoustic obstacle detection and localization under laboratory conditions on the one hand, and the real-life practicality of this information for blind people in evoking precise motor responses for collision avoidance on the other. Conclusion In a 2AIFC experiment, we showed that sighted human subjects can be trained to use echo-acoustic cues to discriminate between different orientations in enclosed spaces with high accuracy. For this task, binaural comparison of the perceived echoes was crucial. A second experiment, in which subjects adjusted their orientation in the VEAS via rotations, showed that dynamic acoustic cues facilitate echo-acoustic orientation, especially when the subjects move relative to the fixed VEAS and not vice versa. Thus, the current study shows that vestibular and proprioceptive information facilitates echo-acoustic orientation in humans. Ethics statement. The experiments were ethically approved by the Ethikkommission der Medizinischen Fakultät der LMU München (project no. 359-07). Subjects signed a written consent form that had been approved by the ethics committee. Data accessibility. The datasets supporting this article have been uploaded as part of the electronic supplementary material.
7,232.4
2014-11-01T00:00:00.000
[ "Psychology", "Biology", "Physics" ]
Pairwise Biological Network Alignment Based on Discrete Bat Algorithm The development of high-throughput technology has provided a reliable technical guarantee for the increasing amount of available data on biological networks. Network alignment is used to analyze these data to identify conserved functional network modules and understand evolutionary relationships across species. Thus, an efficient computational network aligner is needed. In this paper, the classic bat algorithm is discretized and applied to network alignment. The bat algorithm initializes the population randomly and then searches for the optimal solution iteratively. Based on the bat algorithm, the global pairwise alignment algorithm BatAlign is proposed. In BatAlign, the individual velocity and position are represented by a discrete code. BatAlign uses an objective-function-based search with the number of conserved edges as the objective function. The similarity between the networks is used to initialize the population. The experimental results showed that the algorithm was able to match proteins with high functional consistency and reach a relatively high topological quality. Introduction With the development of high-throughput technology, such as the yeast two-hybrid system [1], an increasing amount of biological data is being modeled as biological networks. According to the different meanings of nodes and edges when the networks are built, the networks can be classified as protein-protein interaction (PPI) networks [2], gene regulatory networks [3], and metabolic networks [4]. Biological systems complete a series of biological processes through PPIs, rendering the study of PPI networks of great significance [5]. Network alignment is a more efficient method for analyzing biological networks than biological experiments [6], and can be used to discover functional modules among networks [7] and predict the unknown functions of proteins [8]. Homologous protein pairs of less-studied biological networks can be discovered by comparison with biological networks that have been more extensively studied, to detect potential functions of unknown proteins [9,10]. In local network alignment, however, similar proteins may be mapped to dissimilar protein nodes [24]. Therefore, the concept of global network alignment has been proposed [25]. Global network alignment is aimed at discovering the overall similar mapping relationship between networks [26]. At present, a large number of global alignment algorithms have been proposed, such as IsoRank [25], the GRAAL family of algorithms [27][28][29][30], NETAL [31], MAGNA [32], MAGNA++ [33], SANA [34], ModuleAlign [35], AligNet [36], and IBNAL [37]. In IsoRank, the similarity of nodes between the networks, calculated by the PageRank algorithm, is used to guide a greedy algorithm to complete the alignment. The GRAAL family of algorithms includes GRAAL, MI-GRAAL, C-GRAAL, and L-GRAAL, all of which are based on Graphlet degree similarity. NETAL first constructs an alignment score matrix, and then a greedy strategy is adopted to update the scores until all nodes in the first network are aligned with nodes in the second network. MAGNA is an objective-function-based alignment algorithm that uses a genetic algorithm for searching. MAGNA++ is an optimization of MAGNA that optimizes both structure and sequence similarities and provides a friendly graphical interface. SANA is also objective-function-based and uses a simulated annealing search algorithm for alignment.
Both the ModuleAlign and AligNet algorithms incorporate the idea of modularity into network alignment. IBNAL develops a clique-based index to measure the topology of the proteins. Within the framework of objective-function-based search algorithms, this paper discretizes the bat algorithm [38] and proposes the BatAlign algorithm. First, the similarity matrix is constructed by combining biological and topological similarity information. The sequence similarity adopts the BLAST bit-score [39], and the similarity of the network structure is evaluated by considering the neighbors of the nodes, to further improve the similarity between networks. A greedy search is then used to generate the initial population, in which the pair of nodes with the maximum score is chosen and aligned. Finally, the alignment results are obtained by optimizing the initial population. By building a coarse similarity score matrix to guide the initialization, BatAlign can shorten the search time to convergence compared with random initialization of the population. Our main contributions are summarized as follows. (1) We propose BatAlign, which uses a discrete bat algorithm for network alignment; the main idea of BatAlign is to iteratively update the bat position under the guidance of the bat velocity. (2) The network topology information and node sequence information are combined to calculate the node similarity, which guides the construction of the initial population; with this initialization mechanism, BatAlign can obtain a good biological score and a relatively high topological score. The related work on network alignment is introduced in the first section. The framework and theory of the BatAlign algorithm are explained in the second section. In the third section, BatAlign is compared with other state-of-the-art algorithms on synthetic and real networks. The work of this paper and future prospects are presented in Section 4. Materials and Methods 2.1. Problem Definition. Assume that the two networks to be aligned are G1(V1, E1) and G2(V2, E2), where V1, V2 are the node sets of networks G1, G2, respectively, and E1, E2 are the edge sets of G1, G2, respectively. Without loss of generality, assuming that |V1| ≤ |V2|, the small network G1 is the source network and the large network G2 is the target network. Global network alignment finds a mapping f : V1 ⟶ V2, which aligns the nodes in the small network to the nodes in the large network one by one, so as to maximize the overall similarity between the networks. The similarity of node pairs between networks usually combines the similarity of topology and sequence. In this paper, the topology of the network is considered through the neighbors of a node, and the sequence similarity is combined with it to generate the similarity matrix between the networks (Equation (1)), where S represents the similarity matrix between the networks, B represents the sequence similarity matrix of nodes between the networks, and A1 and A2, which encode the topological structure of the nodes, represent the adjacency matrices of networks G1 and G2, respectively. Bat Algorithm. The bat algorithm [38] is a swarm intelligence optimization algorithm that simulates the echolocation behaviour of bats. The initial population is generated randomly, and then the optimal solution is searched iteratively.
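To make this construction concrete, here is a minimal Python sketch of such a combined similarity matrix. The exact normalization of Equation (1) is not recoverable from the extracted text, so the neighbor-averaged topological term and the role of the weight alpha below are assumptions, not the paper's verbatim formula.

import numpy as np

def similarity_matrix(A1, A2, B, alpha=0.4):
    # A1 (n1 x n1), A2 (n2 x n2): adjacency matrices of G1 and G2
    # B  (n1 x n2): normalized BLAST bit-score matrix between the node sets
    # alpha: weight of sequence similarity, as in Equation (1)
    deg1 = A1.sum(axis=1, keepdims=True)   # degrees of G1 nodes, shape (n1, 1)
    deg2 = A2.sum(axis=0, keepdims=True)   # degrees of G2 nodes, shape (1, n2)
    # assumed topological term: average sequence similarity of the neighborhoods
    topo = (A1 @ B @ A2) / np.maximum(deg1 * deg2, 1)
    return alpha * B + (1 - alpha) * topo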
The new solution is generated during the search by adjusting the frequency f (Equations (3) and (4)); when the rate of pulse emission r is smaller than a random number, a local solution is generated around the selected best solution (Equation (5)). The bats can also adjust the loudness (Equation (6)) and the rate of pulse emission (Equation (7)). Discretization. The bat algorithm is discretized for network alignment, which is reflected in two aspects: position coding and velocity coding. Position discretization: in both networks, the nodes are numbered from one, and each node number is unique in its own network. Each individual position in the population represents an alignment of the entire network; the position is a vector X with n1 components, whose entries are nodes of G2. Velocity discretization: an individual velocity is represented by a vector V with n1 components, whose entries are 0 or 1. The values 0 and 1 represent the flying velocity of one node in a network, where 0 means keeping the solution of this node, namely not flying, and 1 means that the solution of this node may fly randomly. Individual initialization: the position and velocity of each individual need to be initialized. The initial position is an alignment generated by the greedy algorithm under the guidance of the similarity matrix. For example, assuming the similarity matrix of two networks has been obtained by Equation (1), the similarity matrix is shown in Figure 1(c). For each node in the source network, BatAlign identifies the node with the highest similarity; thus, position x_i is obtained, as shown in Figure 1(d). The method for initializing the velocity is given in Equation (8), and the velocity of a conserved node is 0. For example, assuming the individual position shown in Figure 2(b) has been obtained, the velocity is obtained as shown in Figure 2. Due to the incompleteness of the similarity between network nodes, a simple greedy algorithm may not directly align all the nodes. Therefore, unaligned nodes are randomly mapped to generate all the individuals in the population (Figure 3). That is, only the nodes that have similarity are aligned first, while the nodes that do not have similarity are aligned randomly. Individual and Population Iteration. The individual iteration process is composed of two parts: generating new individuals and updates. Generating new individuals includes two parts: updating the velocity and updating the position. For an individual, the method for updating the velocity is given in the equations below. (Figure 1 caption: (a) and (b) are the source network and target network, respectively; (c) is the similarity matrix. Assuming the similarity matrix has been obtained, the most similar node pair is chosen according to the matrix, and a2 is mapped to b3. Nodes are not aligned repeatedly; the next most similar node pair is a3 and b4, so a3 is mapped to b4; in the same way, a1 is mapped to b1 and a4 is mapped to b5; thus the individual position is obtained as shown in (d).) (Figure 3 caption: Assuming the similarity matrix in (a) has been obtained, the position is obtained from the matrix as shown in (b): a2 is mapped to b3, a3 to b4, a1 to b1, and a4 to b5. As nodes are not aligned repeatedly, the matrix in (c) is obtained. In this case, the similarities of a5 with the other nodes are 0; thus a5 is randomly aligned with b2, b6, b7, or b8, and a6 is also aligned randomly. By mapping unaligned nodes randomly, the positions of individuals with population size n are obtained as shown in (d).)
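Before turning to the iteration equations, the similarity-guided initialization just described can be sketched in Python as follows. The tie handling, the random completion of nodes without similarity, and the exact velocity rule of Equation (8) (velocity 0 for similarity-aligned nodes, 1 for randomly completed ones) are assumptions reconstructed from the prose.

import numpy as np

def greedy_init(S, rng):
    # S (n1 x n2): similarity matrix from Equation (1)
    n1, n2 = S.shape
    pos = np.full(n1, -1)                  # pos[i] = G2 node aligned to G1 node i
    vel = np.ones(n1, dtype=int)           # assumed Equation (8): 1 = free to fly
    work = S.astype(float).copy()
    for _ in range(n1):
        i, j = np.unravel_index(np.argmax(work), work.shape)
        if not np.isfinite(work[i, j]) or work[i, j] <= 0:
            break                          # remaining nodes share no similarity
        pos[i], vel[i] = j, 0              # conserved node: velocity 0
        work[i, :] = -np.inf               # nodes are not aligned repeatedly
        work[:, j] = -np.inf
    unused = list(set(range(n2)) - set(pos[pos >= 0].tolist()))
    rng.shuffle(unused)
    for i in np.where(pos < 0)[0]:         # map still-unaligned nodes randomly
        pos[i] = unused.pop()
    return pos, vel

# usage: pos, vel = greedy_init(similarity_matrix(A1, A2, B), np.random.default_rng(0))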
In these equations, v_ij^t is the velocity of the i-th individual in the j-th dimension during the t-th iteration, x_ij is the position of the i-th individual in the j-th dimension, and x_j* is the optimal solution in the j-th dimension. The intermediate velocity term and the frequency f_i are calculated from the corresponding equations, where f_max and f_min represent the maximum and minimum frequency, respectively; in BatAlign, f_max is set to 1 and f_min to 0. Figure 4 shows the process of individual iteration. The global search method is given in Equation (12): a node with a velocity of 0 is retained, while the remaining nodes, which are still to be aligned, are put into a set U; the selection operation is represented by σ. Figure 5 shows how the global search generates a new position. The local search method is given in Equation (13), where the set C is composed of nodes with a velocity of 1: x_ij^t = σ(C) if v_ij^t = 1, and x_ij^t = x_ij^(t-1) otherwise. An example of local search is shown in Figure 6. The update operation is performed when the current loudness of the individual is greater than a random number between 0 and 1 and the value of the objective function of the new position is larger. The objects of the update operation include the velocity, position, pulse rate, and loudness. The objective function used in this study is the number of conserved edges: the more conserved edges, the larger the objective function. In each iteration, the individual with the highest score is chosen as the optimal solution in the population. When BatAlign has run T iterations, or the optimal solution remains the same after N consecutive checks, the optimal solution is output as the final alignment. Results and Discussion 3.1. Experimental Dataset. Synthetic networks were used, retrieved from NAPAbench2 [40], a synthetically constructed network alignment benchmark including three types of networks: Crystal Growth (CG), Duplication Mutation Complementation (DMC), and Duplication with Random Mutation (DMR). The numbers of nodes and edges of the three networks are shown in Table 1. The real-network dataset was obtained from the BioGRID database [41]. The test species include Rattus norvegicus (RN), Schizosaccharomyces pombe (SP), Caenorhabditis elegans (CE), and Mus musculus (MM). Information on the real networks is provided in Table 2. The similarity scores in the BioGRID datasets were BLAST bit-scores computed with the BLAST package from NCBI (https://www.ncbi.nlm.nih.gov/). Gene Ontology terms [42] were used as standard functional annotations, and GO annotations were extracted from NCBI's Entrez Gene database [43]. Evaluation Metrics. The network alignment quality was evaluated in two aspects: topology and biology. Edge conservation under an alignment has so far been evaluated using three measures: Edge Correctness (EC) [27], Induced Conserved Structure (ICS) [44], and the Symmetric Substructure Score (S3) [32]. S3 has been shown to be superior to EC and ICS, since EC only penalizes alignments from sparse graph regions to dense graph regions, and ICS only penalizes alignments from dense graph regions to sparse graph regions, whereas S3 considers both aspects simultaneously. S3 was used to evaluate the topological similarity of an alignment.
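Before giving the formula for S3, the sketch below pulls together the individual iteration described above (velocity update, global/local search, loudness-gated acceptance) with the conserved-edge objective and the S3 metric. Because the exact forms of Equations (9)-(13) are not recoverable from the extracted text, the velocity rule, the sets U and C, the acceptance test, and the omission of one-to-one collision handling are simplifying assumptions reconstructed from the prose.

import numpy as np

def conserved_edges(pos, A1, A2):
    # objective function: number of G1 edges preserved under the mapping pos
    i, j = np.nonzero(np.triu(A1))
    return int(A2[pos[i], pos[j]].sum())

def s3(pos, A1, A2):
    # Symmetric Substructure Score, Equation (14)
    c = conserved_edges(pos, A1, A2)                  # |f(E1)|
    e1 = int(np.triu(A1).sum())                       # |E1|
    e2 = int(np.triu(A2[np.ix_(pos, pos)]).sum())     # edges of G2 induced by f(V1)
    return c / (e1 + e2 - c)

def iterate(pos, best, A1, A2, loudness, pulse_rate, rng):
    # assumed velocity rule: a node already agreeing with the best stays put
    vel = (pos != best).astype(int)
    new = pos.copy()
    if rng.random() > pulse_rate:
        # local search around the best solution (in the spirit of Equation (13)):
        # free nodes are redrawn from the candidate set C built from the best
        C = [best[k] for k in np.where(vel == 1)[0]]
        for k in np.where(vel == 1)[0]:
            if C:
                new[k] = C.pop(rng.integers(len(C)))
    else:
        # global search (in the spirit of Equation (12)): free nodes are
        # permuted among the target nodes they release (the set U)
        free = np.where(vel == 1)[0]
        new[free] = rng.permutation(new[free])
    # accept only if louder than a random draw and more edges are conserved
    if rng.random() < loudness and conserved_edges(new, A1, A2) > conserved_edges(pos, A1, A2):
        return new, vel
    return pos, vel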
The higher the S3 value, the more analogous the structure conserved by the alignment. S3 was proposed in MAGNA and is formulated (Equation (14)) as S3 = |f(E1)| / (|E1| + |E2(G2(f(V1)))| - |f(E1)|), where f : V1 ⟶ V2 represents the alignment and |f(E1)| is the number of edges from the smaller network G1 that are conserved by the alignment. The formulation of f(E1) is given in Equation (15): f(E1) = {(f(u), f(v)) ∈ E2 : (u, v) ∈ E1}. |E2(G2(f(V1)))| is the number of edges of the subnetwork of G2 induced by the aligned node set, where f(V1) is given in Equation (16): f(V1) = {f(u) : u ∈ V1}. The biological quality of a network alignment was evaluated by two measures, Gene Ontology consistency (GOC) [45] and Average Functional Similarity (AFS) [46]. High GOC and AFS values indicate high functional consistency of the alignment. GOC is based on the Gene Ontology (GO) consistency of the aligned pairs of proteins. GO terms describe biological properties of a protein such as Cellular Component (CC), Molecular Function (MF), and Biological Process (BP). Proteins with similar GO terms are assumed to be functionally similar. GOC can be computed (Equation (17)) as GOC = Σ_{u ∈ V1} |GO(u) ∩ GO(f(u))| / |GO(u) ∪ GO(f(u))|, where GO(u) denotes the set of GO terms annotating a protein u. AFS is calculated based on the semantic similarity of GO terms and depends on the distance between them in the ontology. Semantic similarity measures can be used to calculate the functional similarity in each category of BP, MF, and CC. The semantic similarity is calculated using a graph-based method, Wang; the detailed working of the Wang method is illustrated in [47]. AFS is defined as the average, over the aligned pairs, of the semantic similarity s_c between nodes u and f(u); the tool of [48] was used for the semantic similarity calculation. Experimental Results and Analysis. The number of iterations in BatAlign was set to 1000 and the size of the population to 40, with N = 10; that is, when the optimal solution has not been updated after 10 checks, the current optimal solution is output as the final alignment result. BatAlign makes use of the parameter α in Equation (1), where α determines the relative importance of sequence and topological similarity; α = 1 implies that only sequence information is used. To ensure a fair comparison, the parameter α was set to 0.4 in all the algorithms that use α to control the weight of topological similarity and sequence score; this value is also the one recommended by ModuleAlign. To verify the effectiveness of BatAlign, the algorithm was tested on synthetic and real networks and compared to several state-of-the-art algorithms (i.e., NETAL [31], ModuleAlign [35], L-GRAAL [30], MAGNA [32], and IBNAL [37]). NETAL adopts only topological information to construct the alignment. ModuleAlign is an algorithm based on modularity. L-GRAAL is the representative of the GRAAL family of algorithms and integrates Graphlet degree similarity and sequence similarity. MAGNA uses a genetic algorithm, considers only topological similarity, and is objective-function-based. IBNAL makes use of a novel clique-based index. The performance of the algorithms on the synthetic networks, based on S3, is evaluated in Figure 7. In CG networks, the performance of BatAlign was inferior to L-GRAAL, NETAL, and ModuleAlign, while the S3 of BatAlign was 1.3-120 times higher than that of IBNAL and MAGNA. In DMC networks, the score of BatAlign was 0.1-3.8 times higher than those of NETAL, MAGNA, and IBNAL. In DMR networks, the performance of BatAlign was inferior to ModuleAlign, L-GRAAL, and NETAL.
The S3 of BatAlign was 3.8-4.6 times higher than those of MAGNA and IBNAL. These results show that the topological quality of BatAlign is middling. The algorithms are compared based on the GOC score on the synthetic networks in Figure 8. BatAlign presented good biological scores in DMC: its score was only slightly lower than ModuleAlign's, outperforming the other algorithms. The score of BatAlign was lower than those of ModuleAlign and L-GRAAL in CG and DMC, while its performance was good compared to the other aligners. The topology of the real networks is more complex than that of the synthetic networks. Although the performance of BatAlign was not as good as ModuleAlign and L-GRAAL on synthetic networks, BatAlign performed well on real networks. BatAlign can identify functionally consistent proteins, which is helpful for biological research. Figure 9 shows the results of the different algorithms on the real networks. The performance of BatAlign with respect to S3 was low, but BatAlign outperformed the other aligners in terms of GOC; in particular, the alignment between RN and MM achieved an excellent biological score, which may be because these two species are closely related genetically. NETAL performed best with respect to the S3 score, but it had a very low GOC score, perhaps because NETAL is a topology-only method; it can achieve high topological quality at the expense of biological quality. However, GOC is the more important of the two metrics. In Figure 9, the S3 of ModuleAlign and MAGNA was higher than that of BatAlign, but they scored low GOC values, and their alignment results may miss node pairs with high functional similarity. The results showed that BatAlign performed much better than IBNAL in terms of both S3 and GOC scores. The GOC of BatAlign was slightly lower than that of L-GRAAL when aligning RN and CE; however, BatAlign was superior to L-GRAAL when aligning the other networks. AFS provides an alternative way to describe the biological quality of an alignment. Figure 10 shows the performance of each aligner in terms of AFS. The AFS of BatAlign was 20-50%, 19-54%, and 11-43% higher than those of the NETAL, ModuleAlign, MAGNA, and IBNAL aligners, in terms of BP, MF, and CC, respectively. The performance of L-GRAAL was higher than that of BatAlign when mapping RN to CE and CE to MM; on the other hand, BatAlign outperformed L-GRAAL when mapping the other networks. Overall, BatAlign has good biological quality compared to the other aligners. On synthetic networks, BatAlign had high GOC scores among the selected aligners and competitive S3 scores. On real networks, BatAlign performed well in terms of the biological score, with a relatively high topological score. Thus, BatAlign reached a relatively high topological quality and a superior biological quality. The experiments showed that BatAlign may be a useful tool for predicting the functions of unknown proteins in less studied species through network alignment with species that have been studied more thoroughly. Conclusions and Prospects BatAlign, based on a discretized bat algorithm for the global alignment of two networks, is proposed in this paper. BatAlign discretizes the bat algorithm and uses 0 or 1 to represent the flying velocity. The population of BatAlign is initialized according to the similarity score matrix. A new solution is generated through a global and a local search, performed according to the velocity. The number of conserved edges is used as the objective function.
BatAlign overcomes the shortcoming of other objective-function-based search algorithms, which initialize the population randomly and can only rely on a larger population and many iterations to find the optimal solution. The results of BatAlign are comparable to those of other state-of-the-art aligners. Experiments showed that BatAlign is a pairwise global biological network alignment algorithm that performs well in terms of biological quality. Future work will include the parallelization of BatAlign and its expansion from two to multiple networks.
4,832.8
2021-11-03T00:00:00.000
[ "Computer Science", "Biology" ]
Examination of Urban Agriculture Contribution to the Household Livelihood Outcome: the Case of Bahir Dar City, Ethiopia: The general objective of this study is to examine the contribution of urban agriculture to household livelihood in the case of Bahir Dar city, Ethiopia. The motives for this study were the problems of unemployment, growing poverty, hunger, poor diets, bad air conditions, and depression, as well as the special opportunities provided by the city, including the growing demand for food, proximity to markets, and the availability of cheap resources such as urban organic waste. The study used both primary and secondary data sources. Stratified quota sampling was used to collect the primary data. Average annual urban agricultural net revenue per capita was taken as a common measure of all urban agricultural outcomes for the target predictor. Other predictor variables assumed to be determinants of the urban agriculture contribution were also included in the model. The binary logistic regression technique was used to estimate the logit coefficients. The study found a greater correlation of livelihood security with average annual urban agricultural net revenue per capita than with average annual non-urban-agricultural net revenue per capita. The correlations of food, economic, education, health, and empowerment security with average annual urban agricultural net revenue per capita were about 0.29, 0.6, 0.19, 0.21, and 0.22, respectively. This target explanatory variable had a positive, significant effect on the food security dependent variable at the 5% significance level, and at its mean value the probability of more-agreed food security was about 0.77, with the other predictors also held at their mean values. Finally, this study suggests that urban agriculture contributes to the household livelihood outcomes of food, education, health, empowerment, and economic security, and should be considered in urban planning. Introduction Urban agriculture is the practice of rural-style agriculture in an urban context; it is a primary occupation practiced within an integrated urban socio-economic and ecological system, and it is used as a strategy by many urban dwellers to improve their livelihood and overall well-being [18]. By the year 2050, 66.4 percent of the world's population is expected to be living in cities [15]. Given this population growth, [23] advise that it is necessary for urban households to embark on urban farming as a means of meeting food demand. Income as well as financial, technical, and educational support is essential to maximize the benefits of urban agriculture [18]. In developing countries, urban agriculture is one of several food security options for household livelihood security; similarly, it is one of several tools for making productive use of urban open spaces, treating urban waste, saving, generating income and employment, and managing freshwater resources more effectively [2], and it is practiced by roughly 10-70% of urban households in third-world countries [1]. The study helped to assess the contribution of UA to the livelihood of agricultural participant households, and to present the contribution of UA to the livelihood of urban households who practice agriculture, either alongside another job or as a major job, to urban planners and other researchers.
Lack of formal employment opportunities, growing poverty, hunger, poor diets, physical inactivity, air pollution, depression, anxiety, and financial insecurity, as well as the special opportunities provided by the city, including the growing demand for food, proximity to markets, and the availability of cheap resources such as urban organic waste, were the motives for this study [6], cited in [11]. Population growth and urbanization are also challenges for the world's cities. By 2050, more than two-thirds of the world's population is estimated to be living in urban areas, which could mean a net addition of 2.4 billion people to towns and cities, more than the projected total global population increase of 2.2 billion people worldwide [7]. In Ethiopia, 19.4% of people lived in urban areas in 2015; between 2007 and 2015, nineteen million people were added to the population, the population was growing by 2.9 percent per year, and it was expected to nearly double in less than 33 years, to around 185 million in 2050 [20,4]. In Bahir Dar city, the total population was estimated to be 219,535 in 2015 [10]. In addition to these problems, some households' livelihoods depend on urban agriculture, even though untreated waste affects urban farmers and the upgrading of methods or new technologies for farmers' inputs remains at a subsistence level [3,8]. For the above problems, the integration of urban agriculture into cities and towns is a remedy that increases vegetation within the city, improving air quality and reducing the probability of urban populations suffering diseases worldwide [15]. That is, households live with good air quality and healthy lives, with the capability of holding sufficient household assets, and household members become strong and successful in undertaking livelihood strategies (activities) to obtain improved livelihood outcomes. A considerable theoretical literature exists related to this study. The theoretical work most closely related to the title of this study, written by [25], concerns the social, health, and economic impacts of urban agriculture. Even though it does not differentiate whether the urban agricultural impacts accrue to the community or to the household, it was important literature for this study in understanding the contribution of urban agriculture. The most related and recent empirical studies are the following. The first is on urban agriculture, poverty, and food security in developing countries by [1]; using nationally representative household data, they found a positive association between urban agriculture and dietary adequacy and diversity. The second paper is on the contribution of urban agriculture to employment, income, and food security in Kisumu municipality, Kenya, by [19]. That study concludes that urban agriculture supplements the food requirements of the urban poor on the one hand and is a source of income for the few commercial urban agricultural participants on the other. The third paper considers urban agriculture as a way of advancing food and nutrition security in Malaysia [24], and its authors suggest a higher probability of improving food security by practicing urban agriculture. The fourth paper is on the contribution of urban agriculture to households' livelihoods in Roysambu ward, Nairobi, by [10]. He suggested that practicing urban agriculture does not serve as a major income source but rather supplements family incomes.
The fifth paper is on the contribution of urban agriculture to household food security in Ibadan Metropolis, Nigeria, by [21], and its authors conclude that urban agriculture was profitable. The sixth paper is on urban agriculture and its effect on poverty alleviation in Ibadan Metropolis, Nigeria, by [23]; they concluded that the vegetable enterprise was profitable and could help to reduce poverty to a minimum level. In the study area, there were few previous studies even slightly related to the contribution of urban agriculture to the livelihood of urban farmers. One such paper is about urban and peri-urban farming systems and the utilization of natural resources in Bahir Dar city and Dangla town, Ethiopia, written by [13]. They reported that crop-livestock integration plays a vital role in smallholder farming systems and that about 33.3% of the respondents practiced crop-livestock farming. All of the above empirical literature shows the contribution of urban agriculture to particular household livelihood security categories. This study differs from the above empirical papers in its use of the five broad livelihood security categories of [17], which is broader than taking only one livelihood security category of a household. In addition, this study tries to reconcile two opposing views: some of those papers imply that UA supplements other livelihood strategies and is not a major livelihood activity, while others say that UA serves as a major livelihood activity and is profitable. This study fills these gaps by examining the contribution of UA to the five broad livelihood security categories (food, health, education, empowerment, and economic) in Bahir Dar city and its surroundings, for the office of urban agriculture and for city planners, as a solution to the above-mentioned problems. The study tries to answer the following questions: 1. What are the determinants of the UA contribution to the food security status of agricultural participant households in Bahir Dar city? 2. Does urban agricultural revenue have a greater association with higher livelihood security than non-agricultural revenue in Bahir Dar city? 3. Does UA contribute to minimizing food insecurity in Bahir Dar city? The general objective is to examine the contribution of urban agriculture to household livelihood outcome security in Bahir Dar city, Ethiopia. The following three specific objectives were pursued: 1) to show the association of the UA contribution with the livelihood security status of agricultural participant households in BDC; 2) to examine the contribution of UA to the food security status of agricultural participant households in BDC; and 3) to estimate the probability of households more agreed on food security in response to changes in UA revenue in BDC. The study is important for understanding the current situation of urban agriculture. It encourages private and institutional researchers to allocate more resources to developing and promoting urban agriculture, and it serves as a reference for other studies interested in related issues. Due to time, financial, and other constraints, the study was geographically limited to the Bahir Dar city administration area. The area was selected because of its agricultural potential and the researchers' familiarity with the study area. The study is also conceptually limited to the contribution of UA to the five broad livelihood security categories of households [17]. The study has the following major limitations.
One is that the perception-based measure of the dependent variable is not as good as a quantitative measurement. Another limitation was that the interviewed households did not remember all the needed information correctly, especially agricultural revenues and costs; self-production for their own consumption and for further agricultural production was a challenge to evaluate. To overcome these challenges, the study used the average of the household head's maximum (at most) and minimum (at least) estimates of the required financial data. A further limitation of the study is the broadness of the title. For this reason, only food security was studied through both descriptive and model analysis; the remaining four livelihood security categories of [17] (empowerment, health, education, and economic) were studied using descriptive analysis. Overall, the study's solutions to the challenges encountered made the study valuable and important. The Study Area The study was conducted in the Bahir Dar city administration area, which has been the capital of Amhara regional state since 1991 and is situated 566 km northwest of Addis Ababa, the capital city of Ethiopia [13]. Bahir Dar lies on a very gentle slope, with an elevation ranging between 1783 and 1889 meters above sea level; it occupies the headstream of the Blue Nile basin, and the city covers a total area of 256.4 km2 [13]. The city administration area is organized into 6 sub-cities and 11 peri-urban kebeles. Type of Data The study used both primary and secondary data sources. The primary data were of a discrete type for each indicator of the five broad livelihood security categories of [17] that formed the dependent variables, except for economic security. The study used discrete data because livelihood security is difficult to measure quantitatively: collecting quantitative data would have required substantial funds and a longer time, and most of the variables are qualitative by nature (e.g., empowerment). Therefore, given the available resources, discrete data were appropriate for all livelihood outcomes except economic security. For economic security, it was relatively easy to collect quantitative data, so the study used a continuous form for this variable. The independent variables were of both dummy and continuous types. Measurement Type and Variable Definition The five livelihood security categories of [17] have many indicators, which were measured on a five-point Likert-type ordinal scale of agreement, except for economic security: strongly disagree, disagree, neither agree nor disagree, agree, and strongly agree [29], with respective values of 1, 2, 3, 4, and 5. Economic security was evaluated similarly, with very low, low, medium, high, and very high, by classifying the economic security index into five levels. On the dependent-variable side, the food security indicators are based on the experience questions of [8]. These experience measures are: you were not worried; you did not run out of food; you ate many kinds of foods; your family did not skip a meal; your family did not eat less than you thought you should; your family did not run out of food; your family were never hungry but unable to eat; and your family never went without eating for a whole day. For the health security indicators, the study used perception questions identified by [27] and also used by [24].
Such questions were: your family ate healthy and nutritious food over the last year; your family never suffered stomach pain; your family were happy over the last year; and your family were physically strong. For empowerment, [24] identified and used three indicators: community participation, access to services, and participation in the planning process. However, this study aimed to know the contribution of urban agriculture to the empowerment of urban farmers in particular, so the participation of all household members in decision making, and especially female decision making in agricultural work, was the important indicator variable. For education security: it is clear that one part of household requirements on which agricultural income is spent is education [22], and agriculture serves as a teaching resource and natural laboratory [9]. To draw a connection between urban agriculture and education security, experience-based questions were used, namely that agricultural revenue enabled your family to learn in school, and that agriculture was a natural teaching material for your family over the last year. For economic security, the study did not use a perception measure like the previous dependent-variable measures. Economic security means access to adequate means of securing household livelihood outcomes [27]. In turn, households' economic security is reflected in their expenditures, which come from household income, the credit and saving levels of households, and households' physical assets. To construct the economic security index, however, it was enough to use credit, saving, and annual household revenue, because a household's physical asset level is directly or indirectly related to these selected variables. The construction of the indexes follows formulas (1) and (2) below, and then (3). The credit level was valued from one to five in such a way that more indebted households received lower values and less indebted households received higher values. Household annual net revenue and the household saving level were also valued from one to five, but in the reverse way of the debt level. Finally, the study constructed a comprehensive index and then classified it into two. Each indicator was converted into a sub-index using the formula: z_ij = (x_ij - x_min) / (x_max - x_min), (1) where j is the indicator variable and z_ij is the sub-index of indicator j for household i. This formula was used to construct the five broad livelihood security categories of [17]. The minimum and maximum values of the indicators are one and five, respectively, except for the economic security indicators. After that, the index for each of the five broad livelihood security categories of [17] was determined using the following formula: SI_mi = (1/L) Σ_j z_ij, (2) where L is the number of indicators, SI_mi is any of the five security indexes of household i, and m is a subscript denoting economic, food, education, health, or empowerment security, i.e., the level of agreement on a livelihood security (it can be lower or higher). Note: if the value of SI_mi equals 0.5 or greater, the household is classified as more agreed on the specific livelihood security; otherwise, as lower agreed.
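A minimal Python sketch of this index construction follows, assuming Equation (1) is the min-max normalization over the 1-5 Likert range and Equation (2) is the simple average of the L sub-indices, as the surrounding text indicates:

def sub_index(x, x_min=1.0, x_max=5.0):
    # Equation (1): min-max normalize one indicator to [0, 1]
    return (x - x_min) / (x_max - x_min)

def security_index(indicators):
    # Equation (2): mean of the L sub-indices for one household
    subs = [sub_index(x) for x in indicators]
    return sum(subs) / len(subs)

# example: five food-security responses on the 1-5 agreement scale
si = security_index([4, 5, 3, 4, 4])      # 0.75
more_agreed = si >= 0.5                   # True -> classified as more agreed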
Having constructed the index for each category, a broad (comprehensive) level of agreement was used for each livelihood security to determine whether urban agricultural participant households in Bahir Dar city were more agreed or lower agreed in their securing of food, education, health, and empowerment, and, for economic security, whether households were relatively higher secured or lower secured, as shown in equations (1) and (2) above. The following were the explanatory variables for both the descriptive and the food security model analysis. 1) Average annual urban agricultural net revenue per capita (H): this was the main target variable, and its data were collected systematically, because households did not remember the exact revenue gained and costs incurred, especially for their agricultural livelihood activities. By nature, agricultural work may need a relatively longer time than other livelihood activities to yield what is needed at the end (revenue), and urban agricultural revenue is basically used for self-consumption and cash sale, and may also be used for other purposes such as further agricultural production. Therefore, as a solution to the above challenges, the average of the household head's at-most and at-least estimates of the family's UA revenue over the last year was taken, in all cases allocating revenue to the separate sources of agricultural revenue: fruit and vegetables, grain and teff, animal fattening, fishing, bee keeping, cattle rearing, sheep, goats, hen rearing, and other UA revenue livelihood choices. The whole average annual UA revenue from the different sources was then summed, the average annual UA cost (collected with the same method as the average revenue, for all possible cash and non-cash input costs) was deducted, and the result was divided by family size. 2) Average annual non-agricultural net income per capita (K): this variable is also one of the determinants of household livelihood security. For this variable, the study used the same method of collecting and preparing data as for the average annual urban agricultural net revenue per capita, except that it was collected monthly and then converted to a yearly figure (multiplied by 12). 3) Dependency ratio (DR): the ratio of non-working individuals to working individuals; it is negatively associated with household well-being [28], i.e., a higher dependency ratio is related to lower household well-being, so it can be a predictor of household livelihood security. 4) Average education level (AEL): a human capital proxy linked with a higher probability of household well-being [28]; [28] suggests that well-being tends to increase with the average educational level. Its measurement type is a ratio, i.e., the total sum of the education years of the literate household members divided by the total family size. This variable is not an explanatory variable for the education security dependent variable. 5) Residential land size per capita (RLSPC): it affects urban household livelihood positively [28]. To minimize the effect of family size, its value was divided by family size to convert it into per capita form. 6) Farm land size per capita (FLSPC): urban agricultural households depend on farm land size, and a larger farm land size is associated with higher urban agricultural income [5,23,28,21]. Its value was divided by family size to convert it into per capita form.
7) Location dummy (LOCDUM): urban agriculture is practiced more in the peri-urban area than in the urban area, and the location of a household's residence determines the livelihood of the household [5]. Data Source Although most of the variable data were primary, secondary data from the office of the Bahir Dar city Urban Agriculture, Land and Environment Protection Authority were used to obtain the total population of urban agricultural participants. The primary data were collected from agricultural participant households in collaboration with each kebele's urban agriculture office. Sampling Technique Stratified quota sampling was used in the study. Quota sampling was used to obtain the sample size in each stratum of the 11 peri-urban kebeles and the six sub-cities; using quota sampling, the sample size was selected proportionally in each stratum, with quotas assigned to each. Quota sampling also requires no list of the population; the study only needed the size of the population. The study used quota sampling to overcome two main challenges encountered during the fieldwork. One was that many households were unresponsive in giving true information, so quota sampling made it possible to substitute another responsive household in the same stratum. The second challenge was that time for collecting the data was scarce and research funding was unavailable. Therefore, the study used stratified quota sampling to overcome these challenges. Sample Size Determination In the Bahir Dar city administration area, 12,241 people participated in urban agriculture [3]. The data were collected from 399 urban agricultural households across 17 strata proportionally, using the formula n·p_i, where p_i is the proportion of the target population included in stratum i and n is the total sample size. Method of Data Collection The data were collected through structured and unstructured questionnaires. Urban agricultural revenue and cost data were collected as yearly information; non-urban-agricultural revenue and cost data were collected as monthly information and then converted to yearly estimates. Households did not remember their revenue and expenditure exactly, especially for agricultural production, so the average of the household head's at-most and at-least estimates was collected to overcome this challenge. Method of Data Analysis Data analysis started with a summary of the socio-economic variables. Preparing the average annual UA net revenue per capita and the other predictors in a suitable form in an Excel sheet was the first task; an Excel sheet was also used to put the dependent variables into binary form. Because of resource limitations, only food security was analyzed using a model; the remaining four livelihood security categories (education, health, empowerment, and economic security) were analyzed through correlation in the descriptive analysis. The regression output was analyzed and presented in a table. Finally, a graph was used to show the probability of the dependent variable in response to changes in the target variable while keeping the other predictors constant. Model Specification The model was specified as the following logit: logit(FSA_i) = α + β1·H_i + Σ_j βj·X_ij, where FSA is the binary food security level of agreement and H represents average annual urban agricultural net revenue per capita, the study's target explanatory variable for household i.
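A minimal sketch of this specification in Python with statsmodels, using synthetic illustrative data (the variable names follow the text; the coefficients used to simulate FSA below are placeholders, not the study's estimates):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 399                                   # sample size used in the study
df = pd.DataFrame({
    "H": rng.integers(1, 6, n),           # UA net revenue per capita, binned 1-5
    "K": rng.normal(0, 1, n),             # non-UA net revenue per capita (scaled)
    "RLSPC": rng.normal(0, 1, n),         # residential land size per capita
    "FLSPC": rng.normal(0, 1, n),         # farm land size per capita
    "DR": rng.uniform(0, 1, n),           # dependency ratio
    "AEL": rng.uniform(0, 12, n),         # average education level (years)
    "LOCDUM": rng.integers(0, 2, n),      # 1 = peri-urban, 0 = urban
})
eta = -1.0 + 0.56 * df["H"] - 0.8 * df["DR"]          # illustrative only
df["FSA"] = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-eta))).astype(int)

result = smf.logit("FSA ~ H + K + RLSPC + FLSPC + DR + AEL + LOCDUM", df).fit()
print(result.summary())                   # logit coefficients and p-values
print(result.get_margeff().summary())     # marginal effects ('mfx')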
As stated before, H was classified into five bands, valued 1 through 5 from the relatively lowest to the very highest revenue, because the logit coefficient would be too small if the variable were entered in the regression in raw birr. The other explanatory variables are the X_i's, which affect the dependent variable directly or through the target variable. Regression Technique The regression technique of the study was binary logistic regression, which is used to predict a binary dependent variable given one or more independent variables of various measurement types. This regression type helps to determine which of the independent variables have a statistically significant effect on the dependent variable [30,14]. For the reasons of the constraints mentioned in the introduction, the study used only one binary logistic regression (for food security agreement only). Result This chapter has two parts of analysis. The first part is descriptive analysis, and the second is analysis using the model. In the descriptive analysis part, the household livelihood categories and indicators and the participation in particular livelihood choices are summarized. In the model analysis part, one model is constructed to examine the urban agriculture contribution to the food security status of households. Descriptive Analysis The study first summarized the five livelihood security categories for the sampled households. Table 4 above reports the numbers of households, with percentages in brackets, for lower and more (higher) secured households in columns three and four, respectively. For example, 372 urban agricultural households were economically lower secured, about 93.23% of the sampled households, and the remaining 27 urban agricultural households were relatively higher economically secured, about 6.77% of the sampled households. In the table, households were classified as lower or more (higher) secured after constructing the comprehensive index, assigning the value zero for lower secured if the comprehensive index was less than or equal to 0.5 and one for higher secured if it was greater than 0.5. Household credit, household saving, and household annual net agricultural and non-agricultural revenue are analyzed in Table 2, which summarizes the current credit, saving, and total household annual average net revenue of UA and non-UA participant households. As shown in Table 2, the means of credit, saving, and total annual average net revenue of UA participants were about 2,261.97, 20,811.96, and 54,953.89 Ethiopian birr, respectively. Average annual UA revenue contributed to the total household annual average net revenue with a mean of 46,663.54 Ethiopian birr. As stated before, the study's economic security indicators were credit (0-50,000), saving (0-250,000), and total annual average household net revenue (3,800-186,000), with the associated minimum and maximum values in brackets, in Ethiopian birr. The study further summarized UA participant households' livelihood choices, as shown in Table 3. Note: code zero stands for not participating and code one for participating in the specific livelihood activity (choice) listed in the first and second columns, respectively; P and G stand for production and growing. As shown in Table 3, hen rearing (possibly for egg production) had the highest household participation of all the agricultural livelihood choices.
In particular, 268 households (67.17%) participated in hen rearing, and 131 (32.83%) did not. The second was fruit and vegetable growing, with 251 participating households (62.91%), while the remaining 148 households (37.09%) did not participate. The subsequent ranks, with the number of participating households in brackets, were grain and teff production (189), cattle rearing (162), non-agricultural work (153), animal fattening (69), fishing (67), other UA work (38), and bee keeping (4). Table 4 above shows the correlations of the livelihood securities with average annual urban agricultural net revenue per capita (H) and average annual non-urban-agricultural net revenue per capita (K). As the table shows, all five livelihood securities, i.e., economic, food, empowerment, education, and health, were positively associated with average annual urban agricultural net revenue per capita; their correlations were about 0.6762, 0.2953, 0.2269, 0.1954, and 0.2159, respectively. In contrast, average annual non-urban-agricultural net revenue per capita (K) was positively associated only with economic security; the remaining livelihood securities were negatively associated with non-agricultural revenue. Analysis Using the Model The best specified model, selected using the likelihood ratio test, was the following: logit(FSA_i) = α + β1·H_i + β2·K_i + β3·RLSPC_i + β4·FLSPC_i + β5·DR_i + β6·AEL_i + β7·LOCDUM_i + e_i, where FSA is food security agreement (1 for more agreed, zero otherwise), i indexes the individual household, and e_i is the deviance residual. The regression output for this best-fitted model is shown in Table 5 below. Interpretation of the Regression Output Other things remain equal in all of the following interpretations. Interpreting the logit coefficient for a one-birr increment of annual average UA net revenue per capita would not be meaningful, since a one-birr change has a negligible effect on the logit of more-agreed food security. Therefore, annual average UA net revenue per capita was classified into five bands, valued 1 to 5, one band for every 6,550 birr up to the maximum revenue of 32,667 birr. Thus, from Table 5, a 6,550-birr increment of annual average UA net revenue per capita increases the logit of more-agreed food security by 0.564, and this explanatory variable is positively significant at the one percent level. The change in probability from lower-agreed to more-agreed households (the marginal effect, mfx) when average UA net revenue per capita increases by 6,550 birr was 0.098. Annual average UA net revenue is thus highly useful for the livelihood of urban households. Annual average non-UA net revenue per capita was positive but insignificant at the 5% level and above; it has a lower logit coefficient and marginal probability than the agricultural net revenue per capita, as shown in Table 5. The per capita shares of farm land size and residential land size are positively associated with the logit of more-agreed food security: a one-hectare increase in the per capita farm land share is associated with a 1.935 increase in the logit of more-agreed food security, and, likewise, a one-square-meter increase in the per capita residential land share is associated with a logit coefficient of about 0.011. This shows that farm land size makes a greater contribution to the practice of urban agriculture than residential land size, and in turn a greater contribution to food security.
Hence it has the greater logit coefficient. Another positively significant variable was the average education level: a one-year increase in the average education level raises the logit of more-agreed food security by 0.149, through agricultural work in the cities. The ratio of non-working to working individuals was a negatively significant variable for food security agreement: a 0.1 increase in the dependency ratio is associated with a 0.804 decrease in the logit. Working people make a greater contribution to agricultural work in cities and towns. The location dummy was insignificant at the 10% significance level, as shown in Table 5. Even though it was insignificant at 10%, households living in the peri-urban area were negatively associated with the outcome, with a logit coefficient of -0.411; that is, households practicing agriculture in the city had greater food security agreement than households practicing agriculture in the peri-urban area. The following graph helps in understanding how the probability of the food security agreement level responds to increases in average annual UA net revenue per capita. From the graph, one can read the pattern of the probability of the food security agreement level against average annual UA net revenue per capita. The triangle (Δ) represents the probability for more-agreed households, and the cross (×) represents the probability for lower-agreed households regarding their food security status against average annual UA net revenue per capita. The probability of lower agreement on food security was very high at low average annual UA net revenue per capita, but with increases in average annual UA net revenue per capita this probability falls to a low level; for more-agreed households, broadly the opposite holds. Discussion In general, households living away from the city center were the more active agricultural practitioners, while households living nearer the city center practiced less agriculture and participated more in other livelihood choices. In other words, households living in the peri-urban area were more often agricultural participants, and households living in the urban area were less often agricultural participants. The strong positive correlation of livelihood security with average annual urban agricultural net revenue per capita (H) shows that urban agriculture is essential and can be a major livelihood activity for the urban poor. Both the logit coefficient and the odds ratio were positive, and the p-value of the target variable was less than 0.01. We can therefore conclude that urban agricultural revenue has a statistically significant positive effect on the food security agreement level. At the mean values of all predictors, the probability of households being more agreed on their food security was about 0.77. The study therefore supports the conclusions of [21,23,24]: urban agriculture makes a great contribution to food security and can be a major livelihood activity. Conclusion In general, urban agriculture plays a vital role for households who choose it as a means of fulfilling their livelihood outcomes. The contribution of average annual UA net revenue per capita to the change toward the more-agreed food security categories of [17], relative to lower-agreed food security, was statistically positively significant at 1%.
On average, a 6,550-birr increment of annual average UA net revenue per capita raises the logit of being a more-agreed food-secure household by 0.564, ceteris paribus. The correlations of economic, food, empowerment, education and health security with average annual urban agricultural net revenue per capita are about 0.6762, 0.29, 0.22, 0.195 and 0.215, respectively. Average annual non-urban agricultural net revenue per capita, by contrast, is positively associated only with economic security, with a correlation of 0.33. From these results one can conclude that the contribution of urban agriculture to households' food security status is great, and that it also contributes strongly to the education security of households in the cities: its revenue enables them to send their children to school, and the work itself serves as a source of practical knowledge of nature. Participating in the production of diverse nutritious and healthy food types makes a strong positive contribution to household health security, and the activity helps household members stay physically strong and mentally active. Urban agriculture also contributes positively to the empowerment status of households, because family members' decision making and female participation in agricultural activities are both positively correlated with average annual UA net revenue per capita. Urban agricultural revenue is also highly correlated with economic security, where its contribution is just as great: households with relatively higher economic status tend to have higher average annual UA net revenue per capita.
7,791
2020-12-16T00:00:00.000
[ "Agricultural and Food Sciences", "Economics", "Environmental Science" ]
Four dimensional-scanning transmission electron microscopy study on relationship between crystallographic orientation and spontaneous polarization in epitaxial BiFeO3 Spontaneous polarization and crystallographic orientations within ferroelectric domains are investigated using an epitaxially grown BiFeO3 thin film under bi-axial tensile strain. Four dimensional-scanning transmission electron microscopy (4D-STEM) and atomic resolution STEM techniques revealed that the tensile strain applied is not enough to cause breakdown of the equilibrium BiFeO3 symmetry (rhombohedral, space group R3c). The 4D-STEM data exhibit two types of BiFeO3 ferroelectric domains: one with a projected polarization vector possessing an out-of-plane component only, and the other with a polarization vector consisting of both in-plane and out-of-plane components. For domains with only out-of-plane polarization, convergent beam electron diffraction (CBED) patterns exhibit "extra" Bragg's reflections (compared to the CBED of a cubic perovskite) that indicate rhombohedral symmetry. In addition, beam damage effects on ferroelectric property measurements were investigated by systematically changing the electron energy from 60 to 300 keV. Most of the experimentally reported metastable BFO phase identifications were made on the basis of small distortions relative to the pc notation-approximated equilibrium BFO unit cell. Few of these studies have shown electron diffraction patterns from metastable BFO phases with the detailed analysis needed to evaluate symmetry using unit-cell distortions together with accurate basis atom positions. For example, nano-beam electron diffraction patterns combined with structure factor calculations (making use of the true equilibrium BFO symmetry in hexagonal notation) unambiguously demonstrated the existence of Bragg's reflections at Q (scattering vector) ≈ 4.18 nm⁻¹ that are specifically tied to a rhombohedral distortion, i.e., oxygen octahedral rotation, in the equilibrium BFO phase 21. While these Bragg's reflections can readily be used to distinguish rhombohedral BFO from other metastable BFO phases (owing to their exclusive association with rhombohedral BFO), only a few studies implementing them have been reported [22][23][24][25]. Thus, the relationship between the crystal symmetries of metastable BFO phases and their spontaneous polarization remains debated. In this study, the four dimensional-scanning transmission electron microscopy (4D-STEM) technique 26,27 is applied to an epitaxial BFO film designed to be under tensile strain using a PrScO3 (PSO) single crystal to investigate: (1) the crystal symmetry within the BFO film, (2) the ferroelectric domain structure within the BFO film in terms of the relationship between spontaneous polarization and crystallographic orientations, and (3) beam-damage effects on the measured ferroelectric domain structure. Results and discussion Figure 1a shows a high angle annular dark field (HAADF)-STEM image of the BFO film grown on a (101)o PSO substrate (space group Pnma, a = 0.5780 nm, b = 0.8025 nm, c = 0.5608 nm, α = β = γ = 90°) along the [111]o zone axis (the subscript "o" stands for orthorhombic notation) using a 120 keV electron probe 28. The ~20 nm BFO epitaxial film shows up brighter than the underlying PSO substrate because the Bi atoms within BFO, which are heavier than the Pr and Sc in PSO, provide more signal to the HAADF detector located at a collection semi-angle of 80-100 mrad 29. A green rectangle in Fig.
1a indicates the area where a 4D-STEM data set (two probe-scanning dimensions in real space and two momentum dimensions in reciprocal space) was acquired using a 120 keV electron probe, as shown in Fig. 1b-e. Figure 1b shows an example of a convergent beam electron diffraction (CBED) pattern collected at each scanning position in Fig. 1c-e. While 4D-STEM data acquisition with sub-angstrom aberration-corrected electron probes is known to be advantageous for visualizing the potential gradient across single atomic columns and the nuclear charge in GaN and SrTiO3 [30][31][32], sub-angstrom electron probes with large convergence angles cause Bragg's reflections in CBED patterns to overlap, which complicates measurement of long-range electric fields arising from ionicity because the 4D-STEM signal is dominated by the nuclear potential 33,34. Thus, a small convergence semi-angle of ~1.26 mrad is used to prevent Bragg's reflection overlaps, as shown in Fig. 1b. The spontaneous polarization orientations within the BFO domains were determined by analysis of the zeroth order CBED pattern shift along two orthogonal directions, i.e., x and y, as denoted in Fig. 1b. These shifts are known to occur due to the deflection of the incident electron beam by the average electric field over the unit cell in ferroelectric materials, whereas asymmetric intensities in conjugate disks resulting from the breaking of Friedel's law allow for polarity field measurement 26,27,33,34. While no obvious contrast resulting from spontaneous polarization is seen from the BFO layer in the HAADF-STEM image (see Fig. 1a), areas with distinctively bright and dark contrast are seen in Fig. 1c and d. The color-coded vector displacement map, based on Fig. 1c and d, is shown in Fig. 1e. Note that the intensity scales with the magnitude of the vector field and the color represents its orientation, as shown by the color wheel at the bottom-right corner. Ferroelectric domains with sizes ranging from ~10 to ~25 nm are clearly identified. Table 1 summarizes the distribution of spontaneous polarization orientations in each domain with respect to the in-plane orientation denoted in Fig. 1e. Note that the polarization angles measured are based on projection along the [111]o PSO orientation. The mean and standard deviation are based on 10 pixels from the central area of each ferroelectric domain. It is worth noting that domains 2, 4, and 8 possess only an out-of-plane component of polarization, with polarization angles of ~−90°. All other domains have both out-of-plane and in-plane polarization components, i.e., their polarization angles ≠ ±90°. Prior to further discussion of the relationship between spontaneous polarization and crystallographic orientations, the crystal symmetry within the BFO film needs to be identified. CBED patterns from domains 1 through 4, extracted from the 4D-STEM data set, are shown in Fig. 2. While Fig. 2a-d all exhibit fundamental Bragg's reflections, extra Bragg's reflections appear only in Fig. 2b and d. Table 1. Summary of the spontaneous polarization angles, i.e., mean and standard deviation, with respect to the in-plane orientation denoted as an arrow in Fig.
1e. The corresponding zone axes were identified in hexagonal notation (see Fig. S1 for the h to pc notation conversion). This result is in good agreement with a recent work that performed nano-beam electron diffraction analysis for BFO grown on a PSO substrate 21. Note that the [110]h and [111]h zone axes found in the present work are crystallographically equivalent to the [010]h and [211]h zone axes found in that work 21, respectively, as the angles between the corresponding orientations are 120° and the CBED patterns of the corresponding orientations are identical (see Figure S1). In addition, that work showed that the extra Bragg's reflections seen in Fig. 2b and d are the result of the oxygen octahedral rotation occurring in rhombohedral BFO (space group R3c; lattice parameters a = 0.5678 nm and c = 1.3982 nm), by demonstrating that an electron diffraction simulation of pc notation-approximated BFO (space group Pm-3m; lattice parameter a = 0.396 nm), which possesses no oxygen octahedral rotation, exhibits no such extra Bragg's reflections 21. Since the local electronic structure of the O (oxygen) K-edge in BFO is known to be sensitive to local bonding and geometry 18, an electron energy loss spectrum (EELS) was acquired from the BFO film with ~1.0 eV FWHM energy resolution (see Fig. 3). The O K-edge spectral features can be discussed for two regions, as defined by the labels A, A' and B, B' in Fig. 3. Peaks A and A' are readily attributed to hybridization between O 2p and Fe 3d states and to a transition between O 2p and Bi 5d (or 6d) states, respectively [35][36][37]. Peaks B and B' are associated with hybridization between O 2p and Fe 4sp states in bulk and thin-film BFO 35,36,38. The relative intensities of A and A', and of B and B', are in good agreement with those for bulk and thin-film rhombohedral BFO, where the bonding geometry between the Fe and O atoms is octahedral 23,[35][36][37]. Note that these are in disagreement with metastable BFO phases, which show an inversion of the relative intensities of B and B' 17,18,36. Accordingly, the EELS result on the O K-edge is consistent with the rhombohedral symmetry found from the CBED patterns in Fig. 2. Figure 4 shows two atomic resolution HAADF-STEM images of the BFO/PSO interface [recorded at domains 1 and 2, respectively, of Fig. 1e] to investigate the strain within the BFO film that is expected from the lattice misfit (~1.5%) with PSO 6,21,39. The PSO lattice spacing along the in-plane orientation, i.e., (121)o, is commensurate with that of the BFO, i.e., (114)h in domain 1 (see Fig. 4a) and (110)h in domain 2 (see Fig. 4b), with no sign of misfit dislocations despite the ~1.5% lattice misfit. Since misfit dislocations, known to relax elastic strain when their density exceeds a threshold value, are not found, the elastic strain resulting from the ~1.5% lattice misfit with PSO is thought to be maintained within the BFO film. Fast Fourier transform (FFT) patterns from domains 1 and 2 (see insets in Fig. 4a and b) show the same characteristics as found in the CBED patterns, i.e., extra Bragg's reflections (denoted by orange arrows in the inset of Fig. 4b) are found for domain 2 only. Thus, the FFT analysis of the atomic resolution HAADF-STEM data is consistent with the CBED patterns. The combined evidence of (1) rhombohedral symmetry (shown in Figs. 1, 2 and 3) and (2) ~1.5% tensile elastic strain (shown in Fig.
4), suggests that this level of strain falls below the threshold for equilibrium symmetry breakdown in this epitaxial BFO film. This result agrees with the structural flexibility of rhombohedral BFO discussed previously in terms of: (1) a small perovskite tolerance factor (~0.88) allowing a large degree of rotation and/or tilting of the oxygen octahedra 40, (2) the variation in experimentally found lattice parameters for bulk rhombohedral BFO (i.e., ~0.82% in a and ~0.71% in c in hexagonal notation 41), and (3) the availability of multiple metastable phases 7,8. Let us turn our attention to the relationship between the spontaneous polarization orientations found in the ferroelectric domains and their crystallographic orientations. Based on the results found in Figs. 1 and 2, an atomic model is constructed as shown in Fig. 5. Note that no tensile strain is assumed within the BFO ferroelectric domains. Figure 5a shows that the unstrained interplanar distances in domains 1 and 2 are the same along the out-of-plane direction [see (112)h in domains 1 and 2], indicating that this interface [see the dotted line in Fig. 5a] carries no misfit strain. Note that the HAADF-STEM image showed no distinctive contrast across this interface because of: (1) the absence of a misfit strain field and (2) the close relationship of the projected crystal structures in these two different domains. Note that while the CBED pattern from domain 1 shows no extra Bragg's reflections (see Fig. 2a), that from domain 2 exhibits the extra Bragg's reflections (see Fig. 2b). Thus, the ferroelectric domain with no extra Bragg's reflections has a spontaneous polarization orientation with both in-plane and out-of-plane components, whereas the one exhibiting extra Bragg's reflections possesses only an out-of-plane polarization component. This indicates that the polarization orientation within BFO ferroelectric domains can be identified by the existence of extra Bragg's reflections in the CBED pattern from the domain. To investigate the probe-beam damage effect on the ferroelectric domains, 4D-STEM data were acquired at 60 and 300 keV, as shown in Figs. 6 and 7. Figures 6a and 7a are the HAADF images, acquired from the same area as in Fig. 1a at each energy, showing no sign of contrast that could be associated with ferroelectric domains. Examples of the CBED patterns obtained at each energy are shown in Figs. 6b and 7b. Note that the Bragg's reflections in the CBED patterns remain separated thanks to the adjusted convergence angles (~5.25 mrad for 60 keV and ~1.84 mrad for 300 keV). The color-coded vector displacement maps at each energy, based on the shifts of the zeroth order diffraction disks in the CBED patterns (see Fig. 6c, d for 60 keV and Fig. 7c and d for 300 keV), are shown in Figs. 6e and 7e. Note that the intensity scales with the magnitude of the vector field and the color represents its orientation, as shown by the color wheel at the bottom-right corner of each figure. It can readily be noticed that while the shapes and colors of the ferroelectric domains in Fig. 6e are comparable to those in Fig. 1e, the ferroelectric domains in Fig. 7e differ from those in Fig. 1e in domain morphology and colors, i.e., polarization orientations. In particular, the bottom quarter of the BFO layer shows a low signal/noise (S/N) ratio in Fig.
7e, indicating that while the 120 and 60 keV probe energies induced no noticeable beam damage, the 300 keV probe energy damaged the BFO ferroelectric domain ordering. It is well known that high energy electron beams may cause both ionization and displacement damage. Since ionization damage decreases with increasing electron acceleration voltage 45,46, the beam damage found at 300 keV is most likely attributable to displacement damage. When an incident electron provides recoil energy greater than the threshold displacement energy, E_d, of a constituent atom of the target material, point defects such as Frenkel pairs are introduced by knocked-on atoms. The maximum recoil energy, T_m, that an incident electron transfers to a constituent atom of the target material is given by 46: T_m = 2E(E + 2m_0c²)/(Mc²), where E is the energy of the incident electron, m_0 the rest mass of the electron, c the velocity of light, and M the mass of the displaced atom. Although the E_d necessary to create knock-on damage and form stable point defects (such as Frenkel pairs) has not yet been determined for BFO, ~25 eV has been proposed as a general guideline for the threshold displacement energy 47. The calculated T_m values for Bi, Fe and O atoms at 60, 120, and 300 keV are summarized in Table 2. While most of the T_m values are less than the suggested E_d of ~25 eV, the T_m of ~53.2 eV found for O atoms at 300 keV is significantly greater than the suggested E_d, indicating that 300 keV electrons likely cause displacement damage in BFO through the accumulation of O vacancies and interstitials. Previous theoretical studies discussed that O-poor conditions provide fully ionized oxygen vacancies which pair with cation atoms to produce a local ferroelectric polarization, called the imprint effect, that disturbs the spontaneous polarization within BFO 48,49. Since an electron beam capable of providing a T_m greater than E_d to constituent atoms is known to knock atoms off the exit surface of the sample through the displacement damage process 50,51, it is reasonable to assume that the 300 keV electron probe used in the current study can cause displacement damage leading to O-poor conditions within BFO and, in turn, disturb the spontaneous polarization through the imprint effect. This can be the reason for the modified shapes and colors, with a low S/N ratio area, found in the ferroelectric domains measured with the 300 keV electron probe, as shown in Fig. 7e. Summary The 4D-STEM technique was applied to an epitaxial BFO film engineered to be under ~1.5% bi-axial tensile strain using a PSO single crystal substrate. Our key results include: (1) A color-coded vector displacement map derived from 4D-STEM center of mass deflection measurements identified BFO ferroelectric domains with sizes ranging from ~10 to ~25 nm. Two types of ferroelectric domains were observed, i.e., one with both in-plane and out-of-plane polarization components and the other with an out-of-plane polarization only. (2) Further comparison with the CBED patterns acquired from the ferroelectric domains indicates a correlation between the extra Bragg's reflections and the polarization components, i.e., extra Bragg's reflections indicate out-of-plane polarization only, while the absence of extra Bragg's reflections corresponds to both in-plane and out-of-plane polarization components within the BFO ferroelectric domains. (3) While atomic resolution HAADF images indicate that the ~1.5% bi-axial tensile strain within the BFO film is accommodated elastically, the CBED and EELS analyses suggest that the strain is not enough to cause rhombohedral symmetry breakdown within the BFO film. (4) Comparison of 4D-STEM data recorded at different incident electron probe energies (60, 120, and 300 keV tested) identified that the displacement damage observed at 300 keV could reduce (and modify) the measurable ferroelectric properties of the BFO film, possibly through O vacancy formation.
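As a rough illustration of the center-of-mass analysis behind result (1), the following is a minimal numpy sketch, assuming a 4D-STEM array indexed as (scan_y, scan_x, k_y, k_x). Variable names and array shapes are hypothetical, and the sketch omits details such as the ~176° scan/diffraction rotation compensation noted in the Methods below.

```python
# Minimal sketch of a center-of-mass (CoM) shift analysis for 4D-STEM data.
# Assumes `data4d` has shape (scan_y, scan_x, ky, kx); all names are hypothetical.
import numpy as np

def com_shift_map(data4d):
    sy, sx, ky, kx = data4d.shape
    yy, xx = np.mgrid[0:ky, 0:kx]          # pixel grids in the diffraction plane
    totals = data4d.sum(axis=(2, 3))
    com_y = (data4d * yy).sum(axis=(2, 3)) / totals
    com_x = (data4d * xx).sum(axis=(2, 3)) / totals
    # Shifts relative to the mean CoM, i.e., the dy and dx maps of Fig. 1c, d.
    return com_x - com_x.mean(), com_y - com_y.mean()

# Magnitude and orientation for a color-coded vector displacement map (Fig. 1e):
# dx, dy = com_shift_map(data4d)
# magnitude = np.hypot(dx, dy)
# angle_deg = np.degrees(np.arctan2(dy, dx))
```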
Methods An epitaxial BFO film of ~20 nm was grown on a (101)o PSO substrate using molecular beam epitaxy in the PARADIM facility at Cornell University. The cross-sectional sample preparation for the 4D-STEM measurements was performed using a Ga ion dual-beam focused ion beam, a Thermo Fisher Helios 600. The Ga ion energy was gradually decreased from 30 to 2 kV to minimize ion beam induced damage. A Thermo Fisher Titan Themis G2 300 equipped with a probe corrector was used for 4D-STEM data acquisition at acceleration voltages ranging from 60 to 300 keV. The convergence semi-angle of the electron probe was adjusted between ~1.26 and ~5.25 mrad to separate the Bragg's reflections in the CBED patterns. The CBED patterns were calibrated using the [202] and [2̄02̄] Bragg's reflections of the PSO substrate. Two different values of camera length, i.e., 160 mm (for 60 kV) and 300 mm (for 120 and 300 kV), were used to collect large spatial frequency (up to ~20 nm⁻¹) information in the CBED patterns. A Gatan OneView™ CMOS camera with readout binned to 512 × 512 pixels was used to collect the diffraction data for 4D-STEM. HAADF-STEM images and 4D-STEM data were collected with a ~176° image rotation with respect to the CBED patterns. The Gatan Microscopy Suite software was used to analyze the 4D-STEM data using a center of mass method that fits shifts across the full CBED pattern at each pixel position. The image rotation of ~176° was compensated before the center of mass data processing. A Gatan Image Filter (GIF) Quantum was used to acquire the EELS data. Figure 1. (a) A cross-sectional HAADF-STEM image of epitaxial BFO grown on PSO along the [111]o zone axis using a 120 keV electron probe, (b) an example of a CBED pattern from BFO, (c) the measured shift of the zeroth order diffraction disks in the CBED patterns along the x direction, i.e., dx, (d) the measured shift of the zeroth order diffraction disks in the CBED patterns along the y direction, i.e., dy, (e) the vector displacement map with a color wheel as an inset at the bottom-right corner. Ferroelectric domains are denoted by white-dashed lines with numbers. Figure 2. CBED patterns acquired from domains 1 (a), 2 (b), 3 (c), and 4 (d) using a 120 keV electron probe. Extra Bragg's reflections are denoted by orange arrows in (b) and (d). The two areas boxed in red in (b) and (d) are magnified as insets at the bottom-right corner, respectively. Figure 3. An EELS spectrum of the O K-edge from the BFO film using a 120 keV electron probe with ~1.0 eV energy resolution. Figure 4. Atomic resolution HAADF-STEM images from (a) the BFO domain 1/PSO and (b) the BFO domain 2/PSO interfaces along the PSO [111]o zone axis. The interplanar distances of BFO I and BFO II along the in-plane orientation are the same as those of PSO, with no sign of misfit dislocations at the interfaces. FFT patterns from domains 1 and 2 are shown as insets at the top-right corner in (a) and (b), respectively, with extra Bragg's reflections from domain 2 denoted with orange arrows. Figure 5. (a) Atomic model showing the epitaxial relationship between BFO domains 1 and 2 with respect to the PSO substrate. Note that the out-of-plane interplanar distances of the two domains are identical. Spontaneous polarization orientations in BFO domains 1 (b) and 2 (c) are shown with blue arrows. Note that the BFO unit cell is projected along the corresponding zone axis of each BFO domain, i.e., [110]h for domain 1 and [111]h for domain 2. The (001)h plane in each BFO domain is denoted in blue with the BFO unit cell.
Figure 6. (a) A cross-sectional HAADF-STEM image of epitaxial BFO grown on PSO along the [111]o zone axis using a 60 keV electron probe, (b) an example of a CBED pattern from BFO, (c) the measured shift of the zeroth order diffraction disks in the CBED patterns along the x direction, i.e., dx, (d) the measured shift of the zeroth order diffraction disks in the CBED patterns along the y direction, i.e., dy, (e) the vector displacement map with a color wheel as an inset at the bottom-right corner. Ferroelectric domains are denoted by white-dashed lines with numbers. Figure 7. (a) A cross-sectional HAADF-STEM image of epitaxial BFO grown on PSO along the [111]o zone axis using a 300 keV electron probe, (b) an example of a CBED pattern from BFO, (c) the measured shift of the zeroth order diffraction disks in the CBED patterns along the x direction, i.e., dx, (d) the measured shift of the zeroth order diffraction disks in the CBED patterns along the y direction, i.e., dy, (e) the vector displacement map with a color wheel as an inset at the bottom-right corner. Table 2. Summary of the maximum recoil energy, T_m, for Bi, Fe, and O atoms against electron probe energies ranging from 60 to 300 keV.
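To make the recoil-energy argument concrete, the short sketch below evaluates T_m = 2E(E + 2m_0c²)/(Mc²) for Bi, Fe and O at the three probe energies used. It is an illustration of the formula above, not the authors' code; for O at 300 keV it reproduces the ~53.2 eV quoted in the text.

```python
# Maximum recoil energy T_m = 2E(E + 2*m0*c^2) / (M*c^2), energies in MeV.
M0C2 = 0.511           # electron rest energy, MeV
AMU = 931.494          # atomic mass unit, MeV/c^2
masses = {"Bi": 208.98, "Fe": 55.845, "O": 15.999}  # atomic masses in u

for E_keV in (60, 120, 300):
    E = E_keV / 1000.0                                # incident energy, MeV
    for atom, A in masses.items():
        T_m = 2 * E * (E + 2 * M0C2) / (A * AMU)      # MeV
        print(f"{E_keV:3d} keV, {atom:2s}: T_m = {T_m * 1e6:6.1f} eV")
# At 300 keV the O value (~53.2 eV) exceeds the ~25 eV guideline E_d,
# consistent with the displacement-damage interpretation in the text.
```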
4,797.8
2024-07-05T00:00:00.000
[ "Materials Science", "Physics" ]
Streaming Readout of the CLAS12 Forward Tagger Using TriDAS and JANA2 An effort is underway to develop a streaming readout data acquisition system for the CLAS12 detector in Jefferson Lab's experimental Hall-B. Successful beam tests were performed in the spring and summer of 2020 using a 10 GeV electron beam from Jefferson Lab's CEBAF accelerator. The prototype system combined elements of the TriDAS and CODA data acquisition systems with the JANA2 analysis/reconstruction framework. This successfully merged components that included an FPGA stream source, a distributed hit processing system, and software plugins that allowed offline analysis written in C++ to be used for online event filtering. Details of the system design and performance are presented. Introduction An effort was started in early spring 2020 to develop a prototype streaming data acquisition system (DAQ) for the CLAS12 detector [1] in experimental Hall-B at Jefferson Lab. This system brought together components from the existing CODA DAQ system [2][3], the TriDAS DAQ system [4] and the JANA2 software framework [5]. The prototype system was used to successfully read out the CLAS12 Forward Tagger (FT) and Forward Hodoscope (FH) detectors in streaming mode during an active beam test. The COVID-19 pandemic halted beam operations early in the testing period, but the test was resumed once beam operations restarted in the summer. The longer term goals of the project are to expand the prototype to the full CLAS12 DAQ system and eventually deploy the system to other experiments at Jefferson Lab and elsewhere. One of the main benefits of a streaming readout (SRO) system is that it allows custom hardware triggering systems to be replaced with software algorithms run on cheaper commodity hardware. In addition to removing the deadtime inherent in traditional hardware triggers, this allows more complex triggering algorithms that can operate on whole detector events (as opposed to only fast subdetectors). The following sections give details on the setup for the beam test and the various components of the prototype system. This is followed by some analysis results from the data taken with the system. Experimental Setup The beam test focused on the reaction eX → π⁰X with π⁰ → 2γ, produced by the interaction of a 10.6 GeV, ∼100 nA CEBAF electron beam with 125 µm lead and 40 cm gaseous deuterium targets. Inclusive π⁰ electro-production was chosen because the two decay γs can be detected by a single detector (the CLAS12 Forward Tagger), reducing the complexity of the experimental setup. Moreover, the invariant mass of the two photons forming a π⁰ provides a clean signature over the background due to the scattered electron and other electromagnetic processes. The Forward Tagger or FT [6] is part of the CLAS12 detector [7] hosted in Hall-B at Jefferson Lab. It is composed of a lead-tungstate electromagnetic calorimeter (FT-Cal), used to measure the photon energy and position, and a plastic scintillator hodoscope (FT-Hodo), used to distinguish neutrals from charged particles and, in turn, identify gammas. The FT covers a small solid angle (0° < φ < 360° and 2.5° < θ < 5°) in the beam direction. It is mainly used to detect electrons scattered at small angles from the target and forward-going neutral particles, such as π⁰s, produced in the interaction of the electron beam with the target. The 332 PbWO crystals of the FT-Cal and the 232 tiles of the FT-Hodo were read out by JLab digitizers.
The limited number of channels and the combination of two different detectors (calorimeter and plastic scintillators) usually used to trigger the experiment's DAQ make this an ideal bench test for a real on-beam setup. For a quantitative assessment of the streaming readout DAQ chain, some data were also collected in standard triggered mode for later comparison. Readout Electronics In our streaming readout (SRO) system, physics signals were continuously digitized by the fADC250 flash ADC. The fADC250 is a VME64x 16-channel direct-conversion ADC module conforming to the VITA-41 switched serial standard (VXS). These high-speed flash ADCs were developed at JLab as part of the 12 GeV upgrade. Currently these modules are deployed in many JLab experiments, providing energy deposition and timing, as well as hit and trigger information. The fADC250 is equipped with an FPGA that receives 12-bit data words streaming at 250 MHz from the 16 fADC channels in a module. The fADC250 FPGA performs data processing for each fADC channel, computes the energy sum of all fADCs, and generates acceptance pulses for each fADC. Each VXS crate houses 16 fADC modules that are managed by the VXS Trigger Processor (VTP) module. The VTP is a VXS switch card module that was designed to play the leading role in level-1 trigger formation as a central or global trigger processor (CTP, GTP) in a traditional triggered DAQ system. Figure 1 shows a diagram of the VTP module used to communicate with and process data from the fADC250 modules in the crate. The design of the VTP contains high speed backplane serial links to each front-end payload module in the crate. Fiber optic serial links provide communication to other crates. In addition, the VTP has ample FPGA resources for data processing logic, and a dual-core 1 GHz ARM processor capable of running data processing components such as event building, trigger and processing diagnostics. These features make the VTP module an ideal candidate for designing a streaming data acquisition system. In the presented streaming DAQ system prototype, the fADC250 data is received at 10 Gbps from each payload slot of the VXS crate. Data from each fADC250 is then buffered into DDR3 memory, where each module has a dedicated 256 MB space for data buffering. The role of these buffers is to absorb significant bursts of physics input signals, as well as to handle substantial network delays or downstream processor latencies. The VTP streaming firmware implements 4 parallel instances of fADC250 streaming systems, each feeding a 10 Gbps Ethernet link. Each instance handles 4 slots (i.e., 4 fADC250 modules) with a 1 GByte memory buffer; every 8 slots of fADCs share a 2 GByte DDR3 buffer. Ethernet was chosen for the streaming readout interface because of its widespread support and compatibility. The VTP is programmed with the destination IP address and socket to which the streaming data from 4 fADC slots are to be sent. Taking into account the computing power of contemporary servers, a socket/server per 10 Gbps link is feasible for data transfer. The fADC data payload is packed into a TCP data frame containing a header that includes information about the frame number and the timestamp. This information is necessary for ensuring data coherency and synchronization. Streaming TCP frames correspond to a programmable time-span (typically 65536 ns) over which the reported fADC hits are collected. At the beginning of the streaming readout, the VTP module synchronously starts its own timer that is used to timestamp fADC hits. At every elapse of the frame-time, a TCP data frame is sent containing the hits for the corresponding time-span. The VTP is responsible for dropping streaming data frames in case the downstream receivers are unable to accept a higher input rate for long periods of time. Burst conditions on the order of 100 ms at 32 MHz/channel (for all channels) are handled without data loss by the VTP DDR3 memory buffers. When the buffers are full, an entire frame (a 65536 ns chunk of data) is dropped. These losses should not happen under normal conditions when the network and downstream processing chain are efficient enough to keep up with the readout rate. The VTP frame counter (the record number in the TCP header) is used to identify the number of dropped frames, thus measuring the efficiency of the entire data stream processing pipeline.
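As an illustration of how such time-framed streams can be handled downstream, the sketch below reassembles absolute hit times from a frame counter and a fixed 65536 ns frame span. The byte layout shown is hypothetical and chosen only for illustration; the real VTP frame format is defined by the firmware.

```python
# Hypothetical parser for a time-framed hit stream (illustrative layout only).
import struct

FRAME_SPAN_NS = 65536  # programmable time-span covered by one TCP frame

def parse_frame(buf: bytes):
    # Assumed header: 4-byte frame number, 8-byte timestamp, 4-byte hit count.
    frame_no, t0_ns, nhits = struct.unpack_from("<IQI", buf, 0)
    hits, offset = [], 16
    for _ in range(nhits):
        # Assumed hit record: channel id, time offset within frame (ns), ADC value.
        ch, dt_ns, adc = struct.unpack_from("<HHI", buf, offset)
        offset += 8
        hits.append((ch, t0_ns + dt_ns, adc))
    return frame_no, hits

# Dropped frames show up as gaps in the frame counter, so the pipeline
# efficiency is frames_received / (last_frame_no - first_frame_no + 1).
```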
Front-end Streaming Source The JLab data acquisition system called CODA was designed to work with trigger-based readout systems. A key component is the Event Builder, which collects data from 100+ Readout Controllers (ROCs) and VXS Trigger Processor boards (VTPs). In the traditional triggered mode of operation, the Event Builder builds events based on event number. Also used in triggered mode is the Trigger Supervisor (TS) module, which synchronizes all components using clock, sync, trigger and busy signals. The ROCs read the front-end electronics over a VME bus, and the VTPs are used to help form trigger decisions and report some trigger-related information. A detailed description of the CLAS12 triggered-mode system can be found in the literature [8]. To use the available front-end electronics in streaming mode, the role of the TS was reduced to clock distribution, and the Event Builder was replaced with new SRO components and back-end software capable of gluing the front-end module information together based on timestamp instead of event number. In addition, the role of the ROCs and the VME bus was reduced to just the initial configuration of the front-end modules. In streaming mode, all front-end electronics readout is performed by the VTP boards over the VXS serial lines rather than the VME bus. This raises the bandwidth limit from about 2 Gbit/s to 20 Gbit/s for each of the participating electronics crates, with the possibility of an increase to 40 Gbit/s if needed. New firmware was developed for the VTPs to implement streaming mode. Figures 2 and 3 show the original version of CODA as well as the streaming version of CODA (without the back-end, denoted TriDAS). In short, the front-end readout software running inside the VME controllers and VTP boards was modified to stream data out freely, and a new SRO component was developed to be the intermediate translator between front-end and back-end. The SRO component is a multi-threaded and multi-node component capable of handling the 20 Gbit/s data rate from every electronics crate. It gets data from the VTPs, converts it into TriDAS format while applying appropriate format checks, and then supplies the results to TriDAS. Online Streaming DAQ The Trigger and Data Acquisition System [4] (TriDAS) is software originally designed and implemented for streaming readout of astroparticle physics events, specifically for the NEMO project, which aimed at developing the technologies to build a cubic-kilometre sized telescope for high-energy cosmic neutrinos. The TriDAS scalable, modular, and flexible design made it adaptable to the requirements of a beam-based experiment with minimal development effort.
TriDAS is made up of several software components, each devoted to a specific task in the data-processing chain and implemented in C++11. In this paper we describe the TriDAS as sketched in fig. 4, which represents the implementation realised for the CLAS12 streaming readout test in the summer of 2020. The HitManagers (HMs) represent the first data aggregation stage: they receive the data streams from the front end and assemble them into Time Slices (TSs). A Level 1 event is defined by the data within 200 ns around a hit whose energy exceeds a threshold of ≈ 2 GeV. No attempt is currently made to check for and recover events spanning two STSs, because the detector window (200 ns) is so much smaller than the STS (50 ms) that it represents a negligible amount of potential data loss. Level 1 events identified within a TS are then fed to the L2 classification/selection algorithms, which are implemented in separate binaries specified in the run configuration file. These binaries are loaded and configured at run time, allowing one to easily change the L2 algorithms or their parameters on a run-by-run basis without recompiling, while still keeping the highest possible computational efficiency. A token-based mechanism is at the base of the TriDAS SuperVisor (TSV) load balancing. Each TCPU thread owns a token that is given to the TSV on completion of its TS processing. The TSV thus maintains a pool of "free to use" TCPU threads, which are matched to the new Time Slices continuously assembled by the HMs. The Event Manager (EM) collects the selected L2 events and writes them to the so-called Post Trigger (PT) file. The TriDAS System Controller (TSC) is the part of the system with which users directly interact. Through it, users may configure and control the TriDAS activities. For the aforementioned test with CLAS12, a simple interface to the TSC was built in order to steer TriDAS along the hierarchical state machine sketched in fig. 5. In the IDLE state, only the TSC process is running and waits for user commands. Upon the Init transition, the TSC retrieves a JSON-formatted run configuration file, called the Datacard. The Datacard describes the geometry of the detector and the configuration of the TriDAS system for a given run. If the transition is successful, the state machine moves into the Initiated sub-state machine. TriDAS is now in the STANDBY state, where still no process other than the TSC is running. During the Configure transition, the TSC decides the run number and then starts the HM, TCPU and EM processes on the corresponding nodes. If all the processes start successfully, the state machine moves into the Configured sub-state machine. TriDAS is now in the READY state. During the Start transition, the TSC computes the start date and time of the run (which for the CLAS12 case is always the fixed value 01/01/2020 00:00:00) and starts the TSV. If this transition is successful, TriDAS moves into the RUNNING state. Software Trigger The TriDAS system supports user-level plugins that allow the implementation of custom processing algorithms, which can be used to implement a software trigger. For this prototype system, a TriDAS plugin was constructed that embedded the JANA2 framework. JANA2 is a multi-threaded event processing/analysis framework designed for both offline and streaming applications. User algorithms written within the JANA2 framework were then made available for forming software triggers in the form of JANA2 plugins.
The benefit of this is that the full suite of reconstruction algorithms used in the offline reconstruction is available for use as triggers/filters in the streaming system, including access to translation tables and calibration constants. The software triggering itself was done using multiple JANA plugins, each implementing its own trigger(s). Each plugin produced one or more TriggerDecision objects for each potential "event" identified by the TriDAS system. The decision for each algorithm was in the form of a 16-bit integer where a value of zero meant no-keep and any non-zero value meant keep. If any trigger algorithm indicated a keep condition, then the TriDAS system was told to keep the event. A unique 16-bit ID was assigned to each trigger algorithm (passed in the TriggerDecision object). The 16-bit ID and 16-bit decision for each "event" were given to TriDAS so it could store the decision of each algorithm with each event written out. The JANA2 plugin list was determined by the JANA configuration file. Configuration settings for the individual triggers were also set in this file. The system allows different individuals to maintain the code base for their trigger separately, while the selection of which triggers are used, and their configurations, is kept in a single configuration file that is read in at run time. Figure 6. Snippet from the JANA configuration file showing settings for one of the software triggers implemented. The format is simple key-value pairs. Lines starting with "#" are comments. Having the keys start with "TRIGGER:FtCalClus:" was just a choice of convention for this particular beam test. Figure 6 shows a snippet of the configuration file with settings for the FT calorimeter multi-cluster trigger. The on-demand design of JANA2 specifically supports multi-tiered triggering. This means trigger algorithms can be designed such that more expensive algorithms are only run for events or time slices for which a decision cannot be made using the output of less expensive algorithms. The benefit of this is that the compute resource required for the software trigger can be provisioned for the average time needed for a keep/no-keep decision rather than for the most expensive algorithm. For example, consider a situation in which one wishes to trigger on events with a detected proton track and two other charged tracks in the forward direction, such as the rare Primakoff reaction γp → pπ⁺π⁻. Even the rough tracking algorithm used to identify the two very forward going tracks can be expensive. At the same time, one wants to take a significant amount of pre-scaled events using a minimum bias trigger where only a hit count or a minimal calorimeter energy is needed; this would be a very fast algorithm. JANA2 can be configured to only run the more expensive Primakoff tracking trigger for those events that were not already flagged for saving by the fast minimum bias trigger. Figure 7 illustrates how this can work. Figure 7. Illustration of how the JANA2 on-demand design can be leveraged to reduce the overall CPU required to implement a software trigger. In this scenario, the more expensive algorithms are only run on an event/time slice when a keep or drop decision cannot be made using a less expensive algorithm.
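The tiered keep/no-keep logic described above is simple enough to sketch directly. The snippet below illustrates it in Python; it is not the actual C++ plugin code, and the trigger names, IDs and thresholds are made up.

```python
# Illustration of tiered software triggering; names and thresholds are made up.
# Each algorithm returns a 16-bit decision word: 0 = no-keep, non-zero = keep.

def rough_track_count(event) -> int:
    # Stand-in for an expensive rough-tracking routine.
    return event.get("ntracks", 0)

def min_bias_trigger(event) -> int:
    # Cheap: keep a pre-scaled fraction of events with a minimal hit count.
    return 0x0001 if event["nhits"] > 10 and event["id"] % 100 == 0 else 0x0000

def primakoff_trigger(event) -> int:
    # Expensive: only invoked when no cheaper trigger has already fired.
    return 0x0001 if rough_track_count(event) >= 3 else 0x0000

TRIGGERS = [(0x0001, min_bias_trigger), (0x0002, primakoff_trigger)]  # cheapest first

def decide(event):
    decisions = {}
    for trig_id, algo in TRIGGERS:        # evaluate lazily, cheapest first
        decisions[trig_id] = algo(event)
        if decisions[trig_id] != 0:       # keep already decided; skip costlier tiers
            break
    keep = any(word != 0 for word in decisions.values())
    return keep, decisions                # (ID, decision) pairs stored with the event
```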
The offline data analysis is focused on the identification of π⁰ → γγ decay events where both photons are detected in the calorimeter. In particular, in this paper we report the results obtained from the electron-beam on lead target test. In the offline data reconstruction, performed by applying the same full suite of reconstruction algorithms used in the online reconstruction, the recorded signal of each crystal was converted into energy by applying the proper calibration constants. The latter were determined in a previous calibration run performed in standard triggered mode; the standard calibration procedure is described in [9]. Figure 8 shows the reconstructed γγ invariant mass spectrum. It is characterized by two peaks in the π⁰ mass region: the first peak, at higher mass, is associated with π⁰ production from the lead target, while the second is related to π⁰ production from the aluminum target window. Figure 8. γγ invariant mass spectrum. The labeled peaks are both due to π⁰ → γγ decays. The peak marked "Al target window" has its position shifted to a lower invariant mass because the vertex is assumed to be located at the Pb target position when calculating the invariant mass. Artificial Intelligence SRO can further the convergence of online and offline analyses, allowing the incorporation of new emerging software approaches. For example, the inclusion of high-level A.I. algorithms in the analysis pipeline can foster better data quality control during data taking and shorter analysis cycles. A.I. is becoming ubiquitous in nuclear and particle physics and encompasses all the concepts related to the integration of intelligence into machines; unsupervised learning is a type of algorithm able to learn patterns from untagged data (i.e., with no training phase), offering new solutions to near real-time reconstruction problems. An unsupervised hierarchical clustering algorithm inspired by hdbscan [10] has been developed as a plugin within the JANA2 framework. The essential features of this unsupervised approach are that it: (i) can be easily ported to other experiments; (ii) formally does not depend on cuts, making it less sensitive to variations in experimental conditions during data taking; (iii) is able to cope with a large number of hits; and (iv) excels when dealing with challenging topologies, arbitrarily shaped clusters, different cluster sizes and noise. The main idea behind the hierarchical clustering is to consider all the information at the hit level in the detector (spatial, time, and energy) and look at the density of the hits in that parameter space, after defining a metric (e.g., Euclidean) which allows one to define the so-called "mutual reachability" among points. In this way, clusters can be interpreted as more likely (higher density) regions separated by less likely (lower density) regions. Within this framework, every hit has a probability of belonging to a cluster as well as of being an outlier, and one can make decisions when forming clusters based on these probability values. Tests have been performed both online and offline (on collected data) to analyze and reconstruct clusters in the FT-Cal, and provided results consistent with the π⁰ yields already discussed.
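For reference, the two-photon invariant mass used in the analysis above follows, for massless photons, from m²(γγ) = 2E₁E₂(1 − cos θ₁₂). The sketch below computes it from cluster energies and positions under an assumed vertex, which also illustrates why a wrong vertex assumption shifts the Al-window peak in Fig. 8; the function and variable names are illustrative, not the experiment's actual reconstruction code.

```python
# Two-photon invariant mass from calorimeter clusters (illustrative names).
# For massless photons: m_gg^2 = 2*E1*E2*(1 - cos(theta12)).
import numpy as np

def gg_invariant_mass(e1, pos1, e2, pos2, vertex):
    """Energies in GeV, cluster/vertex positions in cm; returns the mass in GeV."""
    u1 = np.asarray(pos1, dtype=float) - np.asarray(vertex, dtype=float)
    u2 = np.asarray(pos2, dtype=float) - np.asarray(vertex, dtype=float)
    cos12 = np.dot(u1, u2) / (np.linalg.norm(u1) * np.linalg.norm(u2))
    return np.sqrt(2.0 * e1 * e2 * (1.0 - cos12))

# Placing the assumed vertex farther upstream than the true one (the Pb target
# position for photons actually produced at the Al window) underestimates the
# opening angle and shifts the reconstructed peak to lower mass, as in Fig. 8.
```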
Summary A prototype streaming data acquisition system was successfully tested in beam conditions during the summer of 2020. The prototype system combined several different software systems with an existing hardware DAQ system for the test. These included the CLAS12 detector and CODA DAQ system, the TriDAS streaming DAQ system, and the JANA2 event processing framework. The test successfully read out the CLAS12 Forward Tagger detector, and subsequent analysis was able to extract a clean physics signal in the form of a π⁰ invariant mass peak. The prototype system is being used as the basis for developing a larger system planned for the entire CLAS12 detector and its future physics program.
4,778.2
2021-04-23T00:00:00.000
[ "Physics" ]
DETERMINANTS OF COMMON FACTORS IN KOREAN BANKS' CREDIT DEFAULT SWAP PREMIUMS Using the panel analysis of non-stationarity in idiosyncratic and common component method, we decompose Credit Default Swap (CDS) premium data of 11 Korean banks into common factors and idiosyncratic shocks. We find that the CDS premium of all 11 banks is mostly explained by one common factor. We also find that the common factor of the banks' CDS premium is mainly affected by the level and the volatility of stock market prices in developed markets and by oil prices. This suggests that the Korean banking industry is susceptible to foreign shocks due to the heavy dependency of the Korean economy on exports. We also find that a structural break in the common part of the CDS premium occurred in mid-2007, implying that the exposure of Korean banks to credit risk jumped up after the 2007 financial crisis. INTRODUCTION The explosion and dramatic reversal of capital flows among international markets since the 1990s have ignited a heated debate. Some argue that globalization has gone too far and that international capital markets have become extremely erratic. Conversely, others claim that globalization allows capital to move to where it is most needed in promoting economic growth. After the currency crisis in late 1997, Korea gradually opened its financial markets to promote foreign investment. Since the currency crisis, a series of institutional changes was implemented to facilitate direct foreign investment. The changes included (1) opening the corporate bond market (December 1997), (2) allowing the purchase of short-term financial products (February 1998), (3) abolishing the limit on domestic equity investment (May 1998), (4) allowing hostile M&A activities (April 1998), (5) opening more industries to foreign investment (May 1998) and (6) enacting the Foreign Investment Promotion Act (November 1998). In particular, in September 1998 the Foreign Exchange Management Act was abolished; subsequently, in April 1999 the Foreign Exchange Trade Act was enacted and implemented to minimize regulations on foreign trade and to expand foreign exchange trading. With this series of institutional changes, Korean financial markets have become more volatile and more vulnerable to foreign shocks. When negative economic news comes from foreign countries, Korean financial markets can immediately be slashed by large capital outflows. Skeptical expectations about the Korean economy due to a decrease in exports and changes in portfolios may lead foreign investors to withdraw their funds from the Korean markets. As a result, sequential capital outflows induced a shortage of liquidity in the domestic market, which had negative impacts on the Korean economy in the short run. In particular, the more liberalized Korean financial markets were thrown into turmoil when the subprime mortgage crisis in the United States broke out in September 2008. The subprime crisis is attributed to the problems of the United States economy stemming from the failures of the asset management strategies of U.S. financial institutions. The crisis unfortunately affected Korean financial markets and the economy through the withdrawal of foreign funds, which left the Korean economy in severe liquidity shortage and a credit crunch. As Korean financial markets could not function well under a credit crunch, the credit risk of lending increased, and accordingly the credit default swap (hereafter CDS) premium soared.
Korean banks' CDS premiums seemed especially susceptible to foreign shocks: financial institutions would be exposed to a higher credit risk of lending due to the increase in the bankruptcy risk of Korean firms, and therefore their CDS premium rises. Various methods have been employed to measure bank risk in the existing literature. These include alternative measures of firm risk such as the subordinated debt spread (Krishnan et al., 2006) and the expected default frequency calculated by an option pricing model (Altman and Hotchkiss, 2005). In addition, the CDS premium or spread has become increasingly popular as a simple indicator of bank credit risk. A CDS is a bilateral transaction under which the buyer is insured against credit risk and pays a premium to the seller. The CDS premium is expressed as a function of the nominal value of the contract. Previous studies investigating the pricing of CDS premiums claim that the CDS premium is an efficient measure of credit risk. For example, Longstaff et al. (2005) claim that CDS spreads appropriately reflect credit risk. Kim et al. (2010) find that the CDS spreads for Asian borrowers widened during the 2007-2009 crisis because of high expected default frequency. Besides individual risk, researchers have become more interested in systemic risk in the financial sector after a financial crisis. For instance, Bijlsma et al. (2010) review the main literature investigating the reasons for systemic risk and its policy implications. Because of the externalities, contagion and spillovers inherent in financial markets, we must be concerned about systemic risk as well as individual risk. Systemic risk is measured by various indicators: principal components of banks' CDS (Billio et al., 2010), the spillover index (Diebold and Yilmaz, 2009), dynamic conditional correlation (Rahman, 2014), co-risk measures (Adrian and Brunnermeir, 2011) and so on. A few studies exploit the common factor as a measure of systemic risk. Kool (2006) investigates the role of common factors in European bank CDS spreads for financial stability and documents that the common factor is related to the European P/E ratio and the European 2-year nominal interest rate. Applying a dynamic factor model to the distance-to-default of EU banks, Brasili and Vulpes (2006) find that the commonality in bank risk appears to have increased since 1999. Eichengreen et al. (2012) recently report that the common movement of banks' CDS spreads rose after the subprime crisis, using principal components analysis. Rahman (2014) also finds extreme co-movements of financial institutions' default swap contracts in the aftermath of the subprime crisis. We focus on the common factors of Korean banks' CDS premiums to estimate an indicator of systemic risk. Following the Bai and Ng (2004) method, we extract common factors from the banks' CDS premiums. After exploring the properties of the common factors, we attempt to select an optimal number of common factors and to find the determinants of the common factors. The main empirical findings are as follows. First, most of the variation of individual bank CDS premiums is explained by a common factor. Second, the common factor of bank CDS premiums is strongly affected by the level and volatility of stock prices in developed markets; in addition, it is affected by the spot oil price and the sovereign bond rate. Finally, there was a structural break in the common part in August 2007 as a result of contagion from the subprime crisis in the U.S.
We offer some policy implications from these findings. First, individual banks' CDS premiums have a strong tendency to move in the same direction, indicating that the Korean banking industry is exposed to substantial systemic risk. Second, because systemic risk is strongly susceptible to foreign capital outflows driven by changes in foreign macro-financial conditions, regulatory efforts should be made to minimize the impact of foreign capital outflows on Korean financial markets and the economy. METHODOLOGY The factors affecting the CDS premium can be categorized into macro-financial variables and firm-specific variables, the latter mostly reflecting balance sheet information. The firm-specific variables include leverage, equity return, idiosyncratic volatility, the price-to-book ratio and credit ratings. Macro-financial variables, on the other hand, cover interest rates, the term structure, equity market returns, equity market volatilities, macroeconomic conditions, sovereign bond yields and country credit ratings for sovereign bonds. In particular, bank CDS premiums in emerging markets would respond to movements of capital flows due to changes in macroeconomic conditions. Because changes in the aggregate macroeconomic environment would affect the CDS premiums of all banks, the common factors extracted from banks' CDS premiums should be explained by macro-financial variables. An approximate factor model is intuitively appealing for observing how the common factors of individual banks have reacted to changes in the macroeconomic environment. We decompose the CDS spreads across Korean banks into one or more common factors and idiosyncratic components attributable to individual firms, identifying the common factors as suggested by Bai and Ng (2004). Next, we attempt to find out which macro-financial variables have determined the common factors in the CDS premiums of Korean banks. The determinants might be closely related to the stability of the Korean banking industry and, ultimately, to the stability of the Korean economy. Factor Model Let X_it be the observed CDS spread for the ith bank at time t, for i = 1, ..., N and t = 1, ..., T. Consider the following model: X_it = λ_i′F_t + e_it (1) where e_it is the idiosyncratic component of X_it, with zero mean and orthogonal to F_t, which is a vector of common factors. λ_i is a vector of factor loadings related to F_t, and λ_i′F_t is called the common component of X_it. Equation (1) is then the factor representation of the data, which has two unobserved components: the common factors and the idiosyncratic components. The common factors F_t can be estimated by taking the first difference of Equation (1) as follows: ∆X_it = λ_i′f_t + ∆e_it (2) where f_t = ∆F_t. By applying principal component analysis to ∆x_it, estimates f̂_t of the r factors are obtained. To determine the number of common factors r in Equation (2), the following criterion is adopted, which is the most robust in the presence of cross correlations among the idiosyncratic components: IC(r) = ln V(r, f̂) + r·g(N, T) (3) where V(r, f̂) is the (normalized) residual sum of squares of the r-factor model and g(N, T) is a penalty function. The information criterion reflects the trade-off between goodness of fit and overfitting. The first term on the right shows the goodness of fit given by the residual sum of squares, which depends on the estimated number of factors: if the number of factors r increases, the variance explained by the factors f_t also increases while the sum of squared residuals decreases. The penalty for overfitting, the second term on the right, is an increasing function of the cross-section size N and the time series length T. The optimal number of factors minimizes IC(r). After the optimal number of common factors is determined based on Equation (3), the CDS spread data are decomposed into r common factors and the idiosyncratic component of bank i's premium. With the calculated common-factor premium among Korean banks, this study investigates what affects the common-factor premium by employing regression analysis.
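As a sketch of this procedure, the following Python fragment extracts factors from the differenced spreads by principal components and evaluates a Bai-Ng style criterion. The penalty g(N, T) shown is the common ICp2 choice, which is an assumption here and may differ from the exact variant used in the paper.

```python
# Sketch of Bai-Ng factor extraction and IC(r) selection on differenced spreads.
# The ICp2 penalty below is an assumed variant; the paper's exact choice may differ.
import numpy as np

def bai_ng_ic(X, r_max=8):
    dX = np.diff(X, axis=0)              # first differences, shape (T-1, N)
    dX = dX - dX.mean(axis=0)
    T, N = dX.shape
    # Principal components via SVD: factors are the scaled left singular vectors.
    U, s, Vt = np.linalg.svd(dX, full_matrices=False)
    ics = []
    for r in range(1, r_max + 1):
        F = U[:, :r] * s[:r]             # estimated factors f_t, shape (T, r)
        L = Vt[:r, :]                    # factor loadings, shape (r, N)
        V = ((dX - F @ L) ** 2).sum() / (N * T)
        g = r * (N + T) / (N * T) * np.log(min(N, T))   # ICp2 penalty (assumed)
        ics.append(np.log(V) + g)
    return np.argmin(ics) + 1, ics       # optimal r and the IC values

# r_star, ic_values = bai_ng_ic(cds_levels)  # cds_levels: (T x 11) panel of premiums
```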
Data The data consist of a balanced panel of daily CDS premiums of 11 major Korean banks as a direct measure of credit spreads. The periodic payment, expressed in basis points, is called the CDS premium; by definition, it provides a pure measure of the default risk of the reference entity. The sample covers data from January 23, 2006 to April 18, 2011 and includes 1,366 observations. The Korean banking industry consists of seven major commercial banks, five specialized banks and six local banks. Because some banks have insufficient trading records, the data include only seven major banks and four specialized banks. Specialized banks were established with the specific purpose of bolstering financing in areas facing funding difficulties due to profitability and expertise requirements, based on the Special Act, and are run by the Korean government. The CDS data were extracted from Bloomberg. Other data representing macroeconomic conditions were derived from the Korean Center for International Finance. Table 1 reports summary statistics for the CDS premium of each bank. Most banks experienced a mean premium of 121.2 to 157.8 basis points over the sample period. The first four banks and Citibank Korea Inc. show relatively lower means than the rest. Since the first four banks are special banks considerably controlled by the Korean government, they would be perceived as relatively less risky. Standard deviations for most banks range from 124.4 to 169.8 basis points; in general, the larger the mean, the larger the standard deviation. Private banks, in particular, experienced more volatile movements of the premium over the sample period. The CDS premium for most banks soared above 800 basis points around late October 2008, right after the financial crisis triggered by the Lehman Brothers collapse in the United States. Before that, the CDS premium had shown a low and stable movement of around 11.9 to 16.5 basis points until June 2007. We implement unit root tests to check the stationarity of the CDS premiums of Korean banks. The results in Table 2 show that every series is non-stationary in levels while being stationary in first differences. Common Part of the CDS Premium We use the method proposed by Bai and Ng (2004) to extract the common factors corresponding to the latent risk dimensions in the CDS premium. Before determining the number of common factors, we conduct the cross-section dependence test suggested by Breusch and Pagan (1980) in order to check whether cross-section dependence exists among the banks' CDS premiums. The test results, presented in Table 3, provide evidence that the CDS premium series are dependent upon each other. In order to find the optimal number of common factors, we employ Equation (3) and calculate the value of IC(r). The number of common factors is tested up to 8. The result in Table 4 shows that the lowest value is −7.042 when the number of common factors is one (r = 1). Here, r represents the number of common components while IC stands for the value of the information criterion suggested by Bai and Ng (2002).
Hence, the CDS premium data of the 11 Korean banks is decomposed into one common factor and eleven idiosyncratic series. The estimated common factor explains approximately 98.5% of the total variation of the CDS premiums; that is, the variation of the CDS premium is mostly explained by the estimated common factor. The average of the common factor increased to 0.46 during the crisis period. A major reason for the continuous increase in the CDS premium is that the Korean capital market was closely linked to the U.S. capital market and hence was affected by the subprime mortgage turmoil and the global financial crisis. A tremendous outflow of foreign funds drove the Korean economy into a damaging situation through a sharply decreasing liquidity supply. Naturally, the ensuing credit crunch led to difficulties in financing for business firms. As the financial status of Korean firms worsened, the CDS premiums of banks sharply increased. After adjusting to and recovering from the financial-crisis shock, the CDS premium gradually came down to 0.55 on January 12, 2010. It has since increased to around 0.69, but it never returned, for any substantial period of time, to the level it reached in 2006. This indicates that the global financial system was not yet fully recovered and stable; that is, the Korean economy could not be fully insulated from financial-crisis shocks, and the Korean economy and banking industry remained at risk. Determinants of the Common Factor of the CDS Premium Because the CDS premium is observed daily, we select macro-financial data that are publicly announced daily. The variables are limited to the movements of foreign and domestic financial markets, currency markets and commodity markets. Suh and Lee (2011) take into account per capita GDP, the GDP growth rate, foreign reserves, the fiscal balance and the current account balance as macro variables that determine the CDS premium. Considering the limited availability of daily data, we initially employ a FTSE index for the developed markets (hereafter FTSED), a FTSE index for the emerging markets (hereafter FTSEE), the KOSPI500 (hereafter KOSPI), the CDS premium of the Korean sovereign bond maturing in 2025 (hereafter Korea bond CDS premium) and the Dubai oil spot price (hereafter Oil price) in the regression analysis. In addition, the volatility of each variable is added to the model for a better specification. Each variable is measured as a moving average over 20 trading days, while the volatility of each variable is measured as a moving standard deviation over 20 trading days. Looking at the movements of the volatilities in Fig. 2, we suspect a co-movement of the volatilities and the common part of the CDS premium. A cointegration test is conducted in order to check whether the variables are cointegrated: level data would be used if they are cointegrated; otherwise, differenced data would be chosen in order to avoid a spurious regression problem. Johansen's trace test is conducted with a lag length of 2 and the result is presented in Table 7. Observing the common factor movement in Fig. 1, we suspect that a structural break occurred in mid-2007. The structural break test suggested by Chow is conducted to detect structural breaks. As presented in Table 6, the null hypothesis of no structural break on August 1, 2007 is rejected with a p-value of 0.00; that is, the CDS premium appears to have jumped due to the subprime crisis at that time.
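The battery of diagnostics used in this section can be sketched as follows. This is a hedged illustration, not the authors' code: the helper names (adf_pvalues, breusch_pagan_lm, chow_test) are illustrative, X is an assumed T x N array of CDS premiums, and the Johansen call is shown as commented usage of statsmodels.

```python
# Hypothetical sketch of the diagnostics above: ADF unit-root tests in levels
# and first differences, the Breusch-Pagan (1980) LM test for cross-section
# dependence, a Chow test for the August 1, 2007 break, and the Johansen
# trace test with lag length 2.
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def adf_pvalues(X):
    """(level, first-difference) ADF p-values for each bank's series."""
    return [(adfuller(x)[1], adfuller(np.diff(x))[1]) for x in X.T]

def breusch_pagan_lm(X):
    """LM = T * sum_{i<j} rho_ij^2, asymptotically chi2 with N(N-1)/2 dof."""
    T, N = X.shape
    R = np.corrcoef(X.T)                 # N x N correlation matrix
    iu = np.triu_indices(N, k=1)
    lm = T * (R[iu] ** 2).sum()
    dof = N * (N - 1) // 2
    return lm, stats.chi2.sf(lm, dof)

def chow_test(y, X, break_idx):
    """F statistic for equal coefficients before/after break_idx."""
    def rss(y_, X_):
        beta, *_ = np.linalg.lstsq(X_, y_, rcond=None)
        e = y_ - X_ @ beta
        return e @ e
    k = X.shape[1]
    r_p = rss(y, X)
    r_1 = rss(y[:break_idx], X[:break_idx])
    r_2 = rss(y[break_idx:], X[break_idx:])
    return ((r_p - r_1 - r_2) / k) / ((r_1 + r_2) / (len(y) - 2 * k))

# Johansen trace test with lag length 2, as reported above:
# res = coint_johansen(levels_data, det_order=0, k_ar_diff=2)
# trace_stats, crit_vals = res.lr1, res.cvt
```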
We add a dummy variable to Equation 4 to reflect the actual events related to the subprime mortgage crisis, which began in April 2007, and the following financial crisis of September 2008. In April 2007, New Century Financial filed for bankruptcy, triggering the U.S. subprime mortgage crisis; Korea was affected when American Home Mortgage Investment (AHMI) filed for bankruptcy protection in August 2007. Model I includes FTSED, FTSEE, KOSPI, the Korea bond CDS premium, the oil price and a dummy. All variables except the dummy are in logarithms, which is convenient for sensitivity analysis. To check whether volatilities affect the common risk, Model II is estimated by adding the volatility of each variable and by dropping a statistically insignificant variable, FTSEE. The simple OLS method is used for estimation. As shown in Table 7, Model II offers a higher adjusted R² than Model I: adding the volatilities and dropping FTSEE improve the explanatory power of the model. FTSED, KOSPI, the Korea bond CDS premium, the oil price and the volatility of FTSED appear to be statistically significant at the 5% level. The sign of the estimate of every statistically significant variable is consistent with what we predicted. The estimate of the dummy variable is also statistically significant at the 5% level and positive. DISCUSSION This study examined which factors determine Korean banks' credit default swap premiums. As described in Section 3, we first identified the common factors in the CDS premium and then examined the determinants of the common factor employing the empirical model shown in Equation 4. We discuss the major findings of the study as follows. First, the estimates of FTSED and KOSPI were negative and statistically significant at the 5% level, suggesting that both foreign and domestic stock market movements had a negative impact on the CDS premium. When developed foreign stock markets, such as those of the U.S. and the EU, fell sharply or collapsed, foreign capital outflows into safer assets surged immediately. Korean banks were therefore faced with a shortage of liquidity and an increase in the default risk of loans; accordingly, the CDS premium surged. By the same token, when Korean stock markets fell sharply, exactly the same phenomenon occurred. Hence, movements of both foreign and domestic stock markets negatively affect the CDS premium. The magnitude of the responsiveness to KOSPI (-1.62) was slightly greater than that to FTSED (-1.31). Second, the volatility of FTSED and the common factor of the Korean banks' CDS premium turn out to be positively related: the common factor of the CDS premium jumps up as the foreign stock market becomes more volatile. In addition, the common factor of the CDS premium appears to be most responsive to the volatility of FTSED. The magnitude of the sensitivity to the volatility of FTSED is estimated at 4.78, which is approximately 3.5 times that of FTSED and 2.8 times that of KOSPI in absolute value. As the movement of the FTSE developed markets became more unpredictable and riskier, it induced foreign capital outflows to increase and caused the CDS premium to rise. However, the volatility of the Korean stock market surprisingly had no impact on the CDS premium. Third, the CDS premium rises as the oil price increases. This implies that an increase in the oil price tends to have negative impacts on the profits of Korean business firms by raising their production costs.
The default risk of loans increases because of the weakened profit structure, and thus the CDS premium rises. Fourth, as noted, the structural break caused by the subprime mortgage crisis is incorporated into the model by adding a dummy variable. The estimate of the dummy variable turned out to be 0.41; that is, the subprime-crisis period from August 1, 2007 to April 18, 2011 shifted the CDS premium up by 0.41. The magnitude of the increase can be interpreted as an adjustment to the increased risk due to the subprime mortgage crisis and the following financial crisis. Fifth, the estimate on the Korean sovereign bond CDS premium is positive and statistically significant at the 5% level. Since the Korean sovereign bond is issued by the Korean government in foreign currencies, its CDS premium is mainly affected by country risk. As country risk increases for various reasons, the CDS premium of the bond rises and the CDS premiums of banks increase accordingly. Sixth, FTSEE is found to be statistically insignificant at the 5% level; that is, the movements of the CDS premium in the Korean market appear to be more closely linked to movements in the developed markets than to those in the emerging markets. Lastly, the volatility of KOSPI, the volatility of the Korea bond CDS premium and the volatility of the oil price appeared to be statistically insignificant at the 5% level; only the volatility of FTSED is statistically significant at the conventional level. This implies that the Korean bank CDS premium is strongly affected by the volatility of stock markets in developed countries. CONCLUSION To find the determinants of the common factor of the CDS premiums of Korean banks, we first decomposed the CDS premiums of 11 Korean banks into common factors and idiosyncratic series by employing the method suggested by Bai and Ng (2004). We find that there is only one common factor driving the CDS premiums of Korean banks, and, strikingly, most of the variation in each bank's CDS premium is explained by this common factor. This implies that the Korean banking industry confronts a substantial degree of systemic risk. Next, we attempted to find the determinants of the common factor by regressing it on macro-financial variables such as the daily stock composite indexes of foreign and domestic markets, the Korean sovereign bond CDS premium, the volatilities of each asset market and commodity prices. The regression results showed that the common factor was determined by the composite index of the FTSE developed markets, the KOSPI500, the Korea sovereign bond CDS premium, the Dubai spot oil price and the volatility of the FTSE developed markets over the sample period. In particular, the common factor of the CDS premium appeared to be very sensitive to the FTSE level and its volatility. We also found a structural break in the CDS premium movement on August 1, 2007, which appears to reflect the U.S. subprime crisis. These findings suggest that Korean banks are very susceptible to foreign capital movements caused by changes in foreign economies. Not only is the Korean economy heavily dependent on foreign economies through exports, but Korean financial markets are also liberalized enough that foreign capital can flow in and out at foreign investors' convenience. In particular, an excessive withdrawal of foreign capital would induce a reduction in liquidity and a credit crunch.
Accordingly, business firms' default rates rise and the CDS premiums of Korean banks increase. The empirical findings suggest that the policy authority must pay heed to foreign stock markets to sustain the stability of the banking industry. It is necessary to consider the stabilization of Korean financial asset markets, the maintenance of an appropriate level of foreign exchange reserves for emergencies and the expansion of foreign exchange swap agreements. In addition, financial supervision is needed to induce financial institutions to be less dependent on short-term financing, in order to cushion against shocks resulting from exogenous capital outflows.
5,593
2014-12-08T00:00:00.000
[ "Economics" ]
Dark solitons of the Gross-Neveu model We present N-soliton solutions of the classical (1+1)-dimensional Gross-Neveu model which satisfy non-zero boundary conditions. These solutions are obtained by a direct method using some properties of the soliton matrices that appear in the framework of the Cauchy matrix approach. (We explain all notation in what follows.) This two-dimensional, asymptotically free model of massless fermions was introduced in 1974 in connection with the search for symmetry breaking, and since then it has attracted a lot of interest in semiclassical field theory. The solutions of the classical equations corresponding to the Lagrangian L may be considered as candidates, or classical approximations, for the particles of the corresponding quantum theory. That is why the analytic solution of the classical model remains a topical problem (see, for example, [2-14]). At the classical level, the Gross-Neveu model is closely related to the theory of integrable systems. In papers [15,16] the authors found a class of Gross-Neveu-like models which are completely integrable, and one can find there the inverse scattering transform, which makes it possible to derive various solutions, in particular soliton ones. Looking at more recent works [5-14], one can notice that, although the authors do not use the results of, say, [16] directly, they use various approaches developed in the theory of integrable systems: the Zakharov-Shabat scattering problem and the Gelfand-Levitan-Marchenko equations (in [7-9]), or the theory of reflectionless potentials [17-19], the Hirota ansatz and the inverse scattering transform for the sinh-Gordon equation (in [5,6,10-14]). The main object of this work is the N_s-soliton solutions of the classical Gross-Neveu model (1.1) or, in matrix form, (1.2), where Ψ = (ψ_1, …, ψ_{N_f}), which were derived in [6] by Fitzner and Thies. Thus, the essential part of this paper may be viewed as an alternative derivation and representation of the results of [6]. The method used in what follows is a variant of the Cauchy matrix approach [20-25]. We start with an ansatz based on a class of matrices, the so-called 'almost-intertwining' matrices [26] that satisfy the 'rank one condition' [27-29], which is a particular case of the Sylvester equation [30-33]. An analysis of the properties of these matrices leads us to solutions of the 'two-field' model (see section 2). The important point is that we do not need to solve the 'consistency' equations separately: we show in sections 2.2 and 3 that the proposed ansatz possesses a reduction that automatically resolves the consistency condition and leads from (1.3) to (1.2). This makes it possible to obtain the Gross-Neveu solitons, which we discuss in section 4.
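The display equations (1.1)-(1.2) did not survive extraction above. For orientation only, a LaTeX sketch of the standard massless Gross-Neveu Lagrangian and its classical field equations, as usually written in the literature, is given below; treating this as the paper's own (1.1)-(1.2) is an assumption.

```latex
% Standard massless Gross-Neveu Lagrangian and classical field equations
% (a reconstruction for orientation; the paper's Eqs. (1.1)-(1.2) are lost):
\mathcal{L} \;=\; \sum_{n=1}^{N_f} \bar{\psi}_n\, i\gamma^{\mu}\partial_{\mu}\psi_n
\;+\; \frac{g^{2}}{2}\Big(\sum_{n=1}^{N_f} \bar{\psi}_n \psi_n\Big)^{2},
\qquad
i\gamma^{\mu}\partial_{\mu}\psi_n
\;+\; g^{2}\Big(\sum_{m=1}^{N_f}\bar{\psi}_m\psi_m\Big)\psi_n \;=\; 0 .
```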
The dependence of ⟨b| and |k⟩ on the coordinates describing the model, ⟨b| = ⟨b(ξ, η)| and |k⟩ = |k(ξ, η)⟩, is defined by evolution equations that lead to the relations used below. In what follows, we use another set of rows and columns, this time of length N_f, defined in (2.7) and (2.8). Here, Z is an N_f-set of constants, Z = {z_1, …, z_{N_f}}; ⟨1_Z| and |1_Z⟩ are the N_f-row and N_f-column with all components equal to 1; D_Z is the diagonal N_f × N_f matrix; and B_Z and K_Z are rectangular matrices whose entries are built from b_n and k_n, the components of ⟨b| and |k⟩ (see (2.2)). Applying the rules (2.3) and (2.4) to the definitions (2.7) and (2.8), one obtains, by simple algebra, the equations (2.12) and (2.13) governing the 'evolution' of these rows and columns, where the functions u_o and v_o are defined by (2.14). Constraints. The linear (with respect to the rows and columns just introduced) equations presented in the previous subsection are an important part of the approach of this paper. However, now we face a more difficult problem: we have to close the system (2.12)-(2.14) (note that there are no obvious relationships between these quantities). Contrary to the derivation of (2.12) and (2.13), which is a straightforward procedure similar to one used repeatedly by various authors, the 'closure' problem is less trivial. In the framework of the theory of integrable systems, it is related to the so-called Bargmann constraints or the nonlinearization procedure. We do not discuss here the 'theoretical' aspects of this problem. Instead, we demonstrate that it is possible to relate the rows and columns introduced above. It can be shown (see Appendix A) that equation (2.1), together with the definitions (2.7), (2.8) and (2.14), implies the relations (2.15) (here, L_s and R_s are the elements of the diagonal matrices L and R). As one can see from equations (2.15), the variables u_o and v_o involved in equations (2.12) and (2.13) are not enough, in the general case, to obtain a closed system. However, there exists a reduction of (2.1) that eliminates these difficulties. The key point in our calculations is the fact (demonstrated in Appendix B) that this restriction leads to the result (2.24), where E is the N_f × N_f diagonal matrix introduced to take into account the terms proportional to D_Z^{±1} in (2.12) and (2.13). In terms of Φ and Ψ, equations (2.12) and (2.13) can be rewritten accordingly, where σ_1 is the Pauli matrix. To summarize, the matrices Φ and Ψ satisfy equations corresponding to the Lagrangian (2.30). It is easy to see that (2.30) resembles the Gross-Neveu Lagrangian (1.2). The main difference is that the Lagrangian (2.30) is built of two matrices, Φ and Ψ. In the following section we discuss questions related to complex/Hermitian conjugation and establish that there is a natural reduction which links Φ and Ψ. Involution. It turns out that, if one works in the framework of the soliton ansatz used in this paper, the behavior of solutions under complex/Hermitian conjugation is determined by whether the matrices L are real or imaginary. Indeed, it is not difficult to show that the corresponding requirement leads to the restrictions z_n^* = z_n (n = 1, …, N_f) (3.4) and (3.5), where * stands for complex conjugation.
This results in the corresponding reality conditions, and hence in a consistent involution. To conclude our analysis, we consider separately the cases ε = ±1 and rewrite the Lagrangian (3.9) in terms of the Dirac matrices. Gross-Neveu case (ε = 1). To take into account the fact that in this case, as follows from (3.5), both ξ and η are pure imaginary, we introduce two real variables, t and x. Noting that S is the unit matrix and that h is real, we can introduce the real coupling constant g = 1/(2h) (3.13) and rewrite the Lagrangian (3.9) (omitting an insignificant constant) as (3.14). Solitons of the Gross-Neveu model. Here we collect the results related to the Gross-Neveu model. As follows from (3.1) with ε = 1, the matrix L is pure imaginary; thus, we write its entries as iμ_m with real μ_m. Equation (2.1) then determines the matrix with indices l, m = 1, …, N_s, where the C_lm are constants. The columns of the matrix Ψ can be presented as (4.8), where the phases Θ_n = Θ_n(t, x) are given by (4.9). To summarize, formulae (4.8) together with (4.1)-(4.3), (4.7) and (4.9) provide the N_s-soliton solutions for the Gross-Neveu model. The function u_o, which in this case can be presented as (4.10), satisfies the sinh-Gordon equation, where □ = ∂_tt − ∂_xx (we prove these facts in Appendix C). It is not difficult to obtain from (4.8) the behavior of ψ_n in the asymptotic regions. For simplicity, we carry out this analysis under the assumption (4.14). In a similar way one arrives at the asymptotic forms of the condensate S, defined in (4.18), and the fermion density Q, defined in (4.20). For the one-soliton solution, N_s = 1, the matrix L is scalar, L = iμ (we drop the subscript 1), and the one-soliton solution is characterized, apart from the real set {z_1, …, z_{N_f}}, by one velocity v, v = (1 − μ²)/(1 + μ²), and one constant C_11, which without loss of generality can be set equal to unity, C_11 = 1. The matrix Y simplifies accordingly (see (4.24)). Equation (4.8) can be rewritten as (4.25), where φ_n^± are the limits of e^{−iΘ_n} ψ_n defined earlier, with δ_n = 2 arg z_n plus a constant. The distribution of the condensate S, defined in (4.18), and the fermion density Q, defined in (4.20), then take explicit one-soliton forms, with Q_∞ defined in (4.22). Discussion. To derive the solitons of the Gross-Neveu model we used a rather standard technique from the theory of integrable systems. The Cauchy matrix approach, which appeared in the 1980s as an alternative to the inverse scattering transform, was subsequently modified to become one of the easiest ways to derive explicit solutions of integrable nonlinear equations. In this paper, and in many others, it is used as an ansatz which, compared, for example, with the inverse scattering transform, is more straightforward, is not restricted by boundary conditions imposed beforehand, and is rather flexible (see, e.g., [34]). Even in the framework of this paper one can note that our ansatz, with slight modifications, leads to solutions for both the Gross-Neveu model and its γ_5 variant (3.18). Clearly, it has its limitations. The soliton ansatz of this paper, which in the context of other integrable models leads to 'general' N-soliton solutions, provides less than one might anticipate in the case of the Gross-Neveu equations. The solutions presented above belong to the so-called type I (the simplest) class, according to the classification of [5]. Indeed, if we take notice of the z_n-dependence, the condensate function S can be presented in a form which leads to all ψ_n sharing a common profile, where λ_n is a constant given by λ_n = z_n^{-1} Σ_{m=1}^{N_f} z_m^{-1}.
Thus, in our attempt to derive the N_f-flavor solitons we have actually obtained a kind of direct sum of N_f = 1 solitons (up to the linear global mixing Ψ → ΨU, where U is a constant unitary matrix). This means that to find less trivial N_f-flavor solutions (even if we restrict ourselves to the classical Gross-Neveu model with finite N_f, i.e. without a continuous constituent) we have to go beyond the soliton ansatz used above. However, this very important question is outside the scope of this paper. The last question we would like to mention is one of terminology. In the theory of integrable systems, solutions like the ones presented in this paper are usually called 'solitons' or, more precisely, 'dark solitons', where the word 'dark' indicates that they satisfy constant non-zero boundary conditions. In field theory, the more widely used term is 'kink'. If one looks at the one-soliton solution (4.25) (or (4.29)), then there is no discordance: the tanh-function is what is usually associated with a kink. However, in the situation with two (or any even N_s) solitons, the asymptotic behavior of both the condensate S and the fermion density Q differs from the kink-, or N_s-kink-like, one (see, for example, figure 1). From the definitions of B_Z and K_Z one can easily derive relations in which h is defined in (2.16). These equations, together with the definitions (2.7) and (2.8), lead to an expression which can be simplified by noting an identity that, together with (2.1), yields the required result. First, one has to note that the matrix T that links |k⟩ and ⟨b|, where |b⟩^T = (⟨b|)^T, is given by T = diag(…, k_n/b_n, …). From (2.1) one easily obtains A^T = T^{-1}AT, which holds for all ξ and η and which implies G^T = T^{-1}GT. Noting also that |k_o⟩ = −T|b_o⟩^T, where |b_o⟩^T = (⟨b_o|)^T, one consequently obtains from (2.14) the relation that proves (2.20). In a similar way, noting that the matrix F_Z in the definition (2.17) is diagonal (and hence commutes with T), one can obtain a further relation. Taking the determinant of the last equation, using the identity det(1 + |u⟩⟨v|) = 1 + ⟨v|u⟩ and noting that det G = 1/det(1 + A), one arrives at (4.10).
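The rank-one determinant identity invoked in the appendix is easy to verify numerically. The sketch below checks det(1 + |u⟩⟨v|) = 1 + ⟨v|u⟩ for random complex vectors, with the pairing taken bilinearly (no conjugation), matching the transposition conventions used above.

```python
# Quick numerical check of the rank-one determinant identity used above,
# det(1 + |u><v|) = 1 + <v|u>, for random complex vectors.
import numpy as np

rng = np.random.default_rng(1)
n = 6
u = rng.normal(size=n) + 1j * rng.normal(size=n)
v = rng.normal(size=n) + 1j * rng.normal(size=n)

lhs = np.linalg.det(np.eye(n) + np.outer(u, v))   # |u><v| as an outer product
rhs = 1 + v @ u                                   # bilinear pairing, no conjugation,
                                                  # matching the transposition rules above
assert np.isclose(lhs, rhs)
```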
2,983.4
2021-11-01T00:00:00.000
[ "Physics", "Mathematics" ]
2.58 kW Narrow Linewidth Fiber Laser Based on a Compact Structure with a Chirped and Tilted Fiber Bragg Grating for Raman Suppression We report a high-power, narrow-linewidth fiber laser based on an oscillator plus one-stage power amplifier configuration. A fiber oscillator with a center wavelength of 1080 nm is used as the seed; it is based on a high-reflection fiber Bragg grating (FBG) and an output-coupling FBG with a narrow reflection bandwidth. The amplifier stage adopts counter pumping. By optimizing the seed and amplifier properties, an output laser power of 2276 W was obtained with a slope efficiency of 80.3%, a 3 dB linewidth of 0.54 nm and a signal-to-Raman ratio of 32 dB; however, transverse mode instability (TMI) began to occur. To further increase the laser power, a high-power chirped and tilted FBG (CTFBG) was inserted between the backward combiner and the output passive fiber; experimental results showed that the thresholds of both stimulated Raman scattering (SRS) and TMI increased. The maximum laser power was improved to 2576 W with a signal-to-Raman ratio of 42 dB, a slope efficiency of 77.1%, and a 3 dB linewidth of 0.87 nm. No TMI was observed and the beam quality factor M2 remained about 1.6. This work could provide a useful reference for obtaining narrow-linewidth high-power fiber lasers with a high signal-to-Raman ratio. Introduction In the past years, owing to the great improvement in laser diode (LD) brightness, high-quality large-mode-area (LMA) fibers and beam-combining technology, the output power of continuous-wave (CW) fiber lasers has been scaled rapidly [1-4]. However, further improvement of single-fiber output power is limited by various nonlinear effects, transverse mode instability (TMI), thermal effects, etc. For present single-fiber lasers, further power scaling is difficult even with compromises in bandwidth, beam quality, and so on. Spectral beam combining (SBC) is a promising approach to break through these limitations of fiber lasers [5,6]. In SBC, the key is that each sub-beam needs to be a narrow-linewidth fiber laser (NLFL) with high beam quality, which is usually realized by a master oscillator power amplifier (MOPA) configuration [7]. At present, there is no unified definition of 'narrow' for NLFLs; considering the practical application in SBC, in this paper the linewidth of an NLFL is defined as <1 nm. For the MOPA structure, there are two main types of seeds, namely the few-longitudinal-mode fiber oscillator laser (FOL) seed and the phase-modulated single-frequency laser (PMSFL) seed. The method utilizing a phase-modulated seed for power amplification is relatively mature, benefiting from stable temporal properties, high nonlinear-effect thresholds and spectral purity during the amplification process [8-10]. Up to now, the power of NLFLs based on the PMSFL seed has been scaled to several thousand watts [10-17], and the highest power has exceeded 5 kW [16,17]. The MOPA structure based on a FOL seed, being simple, compact and economical, has also attracted enormous attention in recent years [18-23]; by this method, the maximum power has also reached 3 kW [23]. However, among the factors currently limiting further power scaling of NLFLs based on a FOL seed, stimulated Raman scattering (SRS) is one of the most important.
Since the injected seed laser is not strictly single-mode and its spectrum is usually broadened during the power-scaling process in the amplifier, the onset of SRS brings about a series of problems, such as signal power decline, beam quality deterioration, fiber component damage, etc. [24,25]. Furthermore, the SRS of an NLFL affects the output beam quality and the efficiency after spectral beam combining. Thus, high-power NLFLs with a high signal-to-Raman ratio are becoming increasingly important for SBC. Many methods have been used to suppress SRS in high-power fiber lasers, including large-mode-area (LMA) fibers, spectrally selective fibers, long-period fiber gratings, chirped and tilted fiber Bragg gratings (CTFBGs), and so on [26-31]. Among these methods, the CTFBG is considered a comparatively suitable component for SRS suppression. By introducing a tilt angle into a chirped FBG, a CTFBG can couple the forward core mode, originally transmitted only in the core, into backward core modes and cladding modes. Owing to their simplicity of application and good spectral stability, CTFBGs utilized as broadband spectral filters have been extensively studied. In 2017, we first proposed and demonstrated the use of a CTFBG to suppress SRS in a high-power fiber amplifier [26]. Then, with the improvement of CTFBG fabrication technology, the power-handling capability of CTFBGs kept improving, increasing from the hundred-watt to the kilowatt level [27-31]. These studies are all based on conventional high-power fiber lasers, whose linewidths are at the level of several nanometers and are not narrow. So far, there has been no report of a CTFBG used to suppress SRS in NLFLs, especially one placed at the output of a fiber laser handling multi-kW power to directly filter the SRS. Here, we report a counter-pumped MOPA-configuration NLFL based on a FOL seed with a CTFBG suppressing SRS. A CTFBG with high power-handling capability is applied to directly filter the forward SRS at the output of a multi-kW fiber laser. With the CTFBG included, the thresholds of both SRS and TMI increased; the output power reached 2.58 kW, an improvement of 300 W compared with the case without the CTFBG, and the laser slope efficiency was about 77.1%. In the output spectrum at the maximum power, the signal-to-Raman ratio was 42 dB and the 3 dB linewidth was about 0.87 nm. The beam quality factor M2 remained about 1.6 during power scaling. This work is helpful for obtaining high-power narrow-linewidth fiber lasers with a high signal-to-Raman ratio. Experimental Setup The all-fiber FBG-based MOPA-configuration fiber laser was established as shown in Figure 1. The FOL seed consisted of a wavelength-stabilized laser diode (WS LD) operating at 976 nm, a pair of fiber Bragg gratings (FBGs) with a center wavelength of ~1080 nm, a 3 m long 10/130 µm Yb-doped fiber (YDF) and a cladding power stripper (CPS). The absorption coefficient of the YDF was 5.2 dB/m at 976 nm. The high-reflective (HR) FBG and the output-coupler (OC) FBG had full-width-at-half-maximum (FWHM) bandwidths of 2.6 nm and 0.04 nm, respectively. The seed laser was injected into the amplifier stage through a mode field adaptor (MFA), whose input and output fibers had sizes of 10/130 µm and 20/400 µm, respectively. The counter-pumped amplifier stage had two configurations, differing in whether a CTFBG and a band-pass filter (BPF) were used.
For amplifier configuration 1, the gain fiber was an 11.5 m long double-cladding YDF with a 20 µm/0.06 NA core and a 400 µm/0.46 NA inner cladding. The absorption coefficient of the gain fiber was 1.42 dB/m at 976 nm. The YDF was coiled with a minimum diameter of 90 mm. Four 976 nm WS LDs were employed as the counter-pumping sources. The pump power was coupled into the active fiber via a (6 + 1) × 1 fiber combiner. The input and output signal fibers of the backward combiner both had a size of 20/400 µm. The CPS was used to eliminate residual pump light, and the laser was delivered through a quartz block head (QBH) at the end. Including the backward combiner and the QBH, the germanium-doped fiber (GDF) with a core/cladding diameter of 20/400 µm had a total length of 3 m for delivering the output laser. After the experiment on the narrow-linewidth oscillator with one-stage amplification was completed, amplifier configuration 2 was established from configuration 1 by inserting a BPF and a CTFBG. The BPF is commercially available, while the CTFBG is specially customized. The BPF was inserted after the seed laser to filter out part of the background spectral noise and the backward Raman light from the amplifier, preventing them from affecting the injected seed laser. A specially designed and fabricated CTFBG on 20/400 µm fiber was inserted between the backward combiner and the CPS for SRS suppression, resulting in a 0.7 m increase of the GDF length. The CTFBG had an average rejection depth of ~20 dB and a rejection bandwidth of more than 20 nm, which covered the whole Raman spectral range of the 1080 nm laser; its Bragg reflection range was beyond 1150 nm [29]. The measured insertion loss was 2.1%. The experiment on the filtering effect of the CTFBG on the SRS in the amplifier was then carried out. In the all-fiber laser system, all the components in the experiment, including the YDF, LDs, combiners, FBGs and CPSs, were placed on a water-cooled heat sink to ensure stability in high-power operation. The output was divided into two parts by an HR mirror: the high-power part was measured by a power meter, while the low-power part was used to measure the beam quality with a BeamSquared M2 analyzer manufactured by Ophir. Meanwhile, the spectrum and the time-domain signal were also recorded by a spectrum analyzer and a photodetector. Laser Performance with Amplifier Configuration 1 The seed power injected into the main amplifier was set to 17.5 W. Figure 2 illustrates the spectrum of the seed laser, measured by a Yokogawa AQ6370D optical spectrum analyzer with a spectral resolution of 0.02 nm; the 3 dB and 20 dB bandwidths are about 0.28 nm and 1.40 nm. In the fiber oscillator, although only 3 m of YDF was applied, the laser efficiency still reached 76%. The MOPA system was first operated with amplifier configuration 1. During the power-scaling process, the output power and spectrum were monitored and recorded. Figure 3a shows the output spectrum at different output powers with a GDF length of 6 m. With a pump power of 1982 W, the output power was 1570 W, but strong nonlinear effects, such as SRS and four-wave mixing, were observed. The signal-to-Raman ratio was measured at about 32 dB, and the difference between the signal and the four-wave-mixing peaks was about 42 dB. After cutting the GDF length to 3 m, the spectrum at different output powers is illustrated in Figure 3b. At the same signal-to-Raman ratio of 32 dB, the output power could reach 2278 W with a pump power of 2840 W, 700 W higher than before.
Furthermore, the nonlinear effects were obviously mitigated: not only the SRS but also the four-wave-mixing effect was greatly weakened. Figure 3c,d show the output power and optical-to-optical efficiency versus pump power for GDF lengths of 3 and 6 m, respectively; the insets illustrate the spectral linewidth at the highest powers. Comparing these results, the amplifier efficiency increased from 78.8% to 80.3%. At the highest powers of 1570 W and 2278 W, the 3 dB linewidths are 0.41 nm and 0.54 nm, with linewidth broadening rates of 3.8 pm/100 W and 4.2 pm/100 W. The experimental results show that the efficiency and power spectral density of the amplifier improved owing to the weaker nonlinear effects brought by shortening the GDF. Therefore, nonlinear effects have a great influence on the amplifier efficiency and output spectrum. With the 3 m GDF, the beam quality at several output powers is illustrated in Figure 4a; the beam quality factor M2 remained about 1.6 during amplification. As the pump power was increased, the output power stopped increasing at 2278 W. Figure 4b shows the temporal signals and corresponding FFT spectra at the maximum power: a periodic fluctuation of the time-domain signal was observed, reasonably indicating that TMI had occurred [24], which resulted in the power stagnation. However, the beam quality showed no sign of deterioration. The final output was thus limited by TMI. These results with the 3 m GDF are referred to as the case without CTFBG in the following comparison with the results with the CTFBG. Laser Performance with Amplifier Configuration 2 In order to further weaken the SRS, experiments were carried out in amplifier configuration 2. A BPF was inserted between the seed laser and the MFA. Not only does it filter out the background spectral noise of the seed, but it also prevents the backward SRS from affecting the seed performance. Furthermore, a specially designed CTFBG with high power-handling capability provides an effective means to directly filter the forward SRS at the output of a multi-kW fiber laser. Figure 5a shows the output spectrum at different output powers with the CTFBG. The maximum output power came to 2576 W at a pump power of 3346 W with the CTFBG included, 300 W higher than without the CTFBG. The difference between the signal and the Raman light was 40 dB at the maximum output power. The spectra at the highest power levels are compared in Figure 5b. It can be seen that the forward Raman light was largely filtered out; the suppression ratio is about 12.4 dB at the same output power level. More importantly, the CTFBG brought an increase in output power together with a lower Raman intensity. Figure 5c shows the evolution of the signal-to-Raman ratio. If a signal-to-Raman ratio of 50 dB is set as the SRS threshold, the green curve demonstrates that the SRS threshold is improved by 800 W with the CTFBG in operation. When the signal power increased to 2.58 kW, the signal-to-Raman ratio came to 40 dB; the corresponding power is 850 W higher than without the CTFBG, as illustrated by the blue curve in Figure 5c. During the amplification process, the backward spectrum of the seed was measured. Figure 5d compares the backward spectrum at an output power of 1600 W with and without the BPF: the backward SRS was filtered out cleanly, and after adding the BPF there was no difference between the forward spectrum and the original one.
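The spectral figures of merit quoted above (3 dB/20 dB linewidth and signal-to-Raman ratio) can be extracted from an OSA trace as in the following hedged Python sketch; the band edges and function names are illustrative assumptions, with the Raman Stokes band of a 1080 nm signal taken near 1135 nm.

```python
# Hypothetical sketch: compute the n-dB linewidth and the signal-to-Raman
# ratio from an OSA trace (wavelength in nm, power in dBm).
import numpy as np

def linewidth_nm(wl, p_dbm, drop_db=3.0):
    """Width of the region within drop_db of the spectral peak."""
    mask = p_dbm >= p_dbm.max() - drop_db
    return wl[mask].max() - wl[mask].min()

def signal_to_raman_db(wl, p_dbm, signal_band=(1070, 1090), raman_band=(1120, 1150)):
    """dB difference between the signal peak and the Raman Stokes peak."""
    sig = p_dbm[(wl >= signal_band[0]) & (wl <= signal_band[1])].max()
    ram = p_dbm[(wl >= raman_band[0]) & (wl <= raman_band[1])].max()
    return sig - ram
```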
Figure 6a shows the output power and efficiency versus pump power. The slope efficiency of the system is 77.1%, slightly lower than before due to the insertion loss of the CTFBG. The inset shows the 3 dB and 20 dB signal bandwidths of 0.87 nm and 4.62 nm. Figure 6b compares the 3 dB and 20 dB bandwidths of the signal laser versus output power with and without the CTFBG. From this comparison, with the 0.7 m increase of GDF introduced by the CTFBG, the linewidth was wider than that without the CTFBG, especially the 20 dB linewidth; nevertheless, the linewidth broadening rate decreased noticeably. The beam quality at different output powers is illustrated in Figure 6c: when the output power reached 2576 W, the beam quality remained about 1.6 and did not deteriorate. Figure 6d shows the temporal signals and corresponding FFT spectra at the maximum power. When the maximum output power was reached, TMI did not appear, which means the TMI threshold was improved by the suppression of SRS. A possible reason is that the CTFBG coupled the forward Raman light into backward-propagating cladding modes, which decreased the heat deposition in the fiber, so the TMI threshold increased along with the SRS suppression. Further power improvement was limited by the available pump power.
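For reference, the slope efficiency quoted above is simply the slope of a linear fit of output power against pump power. The sketch below illustrates the computation on made-up numbers of roughly the right magnitude; these are not the measured data.

```python
# Minimal sketch: slope efficiency as the linear-fit slope of output power
# versus pump power. The arrays below are illustrative placeholders.
import numpy as np

pump_w = np.array([500, 1000, 1500, 2000, 2500, 3000, 3346])
out_w = np.array([330, 720, 1110, 1490, 1880, 2270, 2540])

slope, intercept = np.polyfit(pump_w, out_w, 1)
print(f"slope efficiency ~ {100 * slope:.1f}%")
```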
Next, when using a CTFBG to suppress SRS in an NLFL, the influence of the system fiber length on the output linewidth needs to be considered, and other methods to suppress spectral broadening and TMI should also be adopted. Furthermore, experimental design strategies for higher output power should mainly focus on the characteristics of the injected seed laser, the active fiber, and the co-/counter-pumping power ratios, to achieve comprehensive suppression of both SRS and TMI. Conclusions In summary, we have presented an all-fiber narrow-linewidth fiber amplifier seeded by a narrow-reflection FBG-based oscillator, with a CTFBG used to suppress SRS for power scaling. Without the CTFBG, the maximum output laser power was 2276 W, mainly limited by TMI. With the CTFBG inserted between the backward combiner and the output passive fiber, increases in the thresholds of both SRS and TMI were observed, and the maximum laser power was improved to 2576 W with a signal-to-Raman ratio of 42 dB, a slope efficiency of 77.1%, and a 3 dB linewidth of 0.87 nm. At the maximum power, no TMI was observed and the beam quality factor M2 remained about 1.6. By further optimizing the system parameters, such as the power and linewidth of the seed, the active fiber length and its coiling in the amplifier, the position of the CTFBG, and so on, this system could be expected to reach the 5 kW level in the future. This work could provide a good reference for obtaining compact high-power narrow-linewidth fiber lasers with a high signal-to-Raman ratio.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy. Conflicts of Interest: The authors declare no conflict of interest.
5,089
2021-11-25T00:00:00.000
[ "Physics", "Engineering" ]
Shrinkage estimators for semiparametric regression model Semiparametric regression models are extensions of linear regression models that include a nonparametric function of some explanatory variables. In semiparametric regression models, researchers often encounter the problem of multicollinearity. In the context of the ridge estimator, the choice of the shrinkage parameter plays an important role in analyzing data. In this paper, numerous methods for selecting the shrinkage parameter of the ridge estimator are explored and investigated. Our Monte Carlo simulation results suggest that some estimators bring significant improvement relative to others in terms of mean squared error. Introduction Semiparametric regression models have received considerable attention in statistics and econometrics because of their flexibility in modeling [1,2]. Consider a semiparametric regression model given by y_i = x_i′β + f(t_i) + ε_i, i = 1, …, n [3]. Most approaches to the semiparametric regression model are based on different nonparametric regression procedures, and there have been several approaches to estimating β and f(·). An alternative to nonparametric procedures is the differencing methodology. This approach uses differences to remove the trend in the data that arises from the function f(·); it does not require an estimator of f(·) and is often called a difference-based procedure. Provided that f(·) is differentiable and the t ordinates are closely spaced, it is possible to remove the effect of f(·) by differencing the data appropriately. In model (Eq. (1)), [5] concentrated on estimation of the linear component: the difference-based estimation procedure is optimal in the sense that the estimator of the linear component is asymptotically efficient and the estimator of the nonparametric component is asymptotically minimax-rate optimal, and [5] used higher-order differences, drawn from a special class of difference sequences, for optimal efficiency in estimating the linear part. Now consider a semiparametric regression model in the presence of multicollinearity. The existence of multicollinearity may lead to wide confidence intervals for the individual parameters or linear combinations of the parameters, and to coefficient estimates with incorrect signs. For our purpose we employ only the ridge regression concept, due to Hoerl and Kennard (1970), to combat multicollinearity. There is a large body of work adopting the ridge regression methodology to overcome the multicollinearity problem. Note that with m = p = 1, from (2.2) we obtain first-order differences. We then estimate the linear regression coefficient β by the ordinary least-squares estimator based on the differences, obtaining the least-squares estimate. The role of the constraints (Eq. (3)), namely Σ_j d_j = 0 and Σ_j d_j² = 1, is now evident: the first condition ensures that, as the t's become close, the nonparametric effect is removed, and the second ensures that the variance of the sum of weighted residuals remains equal to σ² in Eq. (2). Now, we define the (n − m) × n differencing matrix D whose elements satisfy Eq. (3). This and related matrices are given, for example, in [4,6-9]. Applying the differencing matrix to model (Eq. (2)) permits direct estimation of the parametric effect. As a result of the development in Speckman (1988), it is known that the parameter vector β in (Eq. (1)) can be estimated with parametric efficiency. We now present the difference-based estimators that can be used for this purpose.
Since the data have been ordered so that the values of the nonparametric variable(s) are close, applying the differencing matrix D to model (Eq. (2)) removes the nonparametric effect; here tr(·) denotes the trace of a square matrix and P is the associated projection matrix. Ridge Estimator To overcome the effect of multicollinearity, the ridge estimator is usually utilized. The ridge estimator for the semiparametric regression model (RE) is defined, for shrinkage parameter k > 0, by β̂(k) = (X̃′X̃ + kI)⁻¹ X̃′ỹ, where X̃ = DX and ỹ = Dy denote the differenced data. Simulation Results A Monte Carlo simulation scheme is used to evaluate the performance of the estimation methods for the shrinkage parameter of the ridge estimator. The explanatory variables were generated with a specified degree of correlation ρ, and the comparison covers all study sample sizes (small, medium and large); some methods improved the performance of the ridge estimator relative to the others because they gave the lowest values of MSE. 5. As for the correlation coefficient, the K1 method was superior for all sample sizes, followed by the K8 method for small and medium sample sizes; for large sample sizes, the HK2 method came second. 6. When the correlation coefficient was ρ = 0.99, the K3 method was the best, followed by the K11 method, across the different sample sizes.
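A minimal sketch of the difference-based ridge estimator discussed above, assuming the first-order difference sequence d = (1, −1)/√2 (which satisfies the two constraints on the d_j) and the standard ridge form; the function name and interface are illustrative.

```python
# Hypothetical sketch: first-order difference-based ridge estimation for the
# semiparametric model y_i = x_i' beta + f(t_i) + eps_i. Differencing over
# observations ordered by t removes the smooth nonparametric trend f(.).
import numpy as np

def difference_ridge(y, X, t, k):
    """k is the ridge shrinkage parameter; d = (1, -1)/sqrt(2) satisfies
    sum d_j = 0 and sum d_j^2 = 1 (the constraints in Eq. (3))."""
    order = np.argsort(t)                  # order data by the t's
    y, X = y[order], X[order]
    dy = (y[1:] - y[:-1]) / np.sqrt(2)     # D y
    dX = (X[1:] - X[:-1]) / np.sqrt(2)     # D X
    p = dX.shape[1]
    return np.linalg.solve(dX.T @ dX + k * np.eye(p), dX.T @ dy)

# k = 0 recovers the difference-based least-squares estimator of beta.
```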
1,030.4
2021-01-01T00:00:00.000
[ "Physics" ]
The Expansion Methods of Inception and Its Application In recent years, with the rapid development of deep learning technology, a large number of excellent convolutional neural networks (CNNs) have been proposed, many of which are based on improvements to classical methods. Within the Inception family of methods, depthwise separable convolution was applied in Xception to achieve lightweighting, and Inception-ResNet introduced residual connections to accelerate model convergence. However, existing improvements of the Inception module often neglect further enhancement of its receptive field, while increasing the receptive field of CNNs has been widely studied and proven effective in improving classification performance. Motivated by this fact, three effective expansion modules are proposed in this paper. The first expansion module, the Inception expand (Inception-e) module, is proposed to improve classification accuracy by concatenating more and deeper convolutional branches. To reduce the number of parameters of Inception-e, this paper proposes a second expansion module, the Equivalent Inception-e (Eception) module, which is equivalent to Inception-e in terms of feature-extraction capability but suppresses the parameter growth brought by the expansion by effectively reducing redundant convolutional layers. On the basis of Eception, this paper proposes a third expansion module, the Lightweight Eception (Lception) module, which interleaves depthwise convolution with ordinary convolution to further reduce the number of parameters. The three proposed modules have been validated on the Cifar10 dataset. The experimental results show that all these extensions are effective in improving the classification accuracy of the models; the most significant effect comes from the Lception module, where Lception (rank = 4) improves accuracy on Cifar10 by 1.5% compared to the baseline model (Inception module A) while using only 0.15 M more parameters. Introduction Convolutional neural networks (CNNs) have experienced rapid development in the past decades. Currently, CNNs are widely used in many computer vision tasks, including facial expression recognition [1-4], Alzheimer's disease diagnosis [5], and so on. LeNet [6] marked the beginning of CNNs; it was an early attempt limited by the computational resources of its time. Later, the emergence of AlexNet [7] pushed the breakthrough of CNNs: it realized a deep network by introducing the ReLU activation function and by splitting the computation across two GPUs. VGG [8] built on this foundation by using more small convolutional kernels to control the parameter cost and thus deepen the network in order to extract more representative global features. ResNet [9] introduced the residual connection to solve the vanishing-gradient problem, which allows the network to go deeper. All of the above models have no branching structure, or their branches act only as residual connections, which makes them less effective at extracting features of different global extent. Unlike them, the Inception family of networks consists of a series of modules with a multi-branch structure. By using convolution kernels of different sizes or branches of different depths, the modules can extract features with varying degrees of globality.
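Since the argument above turns on receptive-field growth, a small sketch makes the arithmetic explicit: a branch of n stacked 3 × 3 convolutions with stride 1 has receptive field 2n + 1, so deeper branches see progressively more global context.

```python
# Quick sketch: receptive field of a branch made of n stacked 3x3 convolutions
# (stride 1). Each extra layer adds (3 - 1) = 2 pixels, so RF = 2n + 1.
def receptive_field(num_3x3_layers: int) -> int:
    rf = 1
    for _ in range(num_3x3_layers):
        rf += 2   # kernel 3, stride 1 adds 2 pixels per layer
    return rf

print([receptive_field(n) for n in range(1, 5)])   # [3, 5, 7, 9]
```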
In previous studies, researchers have mainly focused on building networks using classical Inception modules and on fusing Inception modules with other methods, with relatively little exploration of network depth and width. Generally speaking, the deeper the network, the more global the features that can be extracted; the wider the network, or the more branches it has, the richer the extracted features. Therefore, we study this area in depth and explore how to further improve the Inception module to enhance its performance and its ability to extract features. Specifically, in this paper, we improve the performance of the Inception module by extending its depth and width. However, extending the depth and width of a module incurs a large parameter cost and may lead to high computational complexity in some tasks, which limits its application in resource-constrained environments such as mobile devices or edge computing devices. Therefore, in this paper, the Inception module is optimized and improved, and the idea of lightweighting is introduced to improve its computational efficiency. We propose three extension methods for progressive optimization. With these improvements, the performance of the Inception module is enhanced, which promotes its wide application in various tasks in resource-constrained environments. The Models of the Inception Series The first member of the Inception model family was the landmark GoogLeNet [10]. As a model proposed in the same year as VGG, it is deeper and achieved higher classification accuracy on the ImageNet dataset. As shown in Figure 1, the core module of GoogLeNet can be referred to as the Original Inception module, which uses pooling layers and multiple convolutional layers with different kernel sizes in parallel to obtain globally different features. C. Szegedy et al. [11] proposed to reduce the number of parameters by decomposing the convolutional layers with larger kernel sizes in the original Inception module into convolutional layers with smaller kernel sizes. This decomposition can be either symmetric or asymmetric, as in Figure 1; the decomposed structures are referred to in this paper as Inception module A (symmetric decomposition), Inception module B (asymmetric decomposition), and Inception module C (asymmetric decomposition). Another effective improvement of [11] is the application of the batch normalization [12] method. C. Szegedy et al. [13] further proposed Inception v4 and also proposed to accelerate the convergence of the model by introducing residual connections. In addition to the classical methods mentioned above, there have also been several approaches in recent years to improving the Inception structure. X. Zhang et al.
proposed a new module, Residuals Inception (RI) [14], in which each parallel branch of the original structure is replaced by three densely connected convolutional layers, allowing the neural network to extract a richer set of features. M. Z. Alom et al. proposed IRCNN [15], which combines CNNs and Recurrent Neural Networks (RNNs) to improve classification accuracy. L. Xie et al. [16] proposed the use of an optimized Inception module combined with the convolutional block attention module (CBAM) attention mechanism [17], and introduced residual connectivity into its structure to extract multi-scale features and improve classification accuracy. F. Chen et al. proposed BeIn-v4 [18], which introduced the SKNet [19] attention mechanism to extract image features more effectively and improve classification accuracy. Lightweight CNNs Lightweight CNNs significantly reduce the number of parameters while maintaining model accuracy. The main way to reduce the number of parameters is to replace the traditional high-density connections with a convolutional approach that is as sparse as possible without harming the information shared between features. This idea is adopted in the mainstream depthwise separable convolution and grouped convolution; grouped convolution splits the input tensor into groups, convolves it group by group and finally splices the results to obtain the output. However, this method cannot realize communication among the convolution groups, which means it cannot extract image features effectively. To solve this problem, X. Zhang et al. proposed a tensor rearrangement method in ShuffleNet v1 [20], which realizes communication between convolution groups by rearranging the output of the grouped convolution. Another convolutional approach is the depthwise separable convolution proposed by F. Chollet et al. [21]. This method consists of two steps: 1. extracting features channel by channel using depthwise convolution; 2. implementing inter-feature communication using pointwise convolution. There are also some works on optimizing the structure of lightweight networks to improve classification accuracy. GhostNet [22], proposed by K. Han et al., is a stack of ghost modules, in which each ghost module uses identity and depthwise separable convolutional layers to generate the complete feature map. MobileNeXt [23], proposed by D. Zhou et al., replaces the depthwise separable convolution in the MobileNet v2 [24] network with a sandglass module (which consists of a 3 × 3 depthwise separable convolution, a 1 × 1 convolution (squeeze), a 1 × 1 convolution (recover), and a 3 × 3 depthwise separable convolution) to improve network performance. WeightNet [25], proposed by N. Ma et al., is a simple and efficient dynamic weight-generating network which applies the SENet [26] channel attention mechanism: a dynamic weight tensor is first obtained by global average pooling and a fully connected layer with sigmoid activation, and the original tensor is then weighted by this weight tensor. In contrast, EfficientNet [27], proposed by M. Tan, is improved by neural architecture search (NAS) in three aspects simultaneously (input resolution, network depth, and width) to improve the classification accuracy of the network model.
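A minimal PyTorch sketch of the depthwise separable convolution described in the two steps above (per-channel depthwise filtering followed by 1 × 1 pointwise mixing); the class name is illustrative.

```python
# Minimal sketch of depthwise separable convolution: channel-wise spatial
# filtering (groups=in_ch) followed by 1x1 pointwise mixing across channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)  # step 1
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)               # step 2

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter count (ignoring biases): in_ch*k*k + in_ch*out_ch, versus
# in_ch*out_ch*k*k for a standard convolution of the same kernel size.
```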
Some recent work on model lightweighting has focused mainly on the Vision Transformer (ViT) [28]. ViT has achieved impressive results in the field of computer vision. However, such models are often hard to deploy on mobile devices due to their large model size and high latency; thus, a lightweight design for such models is necessary [29][30][31][32][33][34][35]. MobileViG [29] is the first hybrid CNN-GNN for vision tasks on mobile devices, whose main contribution is Sparse Visual Graph Attention (SVGA) for faster inference. FastViT [30] introduces a novel token mixing operator, RepMixer, which effectively reduces memory access costs. SwiftFormer [31] introduces a novel efficient additive attention mechanism that replaces the quadratic matrix multiplication operations with linear element-wise multiplications.

In summary, researchers have improved Inception in terms of convolution kernel size, convolution method, and inter-stage connectivity, but no research has approached it from the perspective of expanding the depth and width of the Inception module. In this paper, we first extend the depth and width of the Inception module in different ways and show that the proposed method can provide excellent classification performance. However, simply extending the depth and width of the module would involve a large number of parameters. Therefore, we further improve the structure and incorporate lightweight design to propose several feasible extension methods. The main contributions of this work are: 1. A basic extension method, the Inception-e module, is proposed. Building on Inception module A, this basic expansion method shows experimentally that increasing the depth and width of Inception module A improves the classification accuracy of the model, but at the cost of a huge number of parameters. 2. To solve the problem of the parameter growth caused by the extension, an equivalent extension method, the Eception module, is proposed, which has receptive field and feature extraction abilities comparable to Inception-e. The Eception module improves the classification accuracy of the model while saving parameters. 3. A lightweight expansion method, the Lception module, is proposed. Building on the Eception module and inspired by lightweight convolutional neural networks, it cross-replaces the ordinary convolutional layers of the Eception module with depthwise convolutional layers, whose weights are sparser, thereby reducing the number of parameters. The experimental results show that the Lception module can effectively improve the classification accuracy of the network with almost the same number of parameters.

The remaining part of this article is arranged as follows. In Section 2, the structures of Inception-e, Eception, and Lception are described in detail. Section 3 focuses on the experiments and analyses we conducted, including the datasets used, the validation of the three extension methods, the Grad-CAM visualization analysis, and the comparison with other methods. The conclusions are provided in Section 4.

Basic Expansion Method-Inception-e

Paralleling more and deeper convolution branches on top of Inception module A can improve its ability to extract features, so that the model achieves higher classification accuracy. The extended structure is named the Inception-e module.
The original Inception module uses convolutional layers with different kernel sizes to extract global features and fuses them; the larger the kernel size, the more global (more abstract) the extracted features. Decomposing large kernels into stacked small-kernel convolutional layers effectively saves parameters without reducing the structure's ability to extract features. We believe that concatenating convolutional branches of increasing depth on top of Inception module A enables the structure to extract globally richer features. As shown in Figure 2, convolutional branches of different depths affect feature extraction differently, with deeper branches capturing more global features and shallower branches extracting more detailed features. Parallel concatenating more and deeper branches therefore allows the model to extract more representative and richer features. Specifically, shallower convolutional branches extract more detailed features such as facial texture, while deeper branches extract features that are more representative of facial characteristics, such as facial contours.

As shown in Figure 3, our proposed method only expands the core structure of Inception module A, while the other parts remain unchanged. The Inception-e method progressively concatenates deeper convolutional branches on top of the core structure. The nth-rank expansion structure is named the Inception-e module (rank = n). Although this method improves the feature extraction ability of the structure and enables the model to obtain higher classification accuracy, it comes with a huge parameter cost, and the number of parameters rises rapidly as the rank increases.
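Based on our reading of this description and Figure 3, the Inception-e core structure can be sketched as rank parallel branches whose depth grows by one 3 × 3 convolution per branch. The channel counts and activations below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionE(nn.Module):
    """Sketch of the Inception-e core structure: branch i stacks i
    successive 3x3 convolutions, so the equivalent receptive fields
    grow as 3, 5, 7, ... across branches; outputs are concatenated."""
    def __init__(self, in_ch, branch_ch, rank):
        super().__init__()
        self.branches = nn.ModuleList()
        for depth in range(1, rank + 1):
            layers = [nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU()]
            for _ in range(depth - 1):
                layers += [nn.Conv2d(branch_ch, branch_ch, 3, padding=1), nn.ReLU()]
            self.branches.append(nn.Sequential(*layers))

    def forward(self, x):
        # Every branch takes the same input tensor; deeper branches
        # produce more global features, shallower ones more detail.
        return torch.cat([b(x) for b in self.branches], dim=1)

m = InceptionE(in_ch=64, branch_ch=32, rank=4)
print(m(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 128, 48, 48])
```

Counting parameters of this sketch for increasing rank reproduces the rapid growth described above, since branch i alone contributes i stacked convolutions.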
Equivalent Expansion Method-Eception

In order to reduce the number of parameters brought by the extension, the Eception module is proposed in this paper. This structure is equivalent to the Inception-e module in its feature extraction effect, but saves a large number of parameters.

As shown in Figure 3, unlike the Inception-e module, in which each convolutional layer of a branch takes the output tensor of the previous convolutional layer of that branch as input, the Eception module only keeps the last two convolutional layers of each branch and discards the rest, while the penultimate convolutional layer takes as input the output tensor of the previous convolutional layer of the adjacent branch.

This method effectively improves the efficiency of the convolutional layers and reduces the number of redundant convolutional layers, thus suppressing the spike in the number of parameters caused by the expansion. When the size of the output tensor of the core structure is W × H × (C × rank), the number of 3 × 3 convolutional kernels of the Inception-e module is N_I and the number of 3 × 3 convolutional kernels of the Eception module is N_E; the relationship between the number of 3 × 3 convolutional kernels and the rank of the two modules is compared in Table 1.

Table 1. Comparison of the relationship between the number of 3 × 3 convolutional kernels and the rank of the Inception-e module and the Eception module.

Obviously, when the rank is ≥3, the Eception module can effectively save parameters compared to the Inception-e module, and the higher the rank, the larger the saving ratio.

While reducing the number of parameters, the structural change of the Eception module compared with the Inception-e module does not affect the feature extraction ability of the module. As shown in Figure 4, the receptive fields over the input features are the same for both. When the input tensor is X_input, the output tensors X_output and X'_output of the core structures of the Inception-e module and the Eception module can be represented as

X_output = F_{s=3}(X_input) // F_{s=5}(X_input) // ... // F_{s=2·rank+1}(X_input),
X'_output = F_{s=3}(X_input) // F_{s=5}(X_input) // ... // F_{s=2·rank+1}(X_input),

where F is the convolution operation, the subscript s is the equivalent receptive field size of that operation, and // denotes tensor concatenation. The receptive field can be calculated as

rf_l = rf_{l-1} + (k_l - 1) × s_l,

where rf_l denotes the receptive field size of the convolution at layer l, rf_{l-1} denotes the receptive field size of the convolution at layer l − 1, k_l denotes the size of the convolution kernel at layer l (assuming the convolution kernel is symmetric), and s_l denotes the convolution stride at layer l.
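The recursion above is easy to check numerically. Below is a minimal helper implementing the stated formula, with stride-1 convolutions assumed by default:

```python
def receptive_field(kernel_sizes, strides=None):
    """Recursive receptive-field computation following
    rf_l = rf_(l-1) + (k_l - 1) * s_l, with rf_0 = 1,
    as stated in the text."""
    strides = strides or [1] * len(kernel_sizes)
    rf = 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * s
    return rf

# Three stacked 3x3, stride-1 convolutions cover the same receptive
# field as a single 7x7 convolution:
print(receptive_field([3, 3, 3]))  # 7
print(receptive_field([7]))        # 7
```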
Obviously, multiple convolutions with small kernels can cover a receptive field equal to that of one convolution with a large kernel: e.g., F_{s=7}(·) means that the operation is equivalent to the receptive field of a 7 × 7 convolutional layer. Since the equivalent receptive field sizes of the convolutions producing the components of X_output and X'_output are the same, the ability of the Eception module to extract features is approximately equivalent to that of the Inception-e module. The later experiments confirm this: the classification accuracy of the Eception module is very close to that of the Inception-e module.

Lightweight Expansion Method-Lception

Cross-replacing the ordinary convolution layers in the Eception module with depthwise convolution layers can further reduce the number of parameters of the structure; the resulting structure is named the Lception module.

As in Figure 3, to obtain the Lception module, the ordinary convolutional layers of the Eception module are cross-replaced with depthwise convolutional layers using the h-swish [36] activation function and 5 × 5 convolutional kernels. The use of depthwise convolution allows the structure to further reduce the number of parameters relative to the Eception module. Using h-swish instead of ReLU as the activation function of the depthwise convolution layers can improve model performance by effectively avoiding the dying-neuron phenomenon. In addition, since the weights of the depthwise convolutional layers are very sparse, using a larger convolutional kernel (such as 5 × 5) can effectively expand the receptive field of the Lception module at only a very small parameter cost.
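A minimal sketch of the Lception-style replacement follows: 5 × 5 depthwise convolutions with h-swish activation alternated with ordinary convolutions. The exact alternation pattern, the kernel sizes of the ordinary layers, and the channel counts are our assumptions for illustration.

```python
import torch
import torch.nn as nn

def depthwise5x5(ch):
    """5x5 depthwise convolution with h-swish activation
    (nn.Hardswish implements h-swish = x * ReLU6(x + 3) / 6).
    Sparse weights (groups = ch) give a large receptive field
    at a very small parameter cost."""
    return nn.Sequential(nn.Conv2d(ch, ch, 5, padding=2, groups=ch),
                         nn.Hardswish())

def lception_branch(ch, depth):
    """Alternate ('cross-replace') ordinary 3x3 convolutions with
    5x5 depthwise layers; the ordinary layers retain the
    inter-channel communication that pure depthwise lacks."""
    layers = []
    for i in range(depth):
        if i % 2 == 0:
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()]
        else:
            layers.append(depthwise5x5(ch))
    return nn.Sequential(*layers)

print(lception_branch(32, 4)(torch.randn(1, 32, 48, 48)).shape)
# torch.Size([1, 32, 48, 48])
```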
As shown in Figure 5, depthwise convolution, being a very sparse convolution method, must implement effective inter-channel feature communication to extract features effectively. Depthwise separable convolution achieves inter-channel feature communication through pointwise convolutional layers, while our approach achieves it by crossing depthwise convolutional layers with ordinary convolutional layers. Moreover, since we retain ordinary convolutional layers with 5 × 5 kernels, this structure has a larger receptive field than the depthwise separable convolution and can extract more abstract features that are useful for classification.

Datasets

To verify the performance of the proposed model, the benchmark dataset Cifar10 is mainly used for the experiments. The Cifar10 dataset comprises a training set of 50,000 images and a test set of 10,000 images, totaling 60,000 32 × 32 color images in 10 classes. To facilitate the experiments, both the training and test set images are enlarged to 96 × 96 and normalized, and data augmentation is applied only to the training set. The FER2013 dataset consists of 35,887 grayscale images of size 48 × 48 in seven categories: 1-Surprise, 2-Fear, 3-Disgust, 4-Happiness, 5-Anger, 6-Neutral, 7-Sadness. It includes 28,709 images in the training set (used as the training set), 3589 images in the public test set (used as the validation set), and 3589 images in the private test set (used as the test set). We normalize all images and apply data augmentation only to the training set. The FER+ dataset is relabeled from FER2013. When using the majority mode, it includes 25,045 training images, 3191 validation images, and 3137 test images, totaling 31,373 48 × 48 grayscale images in eight categories; FER+ has one more category than FER2013 and RAF-DB: 8-Contempt. The RAF-DB dataset, including single-label and double-label subsets, totals 29,672 images; the images used in this paper include 12,271 training images and 3069 test images, totaling 15,340 100 × 100 color images in seven categories. For the experiments, both the training and test sets were cropped to 96 × 96 and normalized, and only the training set was augmented. In Figure 6, some example images from the FER+ and RAF-DB datasets are shown.

The Network Models and Experimental Conditions

Table 2 shows the network models for the experiments using the Cifar10 dataset. For the classifier, global pooling is first used to flatten the tensor, then Dropout regularization [40] discards 20% of the features, and finally classification is achieved using a fully connected layer with Softmax activation. Because the input image sizes differ, the number of modules used in the models for the other datasets is adjusted accordingly; the other settings are similar to those shown in Table 2. The experimental conditions on the Cifar10 dataset included: training batch size: 32; regularization: L2 regularization of the weights of all convolutional layers. On the RAF-DB, FER+, and FER2013 datasets: no momentum acceleration is used, and considering the smaller datasets, the training epochs are extended; the rest of the conditions are the same as above.

Validation of the Three Expansion Methods

Some experiments were conducted to validate the proposed three expansion methods. Overall Accuracy (OA) is adopted as the measure of the classification accuracy of the model, and OA can be expressed as

OA = (TP + TN) / (TP + TN + FP + FN),

where TP denotes the number of positive samples predicted as positive, FN the number of positive samples predicted as negative, FP the number of negative samples predicted as positive, and TN the number of negative samples predicted as negative.
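For reference, the OA definition is ordinary classification accuracy over the confusion-matrix counts:

```python
def overall_accuracy(tp, fn, fp, tn):
    """OA = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts only:
print(overall_accuracy(tp=450, fn=50, fp=30, tn=470))  # 0.92
```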
In this section, the Cifar10 dataset is used for the experiments. Figure 7 shows the experimental results for the classification accuracy, number of parameters, and time complexity of the proposed three methods at each rank.

For the Inception-e module, as seen in Figure 7a, parallel concatenating more and deeper convolutional branches effectively improves the classification accuracy of the model. The Inception-e module (rank = 6) has the highest classification accuracy of 90.4%, a 2.1% improvement over Inception module A. As seen in Figure 7b, although classification accuracy is effectively improved by Inception-e, this expansion is accompanied by a huge number of parameters. The larger the rank, the faster the number of parameters rises, reaching 13.4 M at rank = 6.

For the Eception module, as seen in Figure 7a, its accuracy is close to that of the Inception-e module, indicating that the two modules are equivalent in their ability to extract features. At rank = 6, the Eception module achieves its highest classification accuracy of 90.2%, which is 1.8% higher than Inception module A and only 0.2% lower than the Inception-e module (rank = 6). As seen in Figure 7b, the Eception module effectively reduces the number of parameters added by the expansion compared to the Inception-e module. For example, the Eception module (rank = 6) has 6.6 M parameters, 49% fewer than the Inception-e module of the same rank.

For the Lception module, as shown in Figure 7a, its classification accuracy approximates that of the other two methods: the Lception module (rank = 6) reaches 90.2%, which is 1.9% higher than Inception module A, only 0.2% lower than the Inception-e module (rank = 6), and essentially the same as the Eception module (rank = 6). The number of parameters of the Lception module (rank = 6) is 3.6 M, only 27% of Inception-e (rank = 6) and 55% of Eception (rank = 6). The classification accuracy of Lception (rank = 4) is 1.5% higher than that of Inception module A at a cost of only 0.15 M additional parameters.
In summary, compared with the classical Inception module A, the proposed Inception-e, Eception, and Lception modules all have significant advantages in classification accuracy. By using more and deeper branches, the Inception-e module effectively improves the ability to extract features. The Eception module effectively reduces the number of parameters and achieves higher classification accuracy by improving the structure, while the Lception module greatly reduces the number of parameters and yields a lighter network by cross-replacing the traditional convolutions in the Eception module with depthwise convolutions. In addition, the Lception module makes more effective use of intra-regional correlation by increasing the receptive field of the convolutional kernels on the image and obtains more discriminative image features, thus overcoming the degradation of classification performance caused by the reduced number of convolutional kernels resulting from the depthwise convolution replacement.

For the proposed three structures, experiments were conducted on the Cifar10 dataset. The experimental results show that, as extended structures of Inception module A, all three methods effectively improve classification performance compared to Inception module A. The proposed Lception module obtains classification accuracy approximating that of the Inception-e and Eception modules with the minimum number of parameters, which fully demonstrates the effectiveness of the methods.

Grad-CAM Visual Analysis

In order to illustrate more intuitively the impact of the different expansion methods on network performance, some images of different classes were selected from RAF-DB and the Grad-CAM [42] method was used to visualize and analyze four structures: Inception module A, the Inception-e module (rank = 6), the Eception module (rank = 6), and the Lception module (rank = 6). The experimental results are shown in Figure 8. The color of the heat map reflects how much attention the network pays to a region; the darker the color, the more the neural network attends to that region. As these visualization results show, all three of our proposed expansion modules focus on a larger range and more features than Inception module A, which further demonstrates the effectiveness of the proposed expansion methods. This is because, with more and deeper branching structures, the module is able to capture more global features while still retaining the ability to extract fine-grained features.
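For readers who want to reproduce heat maps of the kind shown in Figure 8, a compact Grad-CAM computation can be written directly in PyTorch. This is a generic sketch of the published Grad-CAM method [42], not the authors' visualization code; it assumes a classification model returning logits of shape (N, num_classes).

```python
import torch

def grad_cam(model, layer, x, class_idx):
    """Grad-CAM: weight the chosen layer's feature maps by the
    spatially averaged gradients of the target class score, then
    ReLU the weighted sum to obtain a coarse attention heat map."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    fmap, grad = feats[0], grads[0]                # (1, C, H, W)
    weights = grad.mean(dim=(2, 3), keepdim=True)  # global-average-pooled grads
    cam = torch.relu((weights * fmap).sum(dim=1))  # (1, H, W)
    return cam / (cam.max() + 1e-8)                # normalize to [0, 1]
```

Here `layer` would typically be the last convolutional block of the network under inspection, and the returned map is upsampled to the input resolution before overlaying it on the image.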
Comparison with More Methods

To further verify the performance of the proposed modules, we selected the Lception module (rank = 4), the Eception module (rank = 4), and the Eception module (rank = 8) for comparison with some classical networks, lightweight networks, and facial expression classification methods; the results are shown in Table 3. The * in Table 3 indicates that the structure was evaluated under the same experimental environment as the method proposed in this paper. VGG-GAP refers to using global average pooling instead of the flatten operation on top of the classical VGG, with only one fully connected layer for classification. Some results for several networks on the FER+ dataset are cited from the literature [43].

As shown in Table 3, on the FER+ dataset the overall accuracy of the Lception module (rank = 4) is 86.9%, that of the Eception module (rank = 4) is 87.3%, and that of the Eception module (rank = 8) is 87.6%, improvements of varying degrees over the original Inception module and Inception module A. Compared with the classical structures VGG-11-GAP, VGG-13-GAP, VGG-13, VGG-19, and ResNet18, the Lception module (rank = 4) and Eception module (rank = 6) use fewer parameters to obtain higher classification accuracy. For example, on the FER+ dataset, the number of parameters of the Lception module (rank = 4) is only 6.5% of that of VGG-19, yet its accuracy is 2.5% higher, a significant advantage. Compared with the lightweight networks MobileNet v1 [47], MobileNet v2, ShuffleNet v1, and ShuffleNet v2 [48], the accuracy of the proposed Lception module (rank = 4) is 2.8%~6.5% higher on the FER+ set with a slightly increased number of parameters.

The proposed Eception and Lception modules also show advantages over other neural network-based methods for facial expression classification. SHCNN [45] is a shallower neural network that achieves 86.5% accuracy on the FER+ dataset. The core method of [44] consists of ensembles with shared representations (ESRs) based on convolutional networks; ESR-9 achieves 87.15% on the FER+ dataset. In [43], a lightweight emotion recognition (LER) model was proposed that combines densely connected convolutional layers and model compression techniques into a framework that eliminates redundant parameters, obtaining 85.67% accuracy on the FER+ dataset. The method in [46] obtained 84.29% on the FER+ dataset. The listed Eception (rank = 6) and Lception (rank = 4) obtain higher classification accuracy than the above methods.
As shown in Table 4, we also conducted experiments on the FER2013 dataset. The literature [49] uses transfer learning to classify facial expressions; by fine-tuning classical convolutional neural networks, the feature extraction capability of large classical networks can be utilized effectively. The literature [50] proposes judging the reliability of the current classification result with a multi-layer perceptron (MLP) classifier; if the result is unreliable, the given face image is used as a query to retrieve similar images, and another MLP is trained to predict the final emotion category by aggregating the classification output vectors of the query image and its retrieved similar images. The literature [51] presents a generic convolutional neural network model for real-time applications. The literature [52] proposes several compact, differently structured subnets that are trained individually and then assembled into the whole network. Compared to these models, our proposed method obtains higher accuracy on the FER2013 dataset.

Table 4. Classification accuracy (%) on the FER2013 dataset: AlexNet [49]: 66.7; GoogLeNet [50]: 64.6; GoogLeNet + MLP [50]: 65.8; mini-Xception [51]: 66.0; Subnet3 [52]: 62.4; Subnet Ensemble [52]: 65.0.

In summary, the comparison with Inception module A demonstrates that the proposed extension methods effectively improve model accuracy at only a small parameter cost. Compared with some classical structures, lightweight structures, and other mainstream methods, the proposed Eception and Lception modules obtain higher classification accuracy. This is because the proposed modules parallelize more branches, allowing them to extract richer and more abstract features. However, the convolutional kernels used in these modules are symmetric, which tends to make the number of parameters larger than in modules using asymmetric kernels; this issue will be addressed in our next research work. The relevant code will be uploaded at https://github.com/LIUZHENQUANS/EMI (accessed on 14 March 2024).

Conclusions

In this paper, we investigate extension methods for the Inception module and propose a new idea for extending the network structure. By carefully designing the network structure, it can deliver a larger improvement in classification performance than the original network, with fewer parameters.

Specifically, we first propose the Inception-e module, which improves classification accuracy by concatenating more and deeper convolutional branches, and then propose the Eception module to solve the problem of excessive parameters due to the increase of depth and width. The Lception module is then designed on the basis of the Eception module by cross-replacing the ordinary convolutions in the Eception module with depthwise convolutions. The experimental results show that the extended network structure can effectively improve classification accuracy while reducing the number of network parameters. We also note that the convolution kernels used in the proposed methods are symmetric. Considering that models using asymmetric convolution kernels can obtain a larger receptive field with the same number of parameters, further application of asymmetric convolution kernels on top of the proposed methods may improve model performance, which is a possible direction for future work.
It is worth noting that the method proposed in this paper has strong generalization ability. It can be applied not only to the Inception network but also to other similar classical network structures. In the future, we will continue to explore more effective network structures to improve the classification performance of networks.

Figure 2. The relationship between the depth of the branch and the extracted features.

Figure 3. The schematic diagram of the Inception-e module, the Eception module, and the Lception module. The structure within the dashed box is the core structure of Inception module A.

Figure 4. The overall comparison of features extracted by the Eception module and the Inception-e module.

Figure 5. Comparison of the depthwise separable convolution with the proposed method.

Figure 6. Some examples of the FER+ dataset and RAF-DB dataset.

Figure 7. Comparison of experimental results of the Inception-e module, the Eception module, and the Lception module. (a) Comparison of the classification accuracy of the three methods; (b) comparison of the parameter quantities of the three methods.

Figure 8. Heat maps of different categories of images in the RAF-DB dataset.

Author Contributions: Conceptualization, C.S.; data curation, C.S., Z.L. and J.Q.; formal analysis, Y.D.; methodology, C.S.; software, Z.L.; validation, C.S., Z.L. and J.Q.; writing-original draft, J.Q.; writing-review and editing, C.S. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded in part by the National Natural Science Foundation of China (42271409), in part by the Heilongjiang Province Higher Education Teaching Reform Research Project (Grant number: SJGY20220403), and in part by the Education Science Research Project of Qiqihar University (Project Number: GJQTYB202216).

Table 2. The network model structures when using the Cifar10 dataset.
9,258.2
2024-04-18T00:00:00.000
[ "Computer Science" ]
Development of a Health Behavioral Digital Intervention for Patients With Hypertension Based on an Intelligent Health Promotion System and WeChat: Randomized Controlled Trial

Background: The effectiveness of timely medication, physical activity (PA), a healthy diet, and blood pressure (BP) monitoring for promoting health outcomes and behavioral changes among patients with hypertension is supported by a substantial amount of literature, with "adherence" playing a pivotal role. Nevertheless, there is a lack of consistent evidence regarding whether digital interventions can improve adherence to healthy behaviors among individuals with hypertension.

Objective: The aim was to develop a health behavioral digital intervention for hypertensive patients (HBDIHP) based on an intelligent health promotion system and WeChat following the behavior change wheel (BCW) theory and digital micro-intervention care (DMIC) model, and to assess its efficacy in controlling BP and improving healthy behavior adherence.

Methods: A 2-arm, randomized trial design was used. We randomly assigned 68 individuals aged >60 years with hypertension in a 1:1 ratio to either the control or experimental group. The digital intervention was established through the following steps: (1) developing digital health education materials focused on adherence to exercise prescriptions, Dietary Approaches to Stop Hypertension (DASH), prescribed medication, and monitoring of BP; (2) using the BCW theory to select behavior change techniques; (3) constructing the intervention's logic following the guidelines of the DMIC model; (4) creating an intervention manual.

Hypertension and Health Behavior Interventions

Hypertension, as a high-prevalence chronic disease, has become an important risk factor for many diseases (eg, stroke, renal disease) and a major contributor to the global burden of disease [1,2]. Approximately one-third of older adults with hypertension fail to achieve their blood pressure (BP) control goals [3]. The reasons for the low rate of hypertension control are related to high-risk lifestyles such as poor dietary habits and low levels of physical activity (PA).

Relevant studies have shown that adherence to recommended health behaviors can significantly reduce systolic blood pressure (SBP) by an average of 4.0 mm Hg to 5.6 mm Hg and diastolic blood pressure (DBP) by an average of 4.1 mm Hg to 5.3 mm Hg in individuals with hypertension [4]. Engaging in a wide range of exercise training can lead to average reductions of 4.08 mm Hg to 8.24 mm Hg in SBP and 2.5 mm Hg to 4.0 mm Hg in DBP [5]. Regular BP monitoring behaviors result in average reductions of 2.53 mm Hg to 4.7 mm Hg in SBP and 1.45 mm Hg to 2.4 mm Hg in DBP [6,7]. Adherence to Dietary Approaches to Stop Hypertension (DASH) not only reduces SBP and DBP by approximately 5.5 mm Hg and 3 mm Hg, respectively, but also reduces the chance of developing hypertension by 26% [8,9]. High medication adherence, combined with comprehensive interventions like diet and exercise management, leads to improved BP control [10,11]. It is evident that comprehensive health behavior interventions can achieve effective BP control, with adherence playing a pivotal role.
In recent years, several studies exploring intelligent health promotion systems that incorporate advanced technologies like artificial intelligence, wearable devices, and mobile communication have consistently shown their efficacy in managing chronic diseases [12][13][14]. In addition, numerous studies support that independent mobile health (mHealth) apps play a vital role in community-based patient management [10,12,[14][15][16]. WeChat, China's predominant social communication mobile app, serves as one platform for mHealth interventions. It boasts a staggering daily user count of up to 902 million people and over 1 billion monthly active users spanning all age groups [17]. Several studies have indicated that mHealth-based interventions can enhance health outcomes, quality of life, and self-care among patients with chronic diseases [18,19]. BP monitoring adherence (BPMA), dietary habits, and self-efficacy behaviors of patients with hypertension have been somewhat improved by these mHealth-based interventions [20][21][22]. However, other studies have shown no significant change or little improvement in BP, medication, exercise, and DASH adherence [20,[23][24][25].

Theoretical Framework

The behavior change wheel (BCW), which integrates 19 relevant theoretical frameworks for behavior change, was first proposed by Michie et al in 2011 [26]. As Figure 1 illustrates, it consists of 3 tiers. The inner tier is the Capability, Opportunity, Motivation-Behavior (COM-B) model, which is used to identify barriers to intervening in the target behavior. The second tier comprises the following 9 intervention categories intended to tackle identified behavioral obstacles: education, persuasion, incentivization, coercion, training, restriction, environmental restructuring, modeling, and enablement. The outermost tier encompasses 7 policy categories (eg, regulation and legislation) that aid in the implementation of macrolevel interventions [27]. Applying this theory involves 3 steps: understanding the behavior, identifying intervention options, and identifying content and implementation options (ie, behavior change techniques [BCTs]). Michie et al [28], along with other scholars, developed "The behavior change technique taxonomy of 93 hierarchically clustered techniques," which contains 93 BCTs with names, definitions, and examples. In the final step, researchers can select the necessary BCTs from this taxonomy.

Scholars have applied this theory to community health promotion, health care management, and nursing care, resulting in positive outcomes [29][30][31][32]. Additionally, some researchers have extended its application to digital and mHealth interventions [33]. Its effectiveness lies in its ability to aid interveners in the systematic and scientific identification of intervention functions and specific BCTs for behavior change problems. Despite this, the theory does not provide much insight into the components of the intervention itself. The BCW can therefore be supplemented by the digital micro-intervention care (DMIC) model [25,34].
In 2020, the DMIC model proposed by Baumel and colleagues [34] provided a reference paradigm. This theoretical model promotes shorter, more focused interventions, known as micro-interventions. These can be tightly focused on people's daily lives to help intervention recipients achieve desired short-term goals (the basis for achieving long-term goals). The DMIC comprises the following 3 core concepts: events (in-the-moment attempts at change or impact toward the overall target of the intervention), decision rules (guiding which events are deployed and when), and proximal assessments (assessing the impact of an event). Events are similar to specific BCTs, while decision rules deploy events in a meaningful way based on time, user status, or environmental information, allowing interveners to dynamically adjust the content of micro-interventions. Overall outcome assessment, proximal event outcome assessment, and assessment of user participation in the micro-intervention are the 3 types of impact assessment for digital micro-interventions. Based on the results of the overall and proximal assessments and the continuous recording of the user engagement experience (ie, measuring the quality of attention, engagement, and immersion during use of the program), it is possible to identify individuals with low levels of engagement, search for the causes, modify the intervention decision rules, and re-engage the user. This theoretical framework advocates for interventions that aim to achieve specific objectives through in-the-moment intervention elements. These elements may not be directly tied to the attainment of a broader clinical goal [34]. Each intervention event represents an immediate attempt to modify or influence the overall goal of the intervention. This implies that, in order to achieve a clinical objective, interventions should be broken down into numerous small steps and regulated through proximal assessments, ultimately leading to the attainment of the overall outcome.

Goal of This Study

Therefore, our objective was to develop a health behavioral digital intervention for hypertensive patients (HBDIHP) based on the BCW and DMIC and assess its effectiveness in 2 groups after 3 months of intervention. This program involved exercise, diet, BP monitoring, and medication adherence intervention strategies. Consequently, we aimed to assess the effectiveness of this approach for enhancing outcomes for older adults with hypertension.
Community-Oriented Intelligent Health Promotion System

The Intelligent Health Promotion System is a cloud platform-based system that leverages health sign data and health questionnaire responses to generate intelligent health reports, personalized exercise prescriptions, and other tailored health recommendations. It also tracks and monitors individual health data. The system comprises 3 main layers: the perception layer, the decision layer, and the application layer (see Figure 2). It is installed in the Intelligent Health Cabin at community health service centers (see Figure 3). It currently supports a range of connected instruments, including cardiovascular function monitors, arteriosclerosis detectors, body composition monitors, bone densitometers, and physical fitness detectors. After assessments with these instruments are completed, the system sends the collected data to its central cloud platform database. Participants then fill out various questionnaires, such as chronic disease history questionnaires, medication profiles, and family medical history surveys. The system uses both instrument data and questionnaire responses to activate the intelligent decision-making and inference engine, which generates comprehensive reports. These reports provide a comprehensive evaluation of an individual's health status and offer personalized, evidence-based recommendations, namely personalized intervention plans.

Assessment of Health Outcomes and Risk Prediction

The comprehensive report provides an overview of participants' health, identifying health issues in areas such as the cardiovascular system, lipid metabolism, musculoskeletal system, lifestyle, and physical fitness, while explaining the meaning of abnormal indicators. In the cardiovascular assessment, the system not only evaluates participants' cardiovascular systems based on instrument results but also predicts participants' heart age and risk of cardiovascular disease using a machine learning model.

Personalized Health Advice

The comprehensive report offers personalized health advice to each resident, including exercise prescriptions, dietary recommendations, and suggestions for behavior correction. The exercise prescription, based on the design principles of the American College of Sports Medicine guidelines (frequency, intensity, type, time, volume, and progression [FITT-VP]) combined with the Transtheoretical Model, delineates intervention plans tailored to the health care stage, exercise habit formation stage, scientific fitness stage, and exercise habit maintenance stage. After processing by the intelligent decision module, and after excluding exercise contraindications, the system generates exercise prescriptions customized for individual residents (see Figure 4). These prescriptions encompass exercise recommendations, principles, weekly plans, exercise correction, precautions, and exercise guidance (see Figure 5) [13].

A dietary guidance and behavior correction database was constructed using technical strategies such as expert systems and knowledge graphs. In practice, the system provides dietary recommendations and behavior correction suggestions (eg, health advice for sedentary individuals to change their unhealthy habits) based on diet-related questionnaires, medical history collection, and physical examination results (see Figure 6).
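To make the stage-based prescription flow concrete, the following is a purely illustrative sketch of FITT-VP-style rule logic. The four stage names follow the text, but every threshold, frequency, and plan detail below is invented for illustration and is not the system's actual rule base.

```python
def exercise_prescription(stage, resting_hr, contraindicated):
    """Hypothetical sketch of stage-based FITT-VP prescription logic.
    'stage' follows the Transtheoretical-Model-like stages named in
    the text; all thresholds and outputs here are illustrative only."""
    if contraindicated:
        return {"type": "none", "note": "exercise contraindication: refer to clinician"}
    plans = {
        "health care": dict(frequency="3 days/week", intensity="low",
                            time="20 min", type="walking"),
        "habit formation": dict(frequency="4 days/week", intensity="low-moderate",
                                time="30 min", type="brisk walking"),
        "scientific fitness": dict(frequency="5 days/week", intensity="moderate",
                                   time="30-40 min", type="aerobic + resistance"),
        "habit maintenance": dict(frequency="5+ days/week", intensity="moderate",
                                  time="40 min", type="mixed, with progression"),
    }
    plan = plans[stage]
    # A crude safety guard, e.g., cap target intensity for a high resting HR.
    if resting_hr > 90:
        plan["intensity"] = "low"
    return plan

print(exercise_prescription("scientific fitness", resting_hr=72, contraindicated=False))
```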
Development of the Health Behavioral Digital Intervention

A multidisciplinary working group for digital health intervention strategies was established for the entire intervention program design process. This group included 3 patients with hypertension, 1 clinical expert in hypertension, 1 management expert, and 1 behavioral psychology expert. Group-focused interviews were carried out at every stage of development. At each stage, experts evaluated intervention strategies using their professional knowledge, while patients with hypertension assessed the acceptability and utility of the interventions from their perspective. After finalizing the intervention scheme, experts who were not involved in the development were invited to assess its structural validity.

Defining Health Behavior Management Targets and Health Education Content

The identification of the 4 key health behavior management targets and the associated health education content for hypertension (see Table 1) was based on existing literature and previous research findings.

Development of Digital Health Education Materials

Patient health education needs for the 4 targets were gathered through focus group interviews. Subsequently, evidence-based principles were used to develop textual materials related to exercise, diet, medication taking, and BP monitoring. These materials underwent content validity assessments by experts before being transformed into health education videos. For diet, in addition to creating health education videos based on the DASH guidelines, the research team developed a simplified DASH grading diet index score. The scoring system included items for evaluating daily meals, such as grains, vegetables, fruits, protein sources, cooking oil, and compliance with recommended food types and quantities. Items 1 through 7 represented positive scoring criteria for each recommended food category and quantity, while items 8 through 10 incurred deductions. The aim was to educate patients on recommended dietary behaviors through the acquisition of DASH knowledge and proficiency in using the simplified DASH grading diet index score. For exercise, in addition to providing general knowledge about exercise, the research team augmented the guidance materials (eg, videos on how to perform the recommended exercise types) without altering the existing elements of the intelligent exercise prescription. Medication adherence and BP monitoring were primarily addressed through instructional videos that imparted relevant knowledge and skills. All relevant videos were accompanied by textual materials that were ultimately compiled into the "hypertension self-management manual." See Figure 7 for an overview.

Development of the Digital Intervention Scheme Based on BCW and DMIC

We used the systematic workflow of the BCW to identify appropriate intervention categories and suitable BCTs. Details of this process have been published elsewhere [35]. Based on the identified BCTs, we developed corresponding textual content, which, along with the previously mentioned digital health education materials, collectively forms the foundational elements of the DMIC theory. Subsequently, additional elements of the DMIC theory, namely proximal assessment indicators and decision rules, were determined based on literature review and expert opinions.
First, the BCW theory was used to identify barriers related to capability, opportunity, and motivation affecting adherence to health behaviors by patients with hypertension. Intervention categories were chosen to address these barriers, including methods like education, persuasion, and incentivization. Figure 8 illustrates the process of developing the mHealth intervention scheme for improving adherence.

Second, BCTs such as feedback on behavior, prompts, self-monitoring of behavior, and verbal persuasion about capability were selected from the BCT taxonomy, which had already been coded and organized by researchers such as Michie et al [28]. Furthermore, based on the selected BCTs, specific textual content was prepared (eg, BCT: focus on past success; text: "You have successfully quit smoking in the past, and we believe you can also develop a scientific exercise habit! Keep it up!").

Third, following the DMIC model, we classified the textual content corresponding to BCTs and the digital health education materials as "events." We then established decision rules that determined when and in what order interventions for these events should take place. Additionally, we established proximal assessment indicators (eg, knowledge level) for tailoring intervention strategies and their corresponding events to individual patients (eg, continue learning if qualified, relearn if not); a minimal illustrative sketch of such decision-rule logic is given after this subsection. Events, proximal assessments, and decision rules were integrated into intervention units organized by chronological stage: assessing and preparing, committing and planning, and reinforcing behavioral habits. Notably, health education was primarily implemented during the assessing and preparing phase (first week).

Fourth, the intervention scheme was validated by an expert panel through 2 rounds of Delphi surveys. The panel of 15 experts included 6 nutrition experts, 2 clinical cardiologists, 3 clinical nurses and nursing teachers, and 4 exercise experts. The content validity index (CVI) was calculated and assessed using the item-level CVI (I-CVI) on a 4-point scale. In the first round, the I-CVI ranged from 0.6 to 0.8. After the content was adjusted based on the experts' feedback and comments, the I-CVI reached 1 in the second round.

Finally, all intervention logic and guidance content were compiled into an intervention manual. This manual included daily intervention tasks (eg, questionnaires, text based on BCTs, and the order in which videos were to be sent), communication guidelines, and personalized guidance strategies (eg, how to provide personalized guidance after assessing the extent of the patient's knowledge and skills).

Trial Design and Setting

The study protocol was previously published [35]. This was a randomized controlled trial. The experimental group received the health behavioral digital intervention based on the intelligent health promotion system and WeChat for 12 weeks, while the control group received routine health services and was provided with a "Hypertension Self-Management Manual" to guide daily health behaviors (refer to Figure 9). The trial was conducted at 2 community health centers, both located in Anhui Province: Sanxiao Kou Community Health Service Center and Dongfeng Community Health Service Center.
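Returning to the decision rules described in the third step above, the following is a minimal, purely illustrative sketch of how such event-selection logic might look in code. The event names, threshold, and phase logic are assumptions, not the trial's actual intervention manual.

```python
def next_event(week, quiz_score, adherence, pass_mark=0.8):
    """Hypothetical DMIC-style decision rule: choose the next
    'event' (micro-intervention) from the proximal assessment.
    All names and thresholds below are illustrative only."""
    if week == 1:
        return "send_education_video"         # assessing and preparing phase
    if quiz_score is not None and quiz_score < pass_mark:
        return "resend_education_video"       # relearn if not qualified
    if adherence < 0.5:
        return "send_bct_persuasion_text"     # eg, focus on past success
    if adherence < 1.0:
        return "send_prompt_and_feedback"     # prompts + feedback on behavior
    return "send_reinforcement_message"       # reinforce behavioral habits

print(next_event(week=3, quiz_score=0.9, adherence=0.6))  # send_prompt_and_feedback
```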
Patients

The staff at the community health service centers recruited eligible patients with hypertension within their jurisdiction through phone calls or verbal invitations using a convenience sampling method. The inclusion criteria were as follows: (1) diagnosed with primary hypertension or currently taking antihypertensive medication; (2) aged >60 years; and (3) proficient in using smartphones and the WeChat application. The exclusion criteria were as follows: (1) patients with hypertension undergoing nonpharmacological treatment; (2) those with diabetes, kidney disease, or other conditions requiring special dietary and exercise considerations; (3) individuals participating in or having participated in other health management projects; and (4) those unable to measure BP in a home environment.

Sample Size

The sample size for this study was determined based on the effect size. According to a previous meta-analysis comparing the BP-lowering effects of mHealth interventions and other traditional methods used by patients with hypertension, the mHealth experimental group demonstrated a significant reduction in BP, with an effect size of 0.7 [36]. The sample size was calculated using G*Power 3.1, assuming α=.05 and β=.2 and accounting for a 20% dropout rate, resulting in a final sample size of 68 participants.

Randomization, Allocation, and Blinding

Prior to randomization, a researcher unfamiliar with the experimental design used SPSS (version 23; IBM Corp) to generate 68 random numbers. Using the visual binning method, these numbers were divided into 2 groups (experimental and control). Paper slips containing the labeled random numbers were placed into sealed envelopes, and another researcher unaware of the experimental design was responsible for assigning participants: this researcher sequentially opened the envelopes and assigned participants to the experimental or control group based on the numbers. Additionally, during the data collection phases before and after the experiment, we trained and employed 2 nursing graduate students unfamiliar with the study design. Data collection during the experimental group's intervention was conducted by 5 nursing undergraduates who were also unfamiliar with the experimental design.

Primary Outcomes

Before and 12 weeks after the intervention, we used a cardiovascular function monitoring device (BX-CFTI-100, Intelligent Machine Institute) to measure participants' BP. Measurement details have been published [13]. Through WeChat, 5 undergraduate students unfamiliar with the experimental design collected adherence indicators from the experimental group participants weekly. These indicators included exercise adherence, calculated as the ratio of the actual weekly exercise time meeting the prescribed intensity to the total prescribed weekly exercise time; dietary adherence, assessed as the weekly average of the simplified DASH grading diet index score (see the protocol paper for details [35]); medication adherence, assessed as the weekly average score of the Modified Morisky Scale (Chinese version, MMS-8; Certificate Number: 8538-1877-1559-6025-5310) [37][38][39]; and BPMA, calculated as the ratio of the actual weekly frequency of BP monitoring to the total recommended weekly frequency.
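The four adherence indicators reduce to simple ratios and weekly means. A small sketch of the bookkeeping follows; the input field names are assumptions for illustration, not the study's data schema.

```python
def weekly_adherence(minutes_done, minutes_prescribed,
                     dash_scores, mms8_scores,
                     bp_checks_done, bp_checks_recommended):
    """Compute the four weekly adherence indicators as defined in
    the text: two ratios and two weekly mean scores."""
    return {
        "exercise": minutes_done / minutes_prescribed,
        "diet": sum(dash_scores) / len(dash_scores),        # mean DASH index score
        "medication": sum(mms8_scores) / len(mms8_scores),  # mean MMS-8 score
        "bp_monitoring": bp_checks_done / bp_checks_recommended,
    }

print(weekly_adherence(minutes_done=180, minutes_prescribed=150,
                       dash_scores=[6, 7, 5, 6, 7, 6, 6],
                       mms8_scores=[7, 8, 8],
                       bp_checks_done=5, bp_checks_recommended=7))
# exercise adherence > 1 corresponds to exceeding the prescription,
# as observed in weeks 1-4 of the trial.
```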
Secondary Outcomes

Secondary outcomes were assessed before and after the intervention. Heart rate and subendocardial viability ratio (SEVR) were measured using the cardiovascular function monitor; brachial-ankle pulse wave velocity was measured using an arterial stiffness monitor (BX-AS-100, Intelligent Machine Institute); weight was measured using a body composition analyzer (BX-BCA-100, Intelligent Machine Institute); and lifestyle was assessed through an online questionnaire administered pre- and postintervention.

Participants were questioned about health-related behaviors, including the frequency of smoking, drinking, diet, and PA. For smoking, the lifelong smoking quantity of participants was calculated based on the quantity and weekly frequency of cigarettes smoked; for those who had quit, this calculation also included their smoking history before quitting. The alcohol content of one bottle of the most popular alcoholic beverages in Anhui Province is as follows: beer (500 mL, 3.2% alcohol): 17.5 g; white liquor (450 mL, 42% alcohol): 210 g; and wine (750 mL, 13.5%-14% alcohol): 97.5 g. Daily alcohol consumption was calculated using these values. Missing values for smoking and drinking data were set to zero. Participants were asked about the types, duration (in minutes), and frequency (per week) of PA they engaged in. PA time was quantified in MET-minutes per day based on activity codes and MET intensities in the "Compendium of Physical Activities." Weekly exercise time (MET-min/week) and weekly PA time (MET-min/week) were calculated, with missing values set to the median [40].

Statistical Analysis

Data analysis was carried out using SPSS, with independent sample t tests and chi-square tests used to evaluate the significance of differences between the 2 groups. Paired t tests and McNemar tests were used to compare differences within the same group before and after the 3-month intervention. A P value <.05 was considered statistically significant.

Ethical Considerations

The study was approved by the Ethics Committee of Bengbu Medical College in June 2022 under approval number 2022-103, and the study began after informed consent was obtained from the patients with hypertension; all participants allowed their data to be used anonymously. An independent data manager conducted weekly checks on the database to ensure its integrity and security. Data lockup was implemented to prevent any post hoc modification. All exported data had to be anonymized by the data manager before statistical analysis could be conducted, to safeguard the participants' information. Each participant who underwent the intervention and data collection at the community health center was given a gift of daily necessities valued at approximately US $6.87.

Participants

Between September 5, 2022, and September 19, 2022, a total of 68 patients with hypertension were recruited through phone or verbal invitations from the 2 community health centers in Anhui Province. Participants were re-invited to the community health service centers for health data and information collection between December 20, 2022, and January 5, 2023. A total of 54 participants (30 women and 24 men; mean age 67.

Baseline Data

No statistically significant differences in health outcomes (eg, SBP, SEVR), adherence indicators (eg, exercise time, PA time, medication adherence), or learning performance were observed between the experimental and control groups at baseline.
Statistical Analysis
Data analysis was carried out using SPSS, with independent-sample t tests and chi-square tests used to evaluate the significance of differences between the 2 groups. Paired t tests and McNemar tests were used to compare differences within the same group before and after the 3-month intervention. A P value <.05 was considered statistically significant.

Ethical Considerations
The study was approved by the Ethics Committee of Bengbu Medical College in June 2022 under approval number 2022-103, and the study began after informed consent was obtained from patients with hypertension; all participants allowed their data to be used anonymously. An independent data manager conducted weekly checks on the database to ensure its integrity and security. Data lockup was implemented to prevent any postmodification. All exported data must undergo anonymization by the data manager before statistical analysis can be conducted, to safeguard the participants' information. Each participant who underwent the intervention and data collection at the community health center was given a gift of daily necessities valued at approximately US $6.87.

Participants
Between September 5, 2022, and September 19, 2022, a total of 68 patients with hypertension were recruited through phone or verbal invitations from the 2 community health centers in Anhui Province. Participants were re-invited to the community health service center for health data and information collection between December 20, 2022, and January 5, 2023. A total of 54 participants (30 women and 24 men; mean age 67.24, SD 4.19 years) were included in the final analysis: 23 in the experimental group and 31 in the control group. Exclusions were due to various reasons, including hospitalization for illness (2 individuals), inability to complete the postintervention assessment (3 individuals), health conditions deteriorating to the point of hindering PA (4 individuals), voluntary withdrawal from the study (4 individuals), and a change in antihypertensive medication (1 individual). Other patients did not change their hypertension medication during the intervention.

Baseline Data
No statistically significant differences in health outcomes (eg, SBP, SEVR), adherence indicators (eg, exercise time, PA time, medication adherence), or learning performance were observed between the experimental group and control group at baseline.

Changes in Weekly Adherence Indicators in the Experimental Group
The experimental group's weekly average adherence indicators were collected from the first week after the assessing and preparing phase (the second week of the project) through the eleventh week (the twelfth week of the project), as depicted in the curves shown in Figure 10. From weeks 1 to 4, the exercise adherence curve indicated that this phase was when the intervention participants were most actively engaged in PA; they exceeded the prescribed exercise volume (adherence ratio >1). By week 5, the participants showed lower exercise adherence, followed by 2 weeks of rebound. Subsequently, exercise adherence sharply declined, reaching a stable state in weeks 10 and 11. The dietary adherence curve exhibited the most noticeable fluctuations, reaching peaks in weeks 4 and 8 and valleys in weeks 6 and 10. Medication adherence gradually increased from week 1 to week 4, experienced fluctuations, and then steadily declined starting at week 7. Among the 4 indicators, the BPMA curve had the smallest fluctuations: it steadily increased from week 1 to week 3, reached a low point in week 7, and remained relatively stable at 0.72 thereafter.
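For concreteness, here is a minimal sketch of how the four weekly indicators defined in the Methods could be computed from one week's records. The field names and example values are hypothetical; the scale scoring follows the cited instruments.

```python
# Hedged sketch of the four weekly adherence indicators (ratios of achieved
# to prescribed/recommended amounts; diet and medication use weekly mean
# scores of the simplified DASH index and MMS-8, respectively).
def weekly_adherence(week):
    return {
        "exercise": week["exercise_min_at_intensity"] / week["prescribed_exercise_min"],
        "diet": week["dash_score_mean"],        # simplified DASH diet index, weekly mean
        "medication": week["mms8_score_mean"],  # Modified Morisky Scale (8-item), weekly mean
        "bp_monitoring": week["bp_measurements"] / week["recommended_bp_measurements"],
    }

print(weekly_adherence({
    "exercise_min_at_intensity": 180, "prescribed_exercise_min": 150,
    "dash_score_mean": 7.2, "mms8_score_mean": 7.0,
    "bp_measurements": 5, "recommended_bp_measurements": 7,
}))  # exercise 1.2 (>1 means the prescription was exceeded), bp_monitoring ~0.71
```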
Health Outcomes
We observed varying degrees of positive impact on health outcomes for patients with hypertension through this health behavioral digital intervention. The intervention demonstrated a significant effect on SBP control. It may also be effective at improving SEVR (with statistical differences observed between the 2 groups after the intervention). However, there is insufficient evidence to conclude that a WeChat-based digital intervention is more effective than a conventional intervention for weight improvement; based on the current results, both interventions appear to have a positive effect on weight reduction. No statistically significant differences were found in other health outcome indicators.

The improvements in BP after our program's intervention were similar to previously published findings. Using methods such as text messaging, electronic reminders, and sharing health education links, both SBP and DBP can be reduced [20-22]. However, our project only demonstrated a significant reduction in SBP and not DBP, consistent with the results found by some scholars [41-43]. The reduction in BP may have resulted from the intervention measures we designed based on BCW and the DMIC model, which enhanced patients' knowledge of and adherence to healthy behaviors. Several behaviors interacted dynamically and influenced each other; for example, increased disease knowledge led to regular BP monitoring at home [8]. Regular attention to BP can assist individuals in better managing their condition [44], ultimately enhancing BP control. Both the DASH diet and exercise, either alone or in combination, can relax the smooth muscle of blood vessel walls to some extent, promoting blood circulation and consequently lowering BP [45,46]. Older adults with decreased vascular elasticity may have a tendency for elevated SBP and decreased DBP (as seen in our study with high mean SBP and normal mean DBP) [47], and they may be more sensitive to reductions in SBP. Another possible reason for the reduction in SBP, but not DBP, could be variations in study populations, interventions, age groups, and medication usage. Regardless, the results of this study were generally consistent with the antihypertensive effects reported in the guidelines [48]. Additionally, the exercise prescription aligned with the guideline recommendations of engaging in at least 30 minutes of moderate-intensity dynamic aerobic exercise per week (such as walking, jogging, cycling, or swimming) and a minimum of 2 to 3 days of resistance training per week. Simultaneously, the formulation of personalized behavior change strategies further increased patient adherence to the exercise prescription. SEVR, a reliable indicator of myocardial oxygen supply and demand [49], may be more closely associated with exercise. Our previous study demonstrated an improvement in this indicator after an exercise intervention [13], and other researchers have also observed significant enhancements in SEVR across different age groups (18-80 years) following exercise interventions [50]. Regarding weight, a reduction was observed in both groups of patients with hypertension when comparing pre- and postintervention data. This reduction aligns with the results of our previous 1-arm before-and-after study [13]. However, the WeChat-based intervention did not demonstrate superiority, possibly due to the limited sample size of this experiment. There were no statistically significant differences observed in heart rate and brachial-ankle pulse wave velocity. This finding contradicted our 1-year study results, which may be attributed to the small sample size and the relatively short duration of the study. Some scholars have pointed out that exercise training to improve atherosclerosis requires higher exercise intensity and longer duration, while the participants in this study were older adults with chronic diseases and the exercise prescriptions mostly provided lower exercise intensity [51].
Adherence Indicators
This program enhanced the health knowledge of community-based patients with hypertension and fostered compliance with medication, BP monitoring, exercise, and dietary guidelines, which was consistent with the findings in some previous literature [20-22]. The improvement in knowledge highlighted the efficacy of educating patients with hypertension, serving as the cornerstone of healthy behavior change. The improvement in medication adherence in our research supports the findings of other research [52-54]. This study indicated that the Morisky scores before and after the intervention (6.77-7.65) were within the moderate range, with slightly better outcomes compared with the results reported by Morawski et al [55] (same assessment tool: 12 weeks, 6-6.3). Medication education and adherence reminders based on video, text messages, and other mobile app functions are some of the most common interventions for medication adherence in cardiovascular diseases [56]; these were key BCTs in our project. Foreman et al [57] suggested that medication text reminders reinforcing medication adherence can lead to higher oral medication adherence among patients with hypertension. Information reminders also help patients maintain higher compliance over time. However, long-term, high-frequency reminders may result in response fatigue in patients [58]. This may explain why our research showed that medication adherence was highest during weeks 2 to 6 (commitment and planning stage) but significantly declined after week 7 (behavioral habit consolidation stage). This highlights the need for researchers to adopt additional behavior change strategies to address patient fatigue during this phase. The effectiveness of electronic reminders and health education in improving patient compliance with BP monitoring has also been confirmed [53]. Based on our research findings, the curve for BPMA showed the least fluctuation, suggesting that BPMA is relatively stable for patients. It is crucial to emphasize the importance of BP monitoring to patients and establish corresponding health behaviors, particularly during the initial phase of the intervention. Additionally, monitoring BP is often the first health behavior adopted by patients, and it plays a pivotal role in the transition to other health behaviors [44]; therefore, it should receive special attention during the initial stages.
Most studies have consistently demonstrated the impact of various exercise types and intensities on BP improvement. Research involving mHealth or telehealth interventions through smartphones for patients with coronary heart disease or hypertension has also reported positive effects on exercise compliance [22,59], consistent with our results. Our research revealed a significant rise and fall in the exercise prescription adherence curve: this indicator was highest during weeks 2 to 6, but it significantly declined starting from week 7. This suggests that emphasis should be placed on maintaining exercise habits. The effectiveness of DASH in improving BP is unquestionable. However, the results of a systematic review indicated only weak evidence supporting the use of smartphone apps to enhance DASH dietary adherence and reduce BP [60]. From our study's perspective, there was significant fluctuation in DASH dietary adherence. This variability may be attributed to the complexity of dietary management compared with other health behaviors, making it difficult to implement and to provide feedback on [61]. This poses a greater challenge for researchers in designing interventions to improve dietary compliance among patients with hypertension.

In this study, considering the decline in participant adherence indicators after a certain period of intervention, effective strategies may help alleviate participant adherence fatigue, thereby sustaining and enhancing patient engagement. For exercise adherence, research indicates that the effectiveness of interventions is not necessarily correlated with longer intervention periods or higher frequencies. Therefore, tailoring interventions to individual preferences, using different proven therapeutic intervention types for specific target populations, maintaining intervention frequencies above once per week, and ensuring a moderate planned duration may be crucial factors in promoting intervention adherence [62]. Additionally, involving professionals from different disciplines (such as psychologists, doctors, and nurses), having professionals supervise the implementation of intervention plans, and actively engaging in social interactions with staff and other participants have proven effective in enhancing participant adherence and increasing engagement [62]. Regular home visits [63], actively implementing strategies to increase participant self-efficacy [64], and incorporating gamification elements into interventions [65] are also effective strategies to address participant compliance fatigue. During the intervention implementation process, dynamically assessing and understanding the drivers of and barriers to adherence based on the participant's stage and providing personalized decision support and motivation can effectively enhance participant adherence and engagement [66,67].

Notably, there is a lack of research on behavior change interventions for hypertension based on the BCW and DMIC. Therefore, a more intricate experimental design and thorough investigation are required to understand the precise mechanisms underlying the effectiveness of this project, including which components and specific BCTs are effective.
Limitations
This study has several limitations. First, we were unable to implement blinding for the personnel involved in the hypertensive health behavior interventions. To mitigate this, we established standardized intervention procedures, provided intervention strategies tailored to different patient types, edited intervention guidance language based on BCTs, incorporated the aforementioned content into the intervention manual, and conducted training for all intervention personnel. Second, the generalizability of our trial's results may be limited for populations of patients with hypertension residing elsewhere, as they may possess sociodemographic and comorbidity characteristics distinct from those of our study participants. Third, some measurement indicators relied on patient self-report, which could potentially affect the credibility of the results. Fourth, considering the workload, this study did not assess changes in compliance indicators in the control group, nor did it collect more comprehensive dietary-related information. This requires correction in future experiments of app-based hypertension health behavior interventions.

Conclusions
The observations suggest that our program may have improved specific health outcomes and adherence to health behaviors in older adults with hypertension. In terms of health outcomes, participants showed significant improvements in SBP, SEVR, and weight. Moreover, there were noteworthy changes in adherence indicators, such as exercise duration, medication adherence, PA duration, frequency of BP monitoring, and learning performance. However, due to our small sample size and short intervention duration, a larger and longer randomized controlled trial is needed to validate the intervention's effects, explore its mechanisms, and identify the specific design elements that are effective. Additionally, among the 4 adherence behaviors, dietary adherence is the most susceptible to external influences, and more BCTs targeting dietary adherence should be considered in intervention design.

Figure 2. Intelligent health promotion system architecture diagram.
Figure 6. Dietary recommendations and behavior correction suggestions.
Figure 8. The process of shaping the mobile health (mHealth) intervention scheme for adherence. BCT: behavior change technique; BCW: behavior change wheel; DMIC: digital micro-intervention care.
Table 1. Health education targets and content. DASH: Dietary Approaches to Stop Hypertension.
Table 2 displays the baseline and posttest results for both groups regarding health outcomes, adherence indicators, and learning performance. Significant changes were observed in SBP (-7.36 mm Hg, P=.05), SEVR (0.16, P=.01), exercise time (856.35 MET-min/week, P=.03), medication adherence (0.56, P=.02), BP monitoring frequency (P=.046), and learning performance (3.23, P<.001) in the intergroup comparison after 12 weeks. PA time increased for the experimental group in the before-and-after comparison (P=.045). Both groups experienced a reduction in weight after the intervention (experimental: 1.2 kg, P=.002; control: 1.11 kg, P=.009). Furthermore, the Cohen d values reflecting effect size were greater than 0.5 for all variables except PA time, indicating at least an intermediate effect size. Among these variables, the health outcomes of SEVR, recommended diet types (eg, meeting recommendations occasionally, sometimes, often), recommended diet quantities (eg, meeting recommendations occasionally, sometimes, often), BP monitoring frequency (eg, measure daily, measure 1-3 times a week, measure whenever remembered), and learning performance had Cohen d values >0.8, suggesting a large effect size.

Table 2. Effects of the health behavioral digital intervention for hypertensive patients (HBDIHP) (N=54).
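As a minimal sketch of the effect-size computation behind Table 2, the code below pairs an independent-samples t test with Cohen's d using the pooled standard deviation. The data are illustrative, not the study's raw measurements.

```python
# Hedged sketch: between-group comparison (independent t test) plus
# Cohen's d with pooled SD, on hypothetical post-test SBP values.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

rng = np.random.default_rng(0)
sbp_exp = rng.normal(135, 10, 23)   # hypothetical SBP, experimental arm (n=23)
sbp_ctl = rng.normal(142, 10, 31)   # hypothetical SBP, control arm (n=31)
t, p = stats.ttest_ind(sbp_exp, sbp_ctl)
print(f"t={t:.2f}, P={p:.3f}, d={cohens_d(sbp_exp, sbp_ctl):.2f}")
```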
Gene expression at different cell stages of in vitro-fertilized bovine embryos
The birth rate of embryos produced in vitro (IVF) is still lower than that of embryos produced in vivo. Three major steps for the success of the IVF technique are maturation of immature oocytes, fertilization of matured oocytes, and culture of the resulting embryos. Studying mRNA expression in early embryonic development stages is important and can help to assess embryo quality and optimize production protocols in vitro. The current study aimed to determine the expression levels of developmentally important genes in different stages of bovine embryos produced in vitro. Cumulus-oocyte complexes (COCs) were collected from bovine ovaries and cultured in synthetic oviduct fluid (SOF) medium for 7-9 days. Embryos were collected at defined time points during culture, and mRNA expression of genes involved in pluripotency (OCT4), DNA methylation (DNMT1), apoptosis (BAX), and metabolism (GLUT1), and of a heat shock protein (HSP70), was estimated from the 2-cell stage to the blastocyst stage. The results showed statistically significant differences in the relative abundance (RA) of OCT4, DNMT1, BAX, and GLUT1 gene transcripts among the different stages, whereas there were nonsignificant differences in the RA of HSP70 between these stages. In conclusion, gene expression levels differ among the developmental stages of embryos produced in vitro, possibly because of the timing of embryonic genome activation (EGA).

Introduction
In vitro embryo production is an important biotechnology in cattle husbandry and breeding, and the use of this technique has increased greatly. Bovine embryos are produced around the world by commercial companies (Camargo et al., 2006; Abd El-Aziz et al., 2016; Stoecklein et al., 2021; Blaschka et al., 2021). Certain steps are performed in in vitro production of embryos to mimic the in vivo conditions in different types of animals. These steps begin with the maturation of oocytes in vitro, which takes place within 20-24 hours in bovines (Wrenzycki, 2018; Damayanti et al., 2020), and the extrusion of the first polar body, which are prerequisites to fertilization and the initiation of embryonic development (Mehlmann, 2005; Sirard, 2016; Turhan et al., 2021). Therefore, the maturation of the oocyte is important for fertilization and preimplantation development (Barakat et al., 2018). In vitro fertilization is the second step in in vitro production, which is characterized by the extrusion of the second polar body and the formation of the male and female pronuclei (Parrish, 2014). The in vitro culture of bovine embryos is the last step of in vitro production, which requires approximately 7-9 days of culture from the zygote stage. The events that occur in the embryo during this step include the first cleavage division, embryonic genome activation, morula compaction, and blastocyst formation (Wrenzycki, 2018; Ramos-Deus et al., 2020; Nogueira et al., 2021). Maternal transcripts stored within the oocyte during oogenesis regulate early embryonic development. Maternally derived transcripts are degraded as development progresses, whereas embryonic genome activation begins at the time of the maternal-to-zygotic transition (MZT) (Graf et al., 2014). Embryo quality can be assessed using genes that serve as genetic markers and play roles in the pre- and post-implantation development of embryos, where the expression of these genes correlates with the timing of embryonic genome activation (Sadeesh et al., 2014a).
These genes are involved in biological processes such as DNA methylation, which is accomplished by adding a methyl group to the fifth carbon atom of cytosine with the help of a group of enzymes known as DNA methyltransferases. This mechanism is critical in maintaining genome stability in preimplantation embryos (Sagirkaya et al., 2006; Urbanek-Olejnik et al., 2014; Uysal et al., 2015; Chen & Zhang, 2020). Pluripotent cell populations are maintained by the gene OCT4, which belongs to the POU (Pit-Oct-Unc) transcription factor family. This transcription factor is required for maintaining inner cell mass pluripotency; it is present in all cells at the morula stage and is downregulated in the trophectoderm of the bovine blastocyst (Kurosaka et al., 2004; Hess et al., 2019). Glucose transport across the cell plasma membrane, as a primary source of energy, is mediated by the transporter encoded by the GLUT1 gene. After entering embryonic cells, glucose is metabolized via glycolysis, which generates ATP. This metabolism increases with glucose uptake, which is correlated with the timing of compaction and blastulation (Lopes et al., 2007; Ostrowska et al., 2015). During preimplantation, genomic stability is necessary in the embryo, which is achieved by the maintenance of normal methylation patterns (Urbanek-Olejnik et al., 2014; Uysal et al., 2015; Chen & Zhang, 2020), homeostasis via apoptosis initiation (Korsmeyer, 1999; Li et al., 2009), and stress protection (Luft & Dix, 1999; Chen et al., 2018), all of which are critical in embryo development. This study aimed to determine the expression levels of developmentally important genes in different stages of bovine embryos produced in vitro during the preimplantation period.

Materials and methods
In vitro-fertilized bovine embryos were studied at different cleavage stages during the 7-9 days of culture. Three replicates of each embryonic stage (2-cell stage, 18 embryos per replicate; 4-cell stage, 14 embryos; 8-16-cell stage, 12 embryos; morula, 8 embryos; and blastocyst, 6 blastocysts) were collected and washed at least twice in 0.1% PVA-PBS, then frozen in 5 µl of the same solution and held at -80 °C until RNA was extracted (De Oliveira et al., 2005). The following genes were selected for the measurement of gene expression during the in vitro development of bovine embryos: OCT4 (pluripotency gene), DNMT1 (DNA methyltransferase), BAX (apoptosis gene), GLUT1 (glucose transporter), and HSP70 (heat shock protein).

Figure 1. In vitro maturation of oocytes: (a) immature oocytes after collection, (b) mature oocytes after 21-24 hours of maturation, (c) mature oocytes, (d) fertilized oocytes.

Matured COCs were washed and transferred to 60 mm Petri dishes in 10 drops of 50 µl each per dish, which were then covered with embryo-tested mineral oil. Ten to 15 oocytes were cultured in each droplet of in vitro fertilization medium (IVF-BO) supplemented with 1.25 mM pyruvate, 25 µg/ml gentamycin, 11.12 µg/mL heparin (Sigma H3149), and 3 mg/ml bovine serum albumin (Sigma A6003). Finally, motile sperm were prepared before being added to the COCs at a final concentration of 2×10^6 cells/mL, with 24 hours of coincubation in a humidified atmosphere of 5% CO2 at 39 °C (Cánepa et al., 2014a). Fertilized and unfertilized COCs were mechanically denuded by repeated pipetting with a glass Pasteur pipette in a hyaluronidase enzyme solution to completely remove cumulus cells and were washed in the in vitro culture medium (IVC-SOF) (Caisson IVL05).
Then, 20-25 zygotes were cultured in a droplet of IVC-SOF supplemented with 0.34 mM sodium pyruvate, 1 mM L-glutamine, 50X MEM essential amino acids (Sigma B6766), 100X MEM nonessential amino acids (Sigma M7145), 3 mg/mL BSA (Sigma A6003), 25 µg/ml gentamycin, 1.5 mM glucose, and 1 µg/ml EDTA in a 35 mm Petri dish, and the drops were covered with embryo-tested mineral oil. Embryos were cultured in an incubator for 7-9 days at 39 °C in a 5% CO2, 5% O2, and 90% N2 atmosphere with high humidity inside the incubation chamber (Rodríguez-Alvarez et al., 2010; Cánepa et al., 2014a). RNA was isolated from the embryos with the PureLink™ RNA mini kit (Cat. no. 12183-018A, Thermo Fisher Scientific, Waltham, Massachusetts, USA): 0.4 ml lysis buffer was added to each sample, followed by vortexing, and RNA was purified according to the manufacturer's instructions. Complementary DNA (cDNA) was synthesized with a high-capacity cDNA reverse transcription kit (Cat. no. 4368814, Thermo Fisher Scientific): 10 μl RNA, 2 μl of 10X RT buffer, 2 μl random primers, 0.8 μl dNTP mix (100 mM), 1 μl MultiScribe reverse transcriptase and 4.2 μl nuclease-free H2O were mixed in a 200 μl polymerase chain reaction (PCR) tube. The samples were then placed in a thermocycler with the following program: 25 °C for 10 min, 37 °C for 120 min, and 85 °C for 5 min. The synthesized cDNA was stored at −20 °C prior to real-time PCR. Amplification with SYBR Green master mix (Thermo Fisher Scientific) was performed on an Applied Biosystems ViiA 7 real-time PCR system (Thermo Fisher Scientific) in a 12.5 μl reaction to assess the gene expression of OCT4, DNMT1, BAX, GLUT1, and HSP70 relative to that of the housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The forward and reverse primers used in these assays are shown in Table 1. All genes of interest were analysed in duplicate in clear 96-well plates containing multiple samples. Amplification was carried out in a 12.5 μl reaction mixture containing 6.25 μl of SYBR Green, 0.25 μl of each forward and reverse primer, 2 μl of cDNA template, and 3.75 μl of nuclease-free water. The RT-PCR program was as follows: 50 °C for 2 min; 95 °C for 10 min; 40 cycles of denaturation at 95 °C for 15 seconds, annealing at 60 °C for 1 min and extension at 95 °C for 15 seconds; and a final extension at 60 °C for 1 min. The comparative CT method (Schmittgen & Livak, 2008) was used for the relative quantification of target gene expression levels, which were normalized to the reference GAPDH gene. The ΔCT value was obtained by subtracting the GAPDH CT value for each sample from the target gene CT value. The ΔΔCT value was calculated by using the highest sample ΔCT as an arbitrary constant, which was subtracted from all other sample ΔCT values. The changes in the expression of the target genes were determined using the 2^(−ΔΔCT) formula (Amarnath et al., 2007; EM et al., 2014). Statistical analysis was performed with SPSS 20 software. For the analysis of relative differential gene expression (IVP) in bovine embryos, differences among means were analysed by one-way ANOVA, followed by multiple pairwise comparisons using Duncan's test. Data are presented as the mean ± standard error. P-values of less than 0.05 were considered significant (EM et al., 2014).
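To make the comparative CT arithmetic concrete, the sketch below implements the 2^(−ΔΔCT) calculation exactly as described: target CTs are normalized to GAPDH, then referenced to the sample with the highest ΔCT. The CT values are illustrative, not the study's data.

```python
# Hedged sketch of the comparative CT (2^-dd CT) method described above.
def relative_expression(ct_target, ct_gapdh):
    """ct_target, ct_gapdh: lists of CT values, one pair per sample."""
    dct = [t - g for t, g in zip(ct_target, ct_gapdh)]
    calibrator = max(dct)                 # highest dCT used as the arbitrary constant
    return [2 ** -(d - calibrator) for d in dct]

# e.g., hypothetical OCT4 CTs across stages (2-cell ... blastocyst)
print(relative_expression(ct_target=[28.1, 27.5, 26.0, 24.2, 25.1],
                          ct_gapdh=[18.0, 18.2, 18.1, 18.0, 18.3]))
# -> [1.0, ~1.74, ~4.59, ~14.9, ~9.85] relative abundances
```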
Results and Discussion
Representative photomicrographs of the IVF bovine embryos at different stages from the 2-cell stage to the blastocyst stage are shown in Figure 2. The OCT4, DNMT1, BAX, GLUT1, and HSP70 transcripts were detected in several cleavage stages of bovine embryos collected on different days of culture, including the 2-cell, 4-cell, 8-16-cell, morula, and blastocyst stages (Figure 2). The expression of the OCT4 gene was significantly higher in the morula stage than in the other stages. Furthermore, relative to the earlier stages, the expression of the DNMT1 gene increased significantly in the 4-cell stage. BAX gene expression was significantly higher in the 4-cell stage than in the morula and blastocyst stages, but its difference from the 2- and 8-16-cell stages was nonsignificant. The GLUT1 gene showed much higher expression in the morula stage than in the other stages. There were no significant differences in the expression of the HSP70 gene at any stage (Figure 3). Embryonic genome activation occurs in waves, with the timing varying by species: it occurs at the 2-cell stage in mouse embryos, the 4-8-cell stage in pig embryos, the 8-16-cell stage in bovine embryos (Sirard, 2012), and the 4-8-cell stage in humans (Braude et al., 1988). The genes studied in this work play a crucial role in early embryonic development, and the current findings revealed changes in gene expression levels at different stages of embryonic development before and after embryonic genome activation. Some of these genes were down-regulated or up-regulated at distinct cell stages, which could be because of oxidative stress under the culture conditions, high maternal genome storage, or the expression of other genes that could alter the expression of these genes (Graf et al., 2014; Sadeesh et al., 2014a). The OCT4 gene is important for the maintenance of a pluripotent cell population in preimplantation embryos (Kurosaka et al., 2004; Hess et al., 2019), and the expression of this gene was significantly higher in the morula stage than in the other examined stages. This result agrees with those of Kurosaka et al. (2004), who showed that the OCT4 transcript level starts to increase after embryonic genome activation and rises sharply after compaction. This gene is therefore expressed in all cells of the morula and is downregulated in the trophectoderm (TE) of bovine blastocysts, which may explain the decrease in its expression in the blastocyst. The DNMT1 protein is responsible for DNA methylation, which is essential to normal embryonic development and cellular differentiation by silencing differentiation-associated genes and activating the critical genes for embryo development (Sagirkaya et al., 2006; Uysal et al., 2018). The expression of this gene in the 2- and 4-cell stages was significantly higher than in the other stages (8-16-cell, morula, and blastocyst), which showed gradual decreases in expression relative to the 2- and 4-cell stages. This result was consistent with those of Duan et al. (2019), who reported that although large amounts of DNMT1 mRNA were stored in oocytes, the DNMT1 mRNA level remained very low after embryonic genome activation in bovine embryos (Graf et al., 2014). This result was also similar to the findings of Hou et al. (2007), who showed that the methylation level decreased after the 8-cell stage and that this decrease continued through the morula stage.
Furthermore, expression was higher in the 4-cell stage than in the 2-cell stage, indicating that DNMT1 expression may have increased to suppress the expression of genes not needed at this stage and to maintain the stability of gene expression states (Dor & Cedar, 2018). Apoptosis induced by the expression of the BAX gene occurs in response to environmental stressors as a normal feature of pre-implantation development (Matwee et al., 2000; Fahrudin et al., 2002). The current study revealed that although the expression of the BAX gene was greatest at the 4-cell stage, it was not significantly higher than that in the 2-cell and 8-16-cell stages, but was significantly higher than that at the morula and blastocyst stages. These results are not in accord with those of EM et al. (2014), who reported that apoptosis was first observed in bovine in vitro-fertilized embryos at the 8- to 16-cell stages. However, these results do agree with the findings of other researchers (Byrne et al., 1999; Fahrudin et al., 2002) that suboptimal conditions in the culture system can induce apoptosis in bovine embryos produced in vitro, and with the observation of cell-death resistance gene expression reported by Hardy (1997). These findings may be explained by the observation of Cánepa et al. (2014b) that there is an interaction between HSP70 and BAX gene expression, possibly as a result of stress conditions: apoptosis occurs in embryos and thus the expression of HSP70 increases while BAX expression is down-regulated to protect the embryos. The cytoprotective factor HSP70 helps embryos recover from stress-induced damage (Cánepa et al., 2014b). This study revealed nonsignificantly increased expression of HSP70 in the 8-16-cell and morula stages relative to the 2-cell, 4-cell and blastocyst stages. These results are consistent with those of Luft and Dix (1999), who reported that HSP70 was expressed beginning in the embryonic period of gene activation in cleavage-stage embryos. Thus, embryos at different developmental stages are exposed to a wide range of environmental stressors leading to the expression of HSP70. The GLUT1 gene plays an important role in the diffusion of glucose across the cell plasma membrane, and glucose is an important energy substrate for the development of embryos (Lopes et al., 2007; Ostrowska et al., 2015; Arhin et al., 2018). A significant sharp increase in GLUT1 gene expression was observed in the morula stage relative to the other stages. This result was corroborated by previous findings (Lequarre et al., 1997), which showed that glucose metabolism in bovine embryos is low during the first cleavages and increases sharply after the resumption of genomic activity (8-16 cells). Other authors (Lopes et al., 2007) reported that glucose uptake by the embryo increases during compaction and blastocyst formation, causing increased expression of GLUT1. The expression of GLUT1 also increased in the blastocyst stage but was not significantly higher than in the morula stage, and the difference relative to the 8-16-cell stage was nonsignificant. This may be in accord with the return to higher levels of GLUT1 in trophectoderm cells compared with inner-cell-mass cells reported by Wrenzycki et al. (2003), and agrees with the findings of Lopes et al. (2007).
Conclusions
The results of this work indicated that gene expression levels differ between cell stages in embryos produced in vitro, due either to the timing of embryonic genome activation or to in vitro conditions that alter the expression levels of genes. These expression profiles are important in early development for assessing the normality of bovine embryos produced in vitro. The longer-term application is the development of biomarkers for the success of in vitro embryo production (i.e., embryo quality). These biomarkers could be applied in research studies testing the effect of different in vitro embryo production strategies on reproductive success.
A Low-Cost Multi-Parameter Water Quality Monitoring System
Multi-parameter water quality monitoring is crucial in resource-limited areas to provide persistent water safety. Conventional water monitoring techniques are time-consuming, require skilled personnel, are not user-friendly and are incompatible with on-site operation. Here, we develop a multi-parameter water quality monitoring system (MWQMS) that includes an array of low-cost, easy-to-use, high-sensitivity electrochemical sensors, as well as custom-designed sensor readout circuitry and a smartphone application with wireless connectivity. The system overcomes the need for costly laboratory-based testing methods and the requirement of skilled workers. The proposed MWQMS system can simultaneously monitor pH, free chlorine, and temperature with sensitivities of 57.5 mV/pH, 186 nA/ppm and 16.9 mV/°C, respectively, as well as sensing BPA with a <10 nM limit of detection. The system also provides seamless interconnection between transduction of the sensors' signals, signal processing, wireless data transfer and smartphone app-based operation. This interconnection was accomplished by fabricating nanomaterial and carbon nanotube-based sensors on a common substrate, integrating these sensors with a readout circuit and transmitting the sensor data to an Android application. The MWQMS system provides a general platform technology into which an array of other water monitoring sensors can also be easily integrated and programmed. Such a system can offer tremendous opportunity for a broad range of environmental monitoring applications.

Introduction
Water quality monitoring is vital for water safety determination and associated public health [1-5]. Water quality parameters are decided by mutually dependent chemical, physical, and microbial features. Typical water quality parameters include pH, free chlorine, conductivity, dissolved oxygen, turbidity, and bacterial contamination [6,7]. Some of the most important but simple water quality parameters are pH, free chlorine and temperature, due to their direct relationship with water disinfection efficiency, as well as their influence on other parameters [8]. Although many water treatment plants use chloramines as disinfectants, the use of free chlorine is still the most common method for disinfection in isolated and resource-limited areas [9]. The presence of organic micropollutants such as bisphenol A (BPA) in water is another emerging water problem due to industrial effluents and the widespread use and disposal of plastics in the environment [10]. Standard water quality monitoring systems are complex, time-consuming and expensive due to the transport of samples and the use of sophisticated equipment and trained personnel [8]. Many conventional systems require independent collection of samples and assessment, and fail to deliver water quality parameters in real time [11]. Real-time water quality monitoring technologies have made considerable advancement in recent years [11]. However, these technologies have limited applications monitoring source water or the operation of water treatment plants. Also, the high expenses associated with the equipment, maintenance and calibration of water sensors have prevented them from being used in large distribution systems and for end-users. Furthermore, most of these sensor systems measure only one parameter at a time.

Fabrication of the Sensors
The fabrication processes of the array of electrochemical sensors for the MWQMS were based on our recent study [26].
In ref. [26], we developed the materials synthesis, sensor fabrication and performance characterizations. In contrast, in this study, we developed a complete sensing system through the integration of the sensors with our custom-designed readout system, smartphone-controlled operation and real-sample analysis. Although the sensors were developed and calibrated in our previous study [26], they were calibrated again in this study with our custom-designed readout circuit and smartphone application in order to validate their performance within the integrated MWQMS system. In brief, the pH sensor was a potentiometric sensor fabricated with palladium (Pd) ink on a polyimide film substrate. The free Cl sensor was an amperometric sensor fabricated with a carbon-based electrode and subsequently modified electrochemically with ammonium carbamate. The reference electrode was made of Ag/AgCl with commercial Ag/AgCl paste. The temperature sensor was a resistance-change sensor fabricated with p-type Si and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) film in a Wheatstone bridge configuration. The sensor for the organic micropollutant (i.e., BPA) was fabricated by attaching a commercial paper-based screen-printed carbon electrode (SPE) onto the substrate containing the pH and free Cl sensors, followed by modification with graphene oxide (GO) and β-cyclodextrin-functionalized multi-walled carbon nanotubes (GO-MWCNT-βCD). The synthesis of the chemically functionalized MWCNT-βCD is based on our previous study [27]. The MWCNT-βCD is further mixed with GO at a 1:1 ratio to enhance the electrochemical sensing performance towards BPA. The SPE consists of a carbon-based counter electrode and a printed Ag/AgCl reference electrode. The commercial Ag/AgCl pastes are composed of polymer resins such as polyvinyl butyral (PVB), which helps maintain a stable potential over long periods in samples with different ionic strengths [17]. A photograph of the fabricated sensors on two glass substrates is shown in Figure 1. The pH, free Cl and BPA sensors are fabricated on one glass microscope slide, whereas the temperature sensor is fabricated on another. The two glass slides are placed side-by-side and connected to the readout printed circuit boards (PCBs). The sensors are cleaned with tap water and stored in a dry place when not in use.
Figure 1. Photograph of MWQMS sensors on two glass slides. The upper slide contains the pH, free Cl and BPA sensors, and the lower one contains the temperature sensor. The photograph also shows the Arduino Uno microcontroller-based printed circuit boards, together with a Bluetooth transceiver for wireless connection and the custom-developed Android app.

Readout System and Smartphone Application
The MWQMS readout circuit board was developed on top of an Arduino Uno R3 (ATmega328P) 8-bit microcontroller. Additionally, a Water Quality Monitor (WQM) board and a potentiostat board were vertically stacked on top of the Arduino board, as shown in Figure 1. The WQM and potentiostat boards contain the circuits associated with the analog and mixed-signal sensors. The WQM board was created and programmed to read pH and temperature data in real time (e.g., at one-second intervals), and free chlorine every 50 s. The potentiostat board was intended to run voltametric sensing independently through the smartphone application. A Bluetooth transceiver was also connected to the microcontroller unit to transmit data wirelessly to our custom app on an Android smartphone. The Android app contains two main units that communicate with the WQM and potentiostat circuit boards. The WQM module can display real-time pH, free Cl and temperature data, and it also has a settings option to input calibration information. The potentiostat module can run, save or create different types of voltametric sensing experiments.
The total amount of time needed to run a single voltametric sensing experiment (i.e., BPA sensing) is the sum of the pre-treatment time and the scanning time (i.e., the potential range divided by the scan rate). The WQM and the potentiostat PCBs are designed separately because of the fundamental difference in the sensors' operation. On the WQM board, the signal from the sensors is transmitted in only one direction, from the sensors towards the PCB. The potentiostat board, in contrast, has two-way communication: a sweeping potential is applied to the electrochemical sensor, and the resulting voltametric current signal is read almost at the same time as it is transmitted towards the potentiostat PCB. Because of these differences in sensor operation, the WQM and potentiostat cannot run at the same time.
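As a minimal sketch of the experiment-time bookkeeping just described, the function below adds the pre-treatment time to the sweep time (potential range divided by scan rate). The parameter values are illustrative, not the system's fixed settings.

```python
# Hedged sketch: total duration of one voltametric (e.g., LSV) run.
def experiment_time_s(pretreat_s, v_start, v_end, scan_rate_v_per_s):
    sweep_s = abs(v_end - v_start) / scan_rate_v_per_s  # potential range / scan rate
    return pretreat_s + sweep_s

# e.g., a 0 to 1.5 V sweep at 50 mV/s takes 30 s (comparable to the ~30 s
# BPA sweep quoted later), plus any pre-treatment time.
print(experiment_time_s(10, 0.0, 1.5, 0.05))  # 10 + 30 = 40 s
```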
Design of the MWQMS System
The MWQMS system is designed for simultaneous in situ pH, free chlorine (Cl) and temperature measurement, as well as click-on-demand BPA measurement in water. The fabrication of the sensors was based on two microscope glass slides, which offered inexpensive sensor integration, as shown in Figure 1. The printed circuit boards (PCBs) were stacked together to integrate signal conditioning, processing and wireless data transmission on the same platform. This was made possible by ubiquitous and affordable integrated circuits (ICs) and optimization of the software and hardware designs [28]. The water quality parameters (i.e., pH, free Cl, temperature and BPA) were chosen to estimate the overall condition and drinkability of the water being tested. For instance, accurate monitoring of the free Cl concentration is critical for the safety of public health, as advised by the World Health Organization (WHO). At the same time, the free Cl concentration depends on the pH of the water, and both free Cl and pH depend on temperature. Therefore, temperature measurement is also important to compensate the pH and free Cl sensor measurements. BPA was selected because of the increased presence of emerging organic micropollutants in water cycles, which may become a major water problem worldwide and therefore requires a user-friendly and inexpensive detection technique. Thus, the electrochemical sensing of BPA was chosen as an example of the versatile nature of our proposed MWQMS system.

Signal Flow and PCB Design
The design of the signal flow of the sensors is an important step towards designing suitable printed circuit boards (PCBs) for sensor data acquisition and interfacing. Figure 2 illustrates the block diagram of the signal flow paths, which consist of signal transduction, conditioning, processing and wireless transmission routes. The signal flow is designed to facilitate simultaneous (pH, free Cl and temperature) and click-on-demand (BPA) monitoring of these water quality parameters. In summary, this diagram shows the signal-conditioning route for every sensor, employing analogue circuit components such as a low-pass filter, buffer, transimpedance amplifier, and potentiostat corresponding to the transduced sensor signals. The circuits are designed to ensure fine resolution of the sensor signal while keeping the signal amplitude in the input range of the analogue-to-digital converter (ADC). After that, the conditioned signals are compensated and relayed to the Bluetooth wireless transceiver by the serial communication protocols of the Arduino microcontroller. The Bluetooth transceiver facilitates wireless data transmission to a Bluetooth-supported smartphone and a custom-designed app. The smartphone application includes an interface to upload sensor measurement data to online storage. The signal flow diagram was used to design the PCBs of the MWQMS system: the upper-left, upper-right and lower blocks of the signal flow diagram (Figure 2) represent the potentiostat, the WQM and the Arduino Uno microcontroller PCBs, respectively. A photograph of the MWQMS system is shown in Figure 3a, depicting the PCBs connected to the sensors through a sensor connection assembly. The potentiostat and WQM PCBs are shown in detail in the photographs of Figure 3b,c, respectively. The size of the PCBs is designed with respect to the size of an Arduino Uno microcontroller, so that they can be easily stacked vertically; this approach significantly reduced the overall size of the system. The major functional circuit components are shown with yellow dashed rectangles. Each PCB has its own power supply unit (1 and 8). The potentiostat PCB consists of a potentiostat circuit IC (2), a low-pass filter (LPF) (3) connected to the digital-to-analog converter (DAC) (4) output, a transimpedance amplifier (TIA) (5), and another LPF (6) connected to the ADC input (7). The WQM PCB consists of buffers (9), an LPF for the pH sensor (10), a TIA (11) and LPF (12) for the free Cl sensor, an ADC IC for both the pH and free Cl sensors (13), and an ADC for the temperature sensor (14).
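The analog-chain arithmetic implied by this component list can be sketched as follows: a TIA converts the sensor current to a voltage, and an RC low-pass filter sets the noise cutoff. The component values below are assumptions for illustration, not the published board values.

```python
# Hedged sketch of the signal-conditioning arithmetic: inverting TIA
# (V = -I * Rf) and first-order RC low-pass cutoff (fc = 1/(2*pi*R*C)).
import math

def tia_output_v(current_a, feedback_ohm):
    return -current_a * feedback_ohm        # inverting transimpedance amplifier

def lpf_cutoff_hz(r_ohm, c_farad):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# e.g., a -5 uA free-Cl current with an assumed 200 kohm feedback resistor
# gives 1 V, consistent with the 0-1 V range quoted for the free Cl channel.
print(tia_output_v(-5e-6, 200e3))           # 1.0 V
print(lpf_cutoff_hz(10e3, 1e-6))            # ~15.9 Hz
```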
The signal quality of the potentiostat was verified by comparing the cyclic voltammetry (CV) measurement of a standard redox probe (5 mM K3[Fe(CN)6]) on a screen-printed carbon electrode (SPE) from CH Instruments Inc. against an EmStat 3 potentiostat from PalmSens. The comparative CV curves are shown in Figure 4. The CV obtained from the MWQMS system is very similar to the one obtained from the EmStat 3: the redox peak positions and peak intensities are almost the same. There is a slight vertical shift of the curves, which corresponds to a shift in the baseline; however, the baseline shift did not change the peak positions or their absolute intensities. Such variations are very common in electrochemical measurements, as the two curves were obtained from two different measurements with two devices and one sample. The signal-to-noise ratio of the CV curves obtained from the MWQMS system is also very high (>100).

pH Sensor
The Pd/PdO based potentiometric pH sensor had high sensitivity (57.5 mV/pH) and stability, as shown in Figure 5a [26]. The pH measurement range for calibration was from pH 4 to 10; however, the sensor is capable of measuring pH from 2 to 12. Also, the spin-coating fabrication required a very small amount (<10 µL) of the Pd ink precursor solution, resulting in a low cost (<10 cents). The pH sensing was based on the following redox reaction [4]:

PdO + 2H⁺ + 2e⁻ ⇌ Pd + H₂O. (1)

The redox potential is determined using the Nernst equation:

E = E⁰ − 2.303(RT/F)·pH, (2)

where E⁰ is the standard electrode potential, R is the gas constant (8.31 J/mol/K), T is the absolute temperature and F is Faraday's constant (96,485.33 C/mol). The sensor's output voltage was between 0 and 400 mV, stable and low-noise, allowing direct interfacing with the input of the ADC. The fast response time (~20 s) of the sensor enabled real-time pH monitoring. The MWQMS system can perform single-point (pH 7) and three-point (pH 4, 7, and 10) calibration to accommodate sensors with different sensitivities and linear ranges. The temperature dependence of the pH sensor was compensated using a calibration relation, Equation (3), programmed into the MWQMS system [26,29], where E_cal is the voltage (in mV) recorded while the sensor is in the pH 7 calibration solution, E_meas is the voltage (in mV) measured during pH sensing, and T_meas (in °C) is the temperature of the water.
The pH sensor resolution was 0.17 pH, calculated from its hysteresis value of 9.8 mV [26]. The pH sensor showed negligible interference with common ions, as discussed in our previous study [26].

Free Cl Sensor
The free chlorine sensor was fabricated by amine modification of a carbon electrode [30]. The amperometric free Cl sensing involves the electrochemical reduction of HOCl (free Cl) according to the following chemical reaction:

HOCl + 2e⁻ → Cl⁻ + OH⁻.

The resultant current value (at 50 s in each measurement) is related to the HOCl concentration [26]. The HOCl concentration is subsequently employed to determine the free chlorine concentration (both HOCl and OCl⁻), which corresponds to the Cl2 concentration [30]. As shown in Figure 5b, the sensor output current ranged between 0 and −5 µA for free chlorine concentrations of 1 to 8 ppm. The sensor output was transformed into a voltage signal ranging between 0 and 1 V using a transimpedance amplifier, and a low-pass filter was used to suppress the low-frequency noise of the transimpedance amplifier. The sensitivity of the free Cl sensor was 186 nA/ppm (Figure 5b). The temperature dependence of the free chlorine sensor was determined to be 6.2 nA/ppm/°C, which was used in the calibration equation of the free chlorine sensor [26], where I_out (in nA) represents the output current, T_meas (in °C) represents the temperature of the water, and C_NaOCl (in ppm) represents the NaOCl concentration. The transfer function was calculated based on the transimpedance amplifier gain and low-pass filter, where V (in mV) represents the output voltage of the signal conditioning circuit. Lastly, the free chlorine concentration equation, with the pH- and temperature-corrected calibration, was programmed into the MWQMS system [26]. The free chlorine sensor resolution was calculated by changing the free chlorine concentration between 1 and 8 ppm and measuring the hysteresis; the hysteresis value was estimated to be 11 nA, equivalent to a resolution of 0.06 ppm [26].
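The calibration relations cited from [26] (Equation 3 and the free Cl calibration) are not reproduced in the text above, so the sketch below only illustrates plausible compensation forms built from the quoted numbers: the Nernst slope scaled with absolute temperature for pH, and a sensitivity adjusted by the quoted 6.2 nA/ppm/°C coefficient for free Cl. The 25 °C reference temperature and both functional forms are assumptions, not the published formulas.

```python
# Hedged sketch of sensor-voltage/current to concentration conversions.
def ph_from_voltage(e_meas_mv, e_cal_mv, t_meas_c, slope_25c=57.5):
    slope = slope_25c * (273.15 + t_meas_c) / 298.15   # mV/pH, scaled with T
    return 7.0 + (e_cal_mv - e_meas_mv) / slope        # e_cal taken in pH 7 buffer

def free_cl_ppm(i_out_na, t_meas_c):
    sensitivity = 186.0 + 6.2 * (t_meas_c - 25.0)      # nA/ppm (assumed 25 C reference)
    return abs(i_out_na) / sensitivity                 # reduction current is negative

print(ph_from_voltage(150.0, 265.0, 20.0))  # ~9.0
print(free_cl_ppm(-930.0, 25.0))            # 5.0 ppm
```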
Temperature Sensor The temperature sensor was developed based on a Wheatstone bridge configuration. Among the four arms of the bridge, two were made of p-type silicon wafer (1 cm × 10 cm) with a positive TCR of 1%/°C, and two were fabricated from drop-cast PEDOT:PSS films with a negative TCR of −0.32%/°C [26,31,32]. The high TCR values of the silicon and PEDOT:PSS films provided high temperature sensitivity with negligible drift. The resistances of the four resistors were selected such that the output voltage remains within the ADC input range of −1 to +1 V for measurements between 0 and 50 °C. The resistances were also optimized for highest sensitivity, reduced self-heating-induced drift, and overall reduction of the sensor area. The measured sensitivity of the temperature sensor was 16.95 mV/°C, as shown in Figure 5c. A calibration equation was used in the MWQMS system for the determination of temperature [26], in which T_meas (in °C) is the temperature of the water and V_out (in mV) is the temperature sensor output voltage. BPA Sensor The fabricated BPA sensor was based on a screen-printed electrode (SPE), integrated on the same glass substrate as the pH and free Cl sensors, followed by drop-cast modification with GO-MWCNT-βCD. The GO-MWCNT-βCD solution was prepared according to our previous study [27]. Briefly, MWCNTs were covalently modified with βCD through a one-step Steglich esterification method. Then, a 2 mg/mL MWCNT-βCD suspension was combined with a 1 mg/mL GO suspension at a 1:1 volume ratio, and the solution was ultrasonicated for 15 min. Because the electrode surface becomes contaminated with residues of oxidized BPA, a freshly prepared GO-MWCNT-βCD/SPE electrode was used only once for BPA sensing. However, the SPE electrode was reused by cleaning off the GO-MWCNT-βCD(SE) with cleaning solvents and re-drop-casting GO-MWCNT-βCD solution onto the SPE working electrode. The SPE electrode is therefore reusable for multiple BPA sensing experiments, which can significantly decrease the cost of the sensor. A typical BPA sensing experiment in this study requires ~30 min, of which ~30 min is electrode pre-treatment (magnetic stirring of samples) and ~30 s is the actual voltammetric potential sweep [33]. Figure 5d shows the BPA sensing characteristics using the GO-MWCNT-βCD(SE)/SPE electrode with the MWQMS system. The BPA sensing was performed by linear sweep voltammetry (LSV) from 50 nM to 5 µM BPA. The inset of Figure 5d illustrates the linear calibration curve for BPA, with a slope of 10.3 µA/µM. The limit of detection (LoD = 3s/m) was calculated to be 6 nM based on our previous study [27]; here, s represents the standard deviation of the blank solution (30 nA) and m represents the slope of the calibration curve. It is worth mentioning that a significant novelty of the MWQMS system stems from the integration of the GO-MWCNT-βCD(SE)/SPE electrode with a custom-designed potentiostat board, which results in a very inexpensive and easy-to-use system for water monitoring. In addition to BPA sensing, the same electrochemical sensor can also be used for detecting pharmaceutical contaminants such as acetaminophen and estrogen, and heavy metals such as lead and arsenic, simply by using different voltammetric parameters in the potentiostat. This gives the MWQMS system a cost-effective, multimodal sensing capability.
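To make the LSV calibration above concrete, the following sketch converts a peak current into a BPA concentration using the reported 10.3 µA/µM slope. The blank-current offset is a hypothetical calibration constant introduced only for this example.

```python
# Hedged sketch of converting an LSV peak current into a BPA concentration.
# The 10.3 uA/uM slope is from the text; the blank offset is hypothetical.
def bpa_uM(i_peak_uA: float, blank_uA: float = 0.03,
           slope_uA_per_uM: float = 10.3) -> float:
    return max(i_peak_uA - blank_uA, 0.0) / slope_uA_per_uM

print(bpa_uM(10.33))   # ~1.0 uM
```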
Smartphone Application An Android-based smartphone application called "Water Testing Suite" was designed for the MWQMS system. The application was developed in Java and compiled into an Android application. It is designed to acquire the sensor data, perform calibration, and display and save the data on a cloud-based webserver. Figure 6 shows screenshots of the different measurement units of the application. The opening screen (screenshot 1) prompts the user to select the type of measurement to run, such as Water Quality Monitor (for pH, free Cl and temperature sensing) or Potentiostat Electroanalysis (for BPA sensing). The selection of the Water Quality Monitor option in the opening screen leads to a screen containing a real-time display of the measured pH, free Cl and temperature of the water sample. This screen also contains three graphs (reached by scrolling down) for the corresponding sensor outputs, providing real-time updates of the temporal measurement data: the sensor output is updated every second and the historical data is accumulated in the temporal graphs. The pH sensor output graph (screenshot 2) can be calibrated using an on-screen one-point calibration button at pH 7. This allows faster calibration of the pH sensor when its sensitivity is very stable. However, direct calibration can also be done through the "Settings" option, where the user can enter calibration information in terms of the sensor sensitivity and y-axis intercept potential values. This allows the MWQMS system to be used with other types of potentiometric pH sensors. The free Cl and temperature sensor output graphs (screenshots 3 and 4) are updated every second, at the same time as the pH sensor. The pH and temperature sensor data are used to calibrate the free Cl concentration measurement. The selection of the Potentiostat Electroanalysis option in the opening screen leads to a screen with options (screenshot 5) to run, edit or create a new voltammetric measurement, such as linear sweep voltammetry (LSV), cyclic voltammetry (CV), differential pulse voltammetry (DPV) and square wave voltammetry (SWV) (screenshot 6). Previously saved experimental parameters can be accessed from a drop-down menu, as shown in screenshot 6. A new experiment with custom parameters can also be created, as shown in screenshot 7. A representative CV curve is shown in screenshot 8. Drift and Interference The pH sensor does not need recalibration during a continuous measurement of one hour. The drift of the pH sensor was investigated in our previous study, which showed a drift of 4 mV at pH 4 over an 8 h period, corresponding to 0.009 pH/h, as displayed in Figure 7a. The interference of the pH sensor was determined in the presence of interfering ions (CaCl2, KNO3, Na2SO4, (NH4)2SO4, NaCl, and KCl).
A maximum change of 0.24 pH was observed with 0.1 M CaCl2, which was, in fact, due to an actual change of the sample pH caused by the high concentration of CaCl2. The temporal response of the free chlorine sensor also showed negligible drift (<0.1 ppm), as shown in Figure 7b. The selectivity of the free chlorine sensor was investigated in 2 ppm free chlorine solutions with ~400 ppm of interfering ions. A negligible change was observed with interfering solutions of KNO3, Na2SO4, NaCl, and KCl. However, the free chlorine response dropped to 0 ppm when CaCl2 and (NH4)2SO4 were added, as they chemically react with the free chlorine. The time-dependent drift of the temperature sensor was also studied; it showed a negligible drift of ~0.6 mV over a 24 h period, as indicated in Figure 7c. The high selectivity of the BPA sensor was also demonstrated against interfering species such as Na+, K+, ascorbic acid, dopamine, and acetaminophen, as shown in Figure 7d. The lifetime of the sensors was studied by performing long-term analysis with sensors at least 6 months old. A 7-day measurement of the pH, free chlorine, and temperature of real water samples taken from a lake, a tap and a pool was performed with the MWQMS system [26]. The pH sensor showed a highest drift of 0.12 pH, for tap water. The temperature sensor showed a drift of ±0.25 °C. The free chlorine sensor showed a gradual decay of free chlorine due to interaction with the sensor as well as exposure to the external environment. Real Sample Analysis The MWQMS was used to measure real samples of tap water, lake water, and swimming pool water. A comparison of the pH, free chlorine, and temperature measured by the MWQMS system and by reference methods is shown in Table 1. A commercial glass-electrode-based pH meter (HANNA HI98128) was used as the reference for pH and temperature sensing, and a commercial DPD-based kit (LaMotte 2056 ColorQ PRO 7) as the reference for free chlorine sensing. Table 2 shows the differences in the BPA measurements made by the MWQMS system after spiking tap water with BPA. The measurement results of the MWQMS system were closely comparable to those of the commercial methods.
Differences below 5% were observed for the pH, free chlorine, and temperature sensors, while the BPA measurements showed recoveries between 96% and 106%. The differences between our measurements and the reference values are therefore reasonable for practical application scenarios. These differences may be attributed to errors or deviations in the commercial pH, free chlorine and temperature sensors themselves, or to human error. Limitations and Future Improvements The MWQMS system can be further improved by tackling some limitations related to both the sensors and the readout system. For example, the sensors were mostly hand-made by lab-oriented fabrication methods. These methods can be adapted to mass-production techniques such as screen printing and solution processing, scaling up fabrication and reducing the overall cost of the sensors; such processes would also improve sensor reproducibility and thus sensing performance. Also, the reusability of the sensors can be improved by careful choice of sensing materials and methods. Further, the readout system used an Arduino Uno based microcontroller board, which could be redesigned around a smaller microcontroller unit to reduce the footprint of the system towards a hand-held device configuration. Conclusions We developed an integrated multi-parameter water quality monitoring system (MWQMS) that can concurrently determine water parameters such as pH, free chlorine concentration and temperature, as well as determine bisphenol A on demand. The MWQMS system comprised a Pd/PdO-based pH sensor, a carbon-based free chlorine sensor, a hand-drawn temperature sensor, and a graphene oxide-carbon nanotube-β-cyclodextrin (GO-MWCNT-βCD) bisphenol A sensor, all fabricated on two glass slides. The sensors can measure pH and temperature in real time, free chlorine every 50 s, and BPA in half an hour, with high sensitivities of 57.5 mV/pH (pH), 186 nA/ppm (free chlorine), 16.95 mV/°C (temperature) and 10.3 µA/µM (bisphenol A), in a user-friendly manner. The MWQMS system is a small, simple-to-use measurement unit with a smartphone application, and is a promising step towards practical applications such as on-site water quality monitoring. The system also has the flexibility to accommodate additional water quality sensors, including conductivity, dissolved oxygen and different types of ion sensors. Finally, improvements are already underway in our ongoing research efforts towards commercialization of the water quality monitoring system and more extensive testing in application environments.
8,176
2021-05-29T00:00:00.000
[ "Computer Science" ]
“DompeKeys”: a set of novel substructure-based descriptors for efficient chemical space mapping, development and structural interpretation of machine learning models, and indexing of large databases The conversion of chemical structures into computer-readable descriptors, able to capture key structural aspects, is of pivotal importance in the field of cheminformatics and computer-aided drug design. Molecular fingerprints represent a widely employed class of descriptors; however, their generation process is time-consuming for large databases and is susceptible to bias. Therefore, descriptors able to accurately detect predefined structural fragments and devoid of lengthy generation procedures would be highly desirable. To meet additional needs, such descriptors should also be interpretable by medicinal chemists, and suitable for indexing databases with trillions of compounds. To this end, we developed the DompeKeys (DK), a new substructure-based descriptor set that encodes the chemical features characterizing compounds of pharmaceutical interest, as an integral part of EXSCALATE, Dompé's end-to-end drug discovery platform. DK represent an exhaustive collection of curated SMARTS strings, defining chemical features at different levels of complexity, from specific functional groups and structural patterns to simpler pharmacophoric points, corresponding to a network of hierarchically interconnected substructures. Because of their extended and hierarchical structure, DK can be used, with good performance, in different kinds of applications. In particular, we demonstrate how they are very well suited for effective mapping of chemical space, as well as substructure search and virtual screening. Notably, the incorporation of DK yields highly performing machine learning models for the prediction of both compounds' activity and metabolic reaction occurrence. The protocol to generate the DK is freely available at https://dompekeys.exscalate.eu and is fully integrated with the Molecular Anatomy protocol for the generation and analysis of hierarchically interconnected molecular scaffolds and frameworks, thus providing a comprehensive and flexible tool for drug design applications. Supplementary Information The online version contains supplementary material available at 10.1186/s13321-024-00813-4. Introduction At the root of cheminformatics and computer-aided drug design, the capacity to encode molecular structures into a computer-readable form represents a relevant need. According to their dimensionality, molecular representations can be subdivided into: (i) one-dimensional (1D, e.g., alphanumeric strings), (ii) two-dimensional (2D, e.g., molecular graphs), and (iii) three-dimensional (3D, e.g., molecular coordinates). The SMILES (Simplified Molecular Input Line Entry System) notation is a very popular 1D representation, introduced in the 1980s [1], by which a molecule is represented as a simple sequence of characters with predefined atom-ordering rules. Daylight uses an extension of SMILES, called SMARTS, to describe structure queries for searching chemical databases [2]. The IUPAC International Chemical Identifier (InChI) was introduced in 2000 and was designed as a strictly unique standard chemical identifier [3]. A compact hashed code, the InChIKey, was then derived from InChI.
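For readers who want to experiment with these notations, the following minimal sketch uses the open-source RDKit toolkit (an assumed choice for illustration; the paper's own pipeline is built on Pipeline Pilot and Knime) to parse a SMILES string, derive its InChIKey, and run a SMARTS substructure query.

```python
# Minimal RDKit sketch (assumed toolkit, not part of the original work):
# parse a SMILES string, derive its InChIKey, and run a SMARTS query.
from rdkit import Chem

mol = Chem.MolFromSmiles("OC(=O)c1ccccc1O")   # salicylic acid
print(Chem.MolToInchiKey(mol))                # unique hashed identifier

phenol = Chem.MolFromSmarts("[OX2H]c")        # phenolic hydroxyl pattern
print(mol.HasSubstructMatch(phenol))          # True
```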
Along with notations able to unambiguously describe entire molecular structures, there are many representations able to describe features and substructures of a given molecule. Molecular fingerprints [4] are computationally efficient representations in which structural features are encoded as bits in a bit string or as counts in a count vector, thus capturing the main structural characteristics and chemical properties. The MACCS keys encode the presence of predefined substructures in a vector of 166 bits [5]. The extended connectivity fingerprints (ECFP) are not based on substructure dictionaries but perceive the presence of substructures around each atom in a molecule, using a hash function to store information for each atom's neighborhood up to a predefined diameter [6]. Atom pair fingerprints encode molecular shape [7] and have been reported to be more suitable for representing large molecules, such as those exceeding the Lipinski limits [8]. In recent work, the atom-pair approach was combined with circular substructures to create a new descriptor, called MAP4 (MinHashed atom-pair fingerprint up to a diameter of four bonds), providing a unified description of molecules across different sizes and shapes [9]. In other work, neural network fingerprints were generated by training neural networks on target-specific bioactivity datasets [10]. As an initial case study, the generic features most common amongst kinase inhibitors (e.g., a hinge-binding motif) were considered. The best performing architecture was based on a multilayer perceptron (MLP) with ECFPs as the input, trained for multitask classification (to predict specific kinase target activity). Very recently, functional-group-like structural fragments (FGSFs) were implemented as a set of predefined structural moieties commonly found in organic molecules, annotated with reactivity parameters, and successfully applied to toxicophore identification and machine learning applications [11]. Generally speaking, these representations are of pivotal importance for storing chemical structures and for utilizing chemical structural information in similarity/substructure searches, as well as for the construction of chemical space maps. The choice of descriptor is critical to the success of a similarity search, because each descriptor focuses on different chemical properties. Also, the size of the bit space of the fingerprints was reported to have a significant effect on enrichments, that is, the ability to identify compounds with activity similar to a query molecule: small bit spaces, such as 1024, result in collisions and in turn in a substantial reduction in enrichments compared to larger bit spaces [12]. Further benchmark studies on fingerprint performance reported that the differences in enrichment and the number of collisions observed in the earlier study [12] are likely due to the use of different hashing functions and different bit densities across the fingerprints used [13]. Consequently, descriptors devoid of the possible biases caused by fingerprint generation procedures are greatly needed.
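A short sketch of the fingerprint types discussed above, again using RDKit as an assumed toolkit; the bit size and molecules are arbitrary choices for illustration.

```python
# MACCS keys and ECFP6-equivalent Morgan fingerprints, with a Tanimoto
# similarity comparison (RDKit is an assumed toolkit for this example).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys

m1 = Chem.MolFromSmiles("CCOc1ccccc1")
m2 = Chem.MolFromSmiles("CCOc1ccccc1C")

maccs1, maccs2 = MACCSkeys.GenMACCSKeys(m1), MACCSkeys.GenMACCSKeys(m2)
# radius 3 corresponds to ECFP6 (diameter 6); 2048 bits reduces collisions
ecfp1 = AllChem.GetMorganFingerprintAsBitVect(m1, 3, nBits=2048)
ecfp2 = AllChem.GetMorganFingerprintAsBitVect(m2, 3, nBits=2048)

print(DataStructs.TanimotoSimilarity(maccs1, maccs2))
print(DataStructs.TanimotoSimilarity(ecfp1, ecfp2))
```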
Therefore, we set out to design a new set of descriptors. The goals of these descriptors are multifold: 1) they must adequately separate molecules belonging to different classes in chemical space; 2) they must be suitable for the development of machine learning models and for the interpretation of such models in terms that are meaningful to medicinal chemists; 3) they must provide the key characteristics of individual molecules at a glance; and 4) together with Dompé's "Molecular Anatomy" [14], they must be able to efficiently index databases with tens of trillions (10^13) of chemical structures. In this work we describe the DompeKeys (DK), a new set of substructure-based fingerprint descriptors which encode patterns of functional groups and chemical features contained in compounds of pharmaceutical interest, and we also report their performance in terms of the aforementioned goals (1) and (2). The DK system collects 1064 curated SMARTS strings, encoding chemical structures at different levels of complexity, from well-defined structural moieties, like amino acids, metal binders and toxicity alerts, to generic pharmacophoric features, like H-bond donors or acceptors. Each functional group is either encoded as is or includes additional information about its chemical environment, thus constituting a network of hierarchically interconnected substructures (Fig. 1). In addition, we developed a validation protocol to demonstrate the integrity and correctness of the DK formalism. We demonstrate that DK are very well suited to map the chemical space of compound databases that differ in terms of physicochemical properties. Moreover, we report the successful application of the DK to the prediction of compounds' activities and metabolic reactions by machine learning (ML) models, showing that they quickly identify the key chemical moieties for biological activity. Additionally, we show how the DK can be extremely helpful in substructure search and pharmacophoric filtering, making it simpler and faster to screen and prioritize, among millions of molecules, compounds possessing functional groups relevant for a certain biological activity. The protocol to generate DK is freely available within the web interface https://dompekeys.exscalate.eu, where it is fully integrated with the Molecular Anatomy approach, an in-house developed method to analyze large datasets of molecules by organizing them into a multidimensional network of hierarchically interconnected molecular frameworks. Descriptor design The DK, coded in the robust SMARTS language, are designed not only to describe simple functional groups but also to explore the chemical environment of each functional group, to search for fragments with specific reactivity or physicochemical properties, or even for structural toxicity alerts. We collected a list of 1064 manually curated SMARTS strings, each one encoding chemical structures and functional groups at different levels of complexity (from level 0 to level 4, ranging from the highest to the lowest molecular complexity, respectively). In detail, level 0 represents the highest level of molecular complexity, including well-defined molecular structures such as amino acids, natural products and drugs (Fig. 1).
Level 1 features more specific representations than level 0 and contains mostly ring systems (such as pyridine or imidazole). Levels 2 and 3 describe the main functional groups, such as amines and amide groups, with the difference that in level 2 we differentiate by the number and nature of the substituents, for instance whether a given amine is primary, secondary or tertiary and whether the attached substituents are aliphatic or aromatic. As such, level 2 allows a more precise mapping of the chemical environment of a given functional group. Finally, level 4 represents simple atoms with specific properties, such as sp2 carbons or nitrogen atoms that can function as H-bond acceptors, from which simple pharmacophoric points can be derived. Taken together, the five levels make up a network of hierarchically interconnected substructures. The DK were conceptualized following a knowledge-based approach: DK levels 4 through 2 were essentially hand-crafted, considering the most standard patterns and functional groups commonly found in organic drug-like molecules. The higher levels, 1 and 0, were collected taking into account amino acids, structural fragments and scaffolds (e.g., heterocycles) derived from the analysis of approved drugs (pharmascaffold) and of commercial and natural compound libraries. In addition, patterns and annotations such as toxicophores or metal binders were also included, based on a combination of literature search [15,16] and in-house expertise gained in the context of internal drug discovery projects. This hierarchical architecture makes DK able to capture key structural information in different types of applications, with the possibility of selecting only subsets to be used on a case-by-case basis. A practical example is shown in Table 1. The example compound, tucidinostat, a potent and orally bioavailable histone deacetylase inhibitor, was analyzed using the protocol that generates the DK. The protocol allows any medicinal or computational chemist to easily and quickly gain insights into the molecular structure at different hierarchical levels, such as the main functional groups, whether the molecule contains undesirable functionalities, and even the presence of structural fragments annotated with a specific pharmacological activity. For instance, at level 0 a fragment essential for the chelation of metals, the N-(2-aminophenyl)acetamide, is flagged. The same fragment is also mapped by level 3 as a generic amine, and by level 2 as an aromatic primary amine, which also flags a possible structural alert because of the presence of the aniline moiety. Finally, level 4 specifies the pharmacophoric points found, such as donor, acceptor, halogen and aromatic carbon. This analysis provides a comprehensive overview of the molecule's chemical properties; the chemist is informed about the presence of possible toxicophores and of functional groups annotated with a certain pharmacological activity, which helps the compound selection process.
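The hierarchical matching idea can be sketched in a few lines of Python with RDKit; the three SMARTS below are illustrative stand-ins written for this example, not actual entries from the 1064-key DK list.

```python
# Toy sketch of hierarchical substructure counting in the spirit of DK.
# The SMARTS below are illustrative stand-ins, not real DK entries.
from rdkit import Chem

mini_dk = {
    ("level 1", "pyridine ring"):      "c1ccncc1",
    ("level 3", "generic amine"):      "[NX3;!$(NC=O)]",   # excludes amides
    ("level 4", "H-bond acceptor N"):  "[#7;X2,X3;!$([N+])]",
}

mol = Chem.MolFromSmiles("Nc1ccc(cc1)C(=O)Nc1ccccn1")
for (level, name), smarts in mini_dk.items():
    patt = Chem.MolFromSmarts(smarts)
    n = len(mol.GetSubstructMatches(patt))
    print(f"{level:8s} {name:20s} count = {n}")
```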
To further verify the validity of the curated list of DKs, we developed a unit-testing protocol in Pipeline Pilot [17]. Specifically, we converted the DKs from levels 0 and 1 to explicit connection tables in mol2 format and verified that the SMARTS string encoding each DK is able to exactly map the corresponding molecule, as well as fragments comprised in the molecular structure, without duplicating functional groups (Table 2). A second validation step involved DKs corresponding to functional groups described both by a generic SMARTS string (level 3) and by more specific SMARTS accounting for different chemical environments. For each generic SMARTS from level 3 we considered all possible permutations, i.e., we substituted the free valence on the molecule with H, methyl and an aromatic ring; these were then converted into whole molecules, and the substructure searches of both the generic (level 3) and the specific SMARTS (level 2) were performed to demonstrate that both queries are satisfied. As an example, Table 2 reports two molecules encoded by DK of level 0, namely the amino acid tryptophan and nicotinic acid, a natural product. However, they can also be mapped by DKs of level 1 encoding heteroaromatic rings. Therefore, query molecules can be retrieved at different search levels. Notably, DK include functional group descriptors at two different levels: in the first one, a given functional group is described by a generic SMARTS string encoding only its specific atoms and excluding chemically invalid patterns from its environment; then, for the same functional group, more specific SMARTS strings are defined, considering the different classes of substituents (aromatic and/or aliphatic atoms). Table 3 reports an example of these SMARTS strings, describing carbamate derivatives. This design feature of the DK allows the mapping, on each compound, of both the presence of a generic functional group and its specific environment, to better evaluate the similarity between molecules. In contrast, the similarity of molecules containing the same functional group but different substituents would be either overestimated by using only the descriptors of functional groups, or underestimated by considering only descriptors focused on the surrounding atoms. Moreover, the different description levels of DK can be useful depending on whether a given functional group, or the specific fragment of which it is part, should be recognized. Chemical space analysis A detailed comparison of the chemical space covered by structurally diverse libraries of compounds (as described in the Materials and Methods section) was performed by means of the Tree-MAP algorithm (TMAP) [18], which recently proved to have superior interpretability and discriminative power compared to other well-known methods such as t-SNE and SOM. Figure 2 reports the TMAP plots comparing the capability of the DK descriptors in classifying libraries with specific structural characteristics with respect to other structural fingerprints, such as MACCS, ECFC6, ECFP6, PubChem and RDKit. In particular, the DK descriptors are able to better cluster the different compound collections. In fact, several drugs and natural compounds are also commercially available, and several food products can also be classified as natural compounds.
These results can be explained by the presence of several SMARTS that encode common functional groups occurring within these classes of compounds. The TMAP analysis essentially involves qualitative, visual inspection. To quantify the ability of DK to correctly classify the chemical collections, a multi-class classification model was developed (see Additional file 1: Table S2 and Fig. 3). Overall, all descriptor spaces showed good discriminative power, especially for the peptides (SE = 0.99-1.0) and food products (SE = 0.89-0.90). This is expected, as these chemical classes have some distinctive chemotypes which are effectively encoded by the employed descriptors. A drop in performance can be seen when considering the class of drugs, with sensitivity ranging from 0.58 (ECFP) to 0.75 (DK) and rather low precision values (around 0.1). This is due to the fact that commercial and drug compounds are often misclassified, as there is a strong overlap of chemotypes between these two chemical classes. Strikingly, the DK and PubChem 881 showed better sensitivity in classifying the drugs class in comparison with the other descriptors (Fig. 3). This finding suggests that such descriptors, based on pre-defined fragments, are able to perceive the most important aspects of compounds' structures that have a crucial role for classification and retrieval. In the case of DK, whose fragments have been defined a priori with a high degree of coverage of the functional groups and heterocycles present in drugs (i.e., pharmascaffold), they could form a "more compact" descriptor space, because the fragments are precisely represented; this might give them an advantage over descriptors generated automatically from the dataset, which might lose some chemical information. Ligand-based classification models In addition to chemical space mapping, DK are also intended for the development of machine learning models to predict the inhibitory activity against biological targets. For this purpose, we constructed a curated dataset from ChEMBL, comprising compounds with inhibitory activity against 46 targets relevant for toxicity profiling (see the Materials and Methods section and Additional file 3 for more details). Figure 4 depicts the model performance averaged over the 46 modelled datasets. Overall, all employed descriptors showed comparable performances, with an average BA of 0.74 (SD = 0.01), MCC of 0.48 (SD = 0.02), SE of 0.78 (SD = 0.01) and SP of 0.69 (SD = 0.01). DK exhibited performances at least as good as all other descriptor spaces, which underlines their power in encoding key molecular features related to biological activity. Moreover, DK scored the best performance in terms of MCC in 18 out of 46 datasets, followed by 14, 7, 7, 5 for ECFC6, ECFP6, RDKit, PubChem and MACCS. In terms of SE and SP there are some differences for specific datasets (see Additional file 1: Table S3); for instance, this is the case for the target EDNRA (Endothelin-1 receptor type A). When the learning task is related to chemical structures, a single molecular descriptor rarely produces the best performance in all case studies, as each descriptor space encodes its own specific chemical moieties. A possible strategy to overcome individual-descriptor limitations is to construct an ensemble of multiple models trained on different descriptor spaces [19].
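As a rough, hedged sketch of the kind of tree-based gradient-boosting classifier described above (the actual models were trained with the Knime native learner), a scikit-learn equivalent with the stated hyperparameters would look as follows; the data here are random stand-ins for descriptor matrices.

```python
# Hedged scikit-learn analogue of the Knime gradient-boosting setup
# (n_estimators=100, learning_rate=0.1, max_depth=20).  X is assumed to be
# a fingerprint matrix (rows = compounds) and y the chemical-class labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 1064)).astype(float)  # stand-in DK vectors
y = rng.integers(0, 5, size=300)                        # 5 library classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=20, random_state=0)
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```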
We also investigated the influence of the different DK levels on the model's discriminative power by rebuilding the ligand-based models using only specific DK levels. The worst performance (MCC = 0.40) is associated with level 4 DK (Additional file 1: Table S4) and significantly improves (p < 0.05) with the inclusion of higher-level DK (levels 0, 1 and 2). The highest performance is obtained by including all DK levels (MCC = 0.54), which supports the complementarity and synergy of the descriptor levels. We then calculated the frequency distribution of DK among the same dataset of ChEMBL compounds and report the results in Additional file 1: Figure S2. As expected, the most frequently occurring DK are those belonging to level 4, i.e., the simple functional groups such as aromatic carbons, ring counts, H-bond acceptors and H-bond donors. Moreover, there is a notable hotspot of amines, ethers and halogen-containing compounds, which are important reactive groups for the synthesis of drug-like molecules. Lastly, the most common heterocycles found were piperidines, imidazoles, indoles and pyridines. Less represented DK (with a frequency value of less than 5000) were grouped and shown in a single column labeled "other" (Additional file 1: Figure S2). In the search for structural features determining accurate predictions within the 46 modelled datasets, we pooled the most representative DK (i.e., the DK most frequently occurring in predicted actives as well as in correctly predicted actives), which were then mapped onto two example ligands (Fig. 5). The full list of DK and their frequencies among hERG inhibitors is provided in Additional file 6. Besides the DK encoding very general substructures, such as aromatic rings or carbon chains, which thus occur multiple times within a given active ligand, we were able to quickly identify specific functional groups having a key role in ligands' activity against a given target.
With regard to hERG, we could identify some "privileged" DK, such as amine derivatives (93%), and also ethers (53%) and amides (41%). Interestingly, a high percentage of the predicted actives feature a positively charged nitrogen (82%). This is consistent with aliphatic tertiary amines being the most represented group within the active ligands (70%): in fact, it is known that this class of amines is protonated at physiological pH. In contrast, amine derivatives with aromatic substituents are less represented. Another interesting feature is the class of aliphatic/aromatic ethers, as hERG ligands are also characterized by bulky and aromatic scaffolds. It is worth noting that such functional groups can occur multiple times within a molecular structure; for instance, compound CHEMBL1642486 features two substituted ether groups, one "aliphatic/aliphatic" and one "aliphatic/aromatic", with the percentages shown referring to the count of each functional group within the active ligands. Thus, the counts of DK of level 2 (specific classes of ether derivatives) should not be summed. The identified features are consistent with common "hERG pharmacophore models" reported in the literature, involving a basic moiety, which plays an important role in binding to the hERG channel, and aromatic rings able to form π-stacking or hydrophobic interactions within the hERG channel cavity [20]. Hence, the hierarchically interconnected levels of DK allow quick perception of the structural moieties that play key roles in a ligand's activity, and can also be helpful in model interpretation. To further support our findings, we built a decision tree using DK of levels 2 and 3 as descriptors to analyze the dataset of hERG inhibitors. Notably, when employing the more general descriptors, namely DK level 3 (defining, for example, amide, amine and ether derivatives), the model correctly classified only 31% of true active compounds. In contrast, when including a more precise amine representation (defining the substitution levels and the nature of the substituents), encoded by DK level 2, the percentage of true positives greatly increased to 74%, suggesting that more fine-grained DK descriptors are truly able to capture meaningful structure-activity relationships. A graphical representation of the extracted rules is depicted in Additional file 1: Figure S1. Taken together, our results suggest that ML models for activity profiling based on DK show performances as good as models based on other popular 2D molecular descriptors; however, DK provide a more immediate overview of the relevant structural features, allowing one to quickly derive meaningful structure-activity relationships for the analyzed datasets. Drug design applications Molecular similarity and pharmacophore modeling are frequently used approaches in the ligand-based drug design process. By using the molecular fingerprints of known ligands, databases can be screened to find similar molecules. Common structural features of ligands can be found using pharmacophore modeling and then used to virtually screen for molecules with these features. The DK were designed to recognize not only simple functional groups but also fragments that are essential for a molecule's activity against a specific target. They can therefore be useful in substructure searches, but they can also act as a pharmacophoric filter. Databases and libraries of trillions of compounds can be quickly queried to select compounds for acquisition and testing.
Moreover, by mapping the chemical neighborhood of a functional group, DK are also able to predict its reactivity. In particular, DK representing pharmacophoric points can be considered atom-typing descriptors and then used in predictive models of metabolic reaction occurrence, representing the simplest way to encode knowledge-based metabolic rules. Case Study 1: identification of HDAC7 inhibitors In order to demonstrate the ability of DK to describe functional groups and chemical moieties in great detail, and in particular to identify those responsible for a specific biological activity, we present as a case study a screening campaign aimed at the identification of HDAC7 inhibitors, comparing the results, in terms of success rate, between a random library of 26,092 commercial compounds and its DK-based targeted subset. HDACs are key regulators of gene expression in cells and have been investigated as important therapeutic targets for cancer and other diseases [21]. Different subtypes of HDACs appear to play various roles in cells and are associated with specific diseases; therefore, substantial effort has been made to develop subtype-selective HDAC inhibitors. The random library of 26,092 compounds was assembled with the aim of repurposing existing commercially available compounds as HDAC inhibitors. Out of the 26,092 compounds screened in the HDAC7 enzymatic assay, 201 turned out to be active, with a percent inhibition greater than 33%, corresponding to a success rate of 0.77%. The compounds were stratified into different activity classes according to their percent inhibition of HDAC7 activity at 10 μM inhibitor concentration (Additional file 1: Table S5). By applying a knowledge-based approach, codifying through DK the known information related to the zinc-binder functional group characteristic of HDAC inhibitors, we could more easily prioritize compounds from the random library, thus increasing the success rate. For this purpose, we prepared a list of 40 DK (with some examples reported in Table 4) exhaustively identifying all possible known metal binders of metalloproteases. Then, we used this list of 40 SMARTS strings, encoding the metal-binder fragments, as a substructure filter against all the HDAC inhibitors retrieved from Clarivate's Cortellis database (800 molecules). We highlighted the DK identified in the dataset compounds (Table 5) and used them to filter the random library of 26,092 compounds. The recognized structures are not only chemical functional groups but also fragments (consisting of several connected atomic groups) able to bind metals (2-hydroxybenzoic acid, benzene-1,2-diol, etc.). The random library was thereby reduced to 2176 chemical entities, 54 of which turned out to be true actives; Table 6 reports some examples. The hit rate thus increased from 0.77% to 2.5%. In addition, we performed an unbiased similarity search using the binary version of DK as a fingerprint and comparing the results with ECFP6 and MACCS. Using the entire set of DK descriptors without prior knowledge, the success rate stands at 1.78%, versus 1.95% for MACCS and 1.66% for ECFP6. The combination of multiple descriptors led to a worsening of the results, considerably expanding the range of false positives. These findings further confirm the versatility of DK.
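A hedged sketch of this DK-style metal-binder filter, written with RDKit: the two SMARTS patterns and the tiny library are illustrative stand-ins for the 40-key list and the 26,092-compound library.

```python
# DK-style substructure filter: keep only library members matching at
# least one metal-binder SMARTS.  Patterns and SMILES are illustrative.
from rdkit import Chem

metal_binders = [Chem.MolFromSmarts(s) for s in (
    "[OX2H]c1ccccc1C(=O)[OX2H,OX1-]",   # salicylic-acid-like chelator
    "C(=O)N[OX2H]",                      # hydroxamic acid
)]

library = [Chem.MolFromSmiles(s) for s in (
    "ONC(=O)CCCCCCC(=O)Nc1ccccc1",       # SAHA-like hydroxamate
    "CCOc1ccccc1",                        # inert decoy
)]

targeted = [m for m in library
            if any(m.HasSubstructMatch(p) for p in metal_binders)]
print(f"{len(targeted)} of {len(library)} compounds pass the filter")
```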
Thus, DK were able to exhaustively describe the chemical space of metal-binding fragments and to recognize, in the targeted library, the moieties required for HDAC7 inhibition, also through an unbiased similarity-search approach. This approach could be useful for further steps, such as selecting analogs with similar structural features. Case Study 2: Prediction of drug metabolism and toxicity Previous studies have demonstrated that atom typing can be successfully utilized to predict the metabolic reactions a given substrate can undergo, as well as the atom(s) undergoing the predicted reactions. The success of atom typing comes as no surprise when considering that several predictive methods were based on sets of knowledge-based metabolic rules, and atom typing can be seen as the simplest way to translate these rules into computationally tractable descriptors. To evaluate their performance, the DK were used to predict the occurrence of three conjugation reactions, which play a key role in determining drug toxicity by reducing the formation of reactive electrophilic metabolites (namely the conjugations with glucuronic acid, the sulfate anion and glutathione). Moreover, they were also utilized to predict mutagenicity, which is often also caused by the formation of reactive species. In detail, for each reaction the analysis was based on a dataset with equal numbers of known substrates and non-substrates. The datasets were generated from the MetaQSAR database [22], focusing on first-generation metabolic reactions and considering the molecules in their ionized state. Our study entailed a comparison of results obtained with DK and Kier-Hall E-state atom types, respectively. Conceivably, better results might be obtained by considering additional atom types and/or fingerprints. However, such an extended comparative analysis goes beyond the scope of this study, and the comparison here was focused on the Kier-Hall E-state atom types, since they had proven satisfactory in published predictive models based on the same MetaQSAR datasets [23,24]. Table 7 shows the performances of the classification models as obtained by the Random Forest (RF) algorithm based on the two sets of descriptors for the three considered conjugations. The obtained performances underline the greater ability of the DK in encoding the substructures involved in the considered metabolic reactions. Conceivably, the enhancement is rather limited in the cases in which the KH atom types provided remarkable results (as seen for sulfonation). Gratifyingly, the enhancement increases in the cases in which the KH atom types afforded poorer results, suggesting that DK are particularly effective under the most challenging conditions.
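A hedged scikit-learn sketch of the Table 7 setup (the original models were built with the Random Forest implementation in Weka, using default settings), together with the feature-importance ranking discussed next; the data are synthetic stand-ins for the MetaQSAR-derived substrate datasets.

```python
# Random Forest on descriptor count vectors, followed by the feature-
# importance ranking used to spot reactive moieties.  Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.poisson(1.0, size=(400, 50)).astype(float)   # 50 mock DK counts
# make feature 7 (a hypothetical "aromatic hydroxyl" key) drive the label
y = (X[:, 7] + rng.normal(0, 0.5, 400) > 1.0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("top-5 most important descriptor indices:", ranking[:5])
```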
To better appreciate the enhanced ability of DK to capture the reactive moieties, a feature importance analysis was performed. As expected, sulfonation and glucuronidation share the most important features, which correspond to aromatic and aliphatic hydroxyl groups as well as to aromatic rings. Nevertheless, the two types of hydroxyl groups play different roles in the two biotransformations, since aromatic hydroxyl functions play a more relevant role in sulfonation, while aromatic and aliphatic hydroxyl groups show comparable relevance for glucuronidation. This difference is reflected in the other selected descriptors, since DK encoding rings, heterocycles and, in particular, N-containing heterocycles are included only in the model predicting sulfonation. In contrast, methyl groups, carboxylic acids and aliphatic chains play a favorable role in determining glucuronidation. Conceivably, the reaction with glutathione depends on largely different DK. In detail, the H-bonding atoms play a relevant role, reasonably because they encode the presence of electrophilic groups. As seen above, aromatic rings also have a significant positive role. Notably, the presence of positively charged groups plays a marked negative role for all considered biotransformations, probably because polar and ionized molecules are generally poor substrates for all metabolic reactions. Clearly, these predictive models could be further enhanced by including stereo-electronic descriptors able to encode the intrinsic reactivity of each atom. Nevertheless, our results emphasize the possibility of extensively applying the DK to the prediction of the metabolic fate of a given molecule by finely recognizing the potentially reactive atoms. Further studies could also assess whether DK can be similarly applied to predict the general organic reactions a given compound can undergo. Webserver The protocol to generate the DK is publicly available within the webservice https://dompekeys.exscalate.eu (Fig. 6). The user either uploads a file containing one or more compounds, encoded as canonical SMILES, or inputs a SMILES string, to generate an output table containing, for each compound (one compound per row), the count of all the identified DK, each one reported as a separate column. Fragments corresponding to the DK present in each compound are highlighted in the molecule representation for visual analysis. The output table can be downloaded as a .csv file and can subsequently be used in combination with the Molecular Anatomy approach, for the efficient analysis of compound datasets as well as for ML applications.
Conclusions In this work we report the DK, a substructure-based descriptor that accurately describes the key characteristics of compounds belonging to different chemical classes including, but not limited to, compounds of pharmacological interest, natural products and food components. The DK are an integral and essential part of EXSCALATE, Dompé's end-to-end drug discovery platform. The DK are based on a comprehensive and curated list of functional groups, built using the robust SMARTS language and organized in different levels of complexity to precisely represent molecular structures. In fact, the DK provide a very fine-grained molecular topology: for each group of interest, DK also describe its chemical environment, such as the presence of aromatic or aliphatic substituents, allowing for the formulation of very precise queries. Consequently, they are very well suited to compare and assess the diversity of compound libraries, and to efficiently perform substructure/similarity searches, virtual screening campaigns and chemical space mapping. For instance, in the search for HDAC inhibitors illustrated in case study 1, the DK increased the hit rate of the virtual screening campaign by prioritizing compounds bearing the chemical moieties responsible for a specific biological effect, namely metal binding. One key advantage of DK, besides their broad applicability, is that they can be rapidly precomputed and used to index large databases of compounds, whereas fingerprint-based indexing results in redundant computations and storage-space issues, and returns results that often have few or no substructures in common. By their very nature, the DK are also easily interpretable, a significant advantage in rational drug design efforts. Lastly, DK showed adequate performance in machine learning models predicting compounds' chemical class and activity, in several cases outperforming other state-of-the-art descriptors, as well as in predicting the occurrence of crucial metabolic reactions, namely the conjugations with glucuronic acid, the sulfate anion and glutathione, and mutagenicity. As detailed in case study 2, DK outperformed the KH descriptors in the most challenging predictions, thus proving to be well suited to recognizing the potentially reactive atoms and estimating the metabolic fate of a compound or its possible toxicity. As part of this study, we made freely available the full list of DK (1064 SMARTS, annotated with hierarchical levels, see Additional file 5), a Knime protocol (see Materials and Methods and Additional file 4) to generate DK, as well as a webservice at https://dompekeys.exscalate.eu, fully integrated with Dompé's Molecular Anatomy approach for the generation and analysis of hierarchically interconnected molecular scaffolds and frameworks. With the DK approach, we go one step further by enabling the clustering of molecules at different levels of chemical representation, exploiting both the scaffold-based representation encoded by Molecular Anatomy and the substructure-based queries encoded by DK. Taken together, these resources enable retrieval of the most relevant information in compound library analysis, from both the scaffold-based representation and the identification of functional groups. This provides a very thorough and integrated approach that will significantly enhance the speed and quality of the drug discovery process.
Dataset definition Structurally diverse compound libraries of pharmaceutical interest were used as datasets for the chemical space comparison analysis. The whole dataset (Additional file 1: Table S1) collects: (i) "drugs", including the set of safe-in-man drugs, commercialized or under active development in clinical phases; (ii) "peptides", comprising di-, tri-, tetra- and pentapeptides generated by means of the VEGA suite of programs [25]; (iii) "food" and (iv) "natural" products, extracted from the COCONUT database [26]; and (v) commercially available compounds retrieved from various sources such as ZINC [27] and eMolecules. Diverse subsets, corresponding to 10% of the initial datasets, were used for both the commercial compounds and peptides libraries, to balance their size with respect to the other datasets. In particular, the subsets were generated by maximizing their physico-chemical diversity through the fingerprint-based Maximum Dissimilarity method, so as to maintain the same physico-chemical profile as the initial datasets. Duplicates among the libraries were removed by identifying overlap subsets, comprising compounds belonging to more than one library, which are useful to highlight the regions of intersection in the analysis of the chemical space. Machine learning algorithms Concerning the chemical space analysis, to demonstrate the capability of the different descriptor spaces to discriminate chemical classes, a multi-class classification model was trained on the annotated library. The compound libraries were used to train the model to discriminate a given compound's chemical class, i.e., peptides, natural products, food products, drugs and commercial compounds. Tree-based gradient boosting models were trained with the Knime native gradient boosting learner using default settings (i.e., number of trees = 100, learning rate = 0.1, tree depth = 20). Regarding biological activity modelling, inhibitory data were retrieved from ChEMBL for a set of 46 targets relevant for liability profiling during in-vitro drug discovery campaigns (Supplementary Information, Additional file 3). These targets account for a total of 7 liability types: cardiotoxicity, central nervous system toxicity, gastrointestinal toxicity, endocrine disruption, pulmonary toxicity, renal toxicity and immune system toxicity. Experimental inhibitory data were collected from ChEMBL by UniProt identifiers. Only activity values of "IC50", "EC50", "Ki" or "Kd" measured on "human" sources were retained. Inhibitory values were normalized to the negative log unit molar concentration and binned into a two-class classification problem using a cutoff of 6.5 log units (which corresponds to roughly 300 nM). A data record above or below this cutoff was labeled as "active" or "inactive", respectively. This cutoff has been suggested in order to avoid class imbalance problems and bias towards the active class [28]. Compounds' canonical SMILES notation was used to compute the molecular fingerprints. The same above-described settings of the gradient boosting algorithm were used. Models were validated by internal and external validation. For the former, a 70% stratified sampling was used for train and test set definition. For the latter, fivefold cross-validation (iterated 5 times) was used. Performance comparison was carried out by means of standard binary classification metrics, including balanced accuracy (BA), sensitivity (SE), specificity (SP) and Matthews's correlation coefficient (MCC).
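The activity binning described above reduces to a few lines; this sketch assumes the helper name and shows the arithmetic for the 6.5 log-unit cutoff (10^-6.5 M ≈ 316 nM, i.e. roughly the quoted 300 nM).

```python
# Convert a potency in nM to negative log molar units (pActivity) and
# label it against the 6.5 cutoff used for the ChEMBL datasets above.
import math

def label_activity(potency_nM: float, cutoff: float = 6.5) -> str:
    p_activity = -math.log10(potency_nM * 1e-9)   # nM -> M -> -log10
    return "active" if p_activity >= cutoff else "inactive"

print(label_activity(100))    # pActivity = 7.0 -> active
print(label_activity(1000))   # pActivity = 6.0 -> inactive
```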
The metabolism studies were based on the same datasets of first-generation metabolic reactions already utilized to develop the MetaClass tool [24]. The Kier-Hall E-states were computed by the VEGA suite of programs [25] according to [29]. The models were generated by the Random Forest algorithm using Weka and applying the default settings, since these provided the best performances in the previous study [23]. The public Molecular ACCess System (MACCS) structural keys [30], consisting of a dictionary of 166 pre-defined structural fragments, represent a classical descriptor in cheminformatics and were originally designed for substructure search. The extended-connectivity fingerprints (ECFPs) belong to the class of topological fingerprints and were specifically developed for structure-activity modeling [6]. This descriptor encodes the presence of specific circular substructures around each atom in a given molecule up to a certain bond radius. ECFPs are categorized by this parameter; in fact, the maximum diameter is appended at the end of the name: ECFP4 indicates that the maximum diameter is set to 4, whereas ECFP6 denotes diameter 6. Besides the maximum diameter, the other two key parameters are the fingerprint length and the identifier counts. Usually, the length of the bit-string representation is kept at 1024, though a larger length reduces the possibility of bit collisions. The identifier counts define whether each atom identifier in an input molecule is stored only once or multiple times, in case a specific substructural feature is present multiple times. The RDKit topological fingerprints are a binary-based implementation of the Daylight-like fingerprints, in which the atom types are set based on the atomic number and aromaticity (RDKit: Cheminformatics and Machine Learning Software, http://www.rdkit.org). The PubChem 881 structural key is an 881-bit-long fingerprint implemented in PubChem for similarity search and neighboring (ftp://ftp.ncbi.nlm.nih.gov/pubchem/specifications/pubchem_fingerprints.pdf). Validation protocol A Pipeline Pilot protocol was implemented to validate the ability of DK to correctly map structural moieties and pharmacophoric features. In particular, two validation steps were applied: the first verified that the DKs corresponding to structural moieties (levels 0 and 1) were able to retrieve themselves and that no duplicates are found within each level. The validation process was thus iterated on the DKs of levels 0 and 1, by converting them into whole molecules and performing, in parallel, substructure searches of both the single corresponding DK and of the entire list of DKs. This procedure allows visual analysis of the molecular structure corresponding to a given DK and verification of the correctness of each SMARTS string. Moreover, it enables the analysis of molecules mapping more than one DK, excluding overlaps and demonstrating the complementarity between the structural information encoded by DKs belonging to different levels. The second validation step involved DKs from levels 3 and 2. All possible permutations (filling the molecule's free valences with H, methyl and aromatic ring) of each generic SMARTS, conversion into molecules, and substructure searches (using as query both the generic SMARTS of level 3 and the more specific SMARTS of level 2) were accomplished using a custom script.
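The validation logic can be re-created outside Pipeline Pilot; the following hedged Python/RDKit sketch expresses the first validation step (each level-0/1 key, rendered as a molecule, must be mapped by its own SMARTS) with two illustrative entries that are not taken from the real DK list.

```python
# Unit-test-style check: each level-0/1 key, converted into a molecule,
# must be matched by its own SMARTS.  Entries are illustrative stand-ins.
from rdkit import Chem

level01_keys = {
    "imidazole": ("c1cnc[nH]1", "c1cnc[nH]1"),   # (SMILES form, SMARTS query)
    "pyridine":  ("c1ccncc1",   "c1ccncc1"),
}

for name, (smiles, smarts) in level01_keys.items():
    mol = Chem.MolFromSmiles(smiles)
    patt = Chem.MolFromSmarts(smarts)
    matches = mol.GetSubstructMatches(patt, uniquify=True)
    assert len(matches) >= 1, f"{name}: SMARTS fails to map its own structure"
```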
Web interface implementation
The web interface was implemented using LAMP (Linux, Apache, MariaDB, PHP), an open-source web development platform enabling good performance in displaying and handling the user's input and output data. The DK are calculated on the fly through an underlying, completely automated Pipeline Pilot workflow.

Knime implementation
A protocol was implemented in Knime [31] to carry out DK calculation for an input file of compounds in SMILES format. As an example, the ChEMBL dataset employed for biological activity modelling was used as input. The protocol performs DK calculation and counting on the basis of a curated list of 77 representative SMARTS, selected among the most relevant chemical classes and covering all the hierarchical levels.

Fig. 2 Chemical space analysis of structurally diverse libraries (drugs, peptides, natural products, food products, commercial compounds) by means of TMAP using DK in comparison with other descriptors
Fig. 3 Bar chart representation of overall accuracy, sensitivity and specificity for the considered descriptors over the five chemical library classes. Performances are computed in external validation. Where: DK DompeKeys, EC extended connectivity, FC extended connectivity feature invariant, MC MACCS keys, RD RDKit, PC PubChem 881
Fig. 4 Box plot representation of balanced accuracy (BA), Matthews correlation coefficient (MCC), sensitivity (SE) and specificity (SP) evaluated in external and internal validation for the considered fingerprint types over the 46 modelled datasets. Where: DK DompeKeys, EC extended connectivity, FC extended connectivity feature invariant, MC MACCS keys, RD RDKit, PC PubChem 881
Fig. 6 Snapshot of the webserver with the interface for DK generation
Table 1 Key structural information for tucidinostat from DK mapping
Table 3 Chemical environment of the functional groups (description levels 2 and 3)
Table 4 Example HDAC inhibitors and their metal binder groups mapped by DK
Table 5 HDAC inhibitors DK used for the substructure filter
Table 7 Performances of the classification models obtained by the Random Forest (RF) algorithm based on the two sets of descriptors for the three considered conjugations
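A minimal sketch of the Knime-style DK calculation described above: count substructure matches of a curated SMARTS list over input SMILES. The three SMARTS patterns below are generic stand-ins chosen for illustration; the actual list of 77 DompeKeys SMARTS is not reproduced here.

```python
from rdkit import Chem

# Stand-in SMARTS keys; the real DK list covers all hierarchical levels
SMARTS_KEYS = {
    "carboxylic_acid": "C(=O)[OX2H1]",
    "primary_amine": "[NX3;H2][CX4]",
    "aromatic_ring": "a1aaaaa1",
}
PATTERNS = {name: Chem.MolFromSmarts(s) for name, s in SMARTS_KEYS.items()}

def dk_counts(smiles):
    """Return the per-key match counts for one input SMILES (empty if unparsable)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return {}
    return {name: len(mol.GetSubstructMatches(p)) for name, p in PATTERNS.items()}

print(dk_counts("CC(=O)Oc1ccccc1C(=O)O"))  # e.g. {'carboxylic_acid': 1, ...}
```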
9,312.8
2024-02-23T00:00:00.000
[ "Chemistry", "Computer Science" ]
A Pedestrian Evacuation Model with Leaders during the Smoke Dispersion Based on a Social Force Model

Introduction
During recent decades, pedestrian evacuation in emergency cases, such as fires, human stampedes, or overcrowding incidents, has become an important issue. An example of a major incident in Thailand is the Santika Pub fire on January 1, 2009. Sixty-six people were killed, and more than 200 were injured. The deaths were partly caused by the improper design of the building and by the disregard for human safety. Laboratory experiments and real-life data show that smoke can affect pedestrians in two ways [9]. First, smoke can harm a human's health, since it contains poisonous substances. It can cause pedestrians to lose their steadiness so that they are unable to escape from the gases. Second, the visibility range of a human is reduced when the smoke concentration increases. In thick irritant smoke, pedestrians are not able to keep their eyes open for long; their tears run so heavily that they cannot read the words on signs. Therefore, the study of human behaviours and motions during the propagation of smoke is significant: it can be used to reduce casualties under smoke conditions. There are numerous simulation methods for modelling pedestrian dynamics, for example, the social force model [6], the optimal-velocity model [15], the magnetic force model [16], cellular automata models [12], and the discrete choice model [2]. Evacuation is essential when disasters and emergencies are unavoidable. Experimental works reveal that it is very important to have leaders inside a building in an emergency situation [17,21]. Leaders are agents who are trained and have complete knowledge of the inside geometry of a building on fire. They can be distinguished easily by pedestrians and help others during the evacuation procedure. The knowledge gained from the model can help designers to plan buildings with respect to safety issues, which will reduce the losses of both life and property in an emergency. Wang et al. [21] simulated pedestrian evacuation in public places using a multi-agent-based congestion evacuation model. The panic behaviour of agents is incorporated in their model. Their simulations show that the evacuation is more efficient when a virtual leader is added if the exit is partially clogged. Weifeng and Hai [23] applied a cellular automaton model to simulate the human behaviour termed 'flow with the stream' in a large smoke-filled compartment. In their experiments, the effect of leaders is taken into account. The results of their numerical tests show that the effect of leaders on the evacuation is significant: the evacuation in a scenario without leaders is slower than in a scenario with leaders. Other studies that apply the social force model to pedestrian evacuation processes are as follows. Frank and Dorso [3] adopted the social force model to study pedestrian evacuation under limited visibility. In this model, pedestrians have to find a way out under low-visibility conditions; the effect of guiders is not considered. Three kinds of pedestrian behavioural patterns are analyzed: individualistic behaviour, herding-like behaviour, and wall following. They obtained the unexpected result that some low-visibility situations may enhance evacuation performance. Pelechano and Badler [17] developed a multi-agent communication framework for evacuation simulations (MACES).
It combines local motion driven by the social force model. They simulated crowd behaviour under two conditions: agents communicate building route knowledge on the one hand, and agents take different roles such as trained personnel, leaders, and followers on the other hand. They performed 25 simulations using a crowd size of 100 with 0, 25, 50, 75, and 100 percent trained agents. The results show that the evacuation time decreases as the number of trained agents in the environment increases. In reference [22], the effect of leaders on the pedestrian evacuation process is studied based on a modified social force model. Three evacuation strategies are investigated: a situation in which there is no leader, one in which there is a leader nearby, and one in which individuals follow the leader with a certain probability. Their simulations show that the evacuation rate is no more than 30% in the situation without a leader. The herding behaviour gives a slightly better evacuation rate than pedestrians moving alone to the exit. The effect of an increase in the number of guiders on the evacuation time is not obvious in their model. Zhou et al. [25] proposed a hybrid bi-level model to optimize the number, initial locations, and routes of leaders in the evacuation process. The social force model and its modifications are employed to study crowds with leaders in large-scale public places. The initial locations of leaders are generated by the upper-level model. The evacuation routes of leaders are defined by a co-simulation heuristic approach in the lower-level model. Simulation results show the importance of the initial locations of leaders and the improvement of the evacuation achieved by applying a leader coordination mechanism. Their proposed optimal evacuation strategy demonstrated the best evacuation performance. In our recently published articles, we adopted a cellular automaton model and the social force model to study the motions of pedestrians influenced by smoke spreading [11,12]. In these models, the roles of leaders are not considered. Therefore, we extend our previous studies to consider the cases with and without leaders by adopting the social force model. The advection-diffusion equation [20] is applied for the propagation of smoke. The movement of a guider in our model is defined by the solution of the Eikonal equation: the travelling cost to reach a destination, which depends on the pedestrian and smoke density within his visibility. The human behaviour terms 'following the wall' [3] and 'flow with the stream' [23] are also incorporated in our model. The framework of this paper is organized as follows. The social force model with guiders, and the way it is coupled with the rules of 'flow with the stream' and 'following the wall', the advection-diffusion equation, and the Eikonal equation, is demonstrated in Section 2. Then, the numerical methods used to approximate the solutions of the social force model, the Eikonal equation, and the advection equation are presented in Section 3. Numerical experiments and results are shown in Section 4. In the end, discussion and conclusions are presented in Section 5.

Model
We study pedestrian evacuation in domains with one or two exits and with sources of smoke. We assume that the smoke is not harmful to pedestrians' health, but that it affects the visibility range. The effect of guiders on the evacuation is investigated in our model. A guider is a person who is familiar with the geometry of the simulation domain; he knows where the exits are located.
He can lead other pedestrians to the exit although his visibility is limited due to smoke. A microscopic social force model [6] is applied to simulate the individuals' positions and velocities. It exploits the idea that pedestrians' movements rely on their own desire to reach a certain destination as well as on other environmental factors. To simplify the model, all pedestrians are assumed to have eight movement directions, as in references [11] and [23] (see Figure 1). The desired direction of a guider, or of a person who can see the exit, is defined to follow the solution of the Eikonal equation; it depends on the smoke density and on the pedestrian's desired velocity. The movement directions of individuals who are not guiders and do not see any exit follow the psychological human behaviour terms 'flow with the stream' [23] and 'following the wall' [3]. For the dispersion of smoke, the linear advection-diffusion equation is employed. The microscopic social force equations, together with the Eikonal equation, the advection equation, and the human behaviour terms 'flow with the stream' and 'following the wall', are prescribed as follows:

dx_i/dt = v_i(t),    dv_i/dt = f_i^d(t) + Σ_{j≠i} f_ij^soc(t) + Σ_{j≠i} f_ij^ph(t),

with location x_i ∈ ℝ² and velocity v_i ∈ ℝ², i = 1, 2, ⋯, N, where N is the total number of pedestrians. f_i^d(t) is the desire force of pedestrian i at time t. It represents the pedestrian's own desire to reach his destination with a certain desired speed v^d in a given desired direction e^d. It is expressed by

f_i^d(t) = (v_i^d(t) e_i^d(t) − v_i(t)) / τ_i,

where v_i(t) is the actual velocity and τ_i is the relaxation time within which the pedestrian adapts his actual velocity to the intended velocity. e_i^d(t) is the unit vector pointing in the desired direction. For a guider, or a pedestrian who sees the exit, the desired moving direction is assumed to follow the negative gradient of the Eikonal solution, i.e.,

e_i^d(t) = −∇T(x_i) / |∇T(x_i)|,

where T(x_i) is the travel cost of the pedestrian to reach his destination from the point x_i. It is the solution of the Eikonal equation [10]:

F(x) |∇T(x)| = 1,  x ∈ Ω,

where Ω is the simulation domain and T(x) is the arrival time of the front crossing the point x. T(x) is set to 0 for the destination areas. F(x) > 0 is the moving speed of the front and depends on the position x. We set it as

F(x) = U(ρ(x, t)) for x ∈ Ω \ Ω_b, and F(x) equal to a small positive value for x ∈ Ω_b,

where Ω_b represents the areas that are obstructed by obstacles [10] or areas with high smoke density. U is the speed-density function; it describes the relationship between the speed and the density of pedestrians. Many speed-density functions are available; one such function is adopted in our simulations [18], with U_max and ρ_max denoting the maximum speed and density of pedestrians, respectively. R_v is the visibility distance of a pedestrian in a smoke area, and ρ(x, t) is the pedestrian density in a circle with radius R_v. Experiments on human behaviour in fire smoke show that the actual visibility distance for light-reflecting objects can be estimated through the following equation [3,24]:

R_v = c V / (K_m M_s),

where c represents a value that depends on whether the sign is light-emitting or light-reflecting: its value is 8 for light-emitting signs and 3 for light-reflecting ones. V is the volume of the domain where the fire origin is. K_m = 7.6 m²/g is applied for soot produced during flaming combustion of wood and plastics, whereas K_m = 4.42 m²/g is used for soot produced during pyrolysis of these materials.
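A minimal sketch of how a guider's desired direction can be obtained in practice: solve the Eikonal equation F(x)|∇T(x)| = 1 on a grid with the fast marching method (here via the scikit-fmm package) and take e_d = −∇T/|∇T|. The grid resolution matches Table 1, but the walking speed, the exit location and the smoke speed penalty are illustrative values, not the paper's exact configuration.

```python
import numpy as np
import skfmm  # scikit-fmm: fast marching solver for |grad T| * F = 1

h = 0.2                               # grid step (m), as in Table 1
nx, ny = 80, 100                      # 16 m x 20 m room
phi = np.ones((nx, ny))
phi[38:48, -1] = -1.0                 # exit cells: zero level set = front, T = 0 there

speed = np.full((nx, ny), 1.34)       # front speed F(x); free walking speed assumed
speed[30:50, 40:60] = 0.05            # F(x) small in a toy high-smoke-density patch

T = skfmm.travel_time(phi, speed, dx=h)   # arrival-time field T(x)

g0, g1 = np.gradient(T, h)                # gradient components along axes 0 and 1
norm = np.hypot(g0, g1) + 1e-12
e_d = np.stack([-g0 / norm, -g1 / norm])  # desired direction e_d = -grad T / |grad T|

print("T near the smoke patch:", float(T[40, 50]))
print("e_d at cell (10, 10):", e_d[:, 10, 10])
```

Pedestrians then read off e_d at their current grid cell; cells with high smoke density get a large T, so the descent direction steers around them, as discussed for Figure 8 below.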
M_s is the mass of smoke emission and can be calculated through the following equation:

M_s = ε M,

where M is the weight of the burning material and ε is the smoke conversion factor [23]. v_i^d(t) is the desired speed of pedestrian i at time t; it is the speed to which pedestrian i adapts his actual velocity v_i(t). In our model, the desired speed of a pedestrian depends on the pedestrian density within the visibility distance. f_ij^soc(t) is the repulsive social force. It produces a repulsive effect that prevents pedestrians from getting too close to one another, keeping a certain distance between them:

f_ij^soc(t) = A_i exp((r_ij − d_ij)/B_i) n_ij(t) [λ_i + (1 − λ_i)(1 + cos φ_ij(t))/2],

where A_i and B_i are parameters describing the individual interaction strength and range. d_ij = |x_i − x_j| is the distance between the centres of mass of pedestrians i and j, r_ij = r_i + r_j is the sum of the pedestrians' radii r_i and r_j, and n_ij(t) = (x_i(t) − x_j(t))/d_ij(t) is the normalized vector pointing from pedestrian j to pedestrian i. λ_i is a value in the range [0, 1], and λ_i < 1 reflects an anisotropy effect: the situation in front of individual i has more impact on its behaviour than the situation behind. cos(φ_ij) = −n_ij(t) · e_i(t), where e_i(t) = v_i(t)/|v_i(t)| is the direction of motion of pedestrian i, and φ_ij(t) denotes the angle between the direction of motion of pedestrian i and the direction to pedestrian j. f_ij^ph(t) is the physical interaction force. It is applied to separate two persons when they are in physical body contact:

f_ij^ph(t) = k_n H(r_ij − d_ij) n_ij + k_t H(r_ij − d_ij) Δv_ji^t t_ij,

where k_n H(r_ij − d_ij) n_ij is a body force counteracting body compression and k_t H(r_ij − d_ij) Δv_ji^t t_ij is a sliding friction force for relative tangential motion. H is the Heaviside function. t_ij is the unit tangential vector, orthogonal to n_ij, Δv_ji^t = (v_j − v_i) · t_ij is the tangential velocity difference, and k_n and k_t are the normal and tangential constants, respectively.

Table 1. Parameters used in all simulations: human-human range of repulsive interaction B_i = 0.21 [13,14]; contact distance r_ij = 0.5 [11,13,14]; anisotropic parameter λ_i = 0.61 [11,13,14]; body force coefficient k_n = 0.1 [11,13,14]; diffusion constant κ_d = 0.05 [11,13,14]; space grid size in y, Δy = 0.2 [11]; time step size Δt = 0.02 [11,13].

For the dispersion of smoke, the following advection-diffusion equation [20] is applied for the smoke density ρ_s(x, t):

∂ρ_s/∂t + w · ∇ρ_s = κ_d Δρ_s + Q(x, t) in Ω,

with Dirichlet boundary conditions on ∂Ω. w = (w_1, w_2) ∈ ℝ² is the velocity field of the smoke, and κ_d > 0 is the diffusion constant. We suppose that the smoke source emits gas at a constant rate Q_c [g/s] from a single source point c_s = (x_s, y_s). Therefore, the source term is written as

Q(x, t) = Q_c δ(x − c_s),

where δ is the Dirac delta function: δ(x) = ∞ for x = 0 and δ(x) = 0 otherwise, normalized so that its integral over Ω equals 1. For an individual who is not a guider and does not see any exit, the movement direction is determined by the rules of 'flow with the stream' and 'following the wall', as in references [11] and [23]. This operates as follows. 1. At time t, check whether there is a guider within his visibility. If there is, he follows the guider; otherwise, proceed to the next step. If there is more than one guider within his visibility, he randomly selects one of them to follow. 2. Check whether he sees any wall; if so, he follows the nearest wall, turning left or right with probability 0.5. 3. Based on the state at time t − 1, count the number of individuals within his visibility and divide them into groups according to their movement directions.
There are eight possible movement directions, as defined in Figure 1; hence, the maximum number of groups is also eight. He follows the leading group, i.e., the group containing the most pedestrians moving in the same direction. If there is more than one leading group, one of them is chosen randomly. The procedures for a pedestrian to follow a guider, a wall, or a leading group are as follows. 1. The target can be a guider, a wall, or a leading group. 2. With probability α he gives up following the target and moves along a randomly selected direction; with probability (1 − α) he follows the target. 3. If he decides to follow the target, with probability β he moves towards the target, and with probability (1 − β) he moves along the movement direction of the target. From a qualitative study, as stated in [23], α is set to 0.2 and β to 0.3. In the case that a pedestrian is near a wall and his movement direction would lead him into the wall in the next time step, he changes his direction randomly to avoid encountering the wall, as shown in Figure 2: red arrows refer to movement directions that lead into the wall, whereas blue arrows refer to possible movement directions leading away from this border. If there is more than one movement direction giving the minimum angle, one of them is chosen randomly. Algorithm 1 is used to update an individual's position and velocity in each time step.

Numerical Methods
In this section, we present the numerical methods that are adopted to approximate the solutions of the social force model (1)-(2), the Eikonal equation (5), and the advection-diffusion equation (14).

3.1. Solving the Social Force Model. We apply the two-stage second-order Runge-Kutta method to approximate the solution of the social force model. To apply this method, we first write equations (1) and (2) as du/dt = f(u, t), where u(t_0) = u_0 is the initial condition. We generate the equidistant grid Ω_t with respect to time t as Ω_t = {t_k : t_k = kΔt, k = 0, 1, 2, …, M} and Δt = 1/M. The two-stage second-order Runge-Kutta method then advances the solution as

k_1 = f(u_k, t_k),  k_2 = f(u_k + Δt k_1, t_k + Δt),  u_{k+1} = u_k + (Δt/2)(k_1 + k_2),

where u_k = u(t_k), u_{k+1} = u(t_{k+1}), and t_{k+1} = t_k + Δt. The solution u in the next time step is obtained from equation (22); a sketch of this update is given at the end of this section.

Table 4: Average number of evacuees over ten trial runs of 100 individuals with 3% guiders, with one and two smoke sources. A single smoke source is located in the middle of the room; two smoke sources are placed in the middle of the room and in front of exit 1. The simulation domain is a room with two exits as in Figure 3(b).

Solving the Eikonal Equation. There is quite a number of numerical methods available to approximate the solution of the Eikonal equation, for example, the fast marching method [19], the fast marching level set method [19], the fast sweeping method [5], and the fast iterative method [8]. In our experiments, the fast marching method is applied in all simulations; details of this method can be reviewed in reference [19]. The operator splitting method is applied to approximate the solution of the advection-diffusion equation (14). This method treats the two-dimensional advection-diffusion equation in the x-direction and the y-direction separately over two time steps. For details, we refer to references [11] and [12]. The convergence of this method is shown in reference [11].
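The sketch below implements the two-stage second-order Runge-Kutta (Heun) update for the system dx_i/dt = v_i, dv_i/dt = f_i. For brevity the right-hand side keeps only the desire force and the circular repulsion term A exp((r_ij − d_ij)/B); B_i and r_ij follow Table 1, while τ, A and the desired speed are illustrative assumptions.

```python
import numpy as np

TAU, A, B, R = 0.5, 2.0, 0.21, 0.25   # tau and A assumed; B_i and r_ij = 2R from Table 1

def rhs(x, v, e_d, v_d):
    """Right-hand side f_i = desire force + simplified circular repulsion."""
    f = (v_d * e_d - v) / TAU
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                d = x[i] - x[j]
                dist = np.linalg.norm(d) + 1e-12
                f[i] += A * np.exp((2 * R - dist) / B) * d / dist
    return f

def heun_step(x, v, e_d, v_d, dt=0.02):
    """Two-stage second-order RK: average the slopes at t_k and at the predictor."""
    f1 = rhs(x, v, e_d, v_d)
    x_pred, v_pred = x + dt * v, v + dt * f1
    f2 = rhs(x_pred, v_pred, e_d, v_d)
    return x + 0.5 * dt * (v + v_pred), v + 0.5 * dt * (f1 + f2)

x = np.array([[1.0, 1.0], [1.5, 1.2], [2.0, 0.8]])   # three pedestrians
v = np.zeros_like(x)
e_d = np.tile([1.0, 0.0], (3, 1))                    # desired direction: towards +x
for _ in range(100):                                  # 2 s of simulated time, dt = 0.02
    x, v = heun_step(x, v, e_d, v_d=1.0)
print(np.round(x, 2))
```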
Numerical Experiments and Results
We perform numerical experiments of pedestrian evacuation during smoke dispersion, in cases with and without guiders, in a room of size 16 m × 20 m. We consider the simulation domain with one or two exits. The width of the exits is set to 2 m, which is enough to allow pedestrians to escape simultaneously. The exit is located on the right side of the room for the one-exit room (see Figure 3(a)). For the two-exit room, the exits are placed at the bottom and on the right side of the room; they are labelled Exit 1 and Exit 2, respectively (see Figure 3(b)). Crowds of size 50, 100, and 200 individuals with 0%, 1%, 3%, and 5% guiders are considered in our study. The process starts with pedestrians randomly distributed throughout the room at the initial time. Each individual is assumed to have eight movement directions, numbered d_1 to d_8, at each time step, as in reference [11] (see Figure 1). The initial velocity of an individual is selected randomly from the eight movement directions. For the studied examples, we assume that 1 kg of polystyrene is burned in the flame inside the experiment room. The smoke conversion factor of polystyrene is assigned the value 0.15 as in reference [23]. By equation (10), we obtain the mass of smoke emission in the room: M_s = εM = 0.15 × 1000 g = 150 g. The pedestrian's visibility distance during the smoke dispersion is calculated through equation (9), giving R_v = 3.37 m. In reality, the visibility range of an individual is not constant: it changes all the time depending on the burning rate of the material. Therefore, we assume that the visibility range of an individual decreases linearly from 3.37 m to 2 m over the given simulation time for a single source of smoke. For the smoke dispersion, the smoke density at the source point is relatively high at the initial time, and a constant smoke density is emitted subsequently. At each time step, the velocity field (w_1, w_2) of the convection-diffusion equation (14) is assumed to vary on the interval [−0.5, 0.5]. Ten trial runs are executed for each example, and their average is used. The computations are conducted on an HP Intel(R) Core(TM) i7-7700 CPU, 3.6 GHz. We implement all programs in MATLAB R2023a. The parameters used in all simulations are displayed in Table 1.

Experiment 1. In the first experiment, we consider the evacuation process of 100 pedestrians with 0% and 3% guiders. The simulation domain is a room with one or two exits: the one-exit room is set up as in Figure 3(a), and the two-exit room as in Figure 3(b). The entire evacuation time period of a simulation is set to 50 s. The results of the first experiment are displayed in Table 2. The pedestrian evacuation process in a room with two exits yields a higher average number of evacuees than the process in a room with one exit. This coincides well with real situations in which individuals have more options to evacuate from the room. To accelerate the evacuation process, it is better to have a room with two exits than a room with one exit. Our results are consistent with the results of Aik and Choon [1]. Then, we consider the situations with 0% and 3% guiders. Both the one- and two-exit rooms give similar results: the average number of evacuees is higher with 3% guiders than without guiders. In the presence of guiders, the average number of evacuees is rather high in the domain with two exits compared with the domain with one exit. The plot of the results of the first experiment is shown in Figure 4.
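As a worked check of equations (9)-(10) as quoted above, the computation below reproduces the visibility values. A 4 m ceiling height is an assumption introduced here because it reproduces the 3.37 m figure given in the text; the paper does not state the room height explicitly.

```python
# Mass of smoke emission, equation (10): M_s = eps * M
M = 1000.0          # burning polystyrene, g
eps = 0.15          # smoke conversion factor [23]
M_s = eps * M       # = 150 g per smoke source

# Visibility, equation (9): R_v = c * V / (K_m * M_s)
c = 3.0             # light-reflecting signs
K_m = 7.6           # m^2/g, soot from flaming combustion
V = 16 * 20 * 4.0   # room volume in m^3 (4 m ceiling assumed)

print(round(c * V / (K_m * M_s), 2))      # 3.37 m for one smoke source
print(round(c * V / (K_m * 2 * M_s), 2))  # 1.68 m for two smoke sources
```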
Experiment 2. We investigate numerical experiments of 100 individuals with 0% and 3% guiders in a room with two exits (see Figure 3(b)). Different evacuation time periods are considered: 20 s, 30 s, and 50 s. The results of these experiments are presented in Table 3. Apparently, the average number of evacuees increases as the simulation time period increases, both with 3% guiders and without guiders: more evacuees can leave the room given a longer time period. For all evacuation time periods, the average number of evacuees with 3% guiders is higher than without guiders. Even when the simulation time period is short, guiders remain important to the evacuation process; the average number of evacuees increases in the presence of guiders. The comparison plot of the average number of evacuees for different simulation time periods is shown in Figure 5.

Table 5: Average number of outside pedestrians over ten trial runs. 50, 100, and 200 individuals with 0%, 1%, 3%, and 5% guiders are considered in the simulations. The simulation domain is a room with two exits as in Figure 3(b). The entire time period of a simulation is 50 s.

Experiment 3. We perform the evacuation process of 100 individuals with 3% guiders in a room with two exits. One or two sources of smoke are considered in this experiment. A single source of smoke is placed in the middle of the room; two smoke sources are located in the middle of the room and in front of exit 1 (see Figure 6). The time period of a simulation is set to 50 s. In this experiment, the visibility range of a pedestrian is assumed to be constant, both with one and with two sources of smoke. It is calculated through equation (9): we obtain a visibility range of 3.37 m for one smoke source and 1.68 m for two sources of smoke. Table 4 displays the results of this experiment. A higher average number of evacuees is obtained in the domain with a single smoke source compared with the case of two smoke sources. Two sources of smoke produce more smoke density, which reduces the visibility of individuals; within a small visibility range, individuals have less chance to see an exit, see a guider, or move with others. The comparison plot of the average number of evacuees with one and two smoke sources is presented in Figure 7. Movements of individuals in the one- and two-smoke-source situations at times 1 and 5 s are demonstrated in Figure 6. Small groups are formed and observed in the case with two smoke sources. Figure 8 shows contour plots of the solution of the Eikonal equation in simulations of 100 pedestrians during the propagation of smoke at t = 5 s: Figure 8(a) for one source of smoke, and Figure 8(b) for two sources of smoke. From the plots, we see that the travelling time to reach a destination is very high in regions where the smoke sources or the wall grids are located. Hence, guiders, and individuals who see an exit or walls, move away from these areas. This is because the moving speed F(x) in equation (6) is assigned a small value in areas with high smoke density or in wall grid regions.

Experiment 4. In this experiment, crowds of size 50, 100, and 200 individuals with 0%, 1%, 3%, and 5% guiders are considered. The simulation domain is a room with two exits (see Figure 3(b)). The smoke source is located in the middle of the room. The entire evacuation time period is set to 50 s.
Table 5 displays the average number of outside pedestrians for 0%, 1%, 3%, and 5% guiders. It shows that the average numbers of outside pedestrians with and without guiders are not very different.

Table 6: CPU time of ten trial runs of 50, 100, and 200 individuals with 0%, 1%, 3%, and 5% guiders in simulations. The simulation domain is a room with two exits as in Figure 3(b). The entire time period of a simulation is 50 s.

The comparison plots of the average number of outside pedestrians for crowds of size 50, 100, and 200 with 0%, 1%, 3%, and 5% guiders are shown in Figure 9. From Figure 9(a), the average number of outside pedestrians with and without guiders is approximately the same at the beginning of the simulation, i.e., at times 0-5 s. After that, the case of 5% guiders provides the highest average number of outside pedestrians. At the end of the given simulation time, the average number of outside pedestrians is roughly the same in all cases. In the simulations with 100 pedestrians (Figure 9(b)), the average numbers of outside pedestrians are similar with and without guiders at the beginning of the simulation, i.e., at times 0-3 s. After that, the average number of outside pedestrians with guiders is clearly higher than without guiders. From 3 to 25 s, the average number of outside pedestrians with 3% and 5% guiders is higher than with 1% guiders. From 25 s until the end of the given time period, the average number of outside pedestrians is roughly the same with 1% and 5% guiders; it is highest with 3% guiders. This result agrees with reference [22]: when the number of leaders exceeds a certain value, the effect of increasing the number of guiders on the evacuation time is not evident, and increasing the number of guiders does not always increase the number of evacuees. When there are 200 individuals in the experiments (Figure 9(c)), the significance of having guiders in the simulations is evident: the difference in the average number of outside pedestrians between the cases with and without guiders is clearly observed in the plot. From 5 s until the end of the simulation period, the average number of outside pedestrians increases as the number of guiders increases; it is highest with 5% guiders. From Figure 9, we can conclude that all case studies yield similar averages of outside pedestrians at the beginning of the simulations. This is because the visibility distance of an individual is large at the beginning: pedestrians who are near, or can see, the exit can evacuate the room without difficulty, so guiders have no effect on the evacuation at the beginning of a simulation. As time increases, the visibility range of an individual is reduced, and pedestrians who cannot see any exit cannot evacuate easily. If they are near a guider, they follow him, and the guider can lead them to the exit; this increases the number of outside pedestrians. When there is a large number of individuals in the experiments, the effect of guiders on the evacuation is clearly dominant. Table 6 shows the computation time of ten trial runs of 50, 100, and 200 pedestrians. The plot of the computation times for 50, 100, and 200 pedestrians with 5% guiders is displayed in Figure 10.
It is seen that as the number of pedestrians increases, the computation time increases exponentially. Figure 11 shows the footprints of five chosen pedestrians out of the 200 pedestrians in the case with 5% guiders. Black, blue, and cyan are the footprints of guiders, whereas red and green are the footprints of individuals who are not guiders. The initial positions of the black, blue, cyan, red, and green individuals are (13.5292, 8.4356), (12.2952, 4.1928), (0.8082, 14.9861), (18.2861, 1.5468), and (15.9470, 3.9792), respectively. It can be seen that the footprints of the guiders point directly to the exit, since they know the geometry of the room well and know where the exits are located; hence, they can move directly to the exit. On account of the limited visibility, the red and green pedestrians cannot find the exit directly. They follow others by the rules of 'flow with the stream' and 'following the wall'. Their traces overlap, and they fail to evacuate the room within the given period of time. Movements of 200 pedestrians during smoke spreading at times 0, 5, 20, and 40 s, in the case without guiders, are demonstrated in Figures 12(a), 12(c), 12(e), and 12(g). At the initial time, pedestrians are randomly distributed throughout the room, and the initial movement direction of an individual is chosen randomly from the eight movement directions. At time 5 s, several groups of individuals are observed. Individuals who cannot see the exit move in the direction determined by the rule of 'flow with the stream'; the movement directions of the pedestrians in each group point in roughly the same direction. When the crowds are near walls or can see them, they follow the wall; the human behaviour term 'wall following' is thus also observed in our model (see Figure 12(e)). The crowds move by the rule of 'flow with the stream' until they see an exit and move out. Because of the limited visibility, one group of individuals cannot find the exit directly and fails to move out of the room within the given period of time.

Discussion and Conclusions
In this research, we consider individuals' movements during smoke dispersion in cases with and without guiders. The human behaviour terms 'flow with the stream' and 'following the wall' are incorporated in our model. Our model is based on the social force model, which is applied for the pedestrians' motions. It is coupled with the Eikonal equation and the advection-diffusion equation: the Eikonal equation is used to guide the movement direction of a guider or of a pedestrian who can see the exit, and the advection-diffusion equation is employed for the propagation of smoke. Our experiments show that guiders are important for evacuation, especially when there is a large number of pedestrians in the simulation. They can lead others around them to the exit and therefore increase the number of evacuees. This result is consistent with references [17], [22], and [23] for the case of a large number of individuals. The average number of outside pedestrians with guiders is higher than without guiders. For a small number of pedestrians, the impact of guiders on the evacuation time is not obvious in our model. The effect of an increasing number of guiders leading to an increase in the number of evacuees is obtained when there are 200 pedestrians in the experiment; for 100 individuals, this effect is not clear.
This gives a similar result to that reported in reference [22]. Considering the simulation domains with one and two exits, the number of evacuees in a room with two exits is substantially higher than in a room with one exit. This result agrees with the work of Aik and Choon [1]. Regarding safety, it is preferable to build a room with two exits rather than one. This experiment confirms that guiders are important for evacuation in domains with both one and two exits. In the study of different simulation time periods, the average number of evacuees increases in the presence of guiders. With an additional smoke source, the visibility range of an individual is reduced, which leads to a decrease in the number of evacuees. The human behaviour effects of clogging [6,7], 'flow with the stream' [23], and 'following the wall' [3] are also observed in our model. In further studies, we may consider the effect of the initial locations of guiders on the evacuation process.

Conflicts of Interest
The author(s) declare that they have no conflicts of interest.
8,005.2
2023-04-20T00:00:00.000
[ "Physics" ]
Development of software product for processing of operating parameters of reverse osmosis systems for the purpose of mathematical determination of water chemistry management methods

The article considers the problems of enterprises using baromembrane water treatment technologies in their technological processes. At present, the basic operational data required to monitor the operation of the equipment and to further analyze the efficiency of its use are collected manually, with the collected parameters recorded in paper operational logs. To avoid the entry of erroneous readings and to reduce the labour intensity of manual data collection, it is proposed to develop and implement a software product that normalizes the parameters of the reverse osmosis installation by recalculating the operational indicators with correction factors. In addition, the developed software product plots the normalized performance, selectivity and pressure differences of reverse osmosis systems. The program then tracks critical deviations of the calculated normalized parameters and signals to the user the need for corrective actions. The article presents a study of the operation of a membrane plant over the life cycle of its membrane elements (3 years). The results of studying these processes with the help of the developed software product show a high efficiency in identifying possible problems in the operation of reverse osmosis plants at early stages, such as the formation of deposits and contaminants on membrane elements, which will significantly increase the service life of membrane elements in various industries.

Introduction
At present, at various enterprises using baromembrane water treatment technologies in technological processes, the collection of the basic operational data needed to monitor the equipment and to analyze the efficiency of its use is carried out manually, with the collected parameters recorded in paper operational logs. As a rule, this leads to:
- errors and typos related to the human factor;
- labour-intensive analysis of the obtained data, since all parameters must be recalculated to normalized values, taking correction factors into account, before comparison;
- complexity of operation, because it is difficult to quickly detect changes in operating parameters and make the necessary decisions on further operation (for example, to send the plant for chemical washing).

Experience with such water treatment equipment shows that chemical washing performed on time significantly extends the service life of the membrane elements [1]. In addition, the studies carried out showed that most optimization measures at enterprises operating reverse osmosis plants mainly consist of the selection of chemical reagents for washing. We propose an integrated approach that includes:
- analysis of operational data for the purpose of computer prediction of processes, training a model to identify the need for flushing of the reverse osmosis installation, and calculating the duration and number of flushes;
- laboratory monitoring;
- introduction of an automated data collection and control system.
Materials and methods
To solve this problem, it is proposed to implement a software product for processing the operational parameters of reverse osmosis plants. In the first step, the user enters operational data into the program's user interface. The possibility of entering erroneous information is excluded by predefined boundary conditions on the inputs. After that, the program analyzes the collected data, builds the corresponding plots and provides recommendations for the further operation of the reverse osmosis plant: in particular, whether or not the membrane elements need to be washed or replaced. The remaining service life of the membranes is also reported in this mode of operation. Another problem in the operation of baromembrane plants concerns chemical washing, which must be carried out when one of the following conditions is reached:
- reduction of plant productivity by 10-15%, taking into account temperature correction at constant pressure;
- increase of the membrane unit resistance by 10-15% while maintaining constant productivity;
- decrease of the membrane selectivity by 10-15%.

In addition, chemical washing can be carried out after a given period of operation of the plant, which is determined experimentally [2-4]. A study of processes related to improving the efficiency of water treatment suggests the need to normalize the operating parameters of the plant, because this procedure makes it possible to identify potential problems at early stages (for example, the formation of deposits or contaminants), provided such normalized data are recorded daily [5]. It should be borne in mind that corrective actions are much more effective if implemented early; a sketch of the corresponding alarm logic is given after this section.

Results
With the help of the developed software product, a study of the performance of a membrane plant over the life cycle of its membrane elements (three years) was carried out. This plant is operated at a thermal power station in the Irkutsk region. Plots of the normalized productivity, selectivity and pressure differences were built; these are shown in Figures 1-3. At the second stage of the research, a laboratory reverse osmosis plant was connected to the industrial plant, and the plants operated in parallel on the same water. The developed software product also calculates the required flow rates of feed water and permeate, the specific conductivity of permeate and concentrate in all nodes of the circuit, and the amount and percentage of wastewater produced. In addition, the software product determines and calculates the optimal operating modes of reverse osmosis plants, which reduces the amount of wastewater generated. The program can be adapted to specific water treatment schemes in different industries.

Figure 3. Change of the normalized selectivity over the operation period of the baromembrane system.

To determine the qualitative and quantitative composition of the contaminants of the membrane elements, a visual inspection was carried out, which showed strong contamination of the ends of the element with a dark brown mass [6-8]. Opening the element revealed average contamination of the membrane surface with a loose clay-like substance with a moisture content of 85%; the total amount of contaminants in the element in terms of dry residue was 6.7 g. The chemical analysis of the contaminants of the membrane element gave the results shown in Table 1.
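A minimal sketch of the alarm logic described above: compare today's normalized parameters against the initial (reference) values and flag a chemical wash when any deviation exceeds 10%, the lower end of the 10-15% band. The reference values, field names and example readings are illustrative placeholders.

```python
# Reference (initial) normalized parameters; values are illustrative
REFERENCE = {"permeate_flow": 10.0, "selectivity": 99.0, "pressure_drop": 1.2}

def needs_washing(normalized, threshold=0.10):
    """Return alert messages for parameters that drifted beyond the threshold."""
    alerts = []
    for name, ref in REFERENCE.items():
        rel = (normalized[name] - ref) / ref
        # Flow and selectivity degrade downwards; pressure drop degrades upwards
        degraded = rel >= threshold if name == "pressure_drop" else rel <= -threshold
        if degraded:
            alerts.append(f"{name}: {rel:+.0%} vs initial -> schedule chemical washing")
    return alerts

today = {"permeate_flow": 8.7, "selectivity": 98.9, "pressure_drop": 1.25}
for msg in needs_washing(today):
    print(msg)   # permeate_flow: -13% vs initial -> schedule chemical washing
```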
Discussion
In industrial plants, chemical washes are usually performed after a given time interval, since in practice it is quite difficult to detect a change in operational parameters in time. This is because the parameters of a baromembrane system are influenced by the composition of the source water, the pressure of the source water, the temperature, and the degree of concentration [9]. For example, a 4 °C drop in the temperature of the source water will reduce the permeate flow by about 10%; however, this is considered normal. In order to distinguish between such normal phenomena and changes in operating parameters due to contamination or other problems, the measured permeate and salt flow rates must be presented in so-called normalized form [10-12]. Normalization is the process of comparing actual operating parameters with given reference parameters, while taking into account the factors that influence the operating parameters. The reference operating parameters may be the design operating parameters or the measured initial operating parameters. Normalization with respect to the design (or guaranteed) operating parameters of the system is useful as a test of whether the unit does provide the predetermined (or guaranteed) operating parameters [13-15]. Normalization with respect to the initial operating parameters of the system is necessary to detect any changes in operating parameters between the first day of operation and the current date. The proposed software product performs this normalization of the operating parameters of the reverse osmosis plant and provides graphical representations of the normalized permeate flow rate, salt passage and pressure drop [16]. The program recalculates the operating parameters of the reverse osmosis installation into normalized ones taking into account correction factors, tracks the slightest deviations of the parameters and signals the user about them. The warning signal indicates the need for chemical flushing of the equipment [17,18,20].

Conclusion
Thus, the results of the studies lead to the conclusion that the reverse osmosis plant receives water with a high content of iron hydroxide, polysilicates, silicic acid and organic matter. For the most effective regeneration of the membranes and the extension of their service life, based on the results of the autopsy, washing of the membranes in several stages is proposed, namely:
1. Washing with a 2% sodium tripolyphosphate solution for 1.5 hours at a temperature of 30 °C [19].
2. Rinsing with purified water for 0.5 hour.
3. Washing with an aqueous solution containing 2% ammonium fluoride and 1% citric acid for 1.5 hours at a temperature of 30 °C.

Such regeneration of the membrane elements should be carried out in place when one of the normalized parameters falls by 10% of its initial value after setting to the operating mode. In addition, chemical flushes of the reverse osmosis membranes at the tested plant are carried out after a specified time interval determined experimentally; the periodicity of membrane regeneration is once every six months. However, the study of the obtained normalized parameters of the baromembrane system showed that the need for washing arises once every 4 months, that is, 2 months ahead of the regulated period. Carrying out the flushing at the time indicated by the normalized parameters will be more efficient and will also contribute to better cleaning of the membrane elements of reverse osmosis plants.
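A minimal sketch of the flow normalization discussed above: recalculate the measured permeate flow to reference conditions using a temperature correction factor (TCF) and a net driving pressure ratio. The Arrhenius-type constant 2640 K is a typical membrane-manufacturer value and is an assumption here; real normalization follows the membrane supplier's or ASTM D4516 procedure.

```python
import math

def tcf(temp_c):
    """Temperature correction factor relative to 25 C (assumed constant 2640 K)."""
    return math.exp(2640.0 * (1.0 / 298.0 - 1.0 / (273.0 + temp_c)))

def normalized_flow(q_measured, temp_c, ndp_actual, ndp_reference):
    """Permeate flow recalculated to 25 C and the reference net driving pressure."""
    return q_measured / tcf(temp_c) * (ndp_reference / ndp_actual)

# A 4 C drop from 25 C reduces flow by roughly 10%, as noted above:
print(round(tcf(21.0), 3))                      # ~0.89
print(round(normalized_flow(8.9, 21.0, 11.5, 12.0), 2))
```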
Thus, this approach is highly relevant today, since the number of smart sensors and controllers grows every year, and the amount of data coming from them, with which data scientists in the energy field will work, grows in proportion. The competitive advantage of the proposed technology is the complete automation of the process, excluding manual labour, and the use of machine learning, which makes it possible to build accurate forecasts of the behaviour of the technological equipment, monitor the operation of the plant, and detect and respond quickly to malfunctions. Understanding these processes and analyzing them will allow a number of problems at thermal power plants to be solved, which will increase the reliability of the water treatment equipment, extend the service life of the membrane elements, and reduce operating costs in general.
2,398.2
2021-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Mathematics" ]
Redefining intestinal immunity with single-cell transcriptomics The intestinal immune system represents the largest collection of immune cells in the body and is continually exposed to antigens from food and the microbiota. Here we discuss the contribution of single-cell transcriptomics in shaping our understanding of this complex system. We consider the impact on resolving early intestine development, engagement with the neighbouring microbiota, diversity of intestinal immune cells, compartmentalisation within the intestines and interactions with non-immune cells. Finally, we offer a perspective on open questions about gut immunity that evolving single-cell technologies are well placed to address. INTRODUCTION The intestinal tract contains a plethora of immune cells that are essential for normal physiology and defending the body against potential pathogens, but may also contribute to disease when their responses are exacerbated. Since the recognition of a localised intestinal immune system in 1919 1 , evolving technologies and experimental systems have helped refine our understanding of this complex cellular network. The invention of single-cell RNA sequencing (scRNAseq) in 2009 2 has revolutionized the field of immunology, revealing an unappreciated complexity of immune cell subsets, identifying new cell types and states, redefining cellular ontogeny and enabling inference of cell fate trajectories and function 3,4 . ScRNAseq is able to piece together existing knowledge of cell markers, ontology and interactions into an integrative picture of the building blocks of human tissues. Applied to human mucosal immunity, scRNAseq is particularly powerful as it allows for systematic analysis of cells within these complex and highly-immunologically active tissues, thereby making the most of small and often difficult to obtain clinical samples. Although transcriptional expression is not a perfect readout of protein expression 5 , scRNAseq allows for the hypothesis-generating phase of research to begin with and be guided by tissue-specific clues. Targeted experiments in model systems can then be used to support findings and test biological mechanisms. In this way and spurred on by the conception of the Human Cell Atlas (HCA) initiative in 2016, scRNAseq has been applied with great effect to several human barrier tissues including skin 6 , reproductive organs 7,8 and mouth 9 , and recently in the context of SARS-CoV-2 infection [10][11][12][13][14] . In this review, we focus on the immune system of the intestinal tract and specifically discuss how single-cell transcriptomics has advanced knowledge in this field. We provide an introduction to scRNAseq methods and analysis tools with particular use in this area and highlight studies that have shed light on the origins of intestinal immunity, cell diversity and plasticity, interactions with non-immune cells and compartmentalisation within the tissue architecture. SCRNASEQ APPROACHES TO STUDYING INTESTINAL IMMUNITY The scRNAseq field is rapidly evolving, with the number of cells captured per experiment now in the millions. Approaches to single-cell profiling intestinal tissues vary between studies and depend on tissue availability and biological questions being asked. 
Current studies on intestinal immunity compare cells from healthy individuals and IBD patients [15][16][17][18], focus on regional differences [19][20][21] or investigate intestinal development [22][23][24], applying either in-depth or high-throughput methods, and increasingly combining other technologies such as V(D)J sequencing and spatial transcriptomics to better understand cell heterogeneity, lineage relationships and spatial locations in tissue 20,21,24 . Below we outline the current and emerging technologies and analysis tools for studying intestinal immunity. ScRNAseq platforms There is a range of scRNAseq methods available with different benefits for studying mucosal immunology [25][26][27][28] . Platforms relying on the isolation of dissociated cells by fluorescence-activated cell sorting (FACS) or microfluidic devices include STRT-seq 29 , CEL-seq2 30 , MARS-seq 31 and SMART-seq2/3 [32][33][34] . Use of FACS provides auxiliary information on proteins targeted by a select panel of fluorophore-tagged antibodies and can help unite transcriptional profiles with traditional cell type identities. These methods are lower in throughput due to limited capture sites, ranging from hundreds to thousands of cells per experiment (Fig. 1). A benefit of SMART-seq methods in particular is that they provide sequencing of full-length transcripts, such that highly variable transcripts including B and T cell receptors (BCR and TCR, respectively) are automatically included, and they generally have greater coverage of the transcriptome compared with the high-throughput approaches detailed below. Together, these approaches are especially useful for in-depth and targeted analysis of immune cell types. High-throughput approaches rely on capture of dissociated single cells through microfluidic devices in water droplets in an oil phase (10x Genomics Chromium Gene Expression, Drop-Seq 35 and inDrops 36 ) or in microwells (Seq-Well S3 37 and STRT-seq-2i 38 ). These methods tag either the 3′ or 5′ end of mRNA, incorporating a unique molecular identifier and applying a cell-specific barcode early after cell capture. Total mRNA can then be pooled for downstream library preparation, allowing processing of thousands to millions of cells per experiment (Fig. 1). A major drawback of tagging either end of the mRNA is that highly variable transcripts such as splice variants and antigen receptors are not reliably captured. However, targeted amplification of TCRs and BCRs can be included as an additional step for the 10x Genomics Chromium 5′ platform. The high-throughput, less targeted nature of these methods makes them ideal for tissue atlasing or hypothesis-generating experiments. Recent advances in the realm of multi-omics technology, in which multiple cell features are simultaneously measured, are building on scRNAseq methods to also shed light on diversity in cell genotypes, transcriptional regulation and protein expression. These approaches include genotyping plus transcriptomics available as G&T-seq 39 , chromatin accessibility with transcriptomics available via the 10x Genomics Multiome ATAC + Gene Expression platform, and targeted protein quantification plus transcriptomics available as CITE-seq 40 or REAP-seq 41 . Spatial transcriptomics is another rapidly evolving area and has already provided spatial context for cell identities and cell-cell interactions identified from scRNAseq studies of the gut mucosa 24,42 .
Available platforms include 10x Genomics Visium and the NanoString GeoMx Digital Spatial Profiler, with the former currently offering whole-transcriptome capture of zones covering on the order of 10 cells, and the latter providing simultaneous fluorescent imaging at single-cell resolution and whole-transcriptome profiling from tissue regions of interest. Analysis of scRNAseq data Pre-processing and analysis of scRNAseq data from low- and high-throughput platforms follow the same general workflow, which is detailed in a number of review articles and online tutorials 43,44 . In short, raw sequencing data undergo read quality control, assignment to cellular barcodes, mapping to a reference genome and read quantification to obtain a cell-by-gene matrix; this can be done in pipelines such as Cell Ranger 45 , indrops 46 , SEQC 47 , or zUMIs 48 . The data can then be handled with well-documented computational packages as part of Seurat 49 , Scanpy 50 and OSCA 51 for quality control to remove empty droplets/poor-quality cells, normalisation of the data and dimensionality reduction in preparation for visualisation. Downstream analysis of scRNAseq data typically involves cell clustering and cell type/state annotation, trajectory analysis 52 and ligand-receptor expression analysis 53 (a workflow sketch is given below). Manual cell type annotation of clustered scRNAseq data is an iterative and laborious process, and the most recent advances in scRNAseq analysis include the development of automated tools for this step in the analysis pipeline. Amongst these methods, recently reviewed by ref. 54 , are correlation-based methods that require a reference dataset (e.g., scmap, CellTypist 55 and Azimuth 49 ) and neural network-based algorithms without a prior reference (e.g., scNym 56 and scQUERY 57 ) (Fig. 1). Reference-based methods are gaining popularity, but their output relies greatly on the relevance and quality of the cell type reference. For example, CellTypist 55 provides a collection of comprehensive and carefully curated immune cell profiles from multiple organs suited to the annotation of human tissue immune cells. RNA velocity approaches 63 leverage splice variant information held within scRNAseq data to also map cellular response and developmental kinetics. Finally, recent tools to map cell signatures determined from scRNAseq data onto spatially resolved transcriptomics data include cell2location (preprint available 42 ) and tools within the Seurat framework.
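The workflow sketch referenced above, in Scanpy with CellTypist for reference-based annotation. The input file name, QC cutoffs and model choice are placeholders rather than recommendations; CellTypist expects counts normalised to 10,000 per cell and log1p-transformed, as done here.

```python
import scanpy as sc
import celltypist

adata = sc.read_10x_h5("gut_immune_cells.h5")      # hypothetical input file
adata.var_names_make_unique()

# Quality control: remove empty droplets / poor-quality cells and rare genes
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalisation and feature selection
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# Dimensionality reduction, neighbourhood graph, clustering and embedding
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.umap(adata)

# Reference-based annotation against a curated immune model (CellTypist)
predictions = celltypist.annotate(adata, model="Immune_All_Low.pkl",
                                  majority_voting=True)
adata = predictions.to_adata()
print(adata.obs[["leiden", "majority_voting"]].head())
```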
In mouse studies, early GALT formation has been described to involve interactions between mesenchymal lymphoid tissue organising (mLTo), endothelial LTo (eLTo) and lymphoid tissue inducer (LTi; related to innate lymphoid cells (ILCs)) cells 66 . Interactions between these cell types are critical for recruiting and retaining immune cells at the sites of developing lymphoid structures. Through scRNAseq of human fetal gut samples, central players in secondary lymphoid organ formation have been resolved in humans, and their communication programs in initiating PP formation are defined from as early as 12 weeks post-conception ( Fig. 2) 21,24 . In addition, multiple subsets with LTi characteristics have been identified, proposing differences between human and mouse development 21 . Importantly, by comparing single-cell transcriptional profiles, equivalent stromal populations are predicted to be involved in the formation of ectopic lymphoid structures during inflammatory bowel disease (IBD), suggesting reactivation of developmental programs to support intestinal inflammation 21,24 . Fawkner-Corbett et al. described a population of mLTo-like stromal cells (i.e., with CCL19, CCL21 and CXCL13 expression) with similarities to a subtype of stromal cells expanded in ulcerative colitis (UC) 17,24 . Taking advantage of recent spatial transcriptomics technology, the authors showed localisation of these cells and confirmed the likelihood of relevant cell-cell interactions in lymphoid follicle formation in situ 24 . This shows that the formation of secondary organs is not restricted to development and is required for proper maturation and response of the immune system. First encounters with the microbiota We live in an era where the relationship between our immune system and microbes has never received such unprecedented attention. Characterising human-associated microbiotas and their role in health and disease has become the holy grail of current medicine. It is well-established that the human-associated microbiota contains a wide and complex community of microorganisms that is unique to individuals and constantly evolves in response to its environment 67,68 . Microbial dysbiosis is well recognised in diseases such as IBD, colorectal cancer, metabolic disorders and in conditions including pregnancy although mechanistic insight into the host:microbial relationship remains in its infancy 69 . Whether the interaction is between the host and its resident microbiota or a direct response to a specific infectious entity, microbes communicate with the host through attachment to mucosal surfaces, binding specifically to host receptors, production of metabolites such as short chain fatty acids and bile acids or adapting their growth and metabolism based on changes we make to their local environment. In parallel host immune responses attempt to continuously decipher between microbial friend or foe. The question of when host-microbe interactions become established has become a topic of intense investigation, with the presence of microbiota during in utero development still highly debated. A recent study of the meconium microbiota in human neonates (at term) before birth, controlling for process/ delivery mode-induced contamination indicated that microbial colonization most likely occurs either during birth via maternal seeding or post-birth via environmental seeding 70 . 
Conversely, microscopic images of bacterial-like structures within mucin threads in the gut lumen during the second gestational trimester provide compelling evidence for in utero seeding, aligning with other studies that detail in utero antigenic priming of the fetal immune system 71. However, these studies are caveated by the potential for contamination from environmental sources of microbes (reviewed in ref. 72), making their physiological relevance questionable. Nevertheless, priming of the immune system and unexpected activation of immune cells have been suggested to be linked to early microbial colonisation of fetal organs, especially the gut 73. In particular, multiple scRNAseq studies have shown that memory CD4+ and CD8+ T cells are present and clonally expanded in the intestines in the first and second trimesters of development (Fig. 2) 74-76. However, in a scRNAseq study by Schreurs et al., fetal intestinal CD4+ T cells had a gene expression profile distinct from those in the post-natal intestine, characterised by high expression of genes regulating cell cycle, WNT signalling and tissue development 77. This supports a role for CD4+ T cells in the fetal intestine in promoting tissue development. A study using cytometry by time of flight (CyTOF) in combination with BCR sequencing showed that B cells are immature during second-trimester human development compared with those found in infants 75. Our scRNAseq of human fetal intestines up to 17 weeks post-conception also showed no evidence of B cell clonal expansion, class switching or germinal centre formation 21. Prenatal B cells may similarly be involved in the development of lymphoid structures with no need for class switching, while postnatal B cells undergo these events owing to the presence of the microbiome 75. Through more precise analysis of cell phenotypes, these single-cell studies promote the emerging concept that immune cell activation, at least until second-trimester development, reflects support of a highly controlled process of tissue generation rather than a response to microbial seeding. Whether this is also the case in the third trimester of human development remains to be determined.

Necrotising enterocolitis
It has been argued that the epithelial barrier in preterm infants is immature and, because of its leakiness, unable to withstand the ensuing microbial colonisation. Necrotising enterocolitis (NEC) is a devastating intestinal disease that occurs primarily in premature infants, resulting in impairment of the epithelial barrier and, in extreme cases, causing intestinal perforation and tissue necrosis 78. Studies have consistently highlighted differences in bacterial gut communities associated with NEC that result in an imbalance between pro- and anti-inflammatory gut immune mediators 79,80. Work by Cho et al. used mouse models to highlight an imbalance within the adaptive immune system in the NEC intestinal environment, typified by type 3/T helper (Th)17 polarization with reduced Th1, Th2 and Treg responses 81. These findings were further supported by scRNAseq studies showing the preferential presence of TNF-α-producing CD4+ T cells in early intestinal development and an enrichment for these cells in the intestines of preterm infants with NEC (Fig. 2) 77. The TNF-α-overloaded microenvironment likely contributes to NEC-associated epithelial damage 77.
In addition to the effects of IL-10 in promoting self-renewal of stem cells, the potential of the T cell cytokines IFNγ, IL-17A and IL-13 to promote differentiation of epithelial cells towards mature cell types has been shown in adult mice 82. The capacity of T cells to inform epithelial cell differentiation and maturation provides the opportunity to harness this interaction in clinical practice for the treatment of NEC and other gut disorders. Bacterial imbalance in the premature intestine is considered one of the key factors contributing to NEC. No single microbe has been identified as the culprit for NEC, although an increased abundance of Proteobacteria is frequently reported in NEC infants 83,84. A recent study analysed microbial features predictive of NEC and identified overgrowth of Enterobacteriaceae, specifically Klebsiella (known to possess secondary metabolite gene clusters related to quorum sensing and bacteriocin production), replicating more rapidly in the days prior to NEC diagnosis 85. Corresponding transcriptional and proportional cell changes are likely present in these preterm infants, and future single-cell studies will be instrumental in defining them.

HETEROGENEITY AND PLASTICITY OF INTESTINAL IMMUNE CELLS
The immune system must exhibit diversity and plasticity to respond to the countless challenges incurred throughout life. The conventional approach to studying diversity in immune cells has been top-down: focussing on a cell type and iteratively subdividing it into more distinct subsets based on marker gene expression. This approach has been essential in understanding the intestinal immune system, but it relies on pre-selection of markers and is limited in resolving heterogeneity within distinct cell groups. A strength of scRNAseq is its ability to explore heterogeneity from the bottom up: dividing cells into distinct groups and then defining the molecular profiles that best describe each population 4. In this way scRNAseq has refined classical immune cell type labels, defined new populations and predicted the roles of cell types and states in the intestinal immune system of both mice 86 and humans 16,87-89. ILCs are innate immune cells that defend against both intra- and extracellular infections and are particularly abundant in mucosal tissues 90. While they do not possess a functional TCR, they draw parallels in function and subtype classification with Th cells. ILCs are typically divided into five types: ILC1, ILC2, ILC3, natural killer and LTi cells 91. A study by Mazzurana et al. compared sorted CD127+ ILCs from human blood, lung and colon with a past tonsil dataset 92 using the Smart-seq2 platform 87. Adopting a bottom-up approach to classifying ILC subtypes, they performed unbiased clustering followed by differential expression and correlation analyses on the pooled data. ILCs subdivided into 20 subsets, clustering largely by tissue origin and FACS phenotype. The highest ILC3-associated gene expression was detected in the colon, as expected 90, but so was the highest degree of diversity, covering a spectrum of signatures ranging from migratory (expressing SELL, S1PR1, ITGAX and GPR183) to activation and tissue residence (expression of IL22, NCR2, GRM7 and LTA4H) 87. ILC heterogeneity has similarly been shown with scRNAseq of mouse intestines 86. To show the influence of the neighbouring microbiota on ILC signatures, the authors of this study treated mice with antibiotics prior to scRNAseq analysis.
In antibiotic-treated mice, the profiles of ILC1 and ILC2 more closely resembled those of ILC3 cells (with increased Atf5, Cxcl9 and Gpx1) compared with mice with an intact microbiota 86. This points towards ILC3 representing the "default" phenotype, with environmental factors driving diversification. Amongst the most diverse immune cells are CD4+ T cells, which have classically been partitioned into discrete subsets according to their expression of key transcription factors and cytokines (e.g., Th1 and Th2 cells expressing IFNγ/TBET and IL4/IL5/IL13/GATA3, respectively). However, plasticity or merging between these subsets has been a frequent observation in mice and humans 93. A scRNAseq study by Kiner et al. observed extensive heterogeneity and blended signatures of colonic T cells in specific pathogen-free mice 88. In an attempt to drive Th differentiation, they infected mice with Citrobacter rodentium, an inducer of Th17 cells (determined by IL-17 expression); Heligmosomoides polygyrus and Nippostrongylus brasiliensis, both inducers of Th2 (IL-5 and IL-13) responses; or Salmonella enterica, a bacterial infection inducing Th1 (IFNγ) responses. While FACS of T cells from infected mice confirmed the expected skewing of Th differentiation, scRNAseq analysis and unbiased clustering separated cells by infection system rather than by characteristic Th genes. Expression of canonical Th cytokines dominated opposing sides of the same clusters in their data, arguing against discrete subsets and in favour of a polarised continuum of Th phenotypes driven by the infection setting 88 (Fig. 2, inlay). Other scRNAseq studies of acute immune responses in mice have reported skewed Th signatures, including Th1 and Tfh in peripheral blood of mice infected with Plasmodium 94, Th2 in lungs of mice exposed to dust mites 95 and Th2 in spleen and lymph nodes of mice infected with Nippostrongylus brasiliensis 96, but have also shown heterogeneity and blending of canonical marker genes between clusters. Spectra of Th phenotypes have also been resolved at the single-cell level within the human breast cancer tumour microenvironment 47, the asthmatic lung 97 and the blood of SARS-CoV-2-infected individuals 98. ScRNAseq studies of human intestinal disease have similarly added to our extensive understanding of the diversity of T cell phenotypes and highlighted specific enriched populations of likely significance to pathology. One such study observed a Th17-like population of CD4+ CD8+ cells expanded in UC 16. Given the known association between Th17 cells and IBD 99, the authors hypothesised that these Th17-like cells drive inflammation, although this remains to be confirmed 16. In contrast, an independent UC study showed expansion of an IL26-expressing subset of Th17-like CD8+ T cells with an immunoregulatory signature 89. Trajectory and TCR sequencing analysis of the single-cell profiles further characterised this population as clonally expanded and arising from tissue-resident T cells or representing a post-effector state. To understand the significance of IL26 expression by these cells during inflammation, the authors compared the pathology of dextran sodium sulfate (DSS)-induced acute colitis in wild-type mice with that in humanised IL26-expressing transgenic mice (the Il26 gene does not naturally exist in rodents) 89. The IL26 transgenic mice experienced less severe disease, a phenotype that could be reversed by administration of an IL26 antibody 89. This suggests a possible role for IL26 in protecting against inflammation.
In the context of colorectal carcinoma, paired TCR and transcriptome sequencing identified eight distinct populations of CD8+ T cells, with signatures ranging from naive, central and effector memory cells and recently activated effector memory/effector cells (TEMRA; with PRF1, GZMB and GZMH expression) to dysfunctional exhausted cells (expressing PDCD1 and HAVCR2) 100. Distinct TCR clonal populations and trajectory mapping supported two possible differentiation paths for T effector memory cells: towards either TEMRA or exhausted T cell states. The authors suggest that skewing differentiation towards beneficial TEMRA and away from a state of exhaustion could represent a possible avenue for therapeutic intervention 100. Furthermore, this study showed tumour-specific T cell responses, with enrichment and clonal expansion of pro-inflammatory CXCL13+ BHLHE40+ Th1-like cells in tumours with microsatellite instability, but moderate enrichment for Th17 cells in those with microsatellite stability 100. The enrichment of CXCL13+ T cells in tumours with a high mutational burden was supported by a second scRNAseq study and offers a possible explanation for why this patient cohort responds better to checkpoint blockade therapy 100,101. Together these studies highlight how scRNAseq can assist in understanding the complexity of intestinal diseases and the nuanced involvement of cell types and states in disease progression and control.

ZONATION OF INTESTINAL IMMUNITY
Immune cells do not act in isolation; rather, their phenotype and response are shaped by their local environment. The intestinal tract in particular comprises unique microenvironments at the macroanatomical level, in terms of distinct tissue regions, and at the microanatomical level, within the cross-sectional layers of the intestinal wall. Single-cell transcriptomics studies have built upon knowledge of the zonation of cells within the human and mouse intestinal tract by providing the full breadth of molecular profiles of cell states, suggesting distinct roles for cells in different zones and how these contribute to the physiological functions of the intestines.

Zonation between anatomical gut regions
With roles in segregating luminal contents from gut tissue and in mediating the absorption of nutrients and the transfer of signals 102, epithelial barrier cells vary notably between the small and large intestines. For example, the small intestinal epithelium forms villi and crypts while the large intestine forms only crypts; Paneth cells that secrete antimicrobial peptides are present only in the small intestine 103, while mucus-secreting goblet cells are more abundant in the large intestine, where they maintain a thicker mucus layer 104. Through unbiased analysis of gene expression, scRNAseq has shown variability in the expression of nutrient absorption and antimicrobial defence genes by the epithelia of the small and large intestines, leading to the identification of a Paneth-like cell in the latter 19. ScRNAseq has also resolved further rare subtypes based on distinct gene expression and shown that these change by gut region. BEST2+ goblet cells, which are restricted to the colon 105,106, have been deeply profiled at the single-cell level in humans. This analysis revealed their specific expression of the kallikreins KLK15 and KLK3, and of the protease inhibitors WFDC2 and WFDC3, compared with other colonocytes 21. Similarly, BEST4+ epithelial cells, first identified in the intestinal tract by Ito et al.
105, have been shown to be transcriptionally distinct from other epithelial cells in the human intestines 15,16. Building on previous work in which a rare subset of small intestinal epithelial cells was reported to highly express CFTR, encoding a key channel mutated in cystic fibrosis 107,108, further scRNAseq studies showed that BEST4+ epithelial cells of the human small, but not large, intestine co-expressed CFTR (Fig. 3) 21,109. Based on their transcriptional profile and co-localisation with goblet cells, it has been proposed that these cells, specifically in the upper intestinal tract, support mucus secretion 21,109 and could be implicated in the intestinal symptoms experienced by many cystic fibrosis patients 110. Zonation of plasma B cells is similarly described between the small and large intestines in humans, with previous studies describing a dominance of the IgA1 isotype in the small intestine versus IgA2 in the large intestine, and an overall trend toward a greater abundance of dimeric IgA plasma cells in the latter 111. A recent single-cell study looked more closely at how these cells change along the healthy human colon 20. ScRNAseq of multiple colonic regions from the same individuals not only showed an increasing abundance of IgA+ plasma cells from the proximal to the distal colon, but also revealed transcriptional signatures suggesting this was at least in part due to increased retention/recruitment (Fig. 3). BCR repertoire analysis of the same cells indicated that distal colonic plasma cells were also more clonally expanded and somatically mutated, demonstrating the wealth of information that can be simultaneously obtained through scRNAseq approaches. Paired analysis of the neighbouring microbiota linked this increasing gradient of plasma cell response to the recognition of a richer microbiota 20.

Zonation at the microanatomical level
The intestinal mucosa can be divided into three compartments: epithelium, lamina propria and muscularis mucosae 112. These layers are colonised by distinct communities of cells, with substantial interaction and movement between them. The majority of intestinal CD8+ and γδ T cells exist within the intraepithelial layer, while CD4+ T cells typically reside in the lamina propria. Separating these two compartments prior to scRNAseq processing has revealed further surprising details of the zonation of these cell types 113,114 and of their adaptations 115 and contributions to disease 116,117. A study by Sullivan et al. of the mouse small intestinal epithelium showed up-regulation of enteric and pancreatic genes involved in digestion and absorption in response to a high-carbohydrate diet 115. This gene program was defective in mice depleted of γδ T cells. Following this observation, the authors performed scRNAseq on sorted intraepithelial and lamina propria γδ T cells and identified four transcriptionally distinct populations across both compartments. Surprisingly, although cells of the intraepithelial space would have better access to the epithelium and luminal content, it was γδ T cells of the lamina propria that had the necessary transcriptional profile (i.e., Notch1, Notch2, Maml1 and Hes1) to permit communication with the epithelium and support its remodelling in response to diet (Fig. 3) 115. In human coeliac disease, scRNAseq has shown reorganisation of the lamina propria lymphocytes, with the natural killer cells present in this compartment during health completely absent during disease 117.
While the results of both studies require further validation, they point to finer-grained variability of immune cells between intestinal compartments. ScRNAseq has similarly been applied to better resolve the compartmentalisation of T cells and the expression of known risk factors 118 during Crohn's disease (CD). Th17 cells and their cytokines are known to be key mediators of the pathogenesis of CD 119. ScRNAseq has further shown that Th17 cells accumulate within the intraepithelial space at the expense of CD8+ T, γδ T, Tfh and T regulatory cells during active CD compared with controls 120. Studies of pediatric colitis reported a decreased abundance of CD8-ENTPD1 (expressing the gene encoding CD39) and γδT-ENTPD1 cells in the intraepithelial compartment 116. The transcriptional profile of these specific cell subsets led the authors to hypothesise that a defective cAMP pathway was at play and contributing to disease pathogenesis. To test this theory, the phosphodiesterase inhibitor and anti-platelet drug dipyridamole was used to drive the cAMP pathway in a mouse model of colitis and in patients, resulting in a dose-dependent increase in T cell CD39 expression, improved epithelial integrity and decreased colitis severity 116. Separate populations of macrophages exist within the lamina propria, submucosa and muscularis propria. A wealth of earlier research has described diverse roles for these populations appropriate to their microenvironment: lamina propria macrophages phagocytose bacterial antigens and produce mediators that drive epithelial cell renewal, while muscularis macrophages interact closely with the enteric nervous system 121. Bulk RNA sequencing (RNAseq) analysis of macrophages from these physically separated compartments showed distinct expression profiles 122. However, while fluorescence microscopy of mouse intestinal tissue highlighted at least two morphologically distinct populations of muscularis macrophages 122, the nature and origins of further subsets of macrophages within each compartment remained a mystery. An unpublished study by Domanska et al. implemented scRNAseq of normal tissue adjacent to colorectal cancers to address these questions 113. They showed that lamina propria macrophages comprise 13 transcriptionally distinct subsets, spanning a spectrum from proinflammatory signatures (IL-1B, IL-1A, IL-6, IL23A, CXCL2, CXCL3 and CXCL8, or CXCL9, CXCL10, CXCL11, IDO1, GBP1, GBP2, GBP4 and GBP5) to high antigen-presenting and phagocytic capacity (high levels of HLA class II genes and gene ontology pathway enrichment for endocytosis). Trajectory analysis predicted that the majority of these subtypes arise from bone marrow-derived monocytes 113. In the submucosal space, the majority of macrophages expressed LYVE1 (associated with vasculature) and COLEC12 (associated with neurons) and had low antigen-presenting capacity but high chemotactic and tissue-protective properties (Fig. 3). Twelve transcriptionally distinct populations of macrophages were present in the muscularis propria, with proinflammatory properties (e.g., expression of IL1A, IL1B, CXCL2, CXCL3, CXCL8, CCL3 and CCL4) or homeostatic properties (e.g., expression of LILRB5, MARCO, LYVE1, FOLR2 and COLEC12). Homeostatic muscularis macrophages were also positive for PMP22 and EMP1, genes expressed by Schwann cells, suggesting that these macrophages phagocytose Schwann cells and are in close contact with neurons 113.
Macrophages in both compartments showed ligand/receptor expression enabling them to interact extensively with tissue-resident cells, indicating that their expression profile is heavily influenced by their local microenvironment 113. Intestinal epithelial cells arise from a common stem cell at the crypt base and differentiate as they move towards the villus tip. Although the positions of enterocytes along the villus axis correlate with their age 123, exposure to morphogen gradients 124 and hypoxia 125, low-resolution approaches were unable to determine the positional effects on enterocyte function in mice 126,127 or humans 128. A study by Moor et al. applied laser capture microdissection of mouse enterocytes followed by scRNAseq to elegantly resolve a continuous gradient of differentiation along the villus axis 114. The villus tip enterocytes expressed an immune-modulatory program with the capacity to modulate the immune reaction to the microbiota in the gut lumen (Fig. 3) 114. Follicle-associated epithelium covering the lymphoid structures (i.e., PPs) possesses characteristics distinct from villus epithelium 129-131. Microdissection and RNAseq of mouse intestinal epithelium, followed by single-cell validation of gene expression with single-molecule fluorescence in situ hybridization, showed that follicle-associated epithelium expresses lower levels of antimicrobial and nutrient absorption genes 132. This suggests that the epithelium at these sites is tuned for the optimal and efficient sampling of bacterial antigens by M cells and immune cells, rather than for nutrient absorption and antimicrobial activity (Fig. 3).

INTESTINAL IMMUNITY SHAPED BY NON-IMMUNE INTERACTIONS
Key components of tissue microenvironments are the resident non-hematopoietic cells, which have multiple established roles in immune responses and inflammation at mucosal surfaces. While this involvement was previously thought to be passive, with research focusing on fibrosis, tumour progression and wound healing, scRNAseq studies are highlighting the extent of the active engagement of non-immune cells in shaping mucosal immunity, with implications for health and disease progression 133,134. Mesenchymal or stromal cells of the intestine reside in the subepithelial layers and contribute largely to structural integrity. Three recent studies comparing healthy and IBD intestinal tissue have applied scRNAseq not only to map the diversity of intestinal stromal subtypes, but also to pinpoint which cell subtypes and interactions are at play during inflammation 16-18. Kinchen et al. defined four distinct stromal populations with unique transcriptional signatures 17. One of these stromal types, termed stromal 4 cells and marked by the expression of genes involved in cytokine signalling, T cell activation and cell adhesion, was scarce in healthy controls and enriched in UC. Crucially, IL-6 and TNFSF14 were additionally expressed by stromal 4 cells during disease and were shown to prevent epithelial regeneration in follow-up intestinal organoid experiments. Martin et al. similarly observed stromal cells contributing to the cellular response of CD 18. Here, they defined a collective cell module (termed GIMATS) consisting of IgG plasma cells, inflammatory mononuclear phagocytes, activated T cells and stromal cells that corresponded with failure to achieve durable corticosteroid-free remission upon anti-TNF therapy in a fraction of patients.
A real strength of scRNAseq here was the capacity to compare ligand-receptor pair expression between equivalent cells of different patient groups. In this way, the authors showed that enriched cellular interactions between myeloid cells, activated endothelial cells and activated CCL2+ CCL7+ stromal cells were generating a positive inflammatory feedback loop in the GIMATS samples 18. Last, Smillie et al. identified a population of inflammation-associated fibroblasts (IAFs) that was expanded 189-fold in biopsies from UC patients versus controls 16. The profile of IAFs was comparable to that of cancer-associated fibroblasts, key players in creating an immune-tolerant tumour environment. IAFs also highly expressed OSMR, a predictor of resistance to anti-TNF therapy in UC patients, and ranked highly for a resistance gene signature determined from bulk RNAseq data. The gene encoding the ligand for OSMR, OSM, was most strongly expressed by inflammatory monocytes and cDC2s in the scRNAseq data, implicating interactions between these cells in resistance to treatment 16. Tuft cells are chemosensors of the gut epithelium, transmitting messages in the form of a spectrum of biological effector molecules to immune and neuronal cells 135. Previous bulk RNAseq had identified neuronal and inflammation gene signatures from these cells 136 but was unable to resolve whether these programs derived from one population of cells or from distinct subtypes. Using scRNAseq, Haber et al. carried out unbiased clustering of tuft cells from the small intestines of mice and identified two distinct subsets contributing to these profiles 137. Tuft-2 cells were enriched for immune-related genes, particularly those supporting Th2 responses (Il4ra, Il13ra1 and Il17rb). Incredibly, this population also expressed Ptprc (encoding the pan-immune marker CD45), the first report of this in non-hematopoietic cells, blurring the lines of the traditional definition of immune cells 137. While equivalent findings have not been made from single-cell analysis of human intestinal tuft cells, a fraction of human and mouse tuft cells were shown to express immune signalling machinery, specifically activating and inhibitory Fc gamma receptors and downstream mediators of IgG signalling 21. This could facilitate the direct activation of tuft cells in response to signals from plasma cells. These findings formed the basis of experiments in mouse models of intestinal colitis in which tuft cells upregulated the inhibitory receptor, suggesting their potential as a rheostat of intestinal inflammation 21. Antigen presentation is a critical step in the transmission of immune activation to the adaptive immune system, with conventional DCs, macrophages and naive B cells regarded as the primary antigen presenters. A body of prior work has extended this role to various epithelial cell types via MHC-II expression, with particular roles for microfold cells localised to PPs 138. Recent scRNAseq experiments have taken this further to pinpoint exact subpopulations. Work from the Xavier and Regev laboratories combined scRNAseq, flow cytometry and immunofluorescence assays to define three novel subtypes of Lgr5+ intestinal stem cells (ISCs) in the mouse small intestine 82. Although not at levels as high as in DCs, two of these populations expressed MHC-II at significant levels and were capable of presenting antigen to antigen-specific T cells in co-culture experiments.
While the exact role of antigen presentation by these cells is unknown, the authors speculated that it could be a non-essential means for the epithelial layer to respond to infection, or a means by which T helper cells interact with ISCs and shape their appropriate differentiation into mature epithelial cell types. The latter explanation is particularly interesting in light of further results showing that Treg cells promote ISC renewal while Th1 and Th17 cells promote differentiation 82.

CONCLUDING REMARKS AND FUTURE PERSPECTIVES
Single-cell studies have provided a wealth of knowledge about the complex cellular landscape of the intestinal immune system. In their short history, they have detailed the spectrum of cell phenotypes, provided resolution of the zonation of immune cells and shown the impact of their engagement with the neighbouring microbiota in health and disease. As these methods continue to evolve, there is little doubt that they will continue to provide insights into the field of intestinal immunity (Box 1). Spatial transcriptomics, while not yet offering whole-transcriptome coverage at single-cell resolution, has already placed gut immune signatures in their tissue context 21,24 and will be a key feature of future studies. The application of scRNAseq to in vitro systems and experimental models will offer the ability to look in detail at the mechanisms of therapeutic and biological agents (e.g., faecal microbiota) on intestinal immune cells. ScRNAseq has already been adapted for the capture of bacterial RNA in pioneering studies 139, opening the possibility of studying the function of specific bacteria. Integration of modalities, for example combining scRNAseq of host cells with single-cell metatranscriptomics and patient genotype data, will also provide the opportunity to study the interaction between these factors in shaping intestinal immune environments. Last, as the first chapter of the Human Cell Atlas (HCA) approaches completion 140, studies of individual organ systems will be combined to provide a global picture of human biology. We anticipate that this will bring with it studies of the contribution of gut immune cells to human biology and disease at a systems-wide level.

Box 1. Areas of open investigation
• Resolving the positioning of intestinal cells using in situ transcriptomics.
• Integration of scRNAseq with other modalities, e.g., metatranscriptomics, metabolomics, proteomics.
• Systematic analysis of therapeutic mechanisms in in vitro and model systems.
• Cross-tissue analysis, i.e., common cell types and movement of cells between organ systems.
9,055.2
2021-11-30T00:00:00.000
[ "Biology" ]
Development and Maintenance of a Cross-mixed Mating System in the Orchid Bulbophyllum orientale
Outbreeding is usually advantageous because inbreeding incurs inbreeding depression. Nevertheless, mixed mating is very common in nature. We found two co-existing plant types, self-compatible and self-incompatible, in populations of the orchid Bulbophyllum orientale. The floral parts of this plant form a device that promotes cross-pollination: rancid substances are excreted to lure pollinators to the labellum, and pollinia are attached to pollinators through a delicate mechanism. Given that each clone bears many inflorescences and each inflorescence many flowers, pollinating insects may continuously visit inflorescences of the same clone and flowers of the same inflorescence, but rarely move between populations separated by large distances. Consequently, self-compatible plants produce seeds from both crossing and selfing, whereas self-incompatible plants bear only crossed seeds. Thus, a cross-mixed mating system is created in the population. Individuals capable of producing both crossed and selfed seeds have an advantage under natural selection. The strict crossing system breaks down, and a cross-mixed mating system consisting of both mixed mating and strict crossing is formed. This cross-mixed mating system fluctuates with the varying behavior of pollinating insects. The mixed mating system is favored because a population of B. orientale has many clonal individuals and each individual has many multi-flower inflorescences. Partial strict crossing is retained; it can counteract the latent harm caused by selfing and assist in the maintenance of the cross-mixed mating system. The successful evolution of flowering plants is demonstrated by the mode of attraction of pollinating insects, the adept use of the cross-pollination apparatus, and the tradeoff between crossing and mixed mating.

Introduction
Plant breeding systems comprise inbreeding, outbreeding, and apomixis [1]. Among these systems, outbreeding is considered the primary driver of pollination diversity. Flowering plants have developed many mechanisms, such as dioecy, dichogamy, herkogamy, self-incompatibility [2], and flexistyly [3,4], to avoid inbreeding and promote outbreeding. Selfing has many benefits, such as opening up new habitats [5,6], avoiding unresponsive pollinators [7-10], and automatically transferring genes to offspring [11]. However, these benefits are severely abated by the high mating costs incurred from pollen and ovule discounting.
Angiosperm evolution shows recurrent alternation in the dominance of cross-pollination and self-pollination [12]. The inbreeding depression caused by self-pollination is deemed a selective pressure in the evolution of plant breeding systems [4,7]. Nevertheless, the reproductive assurance of selfing has a greater effect than inbreeding depression in facilitating the evolution of mating systems. Consequently, a plant may adopt a mixed mating system as a tradeoff between crossing and selfing [13]. Geitonogamy is related to the evolution of the inflorescence [14] and to pollinator behavior [13]. These relationships can influence population dynamics [15,16]. Some plants maintain their mixed mating systems with strict cross-pollination in single flowers and geitonogamy in the inflorescence [17]. The mixed mating system is a compromise between the plant and its environmental conditions [18]. Inbreeding depression is a selective pressure that thwarts the fixation of selfing-favorable genotypes and prevents the evolution of complete selfing [19], therefore preserving the mixed mating system in the long term [20]. The great diversity of pollination mechanisms in angiosperms is most evident in breeding systems based on animal pollination [21]. Changes in the pollination environment and pollinator behavior directly affect plant mating [6,9,13]. Natural selection is believed to render the floral structure adaptive to cross-fertilization [22,23]. Orchidaceae, an evolutionarily advanced family of angiosperms, has a highly specialized floral structure for insect pollination. Although orchids have developed many techniques for cross-pollination [3,6,15,23-27], many species use mixed mating as a breeding strategy [28]. Recently, Bulbophyllum orientale was found to have many large clonal individuals, each with many multi-flower inflorescences. However, 3.5% of individuals within a single population have a very low natural fruit ratio, and preliminary selfing tests show that they are all self-incompatible. The type of breeding system used by B. orientale is therefore a subject worth extensive research. Given that pollination pattern and pollinator feeding behavior affect the development of the breeding system, this study aims to examine the pollination mechanism of B. orientale, including flowering phenology, flower structure, stigma receptivity, pollen activity, floral odor, and the breeding system, to investigate the evolutionary dynamics of its mating system, and to reveal the development and maintenance of its breeding system diversity.

Observation of flowering phenology and floral structural features
Observation of flowering phenology: The flowering phenology of all plants in natural populations was observed from 2004 to 2008. Different flowering stages, namely early, full, and late, were recorded. Each year, 20 unopened inflorescences were randomly marked and observed. For each single flower, the opening time, the corolla withering time, and the shape changes of the labellum and corolla were recorded.

Labellum and stigma shapes and positional relationship between the anther and stigma: Freshly opened flowers were collected, and perianths were partly removed. The labellum and stigma shapes, as well as the positional relationship between the anther and stigma, were examined and photographed under a stereomicroscope.
Flowering biology characteristics
Histochemistry of the labellum secretion: Ten flower labella from different inflorescences were randomly collected, immersed in the staining solution for 10 min, and observed under a microscope. A blue or black color on the labellum surface after staining with iodine-potassium iodide (IKI) indicates the presence of starch, and a red color after staining with Sudan IV confirms lipid secretion [30]. Each experiment was repeated five times, and the colors were recorded.

Stigma receptivity, pollen activity, and seed-setting rate at different pollination periods: Fresh flowers were randomly collected at different growth stages (e.g., at 0, 0.5, 1, 2, 3 … 10 d after opening). Pollen activity and stigma receptivity were tested by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method [31]. Half of the fresh pollinium and half of the column (stigma, cut longitudinally) were placed on a slide. The other halves of both samples were placed in water in a vessel, boiled for 15 min, and then placed on a slide for comparison. One drop of MTT solution was added to the sampled pollinium and stigma. After the samples were thoroughly mixed and air-dried, another drop of MTT solution was added. The coloration of the pollinium and stigma was observed under a microscope after the samples had completely dried. A blue or black color of the pollinium indicates positive pollinium activity and stigma receptivity; a yellowish-brown color or no color change indicates pollen inactivity and no stigma receptivity. The following experiments were performed to determine the fruit-set rate of B. orientale at different pollination periods. In the first experiment, flowers were bagged before opening, artificially cross-pollinated at 1, 2, 3, 4, 5, 6, 7, and 8 d after opening using pollen grains from flowers of different plants that had been open for 1 or 2 d, and then bagged again. In the second experiment, flowers were bagged before opening, artificially pollinated at 1, 2, 3, 4, 5, 6, 7, and 8 d after opening (or when the corolla withered) with pollen grains from different flowers on the same inflorescence (geitonogamy), and then bagged again. The fruit-set rate in each experiment was calculated.

Floral odor: Five plants bearing B. orientale flowers were randomly selected from the populations. The inflorescences were separately placed in 100-mL bottles, which were then sealed. The odor was extracted with a CAR/PDMS 75 μm fiber (30 min), desorbed (3 min, 200°C), and then analyzed on a Finnigan TRACE GC-MS (25°C, 65% humidity). The volatile components were analyzed at the Analysis Center of South China Agricultural University.

Visiting insects and their visiting behavior
Species and visiting behavior of the visiting insects: Each year, 10 opened inflorescences were randomly selected from the populations, marked, and continuously observed for 2 d from 8:00 to 19:00. The number of visiting insects on each marked inflorescence was counted, and the visiting behavior of each insect species was described, photographed, and videoed. The following items were recorded for each inflorescence: the type of visiting insect, the visit frequency, the visit duration, and the number of flowers visited by one visitor at one time. Specimens of the insects were also captured.
Attraction of floral features to pollinating insects: Ten freshly opened self-compatible inflorescences in the populations were marked to examine the attraction of floral features to pollinating insects. The labella were removed from five inflorescences, and the visiting frequency and visiting behavior of the pollinating insects to all inflorescences were compared. This experiment was performed three times.

Breeding system
Five treatments were applied: (1) flowers were left exposed for natural pollination as a control; (2) soon before opening, the pollen grains were removed, and the flowers were bagged; (3) the flowers were bagged before opening, artificially cross-pollinated (with pollen from different plants) after opening, and then bagged again; (4) the flowers were bagged before opening, artificially self-pollinated (geitonogamy) after opening, and then bagged again; and (5) the flowers were bagged before opening and left until they withered. The numbers of flowers and inflorescences were recorded, and the fruit-set rate of each group was calculated.

Inbreeding depression
From 2004 to 2008, eight fruits at the maturity stage were randomly selected each year from the cross- and self-pollination experiments of the breeding system examinations (a total of 40 fruits over 5 years for each experiment), and the seed number in each fruit was counted. The seeds of a cross-fruit and a self-fruit were paired into a sample group and separately sown on the same artificial culture medium. After germination, both the budded and un-budded seeds were counted. The seed number and the average seed germination rate of the self- and cross-offspring were examined in pairs by t-test or μ-test to calculate the benefit and the degree of inbreeding depression [32-34].

Cross-pollinator shortage rate and selfing rate
Given that each clone bears many inflorescences and each inflorescence many flowers, we examined the degree of geitonogamy by comparing fruit ratios, from which the pollinator shortage rate can be inferred. We hypothesize that all of the flowers in a many-flowered inflorescence should be pollinated when pollinators are sufficient, generating the highest fruit ratio. We used artificial pollination to imitate the condition of sufficient pollinators and to determine the highest possible fruit ratio. Theoretically, all of the flowers should fruit after pollination, giving a fruit ratio of 100%. The pollinator shortage rate is derived by subtracting the fruit ratio under natural conditions from the theoretical fruit ratio and then dividing by the highest possible fruit ratio. In the same way, the pollinator shortage rate within a population and among populations can be calculated by comparing the artificial-pollination and natural fruit ratios. The following examinations of the cross-pollinator shortage rate and selfing rate were carried out from 2006 to 2008.

Within inflorescences: Each year, among the 10 populations, 10 self-incompatible inflorescences (or self-compatible inflorescences whose stamens were removed) were paired with 10 self-compatible inflorescences. These inflorescences were pollinated by pollinators under natural conditions. Another 10 inflorescences were artificially cross-pollinated.
The following attributes were calculated:
pollinator shortage rate of inflorescences = 1 − (natural fruit-set rate of self-compatible flowers / fruit-set rate of artificial cross-pollination);
inflorescence selfing rate = natural fruit-set rate − natural fruit-set rate of self-incompatible inflorescences (self-compatible inflorescences with stamens removed);
inflorescence crossing rate = natural fruit-set rate of self-incompatible inflorescences (self-compatible inflorescences with stamens removed);
cross-pollinator shortage rate of inflorescences = 1 − natural fruit-set rate of self-incompatible inflorescences (self-compatible inflorescences with stamens removed).

Within population: In the 10 populations, 10 inflorescences each year were bagged before flower opening, had their stamens removed after flower opening (two inflorescences from each population), and were paired with whole flowers from another 10 inflorescences (two inflorescences from each population). All flowers were then pollinated under natural conditions. The following attributes were calculated:
pollinator shortage rate within population = 1 − (natural fruit-set rate of whole flowers / artificial-crossing fruit-set rate of whole flowers);
selfing (inbreeding) rate within population (mating between flowers in the same population) = natural fruit-set rate of whole flowers − fruit-set rate of stamenless flowers among populations (the results from the "Among populations" section were used).

Among populations: B. orientale is pollinated by very small insects. Can these pollinators carry pollen between populations, and can this be used to test the outbreeding of B. orientale? In two adjacent populations located 30 m apart, all inflorescences of one population were bagged each year before flower opening, had their stamens removed after flower opening, and were then paired with whole inflorescences from the other population to receive natural pollination. The following attributes were calculated:
pollinator shortage rate among populations = 1 − fruit-set rate of stamenless flowers;
crossing (outbreeding) rate among populations = fruit-set rate of stamenless flowers.

Results
Shape and structure of flowers and flowering phenology characteristics
Shape, structure and flowering phenology: Each clonal series of B. orientale can include 300 to 400 related individuals, and each individual can simultaneously produce two to three inflorescences. The flowering season begins in early June; full blossom is in mid-July, and late blossom is in early October. The flower opens when the scape has grown to 10 cm to 20 cm in length. Under natural conditions, an unpollinated single flower remains open for 7.3 ± 2.3 d (n = 50), and an unfertilized flower falls off completely from the rachis. The ovary begins to swell 2 d after pollination. After 2 months of development, the capsule matures and splits to scatter the seeds.

Labellum and stigma shapes and positional relationship between the anther and stigma: The labellum of B. orientale is attached to the column base by a movable joint, forming a see-saw lip 4 mm in length and 2 mm in width. The fore lip has a groove in the middle that connects to the shallow spoon-shaped concavity on the back lip. The mastoid glands on the groove fringe secrete mucous substances that flow into the concavity on the back lip. The column is largely cylindrical, ca. 3 mm long and 2 mm wide, and has one 1.5 mm long column tooth on each side at the top.
The stigma cavity has an ovate-elliptic shape and is located at the upper part of the column. It is 1.4 mm to 1.5 mm long and 0.8 mm to 0.9 mm wide, and it is usually full of a glutinous substance after flower opening (Figure 1A). Four elliptic pollinaria (two pairs, each with no viscid disc) are exposed when the anther cap on the lateral fringe of the stigma cavity splits off during flower opening (Figures 1B and 1C). A lip-shaped rostellum exists between the stigma cavity and the anther bed. The lower surface of the rostellum breaks upon gentle squeezing and exudes glutinous substances (Figure 1D).

Flowering biology characteristics
Histological chemistry of secretory glands on the labellum: After staining with IKI, the original color of the labellum persisted, indicating the absence of starch.

Fruit-set rate, pollen activity, and stigma receptivity at different pollination periods: All sampled pairs of pollinium and stigma were stained blue-black, and the reference pairs showed no color change. Therefore, the pollen was active and the stigmas were receptive throughout the flowering season. No dichogamy was observed. The fruit-set rates at different pollination periods are summarized in Table 1. Before flower fading, cross-pollination of all flowers had high affinity, and self-pollination revealed both self-compatibility and self-incompatibility among plants, regardless of the pollination period.

Odor analysis: According to the gas chromatogram of the volatile elements from B. orientale (Figure 2) and the NIST database, the odor of B. orientale contained sesquiterpenes, α-copaene, and several lipoids. These substances are food for pollinating insects and produce rancid smells [35].

Visiting behavior: The visitor landed on the upper part of the labellum and pushed the labellum downwards, opening an interstice between the lower labellum and the column. The visitor then foraged into the interstice and, as the labellum closed upwards because of the reduced pressing weight, became clamped at its head and chest. The ridged back of the visitor was pressed into the stigma cavity (Figure 1E). When the visitor struggled to escape, it had to exit through the entry route because of the obstacles of the labellum lobelet and column arm. It stretched its belly upwards and kicked the upper labellum with its back legs to increase the space between the labellum and the column. The limited flexibility of the labellum forced the visitor to break the rostellum with its chest and back and to carry the glutinous secretion, with the uncovered pollinium, before it could escape (Figure 1F). When the pollinium-carrying insect repeated the above process on another flower, the foreign pollinium on the insect's back was smeared into the stigma cavity as the insect attempted to exit. The new pollinium was then picked up and taken away, thereby achieving cross-pollination. Given that the B. orientale raceme has many flowers, some visitors foraged on other plants and effected cross-pollination after having escaped from one flower. Nevertheless, many more visitors continued to forage on the same inflorescence or the same clonal plant, resulting in cross-pollination within the same plant (geitonogamy).

Breeding system
No difference was observed between the fruit-set rate for artificial self-pollination (90.43% ± 6.93%, n = 30) and that for artificial cross-pollination (91.55% ± 6.30%, n = 30) in self-compatible plants (t = 0.65631, d.f. = 58, P > 0.05). The absence of a difference indicates that both selfing and crossing are effective breeding methods for B. orientale.
The fruit-set rate of flowers that were bagged before opening was zero. This result indicates that B. orientale cannot achieve sexual reproduction by automatic self-pollination or generate asexual seeds by apomixis under natural conditions. The stamenless flowers bagged before opening had no seed set, which also confirms the absence of apomixis. The difference between the control and artificial self-pollination, and more notably that between the control (10.89% ± 6.43%, n = 30) and artificial cross-pollination (t = 49.1082, d.f. = 58, P < 0.01), revealed a shortage of pollinators for complete insemination under natural conditions. Therefore, the mating system of the self-compatible B. orientale plants is a hybrid of crossing and selfing. For the self-incompatible plants, both the bagged natural flowers and the bagged stamenless flowers had no seed set. No difference exists between the cross-pollination fruit-set rate of the self-incompatible plants (92.22% ± 7.44%, n = 30) and that of the self-compatible plants (t = 0.37453, d.f. = 58, P > 0.05). The fruit-set rates of self-pollination and same-plant cross-pollination were zero. These findings indicate that these plants are self-incompatible and that their mating system is strict crossing. The natural fruit-set rate of the self-incompatible plants (5.78% ± 6.49%, n = 30) was substantially different from that of the self-compatible plants (t = 3.06498, d.f. = 58, P < 0.01). This result confirms that the insects effect same-plant cross-pollination and that the fruit-set rate differs because of self-incompatibility versus self-compatibility (Table 2).

Inbreeding depression
The seed number and seed germination rate of each fruit from selfing and crossing were calculated (Table 3). No difference was found between selfing and crossing in either the seed number (μ = 0.1156, d.f. = 78, P > 0.05) or the seed germination rate (μ = 0.09888, d.f. = 78, P > 0.05). Compared with self-incompatible plants, the self-compatible plants could use insect pollination much more effectively to produce more seeds. This result suggests that selfing can increase the seed number of the population. Using the seed amount to calculate the relative fitness of selfing, Ws/Wc = 0.9763, giving an inbreeding depression of δ = 1 − Ws/Wc = 0.024.

Table 4 shows the pollinator shortage rate and the selfing rate of each population. The natural fruit-set rate of B. orientale was 10.89%, and the crossing and selfing rates were 5.50% (50.51% of the total fruit-set rate) and 5.39% (49.49% of the total fruit-set rate), respectively. The pollinator shortage rate = 1 − 10.89 / 91.55 = 88.10%. Thus, the behavior of the pollinating insects induced same-plant cross-pollination in B. orientale, and the percentages of self-pollination and cross-pollination were almost equal. The cross-pollinator (behavior) shortage rate of the inflorescences = 1 − 5.50 / 91.55 = 93.99%. The pollinator shortage rate within the population = 1 − 8.59 / 91.55 = 90.62%. The inbreeding (within population) rate was 5.11%, the outbreeding (among populations) rate was only 0.34%, and the cross-pollinator (behavior) shortage rate among populations was 99.63% (1 − 0.34 / 91.55). These results show that inbreeding (within population) is predominant in B. orientale. These rates are recomputed in the short sketch below.
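As a sanity check on the arithmetic above, the following minimal Python sketch (not part of the original study; the variable names, and the normalisation of the among-population rate by the artificial-crossing ceiling, are our assumptions) reproduces the reported shortage rates and the inbreeding depression.

# All fruit-set values are percentages taken from the text above.
ARTIFICIAL_CROSS = 91.55  # fruit-set rate under hand cross-pollination (ceiling)

def shortage(observed):
    # Shortage rate relative to the artificial-crossing ceiling (see Methods).
    return 1 - observed / ARTIFICIAL_CROSS

print(f"overall pollinator shortage:     {shortage(10.89):.2%}")  # 88.10%
print(f"cross-pollinator, inflorescence: {shortage(5.50):.2%}")   # 93.99%
print(f"within-population shortage:      {shortage(8.59):.2%}")   # 90.62%
print(f"among-population shortage:       {shortage(0.34):.2%}")   # 99.63%

# Inbreeding depression from the relative fitness of selfed offspring.
Ws_over_Wc = 0.9763
print(f"inbreeding depression delta:     {1 - Ws_over_Wc:.3f}")   # 0.024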
Discussion
The floral shape is one of the most important aspects of the interaction between plants and pollinators, determining the efficiency of nectar fetching by the pollinator, the pollen attachment, and the pollen acquisition by the stigma from pollinating agents [36]. The observations of the flowering phenology and flowering biology characteristics of B. orientale showed that the structures of the flower, such as the lip and stigma morphology and the anther and stigma positions, are suitable for pollination by small insects. The long opening period of the flowers and the pollen viability and stigma receptivity maintained throughout the florescence improve the pollination success rate under conditions of insufficient pollinators. The floral odor, containing α-copaene and lipoids, sends food signals to pollinators and attracts their visits. The successful pollination of B. orientale depends on the flower's excretion, which resembles putrid berries and thus attracts and rewards pollinating insects, completing pollination via the unique flower structure. The cross-pollination mechanism that uses the floral parts and the glutinous substance as pollination implements is achieved by a series of ingenious "designs" rather than by the function of a single component. The unique floral structure has a clear predisposition for cross-pollination. The natural fruit ratio and the rate of pollinator scarcity show that both selfing and outcrossing occur mainly within the same population (the within-population fruit ratio is 5.11%), while the outbreeding rate between two populations 30 m apart is 0.34%, which is 6.65% of the within-population fruit rate. Therefore, pollination under natural conditions occurs mainly within a population, and the chances of selfing (including geitonogamy) and of outcrossing between plants are nearly equal (5.39% vs. 5.50%), showing that inbreeding is predominant in B. orientale. However, the visiting insects may bring pollinaria into the stigma cavity of the same inflorescence or of a different inflorescence of the same plant, because each inflorescence has many flowers. Self-compatible plants, by using both crossing and selfing, could acquire more pollination opportunities, improving gross seed output and increasing the reproductive assurance coefficient despite the pollinator deficiency. The self-incompatible plants in the populations use strict crossing to prevent self-pollination and same-plant cross-insemination. This mixed mating system may have developed to reduce the selective pressure caused by frequent selfing and to ensure both reproduction and the predominance of crossing in the populations, forming the cross-mixed mating system. Inbreeding depression may greatly offset the benefits of inbreeding, whereas outbreeding is easily affected by environmental conditions and is thus costly (in this study, 99.63% of the flowers were not fertilized). The evolutionary compromise gave rise to the mixed mating system as a trade-off between inbreeding and outbreeding. The co-existence of self-compatibility and self-incompatibility in the plants, and the partial reliance on same-plant cross-insemination, also represent this trade-off. The selfed and crossed offspring of the plants in each population were analyzed by comparing the seed number and the seed germination rate of each capsule to estimate the degree of inbreeding depression. No difference was found, and selfing did not show a higher mating cost than crossing. Therefore, with seed production and seed germination rate as indicators of inbreeding depression, selfing would evolve as long as pollinators are lacking. This phenomenon is attributed to the absence of high costs in pollen and seed discounting (Figure 2). The change in the pollination environment over the three flowering seasons was recorded.
The fruit-set rates of natural, stamenless, and cross-pollinated unabridged flowers were compared (Figure 3). A significant difference was found between the cross-pollination rate and the self-pollination rate within and among the flowering seasons. The low visiting rate of cross-pollinators led to the remarkably low fruit-set rate of stamenless flowers and self-incompatible flowers. More importantly, the difference in fruit-set rate among the stamenless, self-incompatible, and unabridged flowers demonstrated that selfing among different flowers on the same plant could increase the fruit-set rate when pollinators were lacking or absent. The absence of pollination in many flowers during the flowering season suggested that cross- and self-pollination were both extremely rare.

The breeding assurance that persisted in the biological environment of the B. orientale population resulted in the evolution of B. orientale from strict crossing to mixed mating. Therefore, the 3.5% of individuals in the population that are self-incompatible can be considered a remnant of strict crossing, showing that B. orientale originally used a strategy of strict crossing. Given the low inbreeding depression (δ = 0.024), strict crossing breaks down when pollinators are lacking. However, retaining some strictly crossing individuals would be beneficial, balancing the mating system and preventing over-selfing.

We compared the non-seed-set percentage of the stamenless flowers (or self-incompatible flowers) under natural conditions with that of the artificially cross-pollinated flowers to quantify the visiting rate of pollinators. During the 3-year observation, only some of the flowers in the populations set seed after being visited, regardless of how the pollinators changed. The selfing rate fluctuated only within a small range among populations and over time (Figure 3). Consistently, almost 90% of the flowers in the populations were not pollinated, regardless of how many flowers were visited by pollinators each year, and the crossing rate was always low. The pollination rate (including the self-pollination rate) still has rather large potential for improvement, and the concurrence of selfing and crossing is ecologically favorable. These data thoroughly demonstrate the occurrence and maintenance of the cross-mixed mating system as shaped by pollinating behavior. The pollinators successively visited neighboring flowers and inflorescences of self-compatible and self-incompatible plants; these populations contained many clones, and each clone had many multi-flower inflorescences.

The findings not only illustrate the reproductive countermeasure of selfing in the mating-system evolution of B. orientale but also extend the significance of our research to the evolution of the mixed mating system. In wild populations, uncertainty in the pollinator environment is the norm [9,27,37]. The developmental mechanism of the flower not only promotes crossing but also maximizes pollinator-mediated crossing while permitting partial selfing. This addresses the uncertain pollinator environment and maintains both the advantage of crossing and the breeding assurance of selfing [9]. Thus, a cross-mixed mating system that fluctuates with a varying pollinator environment is developed and maintained.
6,323.8
2014-01-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Clinical and Functional Characterization of a Novel Mutation in Lamin A/C Gene in a Multigenerational Family with Arrhythmogenic Cardiac Laminopathy

Mutations in the lamin A/C gene (LMNA) have been associated with dilated cardiomyopathy (DCM) and, recently, with severe forms of arrhythmogenic right ventricular cardiomyopathy (ARVC). Both genetic and phenotypic overlap between DCM and ARVC has been observed; the molecular pathomechanisms leading to the cardiac phenotypes caused by LMNA mutations are not yet fully elucidated. This study involved a large Italian family, spanning 4 generations, with arrhythmogenic cardiomyopathy of different phenotypes, including ARVC, DCM, conduction system defects, ventricular arrhythmias, and sudden cardiac death. Mutation screening of LMNA and the ARVC-related genes PKP2, DSP, DSG2, DSC2, JUP, and CTNNA3 was performed. We identified a novel heterozygous mutation (c.418_438dup) in LMNA gene exon 2, occurring in a protein domain highly conserved across several species. This newly identified variant was not found in 250 ethnically matched control subjects. Genotype-phenotype correlation studies suggested co-segregation of the LMNA mutation with the disease phenotype and an incomplete, age-related penetrance. Based on clinical, pedigree, and molecular genetic data, this mutation was considered likely disease-causing. To clarify its potential pathophysiologic impact, functional characterization of this LMNA mutant was performed in cultured cardiomyocytes expressing EGFP-tagged wild-type and mutated LMNA constructs, and indicated increased nuclear envelope fragility, leading to stress-induced apoptosis, as the main pathogenetic mechanism. This study further expands the role of the LMNA gene in the pathogenesis of cardiac laminopathies, suggesting that LMNA should be included in mutation screening of patients with suspected arrhythmogenic cardiomyopathy, particularly when they have ECG evidence of conduction defects. The combination of clinical, genetic, and functional data contributes insights into the pathogenesis of this form of life-threatening arrhythmogenic cardiac laminopathy.

Introduction

Lamins A and C, encoded by the lamin A/C gene (LMNA), are major structural components of the nuclear lamina, a protein meshwork supporting the inner nuclear membrane [1]. In addition to sustaining the structural integrity and mechanical stability of the nuclear envelope, lamins are involved in multiple cellular processes, such as chromatin organization, DNA replication, gene regulation, and nucleo-cytoskeletal coupling [2]. LMNA gene mutations are implicated in a wide spectrum of laminopathies, inherited diseases characterized by phenotypic heterogeneity, including cardiac and skeletal myopathies, lipodystrophy, peripheral neuropathy, and premature aging syndromes [1,3]. The cardiac phenotype of laminopathies is characterized by conduction system disorders (CD), arrhythmias, and dilated cardiomyopathy (DCM) [4]. Many LMNA mutation carriers have a poor prognosis [5], due to a high rate of major cardiac events, such as sudden cardiac death (SCD), life-threatening ventricular arrhythmias, extreme bradycardia due to high-degree atrioventricular block, and progression to end-stage heart failure [4]. In addition to LMNA DCM-CD, some atypical forms of LMNA-related cardiac diseases have been reported [6,7].
Recently, severe forms of arrhythmogenic right ventricular cardiomyopathy (ARVC) have been linked to lamin A/C gene mutations [8], and both genetic and phenotypic overlap between DCM and ARVC has been observed [8][9][10][11]. Although the role of lamins in cell functions has been widely investigated, the pathophysiological mechanisms leading to the cardiac phenotypes caused by LMNA mutations are not yet fully understood [1][2][3]. In this study, we detected a novel LMNA gene mutation in a large family with arrhythmogenic cardiomyopathy of different phenotypes, including ARVC, DCM, conduction disturbances, arrhythmias, and sudden cardiac death (SCD). We investigated the involvement of the LMNA gene in the pathogenesis of this arrhythmogenic, familial cardiac laminopathy and functionally characterized the newly identified LMNA mutant.

Ethics Statement

All participants provided written informed consent. The Ethics Committee of University Hospital Consortium, Policlinico of Bari, Italy approved the study. This study conforms to the principles outlined in the Declaration of Helsinki (World Medical Association Declaration of Helsinki).

Genetic Analysis and Mutation Detection

Genomic DNA was obtained from peripheral blood samples using the Wizard Genomic DNA Purification kit (Promega Corporation, Madison, Wisconsin, USA), as recommended by the manufacturer. Mutation screening of the plakophilin-2 (PKP2), desmoplakin (DSP), desmoglein-2 (DSG2), desmocollin-2 (DSC2), plakoglobin (JUP), and αT-catenin (CTNNA3) genes was performed as previously reported [14,15]. All coding exons of the LMNA gene were amplified by PCR and analyzed by High Resolution Melting, as reported by Millat et al [16]. The PCR products of exons 2 and 10 were analyzed by direct sequencing on an ABI 310 sequencer. Numbering of the LMNA nucleotides refers to GenBank accession number NM_170707.2. A control group of 250 healthy and unrelated Italian subjects (500 alleles) was used to exclude the possibility that any identified variation could be due to DNA polymorphism. All controls were unrelated healthy volunteers matched to the index patient by ancestry from the general Italian population. Moreover, all identified variants were systematically searched for in dbSNP (http://www.ncbi.nlm.nih.gov/projects/SNP/), in the 1000 Genomes Project database (http://www.1000genomes.org), and in the Exome Variant Server (http://evs.gs.washington.edu/EVS/).

Functional Studies in HL-1 Cardiomyocytes

The functional characterization of this mutated lamin A protein (LMNA) was performed in cultured HL-1 cardiomyocytes expressing EGFP-N-terminally tagged wild-type (WT) and mutated LMNA. Live-imaging experiments were carried out in a BioStation IM device (Nikon). The acquisition timing was set to every 5 minutes for 16-32 hours, and up to 10 cell fields were captured at each time point. Hyperosmotic stress was induced by incubating HL-1 cells in culture medium with 300 mM mannitol for 2 h. Hypoxic stress was induced in a Hypoxia Modular Incubator Chamber (Billups-Rothenberg Inc) by applying a flow rate of 4 liters/minute of 100% N2 for 15 min; the cells, in the hypoxia chamber saturated with N2, were then placed at 37°C for 8 h. Oxidative stress was induced by incubating HL-1 cells in culture medium plus 300 μM H2O2 for 4 h. For further details, see S1 Methods.
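As a quick consistency check on the variant nomenclature used above, the sketch below verifies that a duplication of cDNA positions 418-438 is in-frame and spans codons 140-146; it is illustrative arithmetic of ours, not part of the study's analysis pipeline.

```python
# Hedged sketch: check that c.418_438dup is in-frame and maps to
# p.Leu140_Ala146dup (seven residues, LLNSKEA). Illustrative only.
start, end = 418, 438                  # duplicated CDS positions (1-based, inclusive)
dup_len = end - start + 1              # 21 nucleotides
assert dup_len % 3 == 0                # multiple of 3 -> no frameshift
n_residues = dup_len // 3              # 7 duplicated amino acids

# CDS nucleotide n belongs to codon (n - 1) // 3 + 1
first_codon = (start - 1) // 3 + 1     # 140 -> Leu140
last_codon = (end - 1) // 3 + 1        # 146 -> Ala146
print(dup_len, n_residues, first_codon, last_codon)   # 21 7 140 146
```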
Statistical Analysis

Continuous variables were expressed as mean values ± standard deviation, and frequencies as the number and percentage of patients. Between-group comparisons were made using the nonparametric Wilcoxon rank-sum test. Frequencies were compared using Fisher's exact test. The analyses were performed using STATA software, version 12 (StataCorp, College Station, TX, USA). A P-value of <0.05 was considered statistically significant.

Genetic Analysis

Mutation screening identified the novel heterozygous LMNA variant c.418_438dup, which predicts an in-frame duplication of seven amino acids (LLNSKEA), from position 140 to position 146, in the lamin A/C protein, without a frame shift in the open reading frame, and affects a highly conserved amino acid region across several species (Fig. 1B), suggesting a possible pathogenetic role. This LMNA variant was subsequently detected in 10 of 20 family members who underwent family cascade genetic screening (Fig. 1C). The index patient (subject II-7 in Fig. 1C) was also screened for mutations in the ARVC-related genes PKP2, DSP, DSG2, DSC2, JUP, and CTNNA3, without positive findings. The novel LMNA variant was not present in 250 healthy control individuals, nor was it found in the above-listed variant databases.

Clinical Findings

The pedigree structure (Fig. 1C) and clinical characteristics of all evaluated subjects (Table 1) are presented. ECG, Holter, and cardiac structural abnormalities of family members carrying the mutated LMNA variant are summarized in Table 2. The pedigree was consistent with autosomal dominant transmission (Fig. 1C). The index patient (II-7) presented in 2001 with palpitations. On the ECG, first-degree atrioventricular (AV) block and premature ventricular complexes (PVCs) of left bundle-branch block (LBBB) morphology were detected (Fig. 2A). Frequent (up to 35,000 per day) multifocal PVCs and runs of non-sustained ventricular tachycardia (NSVT) were recorded on 24-hour Holter monitoring. An echocardiogram showed normal size and preserved global function of both ventricles. Coronary angiography was normal, and right ventricle (RV) angiography showed dyskinetic areas with bulging at the RV free wall. On the basis of clinical and instrumental features fulfilling 1 major plus 2 minor criteria, the index patient was diagnosed with ARVC according to the original TFC [12] and received a prophylactic implantable cardioverter-defibrillator (ICD). He had a positive family history for DCM and SCD; his mother (I-2) died suddenly at rest at the age of 39 years, and his brother (II-1) died from SCD at the age of 43 years while playing soccer. No autopsy data were available. Another brother (II-5), suffering from DCM, underwent cardiac transplantation at the age of 48 years and died 14 years later of refractory heart failure.

Among the 10 carriers of the mutated LMNA variant, subject II-4 received a pacemaker in 2002, at the age of 58 years, due to atrial fibrillation (AF) with a slow ventricular response alternating with sinus bradycardia. Subject III-1 developed DCM. Subject III-5, who fulfilled modified TFC for a borderline ARVC diagnosis [17], showed left ventricle (LV) systolic dysfunction without dilatation (Table 2). The distribution of major and minor criteria, according to the modified TFC [17], is reported in Table 2. Abnormal ECG findings were present only in family members carrying the mutated LMNA variant, seven (70%) of whom had, at clinical presentation or during follow-up, conduction disturbances (sinus bradycardia and/or first-, second-, or third-degree AV block) (Table 2; Fig. 2A, B, and C); 2 subjects developed AF. CMR imaging was performed in 15 subjects, 7 of whom carried the mutated LMNA variant.
Four of these 7 (57%) had RV involvement with a reduced RVEF (Table 2), and one (subject III-5) had dyskinetic areas with bulging at the RV free wall (Fig. 3Aa; for video file, see S1 Movie). Taken together, mean RVEF values in LMNA mutation carriers were significantly reduced in comparison with those assessed in LMNA mutation-negative subjects (Table 1). Myocardial fibrosis by late gadolinium enhancement (LGE) imaging was detected in 4 of 7 (57%) of the LMNA mutation-positive patients (Table 2) and in none of the 8 LMNA mutation-negative subjects (Table 1). LGE-positive and LMNA mutation-positive subjects were characterized by older age (39 ± 11 vs. 15 ± 4 years, p = 0.034) and longer PR interval (268 ± 77 vs. 143 ± 6 msec, p = 0.032), compared with LGE-negative and LMNA mutation-positive subjects, suggesting an age-related phenotype expression. Furthermore, all LGE-positive subjects had NSVT, and one developed sustained VT. LGE was located in the basal interventricular septum and LV inferior wall, and in all subjects the pattern was linear and localized in the midwall myocardium (Fig. 3). Patients receiving an ICD underwent coronary angiography before device implantation, which showed normal coronary arteries.

During a median follow-up of 122 (range: 12 to 162) months, five patients (II-7, III-1, III-3, III-5, and IV-2) received an ICD in primary prevention, 4 of whom had AV conduction defects. Subject III-1 underwent ICD implantation and unsuccessful VT transcatheter ablation and, 7 years later, at the age of 45, heart transplantation for both sustained VT recurrences in storm (Fig. 2D) and subsequent LV function deterioration (LVEF 30%). Subject IV-2, at the age of 24 years, received a prophylactic ICD that, 4 months after device implantation, terminated sustained VT (mean cycle length 281 msec) by antitachycardia pacing. Patient III-8 refused prophylactic ICD implantation and has an implantable loop recorder, which detected asymptomatic asystoles of up to 3.5 sec (Fig. 2C). During follow-up, the index patient continued to have episodes of NSVT, showed paroxysmal AF documented in the ICD memory and, 13 years after diagnosis, developed complete AV block (Fig. 2B). Two of the 10 (20%) LMNA mutation-positive family members (subjects IV-6 and IV-8, aged 14 and 12 years, respectively) were asymptomatic, free of significant arrhythmias, and had normal cardiac function. All LMNA mutation-negative family members were clinically asymptomatic and phenotype-negative after cardiac evaluation (Table 1). We did not observe any overlap with other known laminopathies in this family.

Characterization of LMNA Mutant in HL-1 Cardiomyocytes

To functionally characterize the newly identified LMNA mutation, two constructs were generated to express both WT and mutated LMNA as N-terminally tagged EGFP-fusion proteins. For brevity, the (p.Leu140_Ala146dup) LMNA mutation will be termed "LMNA DUP" in the following results and figures. To analyze LMNA subcellular localization, as well as the overall organization of the nuclear envelope (NE) upon ectopic LMNA expression, HL-1 cardiomyocytes were transiently transfected with the LMNA constructs and then subjected to fluorescent confocal microscopy analysis and co-localization experiments with other nuclear components. As expected, LMNA WT was uniformly distributed along the NE rim and, typically, in intranuclear invaginations of the nuclear membrane (Fig. 4, LMNA WT, arrow). Cardiomyocytes expressing LMNA WT showed nuclei with regular rounded shapes.
Co-localization experiments using antibodies against the nuclear pore complex showed that LMNA WT is tightly associated with nuclear pores (Fig. 4, LMNA WT, Nuclear Pores), which, in turn, showed the expected regular distribution along the whole nuclear periphery. In contrast, the localization of LMNA DUP appeared profoundly impaired: the protein was clearly expressed in aggregates of different sizes, not uniformly distributed along the NE, and notably absent from the intranuclear invaginations of the NE. In addition to LMNA disorganization, nuclear pores were also altered in the LMNA DUP-expressing cells, becoming organized in clusters (Fig. 4, LMNA DUP, Nuclear Pores). Interestingly, as assessed by live imaging, both lamin A proteins have the same rate of synthesis and stability in cultured cardiomyocytes. Moreover, western blotting analysis of lysates from the same cells clearly showed that the expression levels of the WT and DUP LMNA constructs were comparable, regardless of the antibody used in the analysis (S1 Fig). In addition, LMNA DUP-expressing cells were able to cycle and divide normally, like LMNA WT-expressing cells (S2 Fig).

Analysis of Nuclear Envelope Integrity upon Cell Stresses in WT and Mutant LMNA-Expressing HL-1 Cells

In order to assess the functional consequences of the altered nuclear lamina structure, the integrity of the NE of LMNA DUP-expressing cells, and its resistance to cellular stress, were checked. Interestingly, cardiac myocytes, differently from other cell types, do not exhibit the volume regulatory response after exposure to hypertonic conditions [18]. Indeed, an osmotically induced and uncompensated cell shrinkage may strain the nucleus. To examine NE integrity in this experimental condition, we monitored the subcellular location of the nuclear marker CellLight Nucleus-Red Fluorescent Protein (NRF, Invitrogen), expressed simultaneously with the lamin A proteins in HL-1 cells by transient transfection. As shown in Fig. 5, the red nuclear marker was confined to the nucleus in both LMNA WT- and DUP-expressing cells in control conditions, indicating that nuclear integrity was not significantly impaired in LMNA DUP-expressing cells under resting conditions (Fig. 5, LMNA WT, LMNA DUP, CTR). After 2 h in 300 mM mannitol added to the culture medium, the nuclear morphology of the LMNA WT-expressing cells, as well as the LMNA WT labelling, was slightly compromised (Fig. 5A, LMNA WT, Hyper), but the nuclear marker was still contained within the nucleus (Fig. 5A, LMNA WT, Hyper, inset); this suggested that the NE was not leaky under this challenging condition. In contrast, under the same condition, extensive nuclear deformations appeared in LMNA DUP-expressing cells, and the red nuclear marker escaped from the nucleus into the cytoplasm, suggesting that NE integrity was impaired under this stressing condition (Fig. 5B, LMNA DUP, Hyper). Comparable leakage under the hypoxic and oxidative challenges indicated that the NE of the LMNA DUP-expressing cardiomyocytes was more fragile under different cellular stresses (Fig. 5). The expected final consequence of the increased NE fragility under different cellular stressors is apoptosis. We performed the apoptosis assay using Ethidium homodimer-1 (EthD-1), a membrane-impermeable fluorescent dye that only enters dying cells with leaky plasma membranes and binds to DNA in the nucleus, emitting red fluorescence (Fig. 6A). As shown in Fig. 6B, under all the stressing conditions tested, apoptosis increased by about 4 times in cells expressing LMNA DUP, compared to WT LMNA-expressing cells.
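As an illustration of how such a difference in apoptotic fractions can be tested, the sketch below applies Fisher's exact test (the frequency test named in the statistical methods) to hypothetical counts of EthD-1-positive and -negative cells; the counts are invented for the example and are not the study's data.

```python
# Hedged sketch with hypothetical counts (not the study's data): comparing
# the fraction of EthD-1-positive (apoptotic) cells between conditions.
from scipy.stats import fisher_exact

# rows: LMNA WT, LMNA DUP; columns: EthD-1-positive, EthD-1-negative
table = [[12, 188],    # hypothetical: 6% apoptotic in WT
         [48, 152]]    # hypothetical: 24% apoptotic in DUP (~4-fold)
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```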
We then analysed whether the canonical Wnt/β-catenin signalling was altered in cells expressing LMNA DUP, to tentatively identify the pathomechanism underlying this form of cardiomyopathy. When we analysed β-catenin localization and phosphorylation levels in LMNA-expressing HL-1 cells, we found that β-catenin was localized to the cell-to-cell contacts in HL-1 cardiomyocytes expressing either WT or mutated lamin A (S3A Fig). Moreover, the amount of phospho-β-catenin was unchanged upon LMNA DUP expression in HL-1 cells, even under hypoxic conditions, suggesting that the canonical Wnt signaling pathway was not suppressed in LMNA DUP-expressing cardiomyocytes (S3B Fig).

Discussion

In this study, we identified a novel lamin A/C gene mutation associated with a familial form of arrhythmogenic cardiac laminopathy and characterized it, defining a possible pathogenic mechanism leading to disease development. Mutations in the LMNA gene account for approximately 6-8% of all DCMs and 33% of DCM cases associated with cardiac conduction defects [4,[19][20][21]. In recent years, mutations of the lamin A/C gene associated with an ARVC-related phenotype were found [8,22]. Moreover, a combination of morphofunctional phenotypes between DCM and ARVC was highlighted, suggesting a new classification of cardiomyopathies [23].

The newly identified mutated LMNA variant can be convincingly considered causative of the clinical features observed in this family for several reasons. First, LMNA is a disease gene for both cardiac laminopathies and ARVC [4,8,22]. Additionally, co-segregation of the novel lamin A/C mutation with the disease phenotype was observed within the family. The subjects carrying the LMNA variant displayed arrhythmogenic cardiomyopathy of different phenotypes, including ARVC, DCM, LV systolic dysfunction without LV enlargement, conduction system defects, and arrhythmias, showing intra-familial variability of the cardiac phenotype [4,13]. Importantly, we documented NSVT in 60%, and conduction system disturbances in 70%, of LMNA mutation-positive family members, emphasizing the value of family genetic screening to identify silent mutation carriers [13] and the need for tailored clinical monitoring aimed at undertaking early treatment strategies and preventing sudden death [24]. In agreement with previous studies [4,13,20], two LMNA mutation-positive family members under the age of 20 years had no evidence of cardiac structural abnormalities, suggesting incomplete and age-related penetrance of the mutation. Moreover, absence of the mutation was associated with normal clinical status in all evaluated relatives of the index patient. The newly detected LMNA variant lies in an amino acid region localized to coil 1B of the central α-helical rod domain of the lamin A/C proteins, highly conserved among several species, and was not found in 500 control chromosomes or in the aforementioned databases of genetic variants (see Methods).

In our study, the genetic and clinical data for the LMNA mutation in this family were strengthened by functional studies. The in vitro characterization of this new LMNA variant showed that the mutated LMNA loses its uniform expression along the nuclear rim and perturbs nuclear shape and nuclear pore complex organization in cultured cardiomyocytes under resting conditions.
The loss of the higher-order assembly of the mutated lamin polymers probably leads to a loss of nuclear stability and an enhanced sensitivity to mechanical strain [25,26]; this LMNA mutant significantly increases nuclear envelope fragility upon different cellular stresses, such as hypertonic, hypoxic, and oxidative stresses. The leakage of the NE in mutated lamin A-expressing cardiomyocytes under hypertonic conditions suggested a decreased mechanical resistance of the NE. A similar nuclear fragility was observed under hypoxic and oxidative stresses. It is indeed possible that this newly identified LMNA mutation drastically decreases both the tolerance and the adaptation of the myocardium to stressing conditions, making cardiomyocytes more susceptible to nuclear breakage and cell death during mechanical stress [26].

It is recognized that lamin A is involved in the physical and functional connections between the nucleus and the cytoskeleton required for effective mechanotransduction in cells (for review, see [2]). It is therefore possible that the mutated lamin A causes not only a decreased mechanical resistance of the NE but also an altered nuclear-cytoskeletal coupling with impairment of the mechanotransduction machinery. Under hypoxic conditions, in which beating frequency and cellular work increase in cardiomyocytes [27] that are continuously subjected to mechanical strain from contraction cycles, the impairment of the nuclear-cytoskeletal connection may place inappropriate constraints on the NE, which can, in turn, lose its integrity due to the expression of the mutated lamin A. Moreover, it has been reported that nuclear-cytoplasmic compartmentalization can be profoundly affected by ROS, including H2O2, since nuclear transport factors are primary cellular targets for oxidants [28]. This effect, together with the nuclear pore clustering induced by the expression of mutated lamin A, may further affect the selective permeability of the NE, ultimately inducing massive nucleoplasm leakage, as observed in our functional studies. Importantly, the impairment of the nuclear-cytoskeletal connection due to the expression of the mutated lamin A may increase the energy cost of contraction and the oxygen demand, thus mimicking hypoxic stress even in the absence of physical exercise. Moreover, it has been reported that, in cardiomyocytes, hypoxic conditions increase ROS production [29], suggesting that both hypoxic and oxidative stresses can be continuously induced in mutated lamin A-expressing cardiomyocytes, even under resting conditions.

In addition to the decreased mechanical resistance to stressing conditions, it is possible that the newly identified LMNA mutant makes cardiomyocytes more prone to pro-apoptotic pathways, speeding up the cardiomyocyte apoptotic process once it is initiated by a stressing condition. One of the intracellular pathways altered in forms of ARVC due to desmoplakin mutations is the canonical Wnt/β-catenin signalling [30]. Suppression of this pathway induces adipogenesis, fibrogenesis, and apoptosis, the histological hallmark of the disease [31,32]. However, we found that the canonical Wnt signaling pathway was not suppressed in LMNA DUP-expressing cardiomyocytes. Further experiments will be necessary to identify the intracellular pathway(s) involved in the pathogenesis of the cardiolaminopathy described in this study. Regardless of the pathways, we showed that the final fatal consequence of this LMNA mutation is cell death under cell-stressing conditions.
Cardiomyocyte apoptosis may lead to the development of arrhythmias, potentially resulting in sudden cardiac death [33]. An arrhythmogenic effect of apoptosis may be mediated in at least two ways. First, in the process of dying, a cardiomyocyte passes through phases of increased excitability or becomes automatic, at least until it is dead. Second, around a random grouping of several such dead cardiomyocytes, the normal activation in that area of heart muscle is deranged and redirected in a way that provides a suitable anatomical substrate for re-entrant arrhythmias (for review, see [34]). Sudden cardiac death in patients with LMNA mutations may occur due to ventricular arrhythmias, bradyarrhythmias, or asystole [4,35]. Previous studies suggested that apoptosis in conduction system cardiomyocytes could cause either tachyarrhythmias or bradyarrhythmias, including complete AV block, as observed in our patients, probably playing an important role in the pathogenesis of sudden cardiac death [33].

The pathophysiological mechanisms leading to the cardiac phenotypes caused by LMNA mutations are not yet fully understood [25,26]. Our experimental data shed light on the clinical findings we collected. In this study, the majority of LMNA mutation-positive subjects had ventricular arrhythmias and/or conduction system defects, including severe arrhythmic phenotypes such as sustained VT and complete AV block, while cardiac function was variable. These findings are in line with previous observations showing that cardiac laminopathies carry a high arrhythmogenic risk, even when left ventricular ejection fraction is preserved [4,24,[35][36][37][38]. In our study, myocardial fibrosis by LGE-CMR was found in four LMNA mutation carriers who had documented NSVT, one of whom developed sustained VT, and 3 of whom showed conduction system disturbances. These findings agree with a recent study that included 41 lamin A/C mutation-positive subjects and showed an association of myocardial septal fibrosis with ventricular arrhythmias and a prolonged PR interval [39]. Furthermore, the typical pattern of LGE detected by CMR imaging in our patients was linear and midwall, predominantly located in the basal interventricular septum, consistent with the distribution of myocardial fibrosis previously described in lamin A/C mutation-positive patients [39,40].

Taken together, our clinical, genetic, and functional data allow us to hypothesize a possible disease mechanism by which the mutated LMNA variant causes decreased nuclear stability and impaired nuclear-cytoskeletal coupling, resulting in a higher susceptibility to nuclear rupture and cardiomyocyte apoptosis in tissue subjected to mechanical stress, like the heart [21,31,32,36,41]. Apoptosis may lead to heterogeneity of cardiac conduction and dispersion of refractoriness [42], providing a basis for the arrhythmias we observed in our patients. In addition, throughout the disease course, cardiomyocyte loss, likely repaired by replacement fibrosis, may provide a possible substrate for conduction block and re-entrant arrhythmias [42,43]; this hypothesis is in line with previous studies showing that LMNA mutation carriers with conduction defects and arrhythmias have myocardial fibrosis involving the cardiac conduction system, documented by histopathological examinations [21,36,41].
Our study suggests that myocardial fibrosis detected by LGE-CMR may be considered a marker of higher arrhythmic risk in patients with LMNA mutations, helping to identify those who would benefit from ICD implantation, in agreement with recent clinical findings [39,44]. However, there is growing evidence that life-threatening arrhythmias or sudden cardiac death may occur without myocardial fibrosis in arrhythmogenic cardiomyopathy [42,45]. In these cases, apoptosis-related enhanced excitability may play a role, which remains to be proved in further studies.

Some limitations of this study need to be mentioned. The findings of myocardial fibrosis by CMR imaging could only be documented in our patients without a pacemaker or ICD. Moreover, LGE-negative and LMNA mutation-positive family members were significantly younger than LGE-positive subjects, suggesting an age-related cardiac phenotype expression. Future larger studies, including CMR evaluations and long-term follow-up of healthy mutated subjects, should help to elucidate the timing of expression of the phenotypic traits.

Conclusions

In conclusion, our functional data, combined with the clinical and genetic findings, indicate LMNA p.Leu140_Ala146dup as a disease-causing mutation and suggest cardiomyocyte apoptosis as a possible molecular mechanism leading to the clinical features observed in this family, thus confirming the emerging role of the LMNA gene in the pathogenesis of a wide spectrum of cardiac laminopathies. The current family is a striking example of the possibility of shared cardiac phenotypes between laminopathies and arrhythmogenic cardiomyopathy. The major clinical implication of our findings is that the LMNA gene should be included in the mutational screening of patients with suspected arrhythmogenic cardiomyopathy, particularly when they have ECG evidence of conduction defects and/or myocardial septal fibrosis on CMR. These results, by integrating clinical, genetic, and functional data, could contribute to future studies aimed at improving risk stratification algorithms and testing possible tailored therapeutic approaches in patients with LMNA mutations.
5,976.6
2015-04-02T00:00:00.000
[ "Biology", "Medicine" ]
Quantification of myocardial strain assessed by cardiovascular magnetic resonance feature tracking in healthy subjects—influence of segmentation and analysis software

Objectives: Quantification of myocardial deformation by feature tracking is of growing interest in cardiovascular magnetic resonance. It allows the assessment of regional myocardial function based on cine images. However, image acquisition, post-processing, and interpretation are not standardized. We aimed to assess the influence of the segmentation procedure, such as slice selection, and of different types of analysis software on values and quantification of myocardial strain in healthy adults.

Methods: Healthy volunteers were retrospectively analyzed. Post-processing was performed using CVI42 and TomTec. Longitudinal and radial long-axis (LAX) strain were quantified using the 4-chamber, 3-chamber, and 2-chamber views. Circumferential and radial short-axis (SAX) strain were assessed in basal, midventricular, and apical short-axis views and using full coverage. Global and segmental strain values were compared with each other with respect to post-processing approach and analysis software package.

Results: We screened healthy volunteers studied at 1.5 or 3.0 T and included 67 (age 44.3 ± 16.3 years, 31 females). Circumferential and radial SAX strain values differed between a full-coverage approach and three short-axis slices (−17.6 ± 1.8% vs. −19.2 ± 2.3% and 29.1 ± 4.8% vs. 34.6 ± 7.1%). Different analysis software calculated significantly different strain values. Within the same vendor, different field strengths did not influence the calculated global longitudinal strain (GLS) (−17.0 ± 2.1% at 1.5 T vs. −17.0 ± 1.7% at 3 T, p = 0.845), and GLS was similar between genders (−17.4 ± 2.0% in females vs. −16.6 ± 1.8% in males, p = 0.098). Circumferential and radial strain differed between females and males (circumferential strain −18.2 ± 1.7% vs. −17.1 ± 1.8%, p = 0.029, and radial strain 30.7 ± 4.7% vs. 27.8 ± 4.6%, p = 0.047).

Conclusions: Myocardial deformation assessed by feature tracking depends on the segmentation procedure and the type of analysis software. Circumferential SAX and radial SAX strain depend on the number of slices used for feature tracking analysis. As known from other imaging modalities, GLS seems to be the most stable parameter. During follow-up studies, standardized conditions should be warranted.

Trial registration: Retrospectively registered.

Key Points
• Myocardial deformation assessed by feature tracking depends on the segmentation procedure.
• Global myocardial strain values differ significantly among vendors.
• Standardization in post-processing using CMR feature tracking is essential.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00330-020-07539-5.

Background

Quantification of myocardial deformation applying myocardial strain is of growing interest in cardiovascular magnetic resonance (CMR). For a few years, it has been applied in research, and different vendors have developed post-processing tools [1]. Left ventricular deformation can be quantified in three dimensions: longitudinal and circumferential strain, which describe ventricular shortening in the longitudinal and circumferential directions (negative strain), and radial strain, which characterizes wall thickening (positive strain) [15].
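For concreteness, the sign convention just described follows the Lagrangian strain definition; below is a minimal sketch of it, with example lengths chosen by us merely to land near the strain magnitudes reported in this paper, not taken from its data.

```python
# Minimal sketch of the Lagrangian strain convention described above;
# the example lengths are illustrative, not study data.
import numpy as np

def lagrangian_strain(L, L0):
    """Percent strain of a myocardial segment of length L relative to its
    end-diastolic reference L0: negative = shortening (longitudinal,
    circumferential), positive = thickening (radial)."""
    return 100.0 * (np.asarray(L, dtype=float) - L0) / L0

print(lagrangian_strain(8.3, 10.0))   # -17.0 -> systolic shortening
print(lagrangian_strain(13.0, 10.0))  # +30.0 -> wall thickening
```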
Assessment of regional myocardial function is well established in echocardiography using speckle tracking [12,15,16] but is also increasingly investigated in CMR using different techniques, such as strain encoding (SENC) [17,18], displacement encoding (DENSE) [19], and tagging [17,18,[20][21][22]. Feature tracking is a tool which, in contrast to the methods mentioned above, enables post-processing analysis of myocardial strain based on the routine steady-state free precession (SSFP) cine images acquired for the assessment of left ventricular (LV) function and volume [8,16,23]. It avoids the acquisition of additional images and saves time [23]. Pre-existing contours from the calculation of LV function can be reused for strain analysis, making it a time-saving method. For those reasons, feature tracking seems to be a beneficial tool, e.g., for follow-up examinations.

Even though publications regarding CMR strain analysis exist, standards for image acquisition and interpretation are still not established. Different vendors and different analysis procedures, such as slice selection, even within the same software, can heavily influence deformation values. This may lead to uncertainties in the comparison and interpretation of data. We aimed to analyze the influence of the segmentation procedure, such as slice selection, on values of quantification of myocardial strain in healthy adults. Additionally, we intended to analyze the influence of different software packages and to provide regional strain quantification.

Study population

We retrospectively screened 243 truly healthy subjects, who had been prospectively examined in former studies [24][25][26][27][28]. Exclusion criteria were known cardiovascular risk factors, any pre-existing diseases or medications, impaired LV ejection fraction (LVEF < 55%), or pathological findings in 12-lead ECG or CMR. Incomplete CMR data for feature tracking analysis led to exclusion; that included lack of long-axis (LAX) or short-axis (SAX) slices (n = 137) or a variable number of cardiac phases (n = 41). The ethics committee approved all studies. Informed written consent was obtained in concordance with the Helsinki Declaration.

LV function and volumes were quantified in a whole SAX stack according to the recommendation of the SCMR [30] applying CVI42 software (Version 4.1.2, Circle Cardiovascular Imaging Inc.). Endo- and epicardial contours were manually drawn in the end-diastolic and end-systolic phases. Papillary muscles were excluded from the LV volume.

Feature tracking

Feature tracking analysis was performed retrospectively using CVI42 software (prototype version 5.3.0, Circle Cardiovascular Imaging Inc.). Longitudinal strain and radial LAX strain (RS) were assessed in three LAX views: 4CV, 3CV, and 2CV (Fig. 1). Circumferential strain (CS) and RS SAX were analyzed using three SAX slices (basal, midventricular, and apical) in all subjects (Fig. 1). If available, strain was additionally assessed using a SAX full coverage (Fig. 2). Endo- and epicardial contours were manually drawn in the end-diastolic phase, defined as the phase with the largest LV volume. The end-diastolic phase had to be identical in all SAX and LAX slices of one subject. Trabeculae, papillary muscles, pericardium, and epicardial fat were consistently excluded from contouring. The left ventricular outflow tract (LVOT) was completely excluded in all SAX slices if seen in diastolic and/or systolic phases (Fig. 2). 2D strain analysis was assessed globally and segmentally for longitudinal, RS LAX, CS, and RS SAX strain.
Segmentation included both possibilities of slice selection (three slices versus the whole stack) and the segmentation of the left ventricle according to the AHA 17-segment model [31]. We excluded the apex (segment 17) from feature tracking analysis; thus, the 16-segment model was used. Tracking quality and segmentation were evaluated using software tools such as mesh, boundaries, or myocardial points. If contours did not follow the epi- or endocardial borders correctly, the delineation was retraced and adjusted. In case of remaining tracking issues, all corresponding segments were excluded. Incorrect segmentation (see Fig. 3) also led to exclusion. Excluded segments were not considered for global strain assessment. Strain results were compared between field strengths (1.5 T and 3 T) and between different numbers of SAX slices (three SAX slices versus full coverage) for CS and RS SAX, as well as between LAX and SAX analysis for RS. Bulls-eye plots visualizing segmental strain values were created using the Python package Matplotlib. Global strain analysis was repeated by the same observer (intra-observer) and by a different observer (inter-observer) in the same randomly selected subjects (n = 10).

Software comparison

All images were also analyzed with TomTec Image Arena (version 1.3.0.91, TomTec Imaging Systems GmbH) (Fig. 4). 4CV, 3CV, and 2CV were used for longitudinal and transversal (radial LAX) strain. CS and RS SAX were assessed using three SAX slices (basal, midventricular, and apical). Endo- and epicardial contours were manually drawn in the end-diastolic and end-systolic phases. Trabeculae and papillary muscles were excluded from the analysis, as was the LVOT. Tracking quality was checked manually, specifically whether contours followed the endo- and epicardial borders correctly, and contours were adjusted if necessary. Myocardial strain was analyzed on a global and segmental level.

[Figure caption] Post-processing using 2D strain analysis by CVI42. Endo- (red) and epicardial (green) contours were manually drawn in the end-diastolic phase in long axis (a-c) and short axis (d-f). 4-chamber view (a), 3-chamber view (b), and 2-chamber view (c) were included in long-axis strain analysis. For short-axis strain, contours were drawn in three short-axis slices: basal (d), midventricular (e), and apical (f).

Three LAX (4CV, 3CV, 2CV) and three SAX slices using the exact same slice number were considered for the software comparison.

Statistical analysis

Statistical analyses were performed using IBM SPSS Statistics version 23. We calculated mean values and standard deviation (SD) as well as median and interquartile ranges (IQR) for demographic parameters, LV function, and strain measurements. Volumes were indexed to body surface area (BSA) and height. The non-parametric Mann-Whitney U test for unpaired samples was used for comparisons of strain parameters between gender, analysis software, and field strength. Differences were considered statistically significant at p < 0.05. Intra- and inter-observer reproducibility were analyzed using the intra-class correlation coefficient (ICC) and 95% confidence interval (CI). ICC was classified as poor (ICC < 0.4), good (ICC = 0.4-0.75), or excellent (ICC > 0.75) [1].

Basic data

Sixty-seven healthy subjects (n = 36 at 1.5 T and n = 31 at 3 T) were included and analyzed (mean age 44.3 ± 16.3 years, n = 31 females). The proportion of men and the age distribution were balanced between the field-strength groups: the 1.5 T group had 19 (52.8%) and the 3 T group 17 (54.8%) male subjects.
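The Methods above note that bulls-eye plots of segmental strain were created with the Python package Matplotlib; the following is a hedged, self-contained sketch of such a 16-segment plot, with random example values and our own segment ordering rather than the study's actual data or code.

```python
# Hedged sketch of a 16-segment AHA bulls-eye plot with Matplotlib, in the
# spirit of the Methods above; values and segment ordering are illustrative.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def bullseye_16(ax, values, vmin=-25.0, vmax=0.0):
    """Draw a 16-segment bulls-eye (6 basal, 6 mid, 4 apical segments)
    on a polar axis; `values` ordered basal 1-6, mid 7-12, apical 13-16."""
    cmap = mpl.colormaps["RdBu_r"]
    norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)
    rings = [((2 / 3, 1.0), 6), ((1 / 3, 2 / 3), 6), ((0.0, 1 / 3), 4)]
    i = 0
    for (r0, r1), n in rings:
        width = 2 * np.pi / n
        for k in range(n):
            ax.bar(k * width + width / 2, r1 - r0, width=width, bottom=r0,
                   color=cmap(norm(values[i])), edgecolor="k", linewidth=0.5)
            i += 1
    ax.set_xticks([]); ax.set_yticks([])
    return mpl.cm.ScalarMappable(norm=norm, cmap=cmap)

values = np.random.default_rng(0).uniform(-22, -12, 16)  # example strain (%)
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
sm = bullseye_16(ax, values)
fig.colorbar(sm, ax=ax, label="strain (%)")
plt.show()
```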
[Figure caption] Endo- and epicardial contours were drawn in the end-diastolic phase (a). If the LVOT was visible in the end-systolic phase (b, marked red), slices were excluded. The first slice used for analysis was chosen as the most basal slice that did not show the LVOT in any end-diastolic (a, marked green) or end-systolic phase (b, marked green).

All volunteers had normal LV function (LVEF 64.1 ± 4.2%) without wall motion abnormalities. Demographic parameters as well as LV function and volumes are summarized in Table 1. Seven subjects had to be excluded from 3D LV function analysis due to an incomplete SAX package (n = 6) or artifacts (n = 1).

Feature tracking quality

In all 67 subjects, strain was analyzed in 4CV, 3CV, 2CV, and three SAX slices. Sixty-one subjects were additionally analyzed by CVI42 using the SAX full-coverage approach. Reasons for the exclusion of segments were inaccurate tracking or incorrect segmentation.

[Figure caption] Longitudinal and radial LAX strain were assessed in 4CV, 3CV (a, c), and 2CV; circumferential and radial SAX strain were analyzed in the basal (b, d), medial, and apical short-axis slices.

In both the three selected slices and the whole SAX stack, global circumferential and radial SAX strain differed significantly between genders (for details, see Table 2). Gender-related strain values are visualized in the supplemental material, additional file 2.

Assessment of radial strain in long- and short-axis views

Global radial strain acquired in LAX (radial LAX) versus SAX (radial SAX) differed significantly.

Longitudinal strain using CVI42

Longitudinal strain did not show any significant difference, for both global and segmental strain measurements, between 1.5 T and 3 T. Gender-related global strain values using TomTec are summarized in Table 2. Unlike the differences in global RS SAX, GLS and global CS were not associated with gender.

Discussion

In this study, we aimed to increase knowledge about factors influencing strain results obtained by CMR feature tracking. We focused on the segmentation procedure and on the comparison of software packages from two different vendors. For the first time, we showed that CS and RS SAX depend on the number of slices used for feature tracking analysis. Previously published studies considered different numbers of slices for strain analysis, making it difficult to compare strain values with each other. While some used one LAX and one midventricular SAX slice [20,32,33], others included two LAX and three SAX views [34,35] or considered all three LAX views and a SAX full coverage [36]. The variation in analysis procedure, such as slice selection, may lead to different quantitative results and consequently to uncertainties and difficulties in comparison and interpretation.

[Table footnote] Global strain values are given as mean ± standard deviation (SD), median, and interquartile range (Q1 and Q3). Significant differences are shown in italics. LAX, long axis; SAX, short axis.

Significant variations among vendors are already known in echocardiography and CMR-FT, and this should be considered when performing serial studies [37]. A recent study by Liu et al compared 3D strain analysis (three LAX slices and SAX full coverage) with 2D analysis using one horizontal LAX and one midventricular SAX slice, showing notable differences [38]. In our study, we detected differences for CS and RS SAX between three SAX slices and full coverage using CVI42. Of note, both parameters were significantly higher using 3 SAX slices vs. full coverage; one may assume that partial volume effects, mainly affecting an apical slice, influence the results.
Furthermore, vendors may use different pixel definitions, leading to different boundary detection. Radial strain assessed in LAX and SAX slices differed significantly. There is no broad experience in using radial LAX strain yet, but when SAX slices are missing, assessment of radial strain in LAX can add information.

Between the different types of post-processing software, both global and segmental strain values differed significantly. These findings indicate that strain values are not comparable between different software applications. Our findings in terms of differences among post-processing software packages are mostly in accordance with previously published data [1,20,38]. Barreiro-Pérez et al showed variability among different vendors (TomTec, CVI42, Medis, Medviso) in GLS and RS measurements, but not in CS [1]. In our study, strain values were significantly lower using CVI42, but these findings conform with previous studies [20,38]. Cao et al compared different sequences and different post-processing software [20], detecting notable differences between all CMR techniques. However, the proper validation of most analysis procedures, as well as absolute and objective reference values, is yet to be established. While DENSE, SENC, and tagging, techniques for measuring three-dimensional motion and deformation, require dedicated sequences, feature tracking analysis is based on routine SSFP cine images. However, FT is based on contours only and does not follow intrinsic myocardial contraction.

Moreover, the influence of field strength does not seem to be relevant. Schuster et al showed similar results for myocardial strain between 1.5 T and 3 T applying TomTec [32]. This agrees with our results, since field strength did not influence global values of longitudinal, RS LAX, RS SAX, and CS strain using CVI42.

[Figure 5 caption] Gender-related mean values for longitudinal strain using CVI42. Segmental values are provided as mean (in %) ± standard deviation in a bulls-eye plot according to the AHA segment model [31]. Segment 5 (marked red) differed between genders (p = 0.048).

[Table footnote] Global strain values are given as mean ± standard deviation (SD), median, and interquartile ranges (Q1 and Q3). Radial SAX and circumferential strain were assessed using three short-axis slices (basal, midventricular, apical). Significant differences (p < 0.05) are shown in italics. * p < 0.05 between 1.5 T and 3 T within one software.

Reference values for CMR feature tracking analysis have been published, mainly focused on global left ventricular strain. Most studies performed feature tracking via TomTec [36,39,40]. Liu et al were the first to establish normal ranges for CVI42 using 3D strain analysis [38]. However, regional deformation was only acquired for CS. Regional assessment of myocardial strain is less validated, but it may reveal further information compared with global values, as single regions of the myocardium can be injured even though global strain is in the normal range. We add knowledge on reference values for myocardial strain in healthy subjects using CVI42 and TomTec. Unlike most studies showing greater deformation in females, resulting in more negative strain [36,[39][40][41][42], we did not find gender-related differences for global longitudinal strain. The larger magnitudes of global CS in females, with more negative strain values, agree with the findings reported by Andre et al and Peng et al [40,41]. However, the higher global radial strain values in females contradict former findings [36,40].
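For transparency about how such gender comparisons are made, the snippet below shows the kind of Mann-Whitney U test named in the Statistical analysis section, applied to two hypothetical samples of global strain values; the numbers are invented for illustration and are not the study's measurements.

```python
# Hedged sketch of the gender comparison via the Mann-Whitney U test named
# in the Methods; the two samples below are invented, not study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
gls_female = rng.normal(-17.4, 2.0, 31)   # hypothetical GLS values (%)
gls_male = rng.normal(-16.6, 1.8, 36)

stat, p = mannwhitneyu(gls_female, gls_male, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```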
In accordance with our findings, CMR feature tracking has shown fair reproducibility in previous studies [34]. In fact, strain assessment is influenced by observer experience, but reproducibility may be optimized by training [43,44]. Most studies indicate better reproducibility for global rather than segmental strain analysis, with global CS being the most and global radial strain the least reproducible measurement [20,33,35,36,42]. However, analysis methods have not been standardized across studies until now. CMR feature tracking-derived strain seems to be influenced by many factors, including the software package and the applied approach of image processing; thus, reference values should be derived from similar approaches. Currently, no gold standard exists. There is no defined "right" or "wrong" in most of the publications that evaluate differences between post-processing software or sequences, but there is a need to understand that the application of different approaches may lead to different results. CMR feature tracking is a promising tool that enables early detection of subtle myocardial dysfunction and prediction of major adverse cardiovascular events [5][6][7]. Standardization is needed if assessment of myocardial deformation, including feature tracking, is to enter clinical routine.

Limitations

This study is limited by a relatively small, but carefully and well-characterized, healthy study cohort. As our analysis was performed retrospectively in prospectively enrolled volunteers, scan protocols were slightly different. This led to the exclusion of 176 subjects due to incomplete CMR data. This may be preventable by a prospectively designed study, but our setting also reflects potential difficulties in clinical routine. Our statistical analysis was only descriptive and exploratory; it indicates that differences among vendors or segmentation procedures may exist, but further validation remains necessary. The CMR examinations performed at 1.5 T and 3 T did not include the same subjects, but the groups showed an equal distribution regarding gender and age. In accordance with our results, pre-existing studies have also shown that field strength does not influence global strain values [32]. CMR feature tracking is less validated for regional strain and radial LAX strain, but these can presumably reveal different physiological mechanisms of the myocardium. Regional assessment is limited by inaccurate tracking or incorrect segmentation, which may distort segmental strain values. We provide numbers, but long-term studies have to show the potential significance before CMR-FT may enter clinical routine.

Conclusion

Myocardial deformation assessed by feature tracking depends on the segmentation procedure and the type of analysis software. Circumferential SAX and radial SAX strain depend on the number of slices used for feature tracking analysis. As known from other imaging modalities, GLS seems to be the most stable parameter. Standardized conditions should be ensured.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Compliance with ethical standards

Guarantor: The scientific guarantor of this publication is Prof. Jeanette Schulz-Menger.

Conflict of interest: The authors of this manuscript declare no relationships with any companies whose products or services may be related to the subject matter of the article.

Statistics and biometry: No complex statistical methods were necessary for this paper.
Informed consent: Written informed consent was not required for this study because we screened healthy subjects who had been prospectively examined in former studies, and written informed consent had been obtained from all subjects in all those former studies.

Ethical approval: Institutional Review Board approval was not required because we screened healthy subjects who had been prospectively examined in former studies; the ethics committee had approved all former studies.

Methodology: retrospective, observational, performed at one institution.
4,632.6
2020-12-04T00:00:00.000
[ "Medicine", "Biology" ]
Growth of Outward Propagating Fast-Magnetosonic/Whistler Waves in the Inner Heliosphere Observed by Parker Solar Probe

The solar wind in the inner heliosphere has been observed by Parker Solar Probe (PSP) to exhibit abundant wave activity. The cyclotron wave modes in the sense of ions or electrons are among the most crucial wave components. However, their origin and evolution in the inner heliosphere close to the Sun remain mysteries. Specifically, it remains unknown whether such a wave is an emitted signal from the solar atmosphere or an eigenmode growing locally in the heliosphere due to plasma instability. To address and resolve this controversy, we must investigate the key quantity of the energy change rate of the wave mode. We develop a new technique to measure the energy change rate of plasma waves and apply this technique to the wave electromagnetic fields measured by PSP. We derive the wave Poynting flux in the solar wind frame and identify the wave nature to be the outward propagating fast-magnetosonic/whistler wave mode rather than sunward propagating waves. We provide the first evidence for growth of the fast-magnetosonic/whistler wave mode in the inner heliosphere, based on the derived spectra of the real and imaginary parts of the wave frequencies. The energy change rate rises and stays at a positive level in the same wavenumber range as the bumps of the electromagnetic field power spectral densities, clearly demonstrating that the observed fast-magnetosonic/whistler waves are locally growing to large amplitudes.

INTRODUCTION

Waves are essential channels of energy conversion in various plasma systems. Particularly for waves at kinetic scales, wave-particle interaction plays a crucial role in modulating the particles' velocity distributions, leading to the energization/cooling of plasmas, as well as kinetic energy transfer between the parallel and perpendicular degrees of freedom (Marsch 2006; Hellinger et al. 2006; He et al. 2015b; Ruan et al. 2016; Howes et al. 2017; Yoon 2017; Klein et al. 2018; Verscharen et al. 2019; Duan et al. 2020; Verniero et al. 2020; Zhao et al. 2020). Regarding space plasmas in the heliosphere, the situation is more complicated. There exist various wave modes: electromagnetic wave modes (e.g., Alfvén-cyclotron waves, whistler waves) (Jian et al. 2009; He et al. 2011; Boardsen et al. 2015; Narita 2018; Zhao et al. 2018; Woodham et al. 2019; Bowen et al. 2020a; Shi et al. 2021; Jagarlamudi et al. 2021; Zhao et al. 2021), electrostatic wave modes (e.g., ion-acoustic waves, Langmuir waves) (Zhu et al. 2019; Mozer et al. 2020b), and hybrid wave modes (e.g., quasi-perpendicular kinetic Alfvén waves) (Bale et al. 2005; Sahraoui et al. 2009; He et al. 2012; Salem et al. 2012; Chen et al. 2013; Huang et al. 2020). Observations reveal propagation directions that are anti-sunward or sunward, quasi-parallel or quasi-perpendicular with respect to the local background magnetic field direction. The polarization of the fluctuating vectors (e.g., δB, δE, δV for the disturbed magnetic, electric, and velocity field vectors, respectively) can be quasi-linear or quasi-circular with left- or right-handedness. It is also desirable to distinguish whether the observed waves are damping dissipatively or growing through stimulation. Therefore, a thorough diagnosis of the kinetic waves in space plasmas, including the solar wind, is undoubtedly a challenging task. The fluctuating magnetic field can be helpful in determining the propagation direction, but it leads to a 180-degree ambiguity.
Since the magnetic field is a solenoidal vector field, the wave magnetic field (δB) cannot have a component oscillating along the wave vector direction. This feature of the oscillation direction provides a basis for diagnosing the propagation direction. Therefore, approximating the wave vector direction with the minimum variance direction has become one of the main principles in developing wave vector diagnosis methods, such as the MVA method based on time series (Sonnerup & Cahill Jr 1967) or the SVD method based on the spectrum or dynamic spectrum (Santolík et al. 2003). According to these methods, we can preliminarily diagnose whether the wave encountered by a spacecraft has a quasi-parallel or a quasi-perpendicular propagation. For example, we often see that with decreasing wavelength the magnetic compressibility becomes more significant, and the corresponding θ_kB0 becomes larger (He et al. 2015a). One of the reasons for this change in behavior is the transformation from magnetohydrodynamic (MHD) Alfvén waves to kinetic Alfvén waves with decreasing scales. However, single-satellite magnetic field measurements cannot resolve the 180-degree ambiguity of the propagation angle; they are therefore unable to judge the real propagation direction of the wave, and hence unable to accurately diagnose the nature of the wave mode. To unambiguously identify the wave propagation direction, there are two possible solutions: (1) time delay analysis based on multi-satellite constellation measurements (Gershman et al. 2017); (2) the consideration of additional physical measurements (such as the wave electric field, e.g., measured by MMS) (He et al. 2019, 2020). The fluctuating electric field is another crucial variable for wave diagnosis (Mozer & Chen 2013; He et al. 2020). Only when the wave electric and magnetic fields are measured simultaneously can the wave electromagnetic energy-flux density, that is, the Poynting flux density, be calculated. However, the measurement and calibration of the electric field are more complicated than those of the magnetic field due to Debye shielding and the photoelectric effect, which bring a significant challenge to the accurate measurement of the electric field. Fortunately, the number density of the solar wind measured by PSP is two orders of magnitude higher than that of the near-Earth solar wind, and the Debye sphere is thus one order of magnitude smaller, making shorter electric-field antennas feasible (Bale et al. 2016; Mozer et al. 2020a). Furthermore, the PSP antennas' geometric configuration leaves the potential measurements at the four ends (U_1, U_2, U_3, and U_4) unaffected by the wake of the spacecraft. Thus, in the absence of adverse physical factors, the main task for the data analysis is the careful calibration of the electric field. The convection electric field at MHD scales can be used as the benchmark electric field to calibrate the electric field based on multi-point potential measurements (Mozer et al. 2020a). Based on the magnetic field's frozen-in condition at MHD scales, the convection electric field can be approximated by the opposite of the cross product of the velocity and magnetic field vectors (E ~ −V × B). Therefore, the calibration coefficients obtained at MHD scales can be extended to obtain the electric field at kinetic scales.
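Written out, the MVA step above amounts to an eigen-decomposition of the magnetic variance matrix. The following is a minimal sketch (not the authors' code), assuming the magnetic field time series is a NumPy array B of shape (N, 3):

```python
import numpy as np

def mva_k_direction(B):
    """Approximate the wave-vector direction with the minimum variance
    direction of the magnetic field (MVA, Sonnerup & Cahill Jr 1967).

    Because div(B) = 0, delta-B has no component along k, so the
    eigenvector of the variance matrix with the smallest eigenvalue
    approximates k -- up to the 180-degree ambiguity noted in the text.
    """
    dB = B - B.mean(axis=0)               # fluctuations about the mean field
    M = dB.T @ dB / len(B)                # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvecs[:, 0]                  # minimum-variance unit vector

# The propagation angle then follows as
# theta_kB0 = degrees(arccos(|k_hat . B0_hat|)).
```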
Based on the time series of electric and magnetic fields, it is found that the magnitude of the Poynting vector inside switchback structures is larger than that outside (Mozer et al. 2020a). The reason is that the outflow velocity inside the structure is larger, and so is the angle between the outflow velocity and the magnetic field. Besides, the propagation speed of the kinetic wave's Poynting vector in the heliographic inertial (HGI) reference frame is larger than the solar wind flow speed, suggesting that the wave events under study propagate away from the Sun. The origin of kinetic-scale fluctuations in the solar wind is a controversial topic of research. There are two different views on this issue. (1) One view is that the wave fluctuations are emitted from the solar atmosphere and cascade from the MHD scales to the kinetic scales during their journey of outward propagation (He et al. 2009; Cranmer et al. 2015; Yang et al. 2017; Chandran & Perez 2019; He et al. 2021). (2) The other view is that the kinetic-scale waves are produced locally in interplanetary space due to some plasma instability (Jian et al. 2014; Wicks et al. 2016; Jiansen et al. 2018; Zhao et al. 2019; Verniero et al. 2020). Because the cascade of Alfvén turbulence preferentially creates anisotropy with k_⊥ ≫ k_∥, quasi-perpendicular kinetic Alfvén waves may be generated by a cascade accompanying the outward propagation of MHD waves. The origin mechanism is especially unclear for quasi-parallel kinetic waves (such as ion cyclotron waves or whistler waves). However, due to the frequent existence of spectral peaks, it is generally speculated that these waves are related to excitation by local instability. In addition, the thermal anisotropy of protons, the beam structures in protons and other ions, and the heat flux carried by the strahl component of the solar wind electrons may cause instability in various plasma states. However, previous studies, which are mainly based on predictions from linear theory, have not provided direct evidence for the time-varying growth of solar wind kinetic waves. Therefore, studying and providing evidence of the time-varying evolution (growth or dissipation) of wave events is one of the cutting-edge frontiers. Quasi-parallel kinetic waves (such as ion cyclotron waves) were once considered an important energy source for solar wind heating. The dissipation of quasi-perpendicular kinetic Alfvén waves is also an effective way to heat the solar wind. These viewpoints need to be proved by direct observation of the dissipation rate spectrum, but the dissipation rate spectrum remained unexplored for a long time. Recently, based on the detection of electromagnetic fields and plasma in magnetosheath turbulence by MMS, a measurement method for the dissipation rate spectrum was proposed (He et al. 2019). The dissipation rate spectra of ion cyclotron waves (mainly in the perpendicular direction) and kinetic Alfvén waves (mainly in the parallel direction) in magnetosheath turbulence have been measured (He et al. 2019, 2020). However, the growth rate spectrum of an excited instability has yet to be reported. Although the small energy transfer rate from fields to particles, as compared with the energy flux density, supports the local generation scenario of cyclotron waves (Vech et al. 2020), the direct measurement of wave growth in the inner heliosphere and the details of the associated growth rate spectrum are still unresolved.
Calibration of electric field Since the magnetic frozen-in condition holds at MHD scales, the electric field due to convection (E = −V × B) can be viewed and used as the benchmark electric field for the calibration. We regard the calibration procedure for the electric field vector (E_T, E_N) from the electric potentials measured at four points (U_1, U_2, U_3, U_4) as a type of fitting procedure. The input conditions are the known potential differences U_12 = U_2 − U_1 and U_34 = U_4 − U_3, and the output variables are the electric field components (E_T, E_N). The fitting parameters to be determined consist of the following: (1) the residual electric potential between measurement points 1 and 2, (2) the residual electric potential between measurement points 3 and 4, (3) the effective length of the antennas (L), and (4) the rotation angle θ from the coordinates defined by the two measurement antennas (e_12, e_34) to the coordinates (T, N). A set of fitting equations can be obtained based on the known observables and unknown fitting parameters, and written as Equation 1. To employ the technique of a generalized gradient descent algorithm ("GGDA") (Zhang et al. 2012), we combine the fitting parameters and rewrite Equation 1 as Equation 2, where the fitting parameters are considered as the vector on the left side of Equation 2. The pair of parameters (C_1, C_2) is expressed as a combination of the effective antenna length L and the rotation angle θ. If there are N data points in the time sequence, then the sizes of the matrix and the vectors in Equation 2 are (4, 2N), (1, 4), and (1, 2N). In practice, similar to the time length adopted in Mozer et al. (2020a), we choose a time window of 12 s as the time length for the fitting approach. As the last step of the electric field calibration, we use the fitting parameters derived from the "GGDA" to calculate the electric field vectors based on the four-point measurements of the electric potentials at a higher time cadence of 0.0068 s. The calibrated electric field vectors are in the heliographic inertial (HGI) reference frame instead of the solar wind frame. Moreover, the calibration of E_R is more complicated than that of E_T and E_N, since the measurement of the potential U_5 is in the wake of the PSP spacecraft. Therefore, this work mostly uses E_T and E_N, and focuses on parallel/anti-parallel propagating wave events when the local background magnetic field is in the quasi-radial or anti-quasi-radial directions. Formulas of dynamic spectra for Poynting vector, magnetic helicity, and electric field polarization We adopt a method similar to that of Podesta (2009) to calculate the local background magnetic field (B_0,local−BG) and the local background flow velocity (V_sw,local−BG), which are obtained through the convolution between Gaussian windows of different widths and the time sequences of the magnetic and flow velocity vectors. The dynamic spectrum of the Poynting vector in the solar wind frame and its component in the R direction can be calculated as PF(t, p) = Re[δE(t, p) × δB*(t, p)]/μ_0 (Equation 6), where the independent variables (t, p) represent the time and period, respectively. The complex variables (δE and δB) are the wavelet spectra of the electric field (in the reference frame of the local background flow) and the magnetic field, respectively.
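Since Equations 1 and 2 are displayed in the original paper but not reproduced here, the following sketch only illustrates the kind of fit described: a least-squares calibration of the potential offsets, effective antenna length, and rotation angle against the −V × B benchmark. The model form, parameter names, and starting values are assumptions for illustration, not the authors' exact equations or GGDA implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, U12, U34, E_T_ref, E_N_ref):
    """p = (o12, o34, L, theta): potential offsets, effective antenna
    length, and rotation angle from antenna coordinates to (T, N).
    The benchmark fields E_T_ref, E_N_ref are the -(V x B) components
    computed at MHD scales (12 s windows)."""
    o12, o34, L, theta = p
    e12 = (U12 - o12) / L               # field along antenna pair 1-2
    e34 = (U34 - o34) / L               # field along antenna pair 3-4
    c, s = np.cos(theta), np.sin(theta)
    E_T = c * e12 - s * e34             # rotate into the (T, N) frame
    E_N = s * e12 + c * e34
    return np.concatenate([E_T - E_T_ref, E_N - E_N_ref])

# Hypothetical starting values; the fitted parameters are then reused
# to produce E_T, E_N at the full 0.0068 s cadence.
# fit = least_squares(residuals, x0=[0.0, 0.0, 3.5, 0.0],
#                     args=(U12, U34, E_T_ref, E_N_ref))
```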
The relation between the electric field spectra in the reference frame of the local background flow and their counterpart in the heliographic inertial (HGI) reference frame can be expressed as δE′ = δE + V_sw,local−BG × δB (Equation 7). Note that the zero-frequency part of the convection electric field, contributed by the convection of the mean magnetic flux by the mean flow (E_0 = −V_0 × B_0), does not appear in the frequency-dependent Equation 7. The normalized and reduced magnetic helicity is calculated according to σ_m(t, p) = 2 Im(δB_T δB_N*)/(|δB_T|² + |δB_N|²) (Equation 8), where δB_T and δB_N are the wavelet spectra of the magnetic field components B_T and B_N. Similarly, the "polarization" of the electric field about the R direction can be formulated as σ_E(t, p) = 2 Im(δE_T δE_N*)/(|δE_T|² + |δE_N|²) (Equation 9), where δE_T and δE_N represent the wavelet spectra of the electric field components in the T and N directions, respectively. Method of identification and classification of wave events To identify some ideal events of kinetic waves for further detailed analysis, we propose a set of criteria and list them in Table 1. The variables PF, θ_RB, σ_m, and σ_E represent: (1) the Poynting flux density, (2) the angle between the radial and local mean magnetic field directions, (3) the normalized reduced magnetic helicity, and (4) the polarization of the wave electric field about the radial direction in the local mean flow frame, respectively. To make sure that the identified wave events possess the typical characteristics of kinetic wave modes, we conduct the following procedure: (1) We select a time window of 30 s to calculate an average of the dynamic spectra of the variables (PF, θ_RB, σ_m, σ_E) at the time scale of 0.3 s. (2) We set the thresholds for the key variables: θ*_RB = 30°, |σ*_m| = 0.5, |σ*_E| = 0.5. Estimating the real and imaginary frequencies of wave activity Based on a Fourier transform of the Faraday equation, we obtain k × δE = ω δB (Equation 10). If the wave is a transverse wave with both electric and magnetic field fluctuations oscillating in the directions perpendicular to the wave vector, as is the case for quasi-parallel propagating Alfvén/ion-cyclotron waves and fast-magnetosonic/whistler waves, for example, Equation 10 can be rewritten as (ω + iγ)/k = δE_⊥/δB_⊥ (Equation 11). Therefore, based on the wavelet spectra of the electric and magnetic fields, we obtain the dynamic spectra of the dispersion relation and growth rate: ω/k = Re(δE_⊥/δB_⊥) and γ/k = Im(δE_⊥/δB_⊥) (Equation 12). We note that the above equation is a simplified version for the situation of quasi-parallel transverse waves. In general, the wave group speed is determined by the ratio of the energy flux density to the energy density, with the energy flux density being the sum of the Poynting flux and the kinetic flux, and the energy density being contributed by the fluctuating electromagnetic field energy and the plasma kinetic energy (Stix 1992; Swanson 2003). According to the Doppler-shift effect caused by the solar wind flow, the relation between the wave frequencies in the spacecraft reference frame (ω_sc) and in the solar wind flow reference frame (ω_pl) can be expressed as ω_sc = ω_pl + k V_sw cos θ_kV (Equation 13), where V_sw is the local background solar wind flow velocity and θ_kV is the angle between V_sw and the wave vector k. The direction of the wave vector can be determined without the problem of 180-degree ambiguity by considering the analysis result from the "singular value decomposition" (SVD) method and the direction of the Poynting vector relative to the background magnetic field. Hereafter, we drop the subscript "pl" in "ω_pl" for simplicity.
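As a minimal illustration of Equations 8 and 9, given complex wavelet coefficients of the two transverse components (the overall sign convention for the handedness is an assumption and should be matched to the paper's definition):

```python
import numpy as np

def normalized_magnetic_helicity(W_T, W_N):
    """Normalized reduced magnetic helicity from complex wavelet
    coefficients of B_T and B_N (arrays over time and scale):

        sigma_m = 2 Im(W_T * conj(W_N)) / (|W_T|^2 + |W_N|^2)

    Values near +1 or -1 indicate nearly circular polarization of one
    handedness; values near 0 indicate linear polarization."""
    num = 2.0 * np.imag(W_T * np.conj(W_N))
    den = np.abs(W_T) ** 2 + np.abs(W_N) ** 2
    return num / den

# The electric-field polarization sigma_E about R (Equation 9) takes the
# same form with the wavelet spectra of E_T and E_N substituted.
```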
Based on Equations (12) and (13), we further derive formulas for k, ω, and γ, which read k = ω_sc/[(ω/k) + V_sw cos θ_kV], ω = (ω/k)·k, and γ = (γ/k)·k (Equations 14-16). To validate the credibility of applying Equation (6) to the measurement of the Poynting vector, we also propose a formula for calculating the phase difference between the wave electric field δE_⊥ (φ(δE_⊥)) and the wave magnetic field δB_⊥ (φ(δB_⊥)) (see Equation (17)), and calculate its distribution in the time and scale dimensions. Analysis Steps We conduct the search for and analysis of interesting wave events based on the measurements from PSP during its first encounter on November 4, 2018. We break this task into six steps. (1) The first step is to calibrate the electric field segment by segment according to Equation (2), thereby realizing the conversion from the four-point electric potentials to the 2D electric field vectors. (2) We then invoke Equation (7) to realize the coordinate transformation of the electric field from the spacecraft reference frame to the reference frame of the local solar wind background flow. (3) We calculate the dynamic spectrum of the Poynting flux along the R direction according to Equation (6). (4) We calculate the dynamic spectra of the magnetic helicity and electric polarization about the R direction with Equations (8) and (9), respectively. (5) We calculate ω/k and γ/k for the wave events. (6) We estimate the wavenumber and the real and imaginary parts of the wave frequency according to Equations (14)-(16). We classify the wave events based on the analysis results of the first five steps and as per Table 1. In this way, we accomplish the goal of diagnosing the key characteristics (e.g., propagation direction, polarization, and growth/damping rate) of the wave events. Power spectral densities and polarization of wave electromagnetic fields As a typical example, we show a wave event of outward propagation, right-hand polarization about B_0, and positive growth. The time interval of this event is [18:28, 18:31] UT on Nov 4, 2018. In Figures 1a and 1b, we display and compare the calibrated electric field (E_T, E_N) and the induced electric field based on the measurements of the magnetic field and bulk velocity (−(V × B)_T, −(V × B)_N). The two types of electric field match well with one another. Therefore, we use the calibrated electric field to analyze the propagation direction and growth/damping rate of the observed wave. We apply wavelet decomposition to the time sequences of the electric and magnetic field components (E_T, E_N, B_T, B_N) and obtain the corresponding band-pass filtered waves in the frequency range of [0.2, 10] Hz, which are illustrated in Figures 1c-1f, respectively. To further diagnose how the wave propagates in the solar wind reference frame, we transform the electric field from the spacecraft reference frame to the solar wind reference frame. We conduct a detailed analysis of the magnetic field (including the local background and the fluctuating magnetic field) and the electric field (the fluctuating electric field in the local solar wind background frame). We find that the local background magnetic field direction is mainly sunward with θ_BR ≈ 140° (see Figure 2a). The magnetic field fluctuations are mainly in the transverse directions, indicating a state of approximate incompressibility (PSD(δB_⊥) in Figure 2b is dominant over PSD(δB_∥) in Figure 2c). For most of the interval, there are evident enhanced signals of PSD(δB_⊥) at periods of [0.2, 0.4] s (see Figure 2b).
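A compact sketch of the plasma-frame estimates in Equations (12)-(16), assuming quasi-parallel transverse waves and the Doppler relation of Equation (13); the sign bookkeeping for sunward background fields is an assumption here, not the paper's exact derivation:

```python
import numpy as np

def wave_k_omega_gamma(r_complex, omega_sc, V_sw, theta_kV=0.0):
    """Estimate k, the real frequency omega, and the growth rate gamma
    in the plasma frame for a quasi-parallel transverse wave.

    r_complex = delta_E_perp / delta_B_perp (complex wavelet ratio),
    whose real and imaginary parts give omega/k and gamma/k
    (Equation 12); omega_sc is the spacecraft-frame frequency.

    Assumes omega_sc = omega + k * V_sw * cos(theta_kV) (Equation 13),
    with theta_kV in radians (approximated by 0 in the event studied)."""
    w_over_k = np.real(r_complex)       # phase speed in the plasma frame
    g_over_k = np.imag(r_complex)       # growth rate per wavenumber
    k = omega_sc / (w_over_k + V_sw * np.cos(theta_kV))
    return k, w_over_k * k, g_over_k * k
```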
Since B_0,local is quasi-anti-parallel to the R direction, the T and N directions can be approximated as the two directions perpendicular to B_0,local, which is convenient for the analysis of the transverse wave electric field. We clearly identify the enhanced signals of PSD(δE_T) and PSD(δE_N) in the same period range of [0.2, 0.4] s (Figures 2d & 2e). The magnetic helicity spectrum, calculated according to Equation 8, shows obvious negative polarity in the period range of [0.2, 0.4] s (see Figure 2f). Likewise, the polarization of the wave electric field (in the local solar wind background frame) around the R direction, calculated from Equation 9, appears with negative polarity (Figure 2g). The good match between magnetic helicity and electric polarization indicates the high quality of the electric and magnetic field measurements of this wave event, which can be further analyzed to investigate its propagation direction and growth/dissipation activity. DIAGNOSIS OF PROPAGATION AND EVOLUTION OF WAVE EVENTS According to Equation 6, we calculate and illustrate the dynamic spectrum of the Poynting flux density in the R direction (PF_R) (see Figure 3a). During the time interval [18:27:45, 18:31:00] and in the period range of [0.1, 0.5] s, PF_R is predominantly greater than 0, suggesting that the waves propagate outward, quasi-anti-parallel to the sunward local background magnetic field direction. Referring to Table 1, we clearly identify this wave event as outward propagating fast-magnetosonic/whistler waves with right-hand polarization of the electromagnetic field vectors about the background magnetic field direction. In Figure 3b, we observe that the phase angle differences φ(δB_⊥, δE_⊥) ∈ (0, 180)° and φ(δB_⊥, δE_⊥) ∈ (−180, 0)° correspond to PF_R > 0 and PF_R < 0 in Figure 3a, respectively. We calculate the dynamic spectra of ω/k and γ/k according to Equation 12 (see Figures 3c and 3d). Moreover, we calculate the dynamic spectral distributions of γ/|ω| and γ according to Equation 16 (see Figures 3e and 3f). In the case of our study, we approximate θ_kV in Equation 16 with 0°, since θ_kB ~ 180° and θ_BV ~ 180° during the interval of study. We can see that γ is greater than 0 most of the time in the time-period distribution, especially in the period range of [0.1, 0.5] s. This evidence strongly suggests that the observed fast-magnetosonic/whistler waves are growing during the time of observation. We apply a further statistical analysis to the above results of wave propagation and growth. We select seven time scales (τ = 0.141, 0.167, 0.197, 0.234, 0.277, 0.329, 0.390 s) and count the value-dependent occurrence frequency distributions of multiple variables (e.g., PF_R, φ(δE_⊥, δB_⊥), ω/k, γ/k, γ/|ω|, γ) (see Figures 4a-4f). At scales shorter than 0.5 s, PF_R appears more on the positive side, and φ(δE_⊥, δB_⊥) appears more in the angle range of (0, 180) degrees. The distribution of γ is asymmetric, with more intervals on the side greater than 0, indicating the nature of local excitation and emission for the studied fast-magnetosonic/whistler waves. To view the variation of PSD(δB_⊥), PSD(δE_T,N), ω, and γ as functions of f_sc from a statistical perspective, we plot the occurrence frequency distributions in the 2D spaces of (f_sc, PSD(δB_⊥)), (f_sc, PSD(δE_T,N)), (f_sc, ω), and (f_sc, γ) (see Figure 5). We can see from Figures 5a and 5b that both PSD(δB_⊥) and PSD(δE_T,N) show an obvious spectral bump around f_sc ~ 0.4 Hz.
Such a spectral bump structure indicates that the wave signal is stronger than the background turbulence level, probably due to its excitation and unstable growth. Unlike for damped or freely propagating waves, the growth rate (γ) of the active wave evidently exceeds the zero level (see Figure 5d), and even approaches a level comparable to the derived wave frequency (Figure 5c), offering further direct evidence that the active wave is growing during the time of observation. At higher frequencies beyond the PSD bump, the occurrence frequency distributions of both ω and γ become diffuse (see the right end of Figures 5c & 5d), probably due to the uncertainty of the electric field measurements at higher frequencies. DISCUSSION AND CONCLUSIONS In this work, we propose a method to quantify the energy flux density of wave propagation (i.e., the Poynting flux density for electromagnetic waves) and the growth/dissipation rate spectrum. Based on this method, we further put forward a set of diagnosis criteria for the nature of kinetic wave events in the heliosphere. We apply this analysis method and the diagnosis criteria to in situ measurements from PSP in the inner heliosphere. As an example, we identify an event of outward propagating fast-magnetosonic/whistler waves with right-hand polarization. For this wave event, we provide the dynamic spectra of physical variables (power spectral densities, magnetic helicity, electric field polarization, Poynting flux density, phase difference between electric and magnetic fields, wave frequency, and normalized rate of change of the wave energy density). We find that the wave event is not in a time-steady state but in a temporally growing phase, evidenced by the positive bump of the γ(f_sc) spectral profile, which is physically responsible for the spectral bumps appearing in the PSDs of the electric and magnetic field fluctuations. This work addresses the issue of the origin of kinetic waves in the inner heliosphere. We point out that kinetic waves are not necessarily created in the solar wind source region, though some proportion of the waves may be launched from the solar atmosphere through magnetic reconnection or turbulent advection shaking (He et al. 2021; Zank et al. 2020). Instead, they can be locally excited and can grow due to instability in the inner heliosphere. The results of this work indicate that the inner heliosphere should be regarded as a critical region for the birth and development of kinetic waves. This suggests that the inner heliosphere exhibits complex wave-particle coupling processes, involving the velocity distributions of various plasma species and the time-varying evolution of different wave modes. The free energy responsible for the fast-magnetosonic/whistler waves may come from the drifting ion population, the electron heat flux, and the electron thermal anisotropy (Verscharen et al. 2013; Stansby et al. 2016; Narita et al. 2016; Tong et al. 2019; Sun et al. 2020). In the future, we will require a combination of both the electromagnetic field information and the particle phase space density to explore the mystery of kinetic waves and their wave-particle interactions in the inner heliosphere in a comprehensive way. We thank the NASA Parker Solar Probe Mission and the FIELDS and SWEAP teams for use of the data. PSP data are available on SPDF (https://cdaweb.sci.gsfc.nasa.gov/index.html/). The work at Peking University is supported by NSFC under contracts 41674171 and 41874200, and by CNSA under contracts No. D020301 and D020302. D.V.
from UCL is supported by STFC Ernest Rutherford Fellowship ST/P003826/1 and STFC Consolidated Grant ST/S000240/1. G.Q. Zhao is supported by NSFC under contract 41874204 and partly by the Project for Scientific Innovation Talent in Universities of Henan Province (19HASTIT020).
6,360.2
2021-09-27T00:00:00.000
[ "Physics", "Environmental Science" ]
Critical points of the three-dimensional Bose-Hubbard model from on-site atom number fluctuations We discuss how positions of critical points of the three-dimensional Bose-Hubbard model can be accurately obtained from the variance of the on-site atom number operator, which can be experimentally measured. The idea that we explore is that the derivative of the variance, with respect to the parameter driving the transition, has a pronounced maximum close to critical points. We show that Quantum Monte Carlo studies of this maximum lead to precise determination of critical points for the superfluid-Mott insulator transition in systems with mean number of atoms per lattice site equal to one, two, and three. We also extract from such data the correlation-length critical exponent through finite-size scaling analysis and discuss how the derivative of the variance can be reliably computed from numerical data for the variance. The same conclusions apply to the derivative of the nearest-neighbor correlation function, which can be obtained from routinely measured time-of-flight images. Results (Figure 1: specific heat of liquid 4He near the lambda transition 9. The curve is given by the standard critical form with exponent α ≈ −0.0127, B ≈ 460 J/(mol K), and T_c ≈ 2.17 K. The A coefficient for T < T_c and T > T_c is approximately −447 J/(mol K) and −471 J/(mol K), respectively. Its asymmetry produces the lambda shape of the specific heat curve. Specific heat is not divergent at the critical point because the exponent α < 0. Measurements of specific heat as large as 120 J/(mol K) were reported. A similar shape is observed in our results for ∂_η Var.) An experimental study of the model with periodic boundary conditions would require placing cold atoms in a three-dimensional optical lattice enclosed in an optical box trap. The tools needed for experimental creation of such a trap have been recently developed 17-22. A comprehensive review of the properties of this model can be found in ref. 23. In short, its physics depends on the filling factor n, i.e. the mean number of atoms per lattice site, and the ratio of the tunneling coupling J to the interaction energy U, η = J/U (2). We are interested in integer filling factors, for which there is a quantum phase transition 15 between the Mott insulator and the superfluid phase at the critical point η_c = J_c/U_c. The system is in the Mott insulator phase for η < η_c and in the superfluid phase for η > η_c. The critical point at unit filling factor was theoretically studied via perturbative expansions 24,25, QMC simulations 26, the non-perturbative renormalization group approach 27, and the projection operator approach 28. The critical points at higher filling factors were studied perturbatively in ref. 24 for n = 2, 3 and in ref. 25 for an arbitrary filling factor. The results of all the above-mentioned papers can be summarized as Equation (3). We will now summarize experimental results on the critical points of the 3D Bose-Hubbard model 5,29,30. The experiment 5,31 is done in an optical lattice of wavelength λ = 852 nm with 87Rb atoms in the |F = 2, m_F = 2⟩ state. Their s-wave scattering length 32 is 5.45(26) nm. The position of the critical point for the unit filling factor was estimated to correspond to lattice heights between 10E_R and 13E_R, where the recoil energy E_R is defined as ℏ²k²/2m with k = 2π/λ and m being the mass of the considered atom. This may be written as 11.5(9)E_R, where the standard deviation has been estimated 33 by dividing the maximum uncertainty of 1.5E_R by √3.
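As an aside, the conversion from lattice height to η used below can be sketched with the common deep-lattice approximations for J and U (an assumption here, taken from standard reviews and not necessarily the exact formulas of the paper's ref. 23):

```python
import numpy as np

def eta_from_lattice(s, wavelength_nm, a_s_nm):
    """Estimate eta = J/U for a cubic optical lattice of depth
    V0 = s * E_R, using deep-lattice approximations:
        J/E_R = (4/sqrt(pi)) * s**(3/4) * exp(-2*sqrt(s))
        U/E_R = sqrt(8/pi) * k * a_s * s**(3/4),  k = 2*pi/lambda
    """
    k = 2.0 * np.pi / wavelength_nm              # in 1/nm
    J_over_ER = 4.0 / np.sqrt(np.pi) * s**0.75 * np.exp(-2.0 * np.sqrt(s))
    U_over_ER = np.sqrt(8.0 / np.pi) * k * a_s_nm * s**0.75
    return J_over_ER / U_over_ER

# 87Rb at lambda = 852 nm, a_s = 5.45 nm, lattice height 11.5 E_R:
print(eta_from_lattice(11.5, 852.0, 5.45))   # ~0.04, cf. eta_c = 0.04(1)
```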
Using the formulas for the J and U coefficients from ref. 23, we find η_c to be 0.04(1). The number reported in the bracket provides one standard deviation due to uncertainties in the lattice height and scattering length. It has been obtained through the uncertainty propagation formula. A nearly identical experimental setup was used in ref. 29. It was found there that the lattice heights corresponding to critical points for double and triple filling factors are 14.1(8)E_R and 16.6(9)E_R, respectively. Applying the same procedure as above, albeit with λ = 850 nm, we get η_c for the double and triple filling factors equal to 0.021(5) and 0.011(2), respectively. Finally, we come to ref. 30, where again 87Rb atoms, but in a different hyperfine state, are studied. It was found there that the superfluid-Mott insulator transition takes place at η_c = 0.029(2) for the unit filling factor. To put these results in perspective, we can compare them to the mean-field predictions, which for our system read 34 η_c = 1/{z[2n + 1 + 2√(n(n+1))]} with coordination number z = 6 (4). This yields η_c equal to 0.029, 0.017, and 0.012 for n = 1, 2, and 3, respectively. Therefore, we see that more accurate experimental results are needed for characterization of beyond-mean-field effects in the position of the critical points. It should also be said that in all the above-mentioned experiments external harmonic trapping is imposed on the system. At the very least, it enhances finite-size effects, making a detailed comparison between experiments and the theory based on Hamiltonian (1) difficult. Such a comparison is additionally complicated by the fact that the 3D Bose-Hubbard model captures only the leading-order behavior of cold atoms in optical lattices 35. As a result, more precise experimental results on the critical points would presumably call for a somewhat more advanced theoretical description of the system. Our method for locating the critical points should be immediately applicable to such non-standard versions of the 3D Bose-Hubbard model. Besides critical points, quantum phase transitions are also characterized by critical exponents, which are supposed to be the same within a given universality class. The quantum 3D Bose-Hubbard model belongs to the universality class of the classical 4D XY model 15. To the best of our knowledge, however, detailed studies of the critical properties of the latter model have not been presented in the literature so far. This is in sharp contrast to the properties of the lower-dimensional XY models, which have been studied in great detail 36. The difficulties presumably arise here from the complexity of numerical studies of such a high-dimensional model. Furthermore, we note that the upper critical dimension of the XY model is four. This means that the mean-field theory, whose dynamical (z) and correlation-length (ν) critical exponents are given by z = 1 and ν = 1/2 (5), should provide the lowest-order approximation to the behavior of the 4D XY model. As it will turn out below, our relatively small system-size simulations are unable to capture corrections to the mean-field values of the critical exponents. We now introduce the observable of interest. We will study here the variance of the on-site atom number distribution, Var = ⟨n̂_i²⟩ − ⟨n̂_i⟩², where the site index i can be chosen arbitrarily due to the translational invariance of our model. Such an observable can be conveniently computed with QMC algorithms. It can also be experimentally measured 37-39.
Alternatively, since we are actually interested in the derivative of the variance, one may focus on the derivative of the nearest-neighbor correlation function and use the mapping ∂_η Var = 6η ∂_η⟨â_i†â_j + â_j†â_i⟩, which follows from the Feynman-Hellmann theorem on the cubic lattice. The higher-order zero-temperature perturbative calculations of the variance in the Mott insulator phase of the 3D Bose-Hubbard model were numerically performed in the comprehensive work of Teichmann et al. 25. Quantum Monte Carlo simulations. We perform QMC simulations, which we briefly describe in the Methods section (see ref. 40 for a cold-atom-oriented review of this subject). This allows us to study physics on the superfluid side of the transition, where the dependence of the variance on η is most interesting for our purposes. Additionally, this approach allows us to get nonzero-temperature results, which is of interest from the experimental perspective. We perform our studies in lattices of size L³, where the linear system size 4 ≤ L ≤ 16 for n = 1, 2 and 4 ≤ L ≤ 12 for n = 3. Most of the time, we investigate systems at temperature k_B T/U = 0.02, where k_B is the Boltzmann constant. Such a temperature can be converted to Kelvins by considering typical experimental conditions. To do so, one may assume that the lattice of wavelength λ = 532 nm is populated by either 87Rb or 174Yb atoms, having s-wave scattering lengths of 5.45 nm and 5.56 nm, respectively 32,41. Using then the formulas from ref. 23, we find that U_c/E_R for n = 1, 2, 3 equals 0.47, 0.54, 0.59 for 87Rb and 0.48, 0.55, 0.60 for 174Yb. These numbers have been obtained by assuming that the critical points are given by (3). Since we work near critical points, we may take U_c/k_B as the unit of temperature. In nanokelvins, it equals 184, 211, 230 (93, 107, 117) for n = 1, 2, 3 in 87Rb (174Yb). We have checked that the same results can be obtained by generating Wannier functions and then integrating over them in order to get the J and U coefficients 16. We see from these calculations that the temperature k_B T/U = 0.02 corresponds to a few nanokelvins in typical rubidium and ytterbium systems. While such low temperatures are certainly experimentally challenging, it does not mean that our studies are completely free from nonzero-temperature corrections. For example, the 3D Bose-Hubbard model was studied through QMC simulations in ref. 26 at a temperature about twenty times smaller than our k_B T/U = 0.02. These studies were done at the unit filling factor in systems whose sizes were similar to the ones used by us. The critical point was extracted from finite-size scaling of the excitation gap. The relative difference between the position of the critical point found in our work and the one from ref. 26 is about 0.5%. We will thus skip a systematic discussion of nonzero-temperature effects in our computations. It should be stressed, however, that our approach to finding critical points can be applied to "warmer" systems as well, where nonzero-temperature scaling analysis can be deployed 3. The variance for the filling factors n = 1 and n = 2, 3 is presented in Figs 2 and 3, respectively. We see there its steep increase around the expected positions of critical points (3). To locate the points where changes of the variance proceed most rapidly, we compute the first derivative of the variance.
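Returning briefly to the temperature conversion quoted above, it can be checked in a few lines; a minimal sketch using the recoil energy E_R = h²/(2mλ²) and the U_c/E_R values listed in the text:

```python
import scipy.constants as const

def temperature_nK(Uc_over_ER, wavelength_m, mass_kg, kT_over_U=0.02):
    """Convert the simulation temperature k_B T / U into Kelvin for a
    lattice of given wavelength, taking U at the critical point as
    Uc = (Uc/E_R) * E_R with E_R = h^2 / (2 m lambda^2)."""
    E_R = const.h**2 / (2.0 * mass_kg * wavelength_m**2)
    return kT_over_U * Uc_over_ER * E_R / const.k * 1e9   # in nanokelvin

# 87Rb (m ~ 1.443e-25 kg) at lambda = 532 nm, Uc/E_R = 0.47 for n = 1:
print(temperature_nK(0.47, 532e-9, 1.443e-25))   # a few nK, as quoted
```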
Computing this derivative encounters a technical problem: the derivative is sensitive to fluctuations of the data being differentiated (Fig. 4). Therefore, such data has to be smoothed first, which we do by fitting the Padé approximant of order (m, s) 42, i.e. the ratio of polynomials of degrees m and s (10). The usefulness of this procedure is illustrated in Fig. 4, where we see that the derivative of the Padé approximant provides a smooth curve that can be easily subjected to detailed analysis. We mention in passing that the very same procedure could be applied to data for the variance coming from experimental measurements. Fitting noisy data with (10) requires the approximation order to be adapted to the problem. While small orders may result in a bad fit due to an insufficient number of fitting parameters, choosing too large orders causes problems as well. In the latter case, the extra flexibility leads to reproduction of noise-induced fluctuations of the data points instead of averaging the fluctuations out. Choosing the optimal order of the Padé approximant is not difficult in our computations. Indeed, we have found that for every combination of (L, n, T) parameters, there exists a stability island in the set of all reasonable approximation orders. By taking the order of approximation within the island, stable results for the variance and its derivative are obtained. We have found that typically our QMC data sets can be reasonably fitted with m ≤ 8, s ≤ 9. We have also found that by considering a denser numerical grid, or by reducing the QMC noise through increasing the sample size, stable results can be obtained. We have applied this strategy for the creation of Figs 2-8, where Padé approximants of the fixed (8,8) order are employed. The system-size and temperature dependence of ∂_η Var is presented in Figs 5 and 6 for the filling factors n = 1 and n = 2, 3, respectively. The first thing we notice there is the lambda shape of the plots, reminiscent of the specific-heat plot of liquid 4He (Fig. 1). Then, we find that the first derivative of the variance has a maximum near the critical point on the superfluid side of the transition, say at η*. We see that η* shifts towards the critical point as the system size increases. The same is observed when temperature decreases. Moreover, ∂_η Var(η*) grows with the system size and inverse temperature. All these observations suggest that the maximum of the derivative of the variance encodes the position of the critical point. This is not the first time the derivative of an experimentally accessible quantity has been used for finding the critical point of the 3D Bose-Hubbard model. Indeed, the derivative of the experimentally measured visibility of the time-of-flight interference pattern was used for such a purpose as well 29. More quantitatively, we study the position of the maximum of ∂_η Var by fitting η*(L) = a + b L^(−c) to the numerical results. The idea here is that the parameter a estimates the position of the maximum in the thermodynamically large system (c > 0). The typical fits that we perform are shown in Fig. 7a-c, where the positions of the maxima have been extracted from Padé approximants of order (8,8). To check the sensitivity of these results to the order of the approximants (10), we vary the orders within the range given by (12). We have obtained a = 0.03430 ± 0.00008 for n = 1, a = 0.02020 ± 0.00006 for n = 2, and a = 0.01447 ± 0.00003 for n = 3, where the error bars are chosen to capture all the results in the parameter range given by (12).
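Both numerical steps just described, smoothing the QMC data with a Padé approximant and extrapolating the position of the maximum, can be sketched in a few lines. The fit forms, initial guesses, and variable names below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit, least_squares

def pade(x, p, q):
    """(m, s) Pade approximant P_m(x)/Q_s(x), constant term of Q fixed to 1."""
    return np.polyval(p, x) / np.polyval(np.append(q, 1.0), x)

def fit_pade(eta, var, m=8, s=8):
    """Least-squares fit of a Pade approximant to noisy variance data."""
    c0 = np.concatenate([np.zeros(m), [var.mean()], np.zeros(s)])
    resid = lambda c: pade(eta, c[:m + 1], c[m + 1:]) - var
    return least_squares(resid, c0).x

def eta_star(eta, var, m=8, s=8):
    """Position of the maximum of d(Var)/d(eta) from the smooth fit."""
    c = fit_pade(eta, var, m, s)
    grid = np.linspace(eta.min(), eta.max(), 2001)
    dvar = np.gradient(pade(grid, c[:m + 1], c[m + 1:]), grid)
    return grid[np.argmax(dvar)]

# Finite-size extrapolation eta*(L) = a + b * L**(-c); the parameter a
# estimates eta_c and c should approach 1/nu (= 2 at mean field):
def shift(L, a, b, c):
    return a + b * L**(-c)

# popt, _ = curve_fit(shift, L_values, eta_star_values,
#                     p0=[0.034, 0.06, 2.0])
```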
Standard deviations of the fitted coefficients for m = s = 8 are typically a bit smaller (Fig. 7). A quick look at (3) reveals that these results provide positions of critical points, which makes us confident that η* converges to η_c in the thermodynamic limit. Table 1. Coefficients obtained by fitting (18) to ∂_η Var(η*). Error bars capture the spread of the results due to the varying order of Padé approximation (12). QMC results for k_B T/U = 0.02 have been used to prepare this table; the same system sizes as in Fig. 7 have been employed in the fitting. This convergence can be explained through the standard finite-size scaling argument 8,43. Namely, we assume that near the critical point the system properties depend on the linear system size and the ratio between the linear system size and the correlation length ξ, say ∂_η Var(η, L) = h(L) f(L/ξ) (15), where f is a non-singular scaling function. According to this ansatz, the position of the extremum of ∂_η Var scales as η* − η_c ∝ L^(−1/ν) (16), and we see that the function h(L) captures the system-size dependence of ∂_η Var at η*. The exponent in this equation matches the c parameter if we use the mean-field value of the critical exponent ν (5). More accurate studies are needed for checking if there are beyond-mean-field corrections to the critical exponent ν in the 3D Bose-Hubbard model. We believe that the key limitations here come from the relatively small system sizes that can be numerically handled. Finally, for the sake of completeness, we provide the results for the fitting parameter b, again under the variation of the order of Padé approximation (12): b = 0.059 ± 0.005 for n = 1, 0.029 ± 0.002 for n = 2, and 0.028 ± 0.002 for n = 3 (17). Next, we will discuss the scaling of ∂_η Var(η*) with the linear system size. We fit (18) to the numerics. Typical results that we obtain are presented in Fig. 7d-f, where again Padé approximants of order (8,8) have been employed. Next, we quantify the influence of the order of Padé approximation on these results. Proceeding similarly as with (13), (14) and (17), we get the results summarized in Table 1. All of them suggest that ∂_η Var(η*) slowly increases with the linear system size, reaching a finite value in the thermodynamic limit. This has an interesting consequence that can be readily spotted in Figs 5 and 6. Namely, we see that the curves showing ∂_η Var for different system sizes at constant temperature cross near the critical point. This can be explained by the finite-size scaling ansatz (15), taking into account that h(L) weakly depends on the linear system size, reaching a finite value in the thermodynamic limit. The latter remark follows from the fact that h(L) is proportional to ∂_η Var(η*), which we have just discussed. We mention in passing that a similar-looking crossing of curves near the critical point was used for finding the position of the critical point from QMC data for the excitation gap 26. Further insights into ∂_η Var can be obtained by setting k_B T/U = 0 and using the Feynman-Hellmann theorem to arrive at 14 ∂_η Var = −2η ∂²ℰ/∂η² (19), where ℰ is the ground-state energy per lattice site. This expression is closely linked to the one for specific heat, oftentimes studied in the context of classical phase transitions (see e.g. Fig. 1 and the discussion around it). Indeed, specific heat per lattice site can be written as 44 c = −T ∂²ℱ/∂T² (20), where ℱ is the free energy per lattice site. Its singular part is typically assumed to scale as |T − T_c|^(2−α) (21), where α is the specific heat critical exponent.
A quick look at (19) and (20) reveals the mapping between the two expressions. It is then unsurprising that the singular part of ℰ is usually assumed to scale as |η − η_c|^(2−α). The exponent α is linked to the z and ν critical exponents through the quantum hyperscaling relation α = 2 − ν(d + z) (22), where d is the system's dimensionality 3. A positive α would make ∂_η Var divergent at the critical point of the infinite system, which would imply that h(L) ~ L^(α/ν). As a result, α = 0 would be compatible with our numerics in the large-L limit. Such a value can be obtained by putting the mean-field critical exponents (5) into (22). There are, however, at least two reasons to be cautious here. First, the upper critical dimension of the Bose-Hubbard model is three, and so it is expected that there will be corrections to the mean-field scaling laws. As a result, it is unclear to us what the actual value of α is. Second, even if α were zero, the presence of logarithmic singularities in the derivatives of the ground-state energy could not be ruled out without detailed analysis. For example, such a situation takes place in the one-dimensional quantum Ising model, where α = 0 due to z = ν = d = 1 45,46. The singular part of its ground-state energy per lattice site, ℰ_Ising, turns out to be proportional to (g − g_c)² ln|g − g_c| in the thermodynamically large chain, where g is the magnetic field driving the transition and g_c is the critical point. As a result, ∂²_g ℰ_Ising diverges logarithmically with |g − g_c|. In the finite chain, the extremum of ∂²_g ℰ_Ising grows logarithmically with the system size. The latter property differs from what we seem to observe in the 3D Bose-Hubbard model. One possible explanation of our puzzling observation that ∂_η Var(η*) reaches a finite value in the thermodynamic limit might be that the system sizes that we consider are much too limited, rendering our extrapolations unreliable. Still, the use of our fitting results for interpolation purposes should be very well justified and useful. Finally, we would like to briefly discuss the dependence of our results on the filling factor n. The idea that we explore here comes from Teichmann et al. 25, where it was found through perturbative studies that deep in the Mott insulator phase the variance of the on-site atom number operator is a function of the single scaling variable combining η and n given by (23). This implies that ∂_η Var should be a function of (23) as well. Our QMC numerics, which we present in Fig. 8, perfectly follow these predictions away from the critical point on the Mott insulator side of the transition. We also see in this figure that the mapping (23) fails a bit in the superfluid phase for the low-n data that we explore. Nonetheless, judging from the quite good overlap between the n = 2 and 3 results, it is reasonable to expect that the mapping will be accurately supported by numerical simulations in both phases in the limit of n ≫ 1. Further studies are needed for establishing this observation. Discussion We have studied equilibrium properties of the 3D Bose-Hubbard model, focusing our attention on the variance of the on-site atom number operator and its derivative with respect to the parameter driving the superfluid-Mott insulator transition. Our results have been obtained in systems with the mean number of atoms per lattice site equal to one, two, and three. They come from Quantum Monte Carlo simulations. The key finding of this work is that the derivative of the variance has a pronounced maximum close to critical points.
For example, in a very small lattice of linear size 4, when the number of atoms equals the number of lattice sites, the position of the maximum estimates the position of the critical point with 10% relative accuracy (Fig. 4). The mismatch between the two decreases quadratically with the linear system size, and it can be further suppressed by a simple extrapolation to the thermodynamic limit. Besides discussing the position of the critical point, which is an interesting albeit non-universal feature, we have found that even in small systems the critical exponent ν can be extracted from the finite-size shift of the maximum of the derivative of the variance. This is interesting because knowledge of this exponent can provide important information about the universality class of the 4D XY model. This is the least-studied universality class of the XY model. Limited knowledge of its properties stems from numerical shortcomings, clearly seen in our work, and difficulties in finding physical systems where it can be experimentally approached. The latter can be done in condensed matter and atomic physics setups. In the condensed matter context, it was proposed that some properties of either strongly underdoped cuprate superconductors or 4He in nanoporous media can be captured by the 4D XY universality class 47-49. In the atomic physics context, cold atoms in a three-dimensional optical lattice are the best example of a system whose scaling properties should mirror those of the 4D XY model. We view cold atom setups simulating the 3D Bose-Hubbard model as the cleanest and most promising platform for future quantitative studies of the 4D XY universality class. In fact, measurements of the on-site atom number fluctuations in 3D Bose-Hubbard systems have been recently reported 38,39. A direct comparison of these results to our findings is difficult because the setups studied in refs 38,39 are non-uniform due to the external trapping potential adding a local chemical-potential-like term to Hamiltonian (1). We are hopeful, however, that blending the techniques presented in these references with the recent optical box trapping advances can lead to the successful creation of a homogeneous 3D Bose-Hubbard quantum simulator. Such a system could be large enough to overcome the small-size limitations plaguing numerical simulations. As a matter of fact, quantum simulators, at the very least, are supposed to do just that. Methods We use the Directed Worm Algorithm from the ALPS software package 50,51. This algorithm samples the path-integral representation of a density matrix of a grand canonical ensemble (GCE) with configurations called worldlines. Since we work with systems having a fixed filling factor n, computation of any average in a lattice of linear size L requires rejecting those worldlines where the total number of particles differs from nL³. To improve the sample count of the remaining fraction, the chemical potential is adjusted to set the expected GCE density of the system to n particles per site. The statistical error of the determined variance is significantly reduced by adopting periodic boundary conditions, where observables do not depend on the lattice site. As a result, the variance of the on-site atom number operator can be averaged over the resulting ensemble and over all L³ lattice sites, which is exactly what we do. Due to the amount of computational power needed, our QMC simulations are limited to the system sizes and temperatures discussed below equation (9).
Early symptoms of these limitations can be spotted in Figs 5 and 6. We see there that for the largest systems considered, there is a small warp in the derivative of the variance slightly to the left of the dotted lines marking the positions of critical points. Therefore, it is important to check that the positions of the maxima, which we extensively study, are stable under changes of the parameters of our QMC simulations. Several tests are thus performed. First, we vary the total number of later-averaged worldlines, typically reaching the level of 10^7 to 10^8. Second, when generating the worldlines, only every m-th trajectory is included in the final ensemble (if it additionally contains nL³ particles), to ensure that subsequent worldlines in the ensemble are independent.
6,001.8
2019-03-14T00:00:00.000
[ "Physics" ]
2D-NanoLC-ESI-MS/MS for Separation and Identification of Mouse Brain Membrane Proteins Introduction Comprehensive proteomics analysis has the potential to provide new knowledge on cellular responses in development, aging, drug action, environmental stress, and disease pathogenesis (carcinogenesis, cardiovascular disease, etc.). However, the separation and identification of proteomes/proteins is a challenging task due to their heterogeneous constituents, complex structures, and closely related physico-chemical behaviors. It is clear that the combination of many analytical techniques is necessary to fulfill this complex task. At the start of proteomics research, two-dimensional electrophoresis (2DE) was routinely used to separate complex proteomic samples because of its high resolving power. In this technique, proteins are separated in a two-step process (two dimensions) based on their different physical properties. The first dimension is isoelectric focusing, in which proteins are separated based on their isoelectric points (pI, the pH where a protein's net charge is zero) using immobilized pH-gradient strips. Proteins are then separated according to their mass using sodium dodecylsulfate-polyacrylamide gel electrophoresis (SDS-PAGE) in the second dimension. With 2DE, thousands of proteins can be detected in a single experiment, depending on the staining technique used (Coomassie blue, silver, or fluorescent dye staining) [11]. Mass spectrometry (MS), using either electrospray ionization (ESI) or matrix-assisted laser desorption/ionization (MALDI), is the key technology for the identification of protein spots, including membrane proteins, for which differential expression has been demonstrated [16,30]. 2DE, however, has some major drawbacks. It is time-consuming, difficult to reproduce, and hard to automate. Furthermore, 2DE faces many difficulties in analyzing several groups of proteins, such as low-abundance proteins, hydrophobic proteins (membrane proteins/membrane-bound and membrane-associated proteins), very large as well as very small proteins, and proteins with extreme pI values. Unfortunately, these proteins make up a high proportion of total cellular proteins and are usually the most promising targets for drug development or disease diagnostics. About 30% of the mammalian genome encodes integral membrane proteins [27]. However, the comprehensive proteomic analysis of these proteins by mass spectrometry is difficult due to the amphipathic nature (containing regions that are hydrophobic and hydrophilic) of integral membrane proteins and their generally low abundance levels [23]. Since the analysis of membrane proteins remains a significant challenge in proteomics, other techniques need to be established to address these problems. Many strategies have been developed for enriching, isolating, and separating membrane proteins for proteomic analysis that have moved this field forward. In recent years, two-dimensional liquid chromatography (2D-LC) has been employed as a complementary or alternative separation technique to 2DE.
The combination of liquid chromatography as a separation tool for proteins and peptides with tandem mass spectrometry as an identification tool, referred to as LC-MS/MS, has generated a powerful and broadly used technique in the field of proteomics [6,9,10,21,22], particularly in the analysis of membrane proteomes [18,19]. With the development of new quantitative strategies and bioinformatics tools to cope with the analysis of the large amounts of data generated in proteomics experiments, the resolution and sensitivity of state-of-the-art LC-MS/MS systems have reached dimensions allowing not only the analysis of individual proteins but also investigations at the level of complete proteomes [8]. This approach is usually based on the injection of the digested protein sample onto a strong cation-exchange (SCX) column as a first-dimension separation. Peptides bound to the SCX column are eluted and separated from the column as fractions by injecting salt plugs (a salt step gradient) of increasing salt concentration. Each fraction is subsequently separated on a reversed-phase (RP) column as the second, orthogonal separation dimension before being presented for mass spectrometric analysis. Different stationary phases in chromatography columns provide variable levels of resolution. Reversed-phase chromatography is highly compatible with subsequent mass spectrometric analysis due to the lack of salts in the buffers and provides relatively high-resolution separation. Most reversed-phase stationary phases for LC-MS analysis consist of silica beads of 3-5 μm in diameter with attached alkyl chains of either eight or eighteen carbons in length (C8 or C18). Using column switching, the entire procedure is on-line and fully automated. In order to improve sensitivity, the reversed-phase separation is usually performed at the nanoflow scale, and mass spectrometry is used as the final detection method. In this chapter, a strategy for the enrichment, isolation, separation, identification, and characterization of mouse brain membrane proteins with the basic setup of a two-dimensional nano liquid chromatography (2D-nanoLC) system (UltiMate™/FAMOS/Switchos™, LC Packings, Dionex, The Netherlands) coupled online with a QSTAR® XL MS/MS mass spectrometer (Applied Biosystems/MDS SCIEX, Ontario, Canada) is presented. Figure 1. A scheme illustrating the necessary steps, including enrichment and extraction, separation, identification, and characterization, for proteomic analyses of mouse brain membrane proteins using a gel-based approach in combination with comprehensive two-dimensional nano liquid chromatography (2D-nanoLC) coupled online with tandem mass spectrometry. Membrane protein enrichment and extraction Swiss mouse brains were collected as soon as possible after the animals were killed. The samples (3-5 g) were excised into approximately 5 mm wide pieces using scissors, washed with 10 ml of ice-cold PBS buffer (0.2 g KCl, 8 g NaCl, 1.44 g Na2HPO4, 0.24 g KH2PO4), and then resuspended in 3 volumes of homogenization medium (0.25 M sucrose in 5 mM Tris-HCl pH 7.4 with 1 mM tetrasodium EGTA, 1 mM sodium orthovanadate (Na3VO4), and 2 mM sodium fluoride in deionized, filter-sterilised MilliQ water) containing protease inhibitors (Calbiochem Protease Inhibitor Cocktail Set III, catalog number 39134, contains AEBSF, aprotinin, bestatin, E-64, leupeptin, pepstatin A). After the medium was drained off, fresh medium was added and drained off again.
10 ml of homogenisation medium (containing inhibitors) was added and the sample was homogenised using a Polytron in a Potter homogeniser with a motor-driven Teflon pestle at approximately 1,000 rpm. Completely homogenized samples were centrifuged at 10,000 rpm for 15 min at 4 °C to sediment large organelles. The obtained supernatant was re-centrifuged at 10,000 rpm for 15 min at 4 °C. The supernatant was collected and centrifuged at 40,000 rpm at 4 °C for 1 hr. After discarding the clear supernatant, the membrane pellets were retained and washed by resuspension in ice-cold 0.1 M Na2CO3 containing protease inhibitors for 1 hr. The mouse brain membrane protein fractions were obtained by centrifugation again at 40,000 rpm for 1 h at 4 °C. The sample was divided and stored at −80 °C until use. The protein concentration of the extracted membrane fractions was assessed using a Quick Start™ Bradford Protein Assay Kit (Bio-Rad, Hercules, CA 94547 USA). Protein quantification Protein concentration of the extracted membrane fractions was determined using Bio-Rad's Quick Start™ Bradford Protein Assay [5]. The assay is based on the observation that the absorbance maximum for an acidic solution of Coomassie Brilliant Blue G-250 shifts from 465 nm to 595 nm when binding to protein occurs. Both hydrophobic and ionic interactions stabilize the anionic form of the dye, causing a visible colour change. For the standard curve, bovine serum albumin over a wide range of concentrations (0.1-20 μg/μl) was used. The low-concentration-range assay was used in the test tube format. 2 μl of standard or sample was added to 798 μl of MilliQ water. 200 μl of Bio-Rad reagent was added, mixed, and incubated for 10 min at room temperature. The absorbance at a wavelength of 595 nm was measured in a spectrophotometer. Glass or polystyrene (inexpensive) cuvettes can be used; however, the color reagent stains both, so disposable cuvettes are recommended. In-gel digestion In-gel digestion of proteins isolated by gel electrophoresis was carried out according to the protocol published by Shevchenko et al. [25] with some modifications described in our previous studies [3,28,29]. All chemicals, including DTT, iodoacetamide (IAA), ammonium bicarbonate, ammonium acetate, trypsin (proteomics sequencing grade), sodium bicarbonate, and Triton X-100, were purchased from Sigma-Aldrich (St. Louis, MO, USA) and prepared using deionized, filter-sterilised MilliQ water. Upon electrophoresis, proteins were fixed within the polyacrylamide matrix by incubating the entire gel in 5% (vol/vol) acetic acid in 1:1 (vol/vol) water:methanol. Coomassie blue-stained protein bands were excised from the gels, placed into 1.5 ml Eppendorf tubes, and destained with 50% ACN in 25 mM NH4HCO3 pH 8.0 at room temperature with occasional vortexing until the gel pieces became white and shrank; the acetonitrile was then removed. The gel pieces were then reduced by incubation with 5 mM DTT solution at 56 °C for 45 min and alkylated for 1 hr with 20 mM IAA solution in darkness at room temperature. The membrane proteins were digested by adding trypsin buffer (0.03 μg/μl in 10 mM ammonium bicarbonate containing 10% (vol/vol) acetonitrile) and incubating overnight at 37 °C. Check whether all the solution has been absorbed and add more trypsin buffer if necessary. Gel pieces should be completely covered with trypsin buffer (typically, 50 μl or more).
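As a small illustration of the quantification step described above, a linear standard curve over the BSA standards can be fitted in a few lines (a common working approximation; the Bradford response is not strictly linear over wide ranges, and the variable names here are hypothetical):

```python
import numpy as np

def bradford_concentration(a595_samples, bsa_conc, a595_standards):
    """Estimate protein concentration from Bradford absorbance at
    595 nm using a linear standard curve built from BSA standards.

    bsa_conc:        known BSA concentrations (e.g., ug/ul)
    a595_standards:  measured A595 of those standards
    a595_samples:    measured A595 of the unknown samples
    """
    slope, intercept = np.polyfit(bsa_conc, a595_standards, 1)
    return (np.asarray(a595_samples) - intercept) / slope

# Example with hypothetical readings:
# conc = bradford_concentration([0.42, 0.57], standards_ug_per_ul,
#                               a595_of_standards)
```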
In-gel digestion In-gel digestion of proteins isolated by gel electrophoresis was carried out according to the protocol published by Shevchenko et al. [25], with some modifications described in our previous studies [3,28,29]. All chemicals, including DTT, iodoacetamide (IAA), ammonium bicarbonate, ammonium acetate, trypsin (proteomics sequencing grade), sodium bicarbonate and Triton X-100, were purchased from Sigma-Aldrich (St. Louis, MO, USA) and prepared using deionized, filter-sterilised MilliQ water. After electrophoresis, proteins were fixed within the polyacrylamide matrix by incubating the entire gel in 5% (vol/vol) acetic acid in 1:1 (vol/vol) water:methanol. Coomassie blue-stained protein bands were excised from the gels, placed into 1.5 ml Eppendorf tubes and destained with 50% ACN in 25 mM NH4HCO3, pH 8.0, at room temperature with occasional vortexing until the gel pieces became white and shrank; the acetonitrile was then removed. The gel pieces were reduced by incubation with 5 mM DTT solution at 56 °C for 45 min and alkylated for 1 h with 20 mM IAA solution in darkness at room temperature. The membrane proteins were digested by adding trypsin buffer (0.03 μg/μl in 10 mM ammonium bicarbonate containing 10% (vol/vol) acetonitrile) and incubating overnight at 37 °C. Check that all the solution has been absorbed and add more trypsin buffer if necessary; the gel pieces should be completely covered with trypsin buffer (typically 50 μl or more). Sample cleanup with C-18 ZipTips The resulting peptide digestion products were extracted by adding 100 μl of extraction buffer (1:2 (vol/vol) 5% formic acid:acetonitrile) to each tube and incubating for 15 min at 37 °C in a shaker. All extracts were collected, dried and re-dissolved in 10-20 μl of 0.1% FA, incubated for 2-5 min in a sonication bath and centrifuged for 15 min at 10,000 rpm in a bench-top centrifuge. The supernatant was bound to micro pipette tips (μC18, catalog number ZTC18S096; ZipTip®, Millipore Co., Billerica, MA 01821, USA) that had been equilibrated by aspirating and dispensing 100% acetonitrile, 40% acetonitrile/0.1% FA and 0.1% FA solutions. The samples were washed (4 times, by aspirating and dispensing) with 15 μl of 0.1% FA, then eluted with 10 μl of 40% acetonitrile/0.1% FA. Appropriate aliquots were withdrawn for LC-MS/MS analysis or stored at −20 °C as a contingency. Two-dimensional nano liquid chromatography (2D-nanoLC) The basic setup of an online two-dimensional nano liquid chromatography (2D-nanoLC) system (LC Packings, Dionex, The Netherlands) was developed for improved separation and recovery of hydrophobic peptides, especially for complex peptide mixtures produced by enzymatic digests of selected proteomes. The system works on the principle of eluting the digested peptides from the first-dimension SCX column with injected salt solution plugs of increasing concentration. The eluted peptides are trapped again and introduced into the nanoflow path for separation and analysis on the second-dimension RP column and by tandem mass spectrometry. The great advantage of the system is its robust and fully automated separation. The methods are easy to set up and are composed of identical runs differing only in the concentration of the injected salt plugs. After washing (~12 min), peptides were eluted from the reversed-phase C18 column using gradients of solvent B (0.1% FA in 85% LC-MS grade ACN): from 5 to 20% solvent B in 25 min, 20 to 70% in 28 min, 70 to 100% in 10 min, a hold at 100% B for 10 min, and back to 5% B in 5 min. According to the workflow, after 2D-nanoLC separation, peptides were analyzed by a QSTAR® XL MS/MS mass spectrometer (Applied Biosystems/MDS SCIEX, Ontario, Canada) equipped with a nanoESI source. MS and MS/MS spectra were recorded and processed in IDA (Information Dependent Acquisition) mode controlled by Analyst QS software. Typical settings select multiply charged ions for MS/MS that produce at least 45-50 ion counts/s in a 0.5 s survey scan. The MS full scan ranged from 400 to 1200 amu and was followed by MS/MS fragmentation of the three most intense precursor peptide ions for 1 s each.
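Both separation dimensions reduce to simple data: the SCX dimension is a queue of identical runs differing only in the injected salt-plug concentration, and the RP dimension is a set of time/%B breakpoints that the pump interpolates linearly. A minimal sketch in Python; the salt concentrations are illustrative placeholders (the text does not specify them), the gradient breakpoints follow the percentages quoted above, and the ~12 min wash is assumed to be a hold at 5% B:

    import numpy as np

    # SCX dimension: one identical RP-LC-MS/MS run per injected salt plug.
    # Plug concentrations are illustrative only; tune for column and sample.
    salt_plugs_mM = [0, 10, 25, 50, 100, 250, 500]  # e.g. ammonium acetate
    run_queue = [{"injection": i + 1, "salt_mM": c, "method": "RP_C18"}
                 for i, c in enumerate(salt_plugs_mM)]

    # RP dimension: (time_min, %B) breakpoints from the gradient in the text,
    # with the ~12 min wash represented as a hold at 5% B.
    gradient = [(0, 5), (12, 5), (37, 20), (65, 70), (75, 100), (85, 100), (90, 5)]
    t_pts, b_pts = zip(*gradient)

    def percent_B(t_min):
        """Pump %B at time t, by linear interpolation between breakpoints."""
        return float(np.interp(t_min, t_pts, b_pts))

    for run in run_queue:
        print(run)
    for t in (0, 20, 50, 70, 80, 90):
        print(f"t = {t:3d} min -> {percent_B(t):5.1f}% B")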
Protein identification and validation There are a number of different methods for identifying the proteins in a sample; the most frequently used is searching of the uninterpreted MS/MS data. FASTA-formatted protein sequences from the National Center for Biotechnology Information (NCBI) and UniProtKB/Swiss-Prot databases are collected for protein identification in each MS experiment. Uninterpreted MS/MS data from a single peptide or from a complete nanoLC-MS/MS run were automatically searched against a non-redundant protein database with the program SEQUEST, which correlates experimental data with theoretical spectra generated from known protein sequences [7]. The precursor mass is used as a filter to find a list of candidate peptide sequences from the theoretical digest of the database. A variety of different systems are used to score the experimental MS/MS spectrum against spectra predicted from the candidate peptide sequences. For protein identification, experimental data were searched against the NCBInr and Swiss-Prot mouse protein databases using Mascot v1.8 software, in which the criteria were based on the manufacturer's definitions (Matrix Science Ltd, London, UK) [20]. The parameters were set as follows: enzymatic cleavage with trypsin; 1 potential missed cleavage; a peptide and fragment mass tolerance of ±0.25 Da; fixed modification of carbamidomethyl (cysteine); variable modification of oxidation (methionine); and 1+, 2+ and 3+ peptide charge states. Protein identifications were performed using a MOWSE scoring algorithm with a confidence level of 95% and at least two matched peptides showing a high score [12]. For further verification, proteins can be validated with MSQuant software [1,4,24], available at http://msquant.sourceforge.net. MSQuant is a validation and quantitation tool that parses the Mascot peptide identifications (HTML files) and allows manual verification against the raw MS data (QSTAR XL raw files). MSQuant picks up significant, verified hits from the Mascot output file and exports information on the identified proteins into an .xls file, including the GI (GenInfo Identifier) number and molecular-mass values. The average hydropathy (GRAVY) values and transmembrane domains of the identified proteins were calculated using the SOSUI system, available at http://bp.nuap.nagoya-u.ac.jp/sosui/ [11]. Proteins exhibiting positive GRAVY values were classified as hydrophobic and those with negative values as hydrophilic [13]. Figure 2. An example of a hydropathy profile and the transmembrane regions/domains of an identified mouse brain membrane protein, calculated using the SOSUI system available at http://bp.nuap.nagoya-u.ac.jp/sosui/ [11]. Conclusion Identification and characterization of membrane proteins is a crucial challenge in proteomics research. We have therefore designed a gel-based strategy combined with comprehensive two-dimensional nano liquid chromatography (2D-nanoLC) that is robust and offers high separation capacity and high analysis throughput for mouse brain membrane proteins. Using this system, mixtures of in-gel trypsin-digested mouse brain membrane proteins were injected, desalted, separated and analyzed in a fully automated fashion. The workflow started with the extraction and purification of the membrane fractions; SDS-PAGE was then carried out as a useful preparative separation step. After staining, the gel slices with protein bands were cut out, reduced, alkylated and trypsin-digested. The peptide mixtures extracted from each gel slice were fractionated by 2D-nanoLC coupled online with tandem mass spectrometry analysis (nanoESI-Q-TOF-MS/MS). The proteins were identified by MASCOT searches against a mouse protein database using a peptide and fragment mass tolerance of ±0.25 Da. Protein identification was carried out using a MOWSE scoring algorithm with a confidence level of 95% and processed with MSQuant software for further validation. In total, 298 identified membrane proteins from mouse brain tissue were verified with the UniProt database and the SOSUI and TMHMM prediction algorithms.
Of these, 129 (43.3%) proteins have at least one transmembrane domain according to SOSUI and TMHMM. Furthermore, the function, subcellular location, molecular weight, post-translational modifications, transmembrane domains (TMD) and average hydrophobicity of the identified membrane proteins can be categorized and analysed.
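As an aside on the hydrophobicity classification used above: the GRAVY score is conventionally the mean Kyte-Doolittle hydropathy value over all residues of a sequence, with positive values taken as hydrophobic. A minimal sketch of that standard definition (not of SOSUI's internal algorithm, which additionally predicts transmembrane helices):

    # Kyte-Doolittle hydropathy values for the 20 standard amino acids.
    KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
          "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
          "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
          "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

    def gravy(sequence):
        """Grand average of hydropathy: mean KD value over all residues."""
        values = [KD[aa] for aa in sequence.upper() if aa in KD]
        return sum(values) / len(values)

    seq = "MKTIIALSYIFCLVFA"  # hypothetical example sequence
    score = gravy(seq)
    print(f"GRAVY = {score:.3f} ({'hydrophobic' if score > 0 else 'hydrophilic'})")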
3,623.2
2012-10-24T00:00:00.000
[ "Biology", "Chemistry" ]
Thermodynamics of Aging and Heredity A brief review of the author's works in the sphere of the thermodynamic theory of aging is presented. Particular attention is given to justification of the methods of classical and near-equilibrium "dynamic" thermodynamics used to assess the direction and extent of ontogenesis. It is noted that the discovery of the law of temporal hierarchies and the substance stability principle made it possible to use quasi-equilibrium thermodynamics to describe the aging of organisms and the evolution of the living world. The review contains certain examples confirming the thermodynamic direction of the origin of life and its development. The author states that supramolecular thermodynamics is the foundation of modern epigenetics. The review shows that the environment affects the composition and structure of the genetic apparatus, as well as gene expression, through the mechanisms of hierarchical thermodynamics. The author discusses the influence of "nutritive molecules" and other biological substances on tissue composition. It is noted that "nutritive molecules" can have an epigenetic influence on DNA and the genetic apparatus as a whole. The author gives recommendations regarding nutrition and the use of medicines from the perspective of the thermodynamic theory of aging. Though the author used simplifications and approximations when developing the thermodynamic theory of living beings' aging, the theory is in agreement with the detected correlation between changes in the specific Gibbs free energy of supramolecular structure formation and changes in the chemical composition of body tissues during aging. The author believes that hierarchical thermodynamics is the foundation of Darwinian natural selection. "One of the principal objects of theoretical research in any department of knowledge is to find the point of view from which the subject appears in its greatest simplicity." J. Willard Gibbs "The true and only goal of science is to reveal unity rather than mechanism." Henri Poincaré "The properties of living things are the outcome of their chemical and physical composition and configuration." Thomas Hunt Morgan Thermodynamic Theory of Aging All material systems and objects grow old. The phenomenon of aging observed in the inorganic world is usually taken for granted. The relatively easy construction and transformation of numerous objects in inorganic nature enabled scientists to discover a range of laws of nature. General laws of nature were formulated on the basis of the principles of simplicity and generality. Classical thermodynamics plays an important role in explaining the aging of objects in the inorganic world [1]. Many researchers followed the thermodynamic method, trying to explain complex processes in the inorganic and living world "in an outburst of thought" [2]. However, this approach proved inapplicable to life phenomena, as it did to other complex phenomena. The life sciences have seen the emergence of numerous aging theories that take individual facts into account and regard them as causes of aging. However, these facts are usually just signs or characteristics of aging. Such a situation resembles an attempt to find an elixir of youth or immortality. Of course, each of these hypotheses may contain some reasonable ideas, but they do not disclose the generality, the essence, of the phenomenon. One may notice that some individual descriptive theories of a general nature implicitly take into account the directional effect of thermodynamics on aging processes [3]. The thermodynamics of complex hierarchical systems is surely the driving force of aging [4]-[16].
However, the correct application of thermodynamics to complex living natural systems requires in-depth study not only of biology but also of interdisciplinary sciences, especially physical chemistry [17]-[20], biophysical organic chemistry [21] and other physical disciplines [22] [23]. It must be pointed out that many previous inept attempts to use thermodynamics to explain aging resulted in considerable misunderstanding, confusion and, ultimately, the discrediting of the science. It is sufficient to point out the erroneous ideas on "living dissipative structures" and the numerous wordy, irresponsible speculations on "entropy" or "negentropy" in natural systems [10] [11] [24]-[26]. It is much easier to understand the aging process from the viewpoint of thermodynamics if one studies this phenomenon from the perspective of changes in the chemical and supramolecular composition of body tissues [4]-[9]. At the end of the 1970s, the author [4] applied the methods of Gibbs' phenomenological thermodynamics to near-equilibrium dynamic living systems and showed that body aging has a thermodynamic nature. It was found that aging is accompanied by a change in the averaged specific Gibbs function (Gibbs free energy) of formation of the supramolecular structure of body tissues. The author claimed that this change resulted from the second law of thermodynamics as applicable to quasi-closed, near-equilibrium supramolecular systems. The tendency of all supramolecular formations and tissues of the body toward stable, near-equilibrium intermolecular structure is accompanied by the accumulation of chemically energy-intensive organic matter in the body. In the process of development (aging), the living body (as has long been known) loses water and acquires lipids (fat), proteins, polysaccharides and other energy-intensive chemical compounds. If we look at this phenomenon from the viewpoint of physical chemistry, it is obvious that the system becomes enriched with organic matter that is less stable than water, the amount of which significantly decreases with aging. It is worth mentioning that aging is accompanied by a certain accumulation of stable inorganic substances, for instance in bony tissues. However, this does not distort the general thermodynamic tendency of change in the chemical composition of the body during aging. In past years, the overwhelming majority of researchers believed that the specified enrichment of the body with relatively energy-intensive, poorly stable chemical substance contravenes thermodynamics. However, the author [4] claimed that this was wrong. According to him, the specified enrichment is a secondary effect. It results from the fact that energy-rich (relatively unstable) compounds have an increased affinity for supramolecular structure formation. The stability of the body's supramolecular structures increases as a result of the second law of thermodynamics. The accumulation of relatively unstable organic matter in the body tissues is accompanied by its gradual spontaneous decomposition, which is considerably accelerated under the influence of environmental oxygen. All of this manifests itself in the body's metabolic cycle as well as in biological evolution on the whole.
In fact, the author claimed that the following held true for the supramolecular (intermolecular) and chemical (molecular) hierarchies in evolution and ontogenesis (during aging): "The higher the supramolecular stability of body tissues (the more negative $\Delta \tilde{\bar{G}}^{im}$), the lower their chemical stability (the less negative $\Delta \tilde{\bar{G}}^{ch}$)", and, conversely, "The higher the chemical stability, the lower the supramolecular stability". Here $\Delta \tilde{\bar{G}}^{im}$ is the specific Gibbs free energy of formation of the supramolecular (intermolecular) structure of body tissues, and $\Delta \tilde{\bar{G}}^{ch}$ is the specific Gibbs free energy of formation of the molecular (chemical) structure of body tissues. The bar over $G$ means that the value is specific, and the tilde emphasizes the heterogeneous character of the system. The author named the presented regularity the "principle of chemical substance stability". Subsequently, the author extended this principle to all hierarchies of living matter and called it the "principle of substance stability". Figure 1 shows the specified regularity. The saw-tooth lines plotted against the curves emphasize that fluctuations of environmental parameters such as temperature, pressure, nutrition schedule, physical fields, the change of day and night, the change of seasons, etc., lead to variation of $\Delta \tilde{\bar{G}}^{ch}$ and $\Delta \tilde{\bar{G}}^{im}$. Organisms adapt to this variation only within the limits of the adaptive zone. To date, the thermodynamic theory of aging (like the thermodynamic theory of the origin of life and biological evolution) has not undergone fundamental changes. Only some formulations were made more precise, explanations were added, certain misunderstandings were removed, and some terms and designations were added. As a rule, the author mentioned this in his works. These changes and explanations were meant to avoid confusion and possible misunderstandings. For specialists in physical chemistry, the described picture of aging is usually unsurprising. Aging of the body, a non-stationary thermodynamic system, resembles the aging of the sorbent (adsorbent) of an equilibrium, or rather quasi-equilibrium, chromatographic column, which with time loses its activity due to accumulation of the "aging sorbent" that has a high affinity for the sorbing substance. It is therefore clear why the author sometimes calls the thermodynamic mechanism of aging a "chromatographic" one, and the aging theory the "chromatographic theory of aging" [27]-[29] (see: http://www.eoht.info/page/Aging). Figure 2 diagrammatically shows the change in the specific Gibbs free energy of formation of supramolecular tissue structures in the process of body development and aging during ontogenesis. The aging of a burning candlewick, rust formation in open flow-through systems and many other phenomena serve as common analogues of living beings' aging in the inanimate world [30]. Since biological evolution, phylogenesis and ontogenesis are thermodynamically directed, it is reasonable to consider these phenomena from the perspective of changes in the specific values of formation of all hierarchical structures, using the Gibbs equation, a generalized equation of the first and second laws of thermodynamics [5]-[7] [14]-[16] [30]. It is appropriate to mention that, in our case, the generalized Gibbs equation takes into account spontaneous processes in the system and non-spontaneous processes stimulated in the system by the environment, which is a changing physical thermostat [5] of the complex quasi-closed system under study.
The well-known equation for the differential of the Gibbs function (free energy) may be represented in the following generalized form [7] [14]-[17]:

$dG^{*} = -S\,dT + V\,dp - \sum_{i}\sum_{k} X_{ik}\,dx_{ik} + \sum_{i}\sum_{k} \mu_{ik}\,dm_{ik}$   (1)

where G is the Gibbs free energy; T, the temperature; S, the entropy; V, the volume; p, the pressure; X, any generalized force except pressure; x, any generalized coordinate except volume; μ, the chemical (evolutionary) potential; and m, the mass of the k-th substance; work performed by the system is negative. The index i pertains to the specific evolution and k to the component evolution. The superscript * means that the behavior of a quasi-equilibrium complex system is considered. The above equation is a generalized one since, in principle, all interactions (internal and external) of all structures at each hierarchical level are taken into consideration, regardless of the scale of these interactions. It is logical to consider this equation as one with considerably separated parameters, symbolic or speculative, that can be efficiently applied only to individual or adjacent hierarchies of structures. In such cases, the Gibbs equation is considerably simplified because the majority of its individual terms are negligibly small. Apparently, the validity of the thermodynamics of the origin of life, evolution and living beings' aging was seriously confirmed by the understanding that hierarchical thermodynamics is the foundation of the evolution theory of Ch. Darwin and A. Wallace, which is based on the mechanism of natural selection [13]. It is necessary to point out that this famous theory is now defined more precisely from the thermodynamic perspective and is extended to all hierarchical systems. Thus, hierarchical thermodynamics makes it possible to claim that the "common ancestor" of living bodies was an "aggregate of abiogenous molecules" or certain "blocks of life", not a living being. It should also be mentioned that many works by chemists and biologists made it possible to realize that evolution and aging can be explained from the perspective of hierarchical thermodynamics. For instance, the author intuitively felt the thermodynamic direction of the living world's development when he studied works by Aleksandr Oparin, Arthur Kornberg, Frederick Sanger, M. Ichas, Leslie Eleazer Orgel, M.S. Kanungo, Charles Tanford, C.R. Cantor, P.R. Schimmel and other researchers [5]. It should be noted that recent outstanding works by Professor Kenji Sorimachi unambiguously confirmed the directionality of prebiological and biological evolution. Thus, the thermodynamic theory of aging has come to use the idea of change in the thermodynamic stability of molecular and supramolecular assemblies, i.e., structures of variable chemical and supramolecular composition. Such a correlation turned out to be useful, yet qualitative. The point is that thermodynamics cannot use absolute values of the state functions (except for entropy) of elementary substances. Thermodynamics relies on the "standard reference level", which is set as identical and equal to zero for all chemical elements (elementary substances) in the standard state. To determine precisely the change in the stability of biological tissues when their chemical composition changes, it would be necessary to study the change in stability in accordance with the stoichiometric equations of the processes. It is quite acceptable to compare the chemical stability and supramolecular stability of substances and compositions of very similar atomic composition, for instance oils, fats and lipid fractions.
Let us discuss this problem in more detail due to its importance. One of the questions related to verification of the thermodynamic theory of aging is as follows: "To what extent is it correct and reasonable to compare the standard isobaric potentials of substance formation, $\Delta G_f^0$ (specific Gibbs free energies of formation), of different compounds with their relative thermodynamic stability?" This question arises, for instance, during study of the transition from chemical evolution to biological evolution. The free energy (isobaric potential) of formation of a chemical substance, $\Delta G_f^0$, is the free energy change for the reaction during which the substance in its standard state is formed from the elements (elementary substances) taken in their standard states. Thus, the value of $\Delta G_f^0$ may be regarded as the stability of a substance relative to a "reference point", represented by the "stability of the original substances" taken in amounts corresponding to the stoichiometric reaction of the substance's formation. It is therefore necessary to keep in mind that the specified stability is a value related only to the "original" composition and quantity of elementary substances. Thus, it may be uninformative, incorrect or even senseless to compare the values of $\Delta G_f^0$ pertaining to different chemical compounds from the viewpoint of their "absolute stability". However, we should point out that such a comparison, applied to several compounds that are "thermodynamically identical in composition" and used by living nature during life's origin and development, shows that the values of $\Delta G_f^0$ for various "small groups of elements" differ considerably. This proves that the specified qualitative correlation is reasonable. Indeed, the values of the "standard stability" of compounds containing the atomic groups "C, N", "H, C, N", "C, O", "H, C, O" and "H, C, N, O" differ considerably and comply with our ideas on the relative physical and chemical tendency of chemical composition change in structures during aging and biological evolution. Table 1 and Table 2 show the values of the standard isobaric potentials of formation (Gibbs free energies of formation), $\Delta G_f^0$, of several nitrogen-containing and oxygen-containing substances under standard conditions [19]. One can see clear differences between the free energies of formation of the specified nitrogen-containing substances (including initial substances, the "life blocks"), $\Delta G_f^0 > 0$, and the free energies of formation of the specified oxygen-containing substances (some metabolism products), $\Delta G_f^0 < 0$. This proves, though intuitively, the reasonableness of the conclusions on the tendency of chemical structure change in bodies during evolution and aging. All compounds represented in Table 1 are, under standard conditions, unstable with respect to the initial elementary substances they formed from. We can point out that the first three compounds mentioned in Table 1 (cyanogen, dicyanoacetylene, hydrocyanic acid) are known to be relatively stable at high temperatures: their stability increases to a certain degree as the temperature rises. The relative stability of the compounds listed in Table 2 is relatively high at low temperatures (their stability increases as the temperature drops). Compounds of the element groups "C, N" and "H, C, N" are mainly formed in non-spontaneous processes under the influence of relatively high energies on earth or in space. Compounds of the groups "C, O" and "H, C, O" are mainly formed in spontaneous processes on earth or in space.
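The sign pattern described here can be checked directly against tabulated standard free energies of formation. The following sketch uses approximate values from common thermochemical tables, quoted only to illustrate the positive/negative split between nitrogen-containing "life blocks" and oxygen-containing metabolic products; they are not the values of the paper's Table 1 and Table 2:

    # Approximate standard Gibbs free energies of formation (kJ/mol, 298 K),
    # from common thermochemical tables; for illustration only.
    dGf0_kJ_mol = {
        "hydrocyanic acid HCN(g)": +124.7,  # "H, C, N" group
        "cyanogen C2N2(g)":        +297.0,  # "C, N" group (approximate)
        "carbon dioxide CO2(g)":   -394.4,  # "C, O" group
        "water H2O(l)":            -237.1,
        "methanol CH3OH(l)":       -166.3,  # "H, C, O" group
    }

    for name, dG in dGf0_kJ_mol.items():
        tag = "unstable" if dG > 0 else "stable"
        print(f"{name:26s} dGf0 = {dG:+7.1f} kJ/mol ({tag} vs. the elements)")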
Apparently, the informativeness of the specified correlation is connected with a certain uniformity of the features of the atoms forming part of the individual groups of elements used by living beings. These small groups are formed by three (or two) separate elements chosen from among the atoms of H, C, N and O, as well as P and S, the "basic elements" of living nature. The stated qualitative considerations are important and may serve as evidence for the thermodynamic direction of evolution. However, there are also general quantitative confirmations of the thermodynamic direction of the evolution of living systems. On the whole, it is worth noting that the presence of nitrogen and sulfur atoms in "carbon-hydrogen-containing" organic molecules is a sign of the relative instability of substances under normal conditions [19]. At the same time, the presence of oxygen in "carbon-hydrogen-containing" organic molecules usually indicates the relative stability of these chemical compounds [19]. The simultaneous presence of nitrogen and oxygen atoms in organic molecules should be considered a borderline case that determines the thermodynamic direction of metabolic processes. Apparently, it will be interesting to pin down the physical basis of the specified tendencies more precisely in the future. It seems likely that the periodic law of Dmitri Mendeleev is a key to the riddle. To detect the direction of aging, it is primarily necessary to show how the supramolecular stability of molecular aggregates, including the body's membranes, cells and tissues, changes during ontogenesis. The point is that this stability determines the elasticity of supramolecular tissue structures, their metabolism rate and other important features of the body's biological structures. As already noted, the comparison of the stability of substances with similar atomic composition, for instance some lipids, oils and fats, may be considered valid and quite precise. Based on this fact, let us proceed to justify the comparative thermodynamic evaluation of aging and of the gerontological quality of food products. Thus, as regards natural fatty acids, oils and fats, we may accept to a good approximation that the initial atomic composition of these compounds (connected with the ratio of the elements forming the substances) is almost identical. It is therefore quite reasonable to use the free energy of formation of the specified substances, $\Delta G_f^0$, to compare their relative chemical stability under standard conditions. The conclusion on the supramolecular stability values for some oils and fats is confirmed by quantitative estimates made with the approximate Gibbs-Helmholtz-Gladyshev equation [31]. As applied to natural fats and oils, it can be written, to a good approximation, as [7]-[12]

$\Delta \bar{G}^{im} \approx \Delta \bar{H}_{m} \, (T - T_{m})/T_{m}$   (2)

where the specific enthalpy of melting, $\Delta \bar{H}_{m}$, and the melting (congelation) temperature, $T_{m}$, are known. Such estimates refer also to ontogenesis, which, according to thermodynamics, repeats phylogenesis and evolution in general. Strict estimations unambiguously substantiate the thermodynamic direction of bodies' aging, of the processes of the origin of life and of biological evolution. It is sufficient to point out here that aging, just like chemical and biological evolution, is accompanied by "summary thermodynamic processes" characterized by a decrease in free energy [7]-[12] [28]. On the whole, it is worth mentioning that equilibrium and quasi-equilibrium thermodynamics has been widely used to detect the direction of the origin of life, aging and evolution [24] [32] [33]. Apoptosis is an issue of importance to thermodynamics. The question is whether this phenomenon exists. It is often said that aging is not genetically programmed.
We believe that such an understanding of the aging process is incorrect from the viewpoint of hierarchical thermodynamics. The aging tendency, that is, the direction of aging, is thermodynamically programmed. However, many apoptosis stages depend on changes in various environmental factors of all hierarchical body structures, including the molecular and supramolecular environment. Body aging is conditioned not only by the direct conversion of its chemical and supramolecular compositions. Aging processes also depend on both chemical and physical environmental effects [30] [34] [35]. Hierarchical thermodynamics claims that all molecular and supramolecular body structures, including the genetic apparatus, chromatin, grow old in the process of ontogenesis. Structures of higher hierarchies also grow old in the living world. In this case, DNA aging results in additional directional aging of the functions of the genetic apparatus and in body tissue aging. These processes produce a "snowball effect" bringing the body closer to death. Apparently, it would be reasonable to speak of genetically inherited, genetically modified and adaptive tissue aging, where adaptive tissue aging does not depend on changes in the DNA sequence. The first two types of aging can be connected with epigenetic mechanisms. Tissue aging can sometimes be regarded as an adaptive process that can be reversed. In any case, hierarchical thermodynamics admits that there are various types and ways of aging of all hierarchical body structures. Nowadays, some colleagues ask why the thermodynamic theory of aging has not yet become widespread and recognized. Primarily, this situation is likely connected with a stable fashion in science. As mentioned above, some researchers still rely on the erroneous ideas of I. R. Prigogine on "living dissipative structures" and try to use the "thermodynamics" of systems far from equilibrium. This "thermodynamics" uses an idea of "entropy" that lacks an exact differential. Moreover, the specified "entropy" of Prigogine cannot be calculated or determined experimentally [26]. Some authors consider the issue of the overall entropy production of living beings. They believe this production leads to an increase in the entropy of the whole planet. Of course, this value cannot be informative in principle, and the corresponding hypothesis cannot be physically substantiated or checked. Besides, many science amateurs often discuss entropy and the second law of thermodynamics with an evident disregard for the works of Rudolf Clausius, J. W. Gibbs and other acknowledged scientists. At the same time, as mentioned above, it became obvious that in order to determine the direction and degree of completion of ontogenesis (aging) and phylogenesis, one can use the Gibbs free energy of formation or conversion of chemical and supramolecular body structures and make reasonable quantitative estimations. Let us proceed to a discussion of the genetics and epigenetics of aging. According to thermodynamics, these spheres of knowledge are interconnected. Both genetic and epigenetic conversions influence aging and heredity. The hereditary apparatus, represented by chromatin, is subject to aging like all body structures and tissues. However, the speed of its aging may differ considerably from that of tissue aging. The main reason for this is the enhanced supramolecular stability of DNA. Hence, the DNA structure should depend less on the nature of food than other body structures and tissues do. The author substantiated this conclusion when discussing the speed of aging and the turnover of various substances during metabolism.
Thus, it was pointed out that the turnover rate of the chemical composition of fat tissues exceeds that of muscular tissues. This effect can be observed when the nature of the diet changes [7]-[12]. A change in the gene structure of the organism is connected with a change in the DNA sequence, whereas epigenetic changes are mainly conditioned by conformational transformation of the chemical and supramolecular structure of the entire genetic apparatus. Genetic apparatus transformation is generally connected with chemical and supramolecular processes accompanied by indistinct free energy changes of various scales. This makes it difficult to distinguish definitively between chemical and supramolecular transformations in the genetic apparatus. Thus, chemical reactions such as DNA methylation are believed to change the epigenetic features of the genetic apparatus. Changes in the chemical and supramolecular structure of chromatin are also classified as epigenetic processes. In general, the terms and statements used in "epigenetics" sometimes carry different meanings. Probably, this is because many researchers did not use the term "epigenetics" before. For instance, M. S. Kanungo and many of his colleagues avoided using this and similar terms in their research on aging and genetic apparatus transformation [36]. The author of this article likewise did not use this terminology when dealing with epigenetic problems from the perspective of supramolecular thermodynamics, in other words, epigenetic thermodynamics. Genetics and Epigenetics from the Perspective of Thermodynamics The overwhelming majority of works on aging are dedicated to aging mechanisms. In other words, researchers try to answer the question "How do we age?" However, it seems more important to know "Why do we age?" The answer to the latter question should be given by thermodynamics, the driving force of everything that happens in the world. Unfortunately, many authors do not even try to answer this question. The very word "thermodynamics" is absent from the vast majority of publications. As already noted, this is connected with numerous misconceptions about the second law of thermodynamics and the possibility of its use for understanding the phenomenon of life. Nevertheless, it is worth mentioning that thermodynamics is widely used in the life sciences. The aging of genes, chromatin and tissues in ontogenesis proceeds in accordance with the laws of chemical and supramolecular thermodynamics of spontaneous processes and with the external environmental effects that stimulate non-spontaneous body transformations. As already noted, these processes should be considered within the framework of the generalized Gibbs equation, a generalized equation of the first and second laws of thermodynamics [24] [35]. In this case, the general Equation (1) is simplified because only the terms related to molecular and supramolecular transformations need be taken into account. The DNA chemical structure is quite conservative, in accordance with the principle of substance stability. According to thermodynamics, the type of DNA structure is predetermined by the structure of the abiogenous molecules that form and exist in space and on celestial bodies. It appears that the chemical structure of nucleobases, genes and even the genetic code should be the same (or of a single type) throughout the entire universe. That is the conclusion we come to, at least intuitively, according to the indisputable laws of thermodynamics.
Thus, spontaneous aging of DNA under normal ontogenesis can apparently take place during the unpairing of nucleic acid chains. In such cases, epigenetic (catalytic) transformation mechanisms should act. Another path of spontaneous DNA aging may be connected with the direct influence of chemical agents that quickly react with DNA chains or turn an "epigenetic effect" into a "chemical transformation" of the structure of the macromolecular sequence itself. Horizontal gene transfer should also be classified as a spontaneous process of DNA transformation. Non-spontaneous changes in DNA sequence are driven by various physical radiations and environmental forces. The influence of these factors can be treated as tropism phenomena [24] [35], which are represented in the form of individual terms of the generalized Gibbs Equation (1). Numerous studies have been conducted in this area. However, it is not always easy to interpret them unambiguously. Nevertheless, we know for sure that the DNA structure changes during aging under the influence of various factors. Sometimes these factors are unpredictable. For instance, there are mutations caused by the paternal lifestyle and inherited by the children even if these mutations took place before their conception. Moreover, observations show that such germline mutations are present in all of the children's cells, including their own sex cells. This means that the paternal lifestyle "contains certain information" that influences the DNA of several generations, not only of his direct descendants [37]. It is worth mentioning that the majority of gerontologists are convinced that aging is connected with changes in the structure not only of DNA backbones but also of RNA and proteins. Apparently, Leonard Hayflick is right in saying that the loss of precise or reliable information during aging results from the accumulation of accidental exposures damaging essential molecules of DNA, RNA and proteins. Special attention should be paid to the research of Kenji Sorimachi [38], who claims that evolution is based on the genome structure. He convincingly showed that Darwin's natural selection is doubtless an important factor in biological evolution and that all species originated from a single life source [39]. This conclusion is in agreement with the thermodynamic direction of evolution [4] [13] and refers equally to the aging process. Interesting ideas are voiced by DMR Sekhar [40], who endows the genome with certain brain-like functions. He believes that life is a state of matter with primary emergent properties such as self-programmability (genopsych), consciousness and free will, the origin of which is traceable to the genome. He thinks that genopsych is to the genome what mind is to the brain. In principle, these statements do not contradict hierarchical thermodynamics and the aging theory. In any case, DMR Sekhar's ideas set us thinking about the ways in which the thermodynamic information recorded in prebiological molecules and atoms is transferred to all hierarchical levels of living matter. It is important to point out the special role of epigenetic thermodynamics in aging processes. In this respect, we will take a closer look at contemporary ideas on the direction of aging connected with epigenetics. Besides, for clarity's sake, we will repeat some statements that have already been made. As noted above, it is often difficult to distinguish between genetic and epigenetic changes in the hereditary apparatus, since these changes can accompany each other.
Generally, epigenetics is a branch of knowledge that studies heritable changes in gene expression or cell phenotype that are not connected with changes in the DNA sequence. Epigenetic changes may be preserved during the division of somatic cells and transferred to the following generations. Epigenetic reactions include the formation of various supramolecular structures with the participation of DNA, as well as changes in the chemical composition of some nucleobases that do not change the DNA sequence. The best studied processes are those of DNA methylation, which are not accompanied by a change in the sequence of the macromolecular chains of the nucleic acid itself but lead to changes in the supramolecular environment and its conformational structure. Of course, the DNA methylation reactions, mainly connected with cytosine methylation, proceed in accordance with the laws of chemical (molecular) thermodynamics. These processes are spontaneous and accompanied by a decrease in free energy. Epigenetic mechanisms have been of interest to researchers for a long time. However, these mechanisms were studied from the perspective of structural and certain chemical transformations that do not change the nucleotide sequence of nucleic acids. Only a few authors mentioned the thermodynamic direction of the processes. It became possible to study bodies from the viewpoint of supramolecular thermodynamics after researchers had realized the possibility of independently determining the molecular and supramolecular temporal hierarchies and after the principle of substance stability during the evolution and aging of living beings had been formulated (1977). It became obvious that supramolecular thermodynamics is the primary driving force of evolution and aging. Consequently, it turned out that the conformations of biological polymers, proteins, saccharides, nucleic acids and low-molecular-weight compounds could change considerably when their molecular environment changed. This meant that the body's genetic apparatus could be subject to transformation under the influence of small molecules penetrating the cells via the blood and skin. However, in previous years, the epigenetic influence of low-molecular-weight substances on DNA or RNA conformation was not always called "epigenetic", since this usually referred to the possibility of applying supramolecular (intermolecular) thermodynamics to any structures of the organism. As already noted, the author subsequently followed this tradition and did not use the term "epigenetics", though he continued his research in this sphere from the perspective of chemical and supramolecular thermodynamics. In fact, supramolecular thermodynamics is the basis of epigenetics. For instance, in his monograph ([7], p. 67), the author wrote the following: "It makes sense to discuss from the thermodynamic perspective only the principal role played by DNA as the carrier of genetic information, whereas the properties and functions of DNA are also determined to a certain extent by the chemical and supramolecular structures framing the double helix." Another monograph, with a special section "On Supramolecular Thermodynamics of Genes and Aging" ([9], p. 99), as well as work [41], says: "… 'soft' anti-aging interference in DNA (RNA) supramolecular structures can be carried out by introducing chemically inert agents into nuclei and other cell elements. Such directed action does not promote changes in gene structure, but can affect the processes of their adaptation to changes in the environmental conditions."
Unfortunately, many researchers neglected the achievements of classical and hierarchical thermodynamics [5] [7] [21] and addressed the problem only from the perspective of empirical inductive methods. In several works, the author repeatedly emphasized the advisability of using the recommendations of hierarchical thermodynamics in the gerontology of nutrition and in the treatment of various diseases, including cancer. Thus, work [12] said: "Lastly, it is important to take into account, from the viewpoint of hierarchical thermodynamics, that anti-aging diets and many drugs can be used for prevention and treatment of cardiovascular diseases, cancer, and many other illnesses." We made this conclusion on the basis of thermodynamic estimations of the gerontological value of food products. Besides, we advanced a hypothesis about the role of dormant genes in the appearance of malignant neoplasms: the principle of substance stability allows us to understand the influence of some chemical substances on the supramolecular structures of nucleic acids [6]. As a result of the action of such substances, ancient dormant genes (accumulated during the evolution of living beings) can awaken. These genes can stimulate some types of cancer [28]. What is the experimental evidence for the epigenetic thermodynamic direction of aging? The main proof of the thermodynamic nature of the aging of a cell's genetic apparatus is the directed rise in the chromatin melting temperature in various tissues of humans and animals during ontogenesis. References to the pioneering works in this sphere are given, for instance, in the monograph by M. Kanungo [36]. Subsequently, the author analyzed these and other similar results using the methods of hierarchical thermodynamics [5]. Analysis using Equation (2) showed that the rise in the chromatin melting temperature is unambiguously connected with an increase in the thermodynamic stability of chromatin supramolecular structures, which accompanies aging. There is evidence that changes in chromatin structure are connected not only with supramolecular transformations but also with changes in the sequence and destruction of the main chains. Lately, articles have appeared that confirm predictions of the thermodynamic theory of the origin of life, evolution and aging. It is worth paying attention to research on the supramolecular structure of the genetic apparatus concerning epigenetic processes and gene expression. Thus, we can point out important works by Trygve O. Tollefsbol and his colleagues [42] [43]. Article [42] says that epigenetic processes are easily reversible. This circumstance is especially important from the viewpoint of treating epigenetically initiated diseases. The mentioned authors also showed that natural compounds can be epigenetically active in the prevention and treatment of cancer. The presented experimental investigations [42] confirm the conclusions of the author of this article concerning the thermodynamic supramolecular (epigenetic) mechanisms of aging. Besides, the statements on the occurrence and treatment of cancer and some other diseases [12] [28] are becoming better grounded. Works by Professor V. K. Khavinson and his colleagues discuss the epigenetic effect of peptides, which he recommends as food supplements [44]. The conclusions of these works do not contravene supramolecular thermodynamics and are quite valid.
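As a rough worked illustration of such Equation (2) estimates: taking the Gibbs-Helmholtz form given earlier at face value, a higher melting temperature translates into a more negative specific free energy of supramolecular structure formation at physiological temperature, i.e., a more stable structure. A minimal sketch; every number below is an illustrative placeholder, not measured data:

    # Gibbs-Helmholtz estimate per Equation (2):
    # dG_im ~ dH_m * (T - T_m) / T_m, per gram of substance.
    # All numbers are illustrative placeholders, not measured data.

    def delta_g_im(dH_m_J_per_g, T_K, T_m_K):
        """Specific free energy of supramolecular structure formation at T."""
        return dH_m_J_per_g * (T_K - T_m_K) / T_m_K

    T_PHYS = 310.0  # K, approximate physiological temperature
    DH_M = 80.0     # J/g, assumed common melting enthalpy

    # Two hypothetical supramolecular structures differing only in T_m.
    structures = {"young (lower T_m)": 330.0, "old (higher T_m)": 345.0}

    for label, t_m in structures.items():
        print(f"{label}: dG_im ~ {delta_g_im(DH_M, T_PHYS, t_m):+.1f} J/g")
    # The higher-melting ("old") structure yields the more negative dG_im,
    # i.e. the more stable supramolecular structure, consistent with the
    # observed rise in chromatin melting temperature during aging.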
On Epigenetics and Diets In works [10] [24] [29] [45]-[52], the author for the first time substantiated the idea that all components of the natural food consumed may be evaluated with the use of a gerontological value indicator based on thermodynamic parameters. The theory was based on the well-known facts of tissue enrichment with molecular fragments of the food consumed. For instance, the systematic use of soft fats rather quickly increases the content of such (similar) fats in the tissues of animals and humans and makes blood vessels elastic. At the same time, it was assumed that food constituents such as amino acids, sugars and fatty acids were directly used by the organism to synthesize proteins, polysaccharides, fats and other "structural materials" of the body. Since the effects of change in tissue composition due to changes in food composition were considered reversible, these effects were not directly associated with DNA participation in the adaptive transformation of the body's chemical structure. However, in subsequent studies, the author began paying attention to the fact that the character of food influenced gene expression. He stated that this phenomenon was connected with the epigenetic action of the nutritive and similar molecules. Thus, we may assume that the character of food is reflected both in immediate adaptive changes in the body's composition and in inherited epigenetic changes. It is worth noting that adaptive and inherited changes explained by the principle of substance stability are interconnected, though they are manifested at different rates. Products of digestion are components suitable for absorption and participation in metabolism. In the process of proteolysis, proteins break down into amino acids and, partially, small peptides. Fats split into glycerol and fatty acids, as well as monoglycerides and diglycerides. Carbohydrates split into monosaccharides, though part of the breakdown products is represented by trisaccharides and disaccharides. "Young food" (the biomass of young living beings) contains a relative surplus of fragments of complex molecules. Such food is also rich in physiologically active low-molecular substances, for instance hormones and their simulators, as well as vitamins, combined microelements and other compounds that do not degrade in the digestive tract. In other words, the character of the substances entering the blood stream greatly depends not only on the food type but also on the age of the organisms or plants used for food. It should be noted that a lack of variety of physiologically active substances in food can be made up for by using various food supplements and medicinal preparations. As already mentioned, the author called the substances entering the blood stream after food digestion "nutritive molecules". The nutritive molecules include unchanged fragments of various high-molecular food components as well as low-molecular substances that do not undergo considerable transformations during digestion. Nutritive molecules are, in fact, unchanged molecules or parts of molecules that broke down during food digestion. In English, nutritive molecules may also be referred to as "nutritive particle molecules" [52].
Surely, a high concentration of nutritive molecules of young organisms in the human blood promotes the synthesis of proteins, fats, carbohydrates and other metabolites whose chemical composition corresponds to somewhat rejuvenated tissues, compared with the tissues of a person eating the biomass of ontogenetically old or evolutionarily well-developed ("phylogenetically old") organisms. In our opinion, this conclusion follows from the hierarchical thermodynamic theory of biological evolution and the aging of living beings. The quantitative estimation of an increase in the stability of supramolecular structures during aging, for instance by measuring the melting temperatures of supramolecular formations (chromatin, tissue structures), is generally approximate. However, as already noted, the comparison of the stability of substances with slight differences in their general (atomic) composition, for instance many oils and fats, can be considered quite correct. Based on this fact, let us proceed to substantiate the comparative thermodynamic evaluation of aging and of the gerontological quality of food products. Thus, for some natural oils, fatty acids and fats based on them, we may assume to a good approximation that their original atomic composition (ratio of elements) is almost identical. Therefore, we may use the free energy of formation of the specified products, $\Delta G_f^0$, to compare their relative chemical stability. However, a correlation of the known data on $\Delta G_f^0$ [53] shows that this comparison may be ambiguous. In general, any comparison of the gerontological value of food products and their components should be made according to standard procedures, and one should take into account possible "variety" in the samples under study. Apparently, for practical purposes it is most reasonable to compare the melting or congelation temperatures of oils and fats. In concluding this section, we would like to draw attention to the fact that the thermodynamic theory of aging, including thermodynamic dietetics, is primarily based on physical chemistry, which relies, as a rule, on the models of ideal gases and ideal solutions. Physical chemistry also prefers studying the transformations of individual chemical compounds. The transition to research on complex systems requires various approximations. This applies particularly to living systems, which can be studied through greatly simplified models. Nevertheless, hierarchical thermodynamics makes it possible to detect the regularities of aging processes and of the behavior of living systems and their evolution. Some Practical Recommendations From the perspective of the thermodynamic theory of aging, it is advisable to take into account the gerontological value indicators of food products and medicinal preparations, and the gerontological purity of drinking water [10]-[12] [24] [45]-[48] [52]. In practice, however, it is convenient to follow the recommendations listed below. Use drinking water that is as clean as possible, provided its consumption is not contrary to medical indications. Eat the biomass of phylogenetically young (ancient) species of plants and animals, that is, relatively young food (for instance, algae and some species of fish and amphibians).
Preference should be given to: seafood, especially products of cold seas and rivers; the biomass of plants and animals growing and living in cold regions, that is, the extreme northern and southern areas of the planet and highlands; and fats and oils with low melting points (algae oil, flax seed oil, cedar wood oil, sunflower oil, corn oil, soybean oil). In case of a propensity for diabetes and certain other pathologies, it is advisable to prefer vegetables to fruits. Besides, it is recommended to minimize the consumption of carbohydrate-containing products (for instance, bread and floury products, rice, potatoes). It is recommended to use food extracts and medicinal extracts of young medicinal plants growing in cold regions. It is recommended to avoid eating overdone and processed food products with carcinogenic properties. Avoid taking medicines and food supplements with a low gerontological value. The specified recommendations agree with centuries-old human experience. There are especially evident connections between the GPG, or $G_G$, gerontological value indicator [45]-[48] and known experimental observations of human aging. The $G_G$ indicator is estimated via the specific free energy of lipid fraction formation in biological tissues or via the congelation temperature of the lipid fraction. It is easy to verify that the presented recommendations are well grounded if we compare experimentally obtained medical recommendations with the conclusions of thermodynamic dietetics, which takes into account the physical and chemical characteristics of food products. For instance, let us compare some dietary fatty acids and the fats widely used by people. Table 3 shows the content of saturated, mono-unsaturated and poly-unsaturated fatty acids in dietary oils and fats. Dieticians will immediately notice that they, as a rule, recommend eating mainly oils and fats that contain unsaturated, especially poly-unsaturated, compounds. Figure 3 shows the dependence of the gerontological value indicator of individual dietary oils and fats, $GPG_i$, on their congelation temperature, $T_{Cong}$. These data were obtained by calculations according to the Gibbs-Helmholtz-Gladyshev Equation (2) [7]-[9] [31]. The $GPG_i$ values and the corresponding $T_{Cong}$ values are shown in the form of large circles. The calculation results are depicted in such a way as to draw the reader's attention to the dependence of the shown indicators on several known environmental factors. Thus, the congelation temperature of the indicated food products and, consequently, the $GPG_i$ indicator change considerably with a change in the concentration of the components of the various fats and oils in the food products. These changes depend on the age of the organisms, the environmental temperature for plants and animals, and other environmental conditions. For instance, a reference is made to the phenomenon of Ali Gazayev, which is connected with a change in the congelation temperature of sea-buckthorn berries and other plant fruits growing at different heights in mountainous areas (https://gladyshevevolution.wordpress.com/). Comparison of the data presented in Table 3 and in Figure 3 shows that the known nutritional and medical recommendations correspond with the physicochemical calculations made on the basis of the thermodynamic theory. At the same time, the theory refines and supplements medical indications that are often of a purely qualitative nature. Thus, when giving thermodynamic estimations of the gerontological quality of fats and oils, it is convenient in practice to rely on their congelation temperature.
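To make the practical rule concrete, the sketch below ranks a few dietary fats by congelation temperature as a proxy for the $GPG_i$ indicator (in the theory's terms, the lower the congelation temperature, the more favorable the fat). The temperatures are rough, illustrative literature-style values only; real values vary with plant origin, growing temperature and composition:

    # Approximate, illustrative congelation temperatures (deg C); actual
    # values vary with origin, growing conditions and fat composition.
    t_cong_C = {
        "flax seed oil": -24,
        "sunflower oil": -17,
        "soybean oil":   -16,
        "olive oil":      -6,
        "coconut oil":    25,
        "palm oil":       35,
    }

    # Per the theory, a lower congelation temperature corresponds to a more
    # favorable gerontological value indicator (GPG).
    for name, t in sorted(t_cong_C.items(), key=lambda kv: kv[1]):
        print(f"{name:>14s}: T_cong ~ {t:+3d} deg C")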
For instance, from the viewpoint of the thermodynamic theory, sunflower oil usually has a better anti-aging effect than olive oil. As already noted, coconut and palm oils contain high-melting fractions of fatty acids, which is why they do not have an anti-aging effect. These data are quite logical in terms of metabolism. Sunflower and soybean oils promote vessel elasticity and participate relatively quickly in the body's metabolism. Coconut and palm oils make vessels more fragile and are assimilated relatively slowly. Therefore, coconut and palm oils are not wholesome in terms of anti-aging medicine. The same conclusions follow from Table 3: sunflower oil contains more poly-unsaturated fatty acids than olive oil. The corresponding conclusion can be drawn regarding coconut and palm oils. However, we would like to point out once again that anti-aging features depend on the temperature of plant growth and some other environmental characteristics. The anti-aging quality of oils and fats corresponds, according to patent [48], to the analogous quality of proteins and other ingredients of natural food products. Therefore, we may assume that the $GPG_i$ ($G_G$) indicator is a general anti-aging quality feature of a natural food product. The recommendations presented in this section may be extended to take into account all the factors considered by the thermodynamic theory of aging. We advise that people of all ages, especially senior citizens, use all of the given recommendations to the extent reasonable. Conclusions When studying life phenomena, including the aging process, one can take into account the following general principle: hierarchical thermodynamics brings together two ways of investigating chemical and biological matter, the high road of classical thermodynamics of R. Clausius and J. W. Gibbs and the road of natural selection laid by Ch. Darwin and A. Wallace [13]. Despite the difficulty of making quantitative estimations of the thermodynamic direction of aging, in the author's opinion there are convincing proofs of the conclusions made by hierarchical thermodynamics [4]-[9] [54]-[61]. 1) The thermodynamically favorable removal of water from developing living beings is accompanied by the accumulation of energy-intensive, unstable organic substances in the tissues of organisms. This phenomenon leads to the drying of body tissues, which accompanies aging and ultimately leads to death. 2) The thermodynamically directed enrichment of body tissues with relatively stable supramolecular structures in ontogenesis during aging can be demonstrated with known examples of the adsorption (absorption) of energy-intensive substances (for instance, fats, oils, peptides and proteins) due to their high affinity (adhesion) for organic adsorbents, that is, the supramolecular structures of body tissues. We may assume that the evaluation of the change in the stability of the supramolecular structure of body tissues during aging (in spite of the approximations made in the calculations) is rather convincing. In fact, this evaluation (under standard conditions) is based on measurement of the average congelation temperature (melting temperature), for instance, of oils, fats or the supramolecular structures of proteins, polysaccharides and other similar structures of the living body. These two experimentally confirmed statements form the basis for the formulation of the substance stability principle.
This principle predicts enrichment of an aging living being with energy-intensive (relatively unstable) chemical substances as a consequence of the second law of thermodynamics: the specific Gibbs function of formation of the supramolecular structure of molecular associates, organelles, cells, and tissues of the body tends towards the maximum negative value.

There are grounds to believe that aging of the body under normal development, from the perspective of transformation of the genetic apparatus, is connected with small, yet reliably identified, changes in the molecular structure of genes. Epigenetic thermodynamic mechanisms exert a significant influence on aging; they condition changes in the chemical structure of nucleobases (without changes in the DNA sequence), in the conformations of DNA and RNA, and in the chemical and conformational structure of chromatin, various proteins, and related compounds. In brief, nowadays we may claim that all epigenetic changes in the hereditary apparatus exert considerable influence on aging. Thermodynamic approaches make it possible to detect optimal living conditions and to choose diets, food supplements, and medicinal preparations for the directional change in the expression of certain genes. This helps prevent and treat many diseases and prolong a healthy human life.

There are numerous works in the sphere of gerontology and geriatrics. The author has written only about his own humble results concerning the thermodynamic theory of aging and has provided some examples of studies consistent with this theory. Even though the thermodynamic direction of aging is now evident, many tasks and questions remain to be resolved.
10,977.4
2015-04-27T00:00:00.000
[ "Physics" ]
Effect of Short Fiber Fillers on the Optical Properties of Composite Resins

Objectives: The aim was to evaluate the effect of different fractions of fiber fillers on the translucency and color change of short fiber composite with various thicknesses. Methods: Fiber composite resin was prepared by mixing resin matrix with various weight fractions of short (3 mm in length) E-glass fiber fillers (0, 11.7, 21.0, 28.5, 34.7 wt%), after which silane-treated particulate silica fillers were gradually added using a high-speed mixing machine. Particulate filler composite resin without fibers was used as a control. Composite resin disks 10 mm in diameter and of various thicknesses (1.0, 2.0, 3.0, 4.0, and 5.0 mm) were prepared for each group (n=3). The translucency parameter (TP) and color change (∆E) were calculated over a white and a black background using a spectrophotometer to determine the CIELAB values of each specimen. Data were statistically analyzed with analysis of variance (ANOVA). Results: ANOVA revealed that the fraction of fiber fillers had a significant effect (P<0.05) on the translucency and color change values of the short fiber composite resin. Translucency values at various thicknesses of the short fiber composite were significantly lower than those of the particulate filler composite with the same total filler weight fractions. Significance: Inclusion of short glass fiber fillers reduced the translucency values of the composite resins. Thus, the masking ability of the short fiber composite resin at various thicknesses was better than that of the particulate filler composite. Color change was also altered with an increase in the fraction of fiber fillers.

Introduction

One of the major goals in esthetic restorative dentistry is to produce restorations that match the optical properties of natural tooth (Joiner, 2004). Color, translucency, fluorescence, and opalescence are the optical properties that give natural tooth its vital-looking appearance (Powers, 2006). Among these esthetic attributes, color and translucency have the greatest impact on the vital appearance of natural tooth because they are the most readily observed (Yu & Lee, 2008a). Translucency is the ability of a layer of colored substance to allow the appearance of an underlying background to show through (Johnston, et al., 1995). It is usually determined by the translucency parameter (TP) or the contrast ratio (CR) (Johnston, et al., 1995; Miyagawa, et al., 1981). TP refers to the color difference between a uniform thickness of material over a black and a white background and corresponds directly to common visual assessments of translucency (Miyagawa, et al., 1981). The degree of color change can be affected by a number of factors, including the structure of the composite resin, the degree of polymerization, and water sorption (Powers, et al., 1978). The main component of composite resins that significantly affects the color and translucency is the inorganic filler. Many studies have focused on the influence of the filler on the color and translucency of dental composites in terms of filler type, particle size, and content (Emami, et al., 2005; Yu & Lee, 2008; Lim, et al., 2008).
Recently, short fiber reinforced composite was introduced as a dental restorative composite resin (Garoushi, et al., 2007a, 2007b, 2008). The composite resin is intended to be used in high stress-bearing areas, especially in molars. The results of mechanical tests revealed substantial improvements in the load-bearing capacity, flexural strength, and fracture toughness of dental composite resin reinforced with short E-glass fiber fillers in comparison with conventional particulate filler restorative composite resin (Garoushi, et al., 2007a, 2007b, 2011). The short fiber composite resin has also shown control of the polymerization shrinkage stress by fiber orientation and, thus, reduced marginal microleakage compared with conventional particulate filler restorative composite resins (Garoushi, et al., 2008).

Glass fibers are translucent, and the relative refractive indices of the two components, i.e., the resin matrix and the glass fiber fillers, can affect the color and translucency. Previous investigations showed a relationship between fiber orientation and the translucency characteristics of composite resin (Le Bell, et al., 2003; Chirdon, et al., 2006). It also seems possible that differences in the fractions of fiber fillers might have a great effect on the optical properties of composite resin. Moreover, when an incremental layering technique is used, it would be beneficial to have information on the optical properties of the composite resin in order to establish a successful color match.

It was hypothesized that the addition of glass fiber fillers might further alter the optical properties of a composite resin. Therefore, the purpose of this study was to evaluate the color and translucency characteristics of short fiber reinforced composite with different fractions of fiber fillers and various thicknesses of the composite by using reflection spectrophotometry based on the CIE (Commission Internationale de l'Eclairage) L*a*b* color system.

Short fiber composite resins were prepared by mixing cut E-glass fibers (3 mm in length and 15 μm in diameter) and BaAlSiO2 radio-opacity fillers in different weight fractions into the resin matrix. The classification of the test groups according to the fillers is given in Table 1. The mixing was carried out using a high-speed mixing machine for 5 min (SpeedMixer, DAC, Germany, 3500 rpm). Particulate filler composite resin without fiber fillers was used as the control group.

Composite resin disks 10 mm in diameter and of various thicknesses (1.0 mm, 2.0 mm, 3.0 mm, 4.0 mm, and 5.0 mm) were prepared for each test material (n=3) by manually condensing each resin into molds. The composite resin was pressed between celluloid strips and glass plates to flatten and smoothen the surfaces. The composite was photo-polymerized for 40 s from both sides using a light source with an irradiance of 800 mW/cm2 (Optilux-500, Kerr, CT, USA). After curing, the celluloid strips and glass plates were removed and the specimens were stored dry at room temperature for 24 h before measurement.

The translucency of the resin composites at various thicknesses was obtained by calculating the color difference between the specimen over the white background and the specimen over the black background:

TP = [(L*_W - L*_B)^2 + (a*_W - a*_B)^2 + (b*_W - b*_B)^2]^(1/2),

where the subscript 'W' refers to the color coordinates over the white background and the subscript 'B' refers to those over the black background (Yu & Lee, 2008; Bin, et al., 2008).

The color differences (∆E) of the fiber composites with different fiber filler fractions were calculated from the mean ∆L*, ∆a*, ∆b* values for each specimen using the following formula (Yu & Lee, 2008):

∆E = [(∆L*)^2 + (∆a*)^2 + (∆b*)^2]^(1/2),

where ∆L*, ∆a*, ∆b* are the differences in L*, a*, b* values between the A2 group (control) and the other fiber composite groups. Both formulas are illustrated in the sketch below.
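As a concrete illustration of the two formulas above, the following minimal Python sketch computes TP and ∆E from CIELAB coordinates. The numeric readings are hypothetical placeholders, not measurements from this study.

```python
import math

def color_difference(lab1, lab2):
    """CIELAB color difference: sqrt of the summed squared coordinate deltas."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def translucency_parameter(lab_white, lab_black):
    """TP: color difference of one specimen over white vs. black backing."""
    return color_difference(lab_white, lab_black)

# Hypothetical CIELAB readings (L*, a*, b*), placeholders only.
lab_W = (75.2, 1.1, 18.4)        # specimen over the white background
lab_B = (68.9, 0.8, 14.2)        # same specimen over the black background
print("TP =", round(translucency_parameter(lab_W, lab_B), 2))

lab_control = (74.0, 1.0, 17.5)  # control group (A2)
lab_fiber = (70.5, 1.3, 15.0)    # a fiber-filled group
print("dE =", round(color_difference(lab_control, lab_fiber), 2))
```

A higher TP means greater translucency; a completely opaque material gives TP = 0, as noted in the Discussion.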
To evaluate the differences in translucency and color variation values between the tested composite specimens at various thicknesses and fractions, data were statistically analyzed with analysis of variance (ANOVA) at the P<0.05 significance level with SPSS (version 13, Statistical Package for Social Science, SPSS Inc, Chicago, IL, USA), followed by Tukey's post hoc analysis to determine the differences among the groups.

Results

The TP values of the tested composite resins (Groups A and B) with various fractions of fillers and fibers and thicknesses of the resin are presented in Figure 1. The background effect characterized by the TP values was significantly correlated with the filler fractions and thicknesses of the composite disks (p<0.05) in both groups: as filler fractions and thicknesses increased, the TP values decreased for each group. Mean TP values of the short fiber composite resin (Group B) were statistically lower (p<0.05) than those of the particulate filler composite resin (Group A) with similar weight fractions of fillers (wt%).

By visual inspection of the composite disks, color differences were observed between Groups A and B. Color change (∆E) values of the composite resins with different fractions of fiber fillers and various thicknesses were in the range of 2.6-12.4 ∆E units, whereas color change values of the composite resins with different fractions of particulate fillers only were in the range of 0.3-4.6 ∆E units. As seen in Figure 2, the color change (∆E) of the composite with high fractions of fiber fillers displayed highly statistically significant differences (p<0.05) compared to other composites with lower fractions of fiber fillers but similar thicknesses. The maximum difference in color was observed in the ∆L* values, denoting the change in lightness, followed by the difference in the blue-yellow axis, as indicated by the higher values of ∆b*.

Discussion

Color strongly influences restoration appearance, but geometric attributes such as translucency also influence appearance (Yu & Lee, 2008). In 'through and through' class III and IV restorations, or in the presence of discolored tooth structures, the harmonization of restoration color with the natural tooth system is made even more difficult by the transmission of background color (Bin, et al., 2008). The translucency parameter of a material refers to the difference in color between a uniform thickness of the material over a white background and the same thickness of the material over a black background, and provides a value corresponding to the common visual perception of translucency (Yu & Lee, 2008b; Kim, et al., 2009). A higher value for the translucency parameter represents greater translucency; if the material is completely opaque, the value of this parameter is zero (Yu & Lee, 2008a; Yu & Lee, 2008b; Kim, et al., 2009). Although there have been several studies on the translucency and color of composite resins, no one has examined the effect of short fiber filler fractions on the translucency characteristics and color change of composite resin at various thicknesses.
The general trend noted in the particulate filler composite resin (Group A, control) and the short fiber filler composite resin (Group B) was that translucency decreased with increased filler fractions and thicknesses (Figure 1). This is in accordance with Yu and Lee, who showed that mean TP values of flowable composite resins at different thicknesses were higher than those of the corresponding universal composite resins of the same brand, reflecting the fact that the lower the filler content, the higher the translucency (Yu & Lee, 2008a). There is an overall decrease in the translucency of the short fiber reinforced composite as compared with the particulate filler composite resin (control) with the same range of filler fractions. This is a clinically significant finding for masking, e.g., a stained tooth. This effect may be attributed to scattering by the glass fiber fillers, which allows less light to be transmitted through the composite structure. Le Bell et al. and Lehtinen et al. have shown that unidirectional E-glass fiber-reinforced composites conduct and scatter light better than conventional composite resins (Le Bell, et al., 2003; Lehtinen, et al., 2008). They also showed that polymerization of the monomer system into polymer improves the light scattering. However, the short E-glass fibers are randomly oriented in the tested experimental fiber composite. Chirdon et al. showed that the orientation of the fiber fillers affects the absorption and scattering coefficients, which is necessary to understand and predict the translucency of composites at various thicknesses (Chirdon, et al., 2006).

Discoloration can be evaluated by visual and instrumental techniques. Spectrophotometry, used in our investigation, can eliminate the subjective interpretation of visual color comparison, and it has been reported to be a reliable technique in dental materials studies (Yannikakis, et al., 1998; Reis, et al., 2003). The overall color change (∆E) is significant between the short fiber composite (Group B) and the particulate filler composite (Group A). As seen in Figure 2, the ∆E value of the composite with high fiber filler fractions displayed the highest statistically significant (p<0.05) mean color difference compared to other composites with lower fiber filler fractions at similar thicknesses. On the other hand, color change was not significant with increasing particulate filler fractions in the composite resins (Group A). These significant ∆E values may arise from more than one factor; they may involve scattering effects, refraction, and dispersion. The refractive index of the glass fiber fillers is different from that of the surrounding composite matrix along with its particulate fillers (Sampath & Ramachandra, 2008). This might explain the reduction in ∆L* values and hence the darker appearance of the short fiber composite (Group B) as compared with the particulate filler composite (Group A). In other words, this can be due to a lesser amount of light being reflected back and more being scattered away or absorbed.

In addition to these effects, the translucency and color change values may also be affected by the presence of small, unavoidable voids introduced while mixing and preparing the experimental short fiber filler composite resins. It has been reported that water has some influence on the optical properties of composite resins, and our specimens were stored dry before color measurement, which is a limitation of this in vitro research.
Conclusions

Within the limitations of the present study, inclusion of short glass fiber fillers reduced the translucency values of the composite resins. Thus, the masking ability of the short fiber composite at various thicknesses was better than that of the particulate filler composite resins. Color change was also altered with an increase in fiber filler fractions.

Color was measured according to the CIELAB color scale relative to the standard illuminant D65 over a white tile (CIE L* = 99.25, a* = -0.09 and b* = 0.05) and a black tile (CIE L* = 0, a* = 0.01 and b* = 0.03) on a reflection spectrophotometer (CM-700d, Konica-Minolta, Japan). The aperture size was Ø 3 mm, and the illuminating and viewing configuration was CIE diffuse/10° geometry with the specular component included (SCI) (Commission Internationale de l'Eclairage, 2004).

Figure 1a. Mean TP values of the particulate filler composites (Group A) with various fractions of fillers and thicknesses of the disc.

Table 1. Classification of test groups used in the study according to their filler content and composition (n=3 per group). No fiber fillers; A: experimental particulate fillers composite; B: experimental short fiber fillers composite.
3,120.6
2012-03-31T00:00:00.000
[ "Medicine", "Materials Science" ]
Phosphorylation and Inhibition of Type III Adenylyl Cyclase by Calmodulin-dependent Protein Kinase II in Vivo *

Inhibition of type III adenylyl cyclase (III-AC) by intracellular Ca2+ in vivo provides a mechanism for attenuation of hormone-stimulated cAMP signals in olfactory epithelium, heart, and other tissues (Wayman, G. A., Impey, S., and Storm, D. R. (1995) J. Biol. Chem. 270, 21480-21486). Although the mechanism for Ca2+ inhibition of III-AC in vivo has not been defined, inhibition is not mediated by Gi, cAMP-dependent protein kinase, or protein kinase C. However, Ca2+ inhibition of III-AC is antagonized by KN-62, a CaM-dependent kinase inhibitor. In addition, constitutively activated CaM kinase II inhibits the enzyme. These data suggest that CaM kinase II regulates the activity of III-AC by direct phosphorylation or by an indirect mechanism involving phosphorylation of a protein that inhibits III-AC. Here we report that III-AC is phosphorylated in vivo when intracellular Ca2+ is increased and that phosphorylation is prevented by CaM-dependent kinase inhibitors. Site-directed mutagenesis of a CaM kinase II consensus site (Ser-1076 to Ala-1076) in III-AC greatly reduced Ca2+-stimulated phosphorylation and inhibition of III-AC in vivo. These data support the hypothesis that Ca2+ inhibition of III-AC is due to direct phosphorylation of the enzyme by CaM kinase II in vivo.

Adenylyl cyclases exhibit diverse regulatory properties that provide interesting mechanisms for regulation of cAMP by extracellular and intracellular signals (2,3). These enzymes are regulated by intracellular Ca2+, Gs- and Gi-coupled receptors, PKA, PKC, and membrane potential (for a general review, see Ref. 2). Regulation of adenylyl cyclases by various protein kinases generates cross-talk between the cAMP regulatory system and other signal transduction systems as well as mechanisms for feedback inhibition or amplification of cAMP signals. Because most cells express distinct combinations of adenylyl cyclases, phosphodiesterases, and protein kinases, the patterns of cross-talk between signal transduction systems are cell specific.

III-AC is expressed in several tissues including brain, heart, and retina (4), but it is particularly abundant in olfactory tissue, where it may play a major role in coupling olfactory receptors to cAMP and ion channel regulation (5). Although the enzyme is synergistically stimulated by Ca2+ and Gs-coupled receptors in vitro (6), it is inhibited by Ca2+ in vivo (1). Ca2+ inhibition of III-AC may contribute to cAMP transients and provide a novel mechanism for generation of Ca2+ and cAMP oscillations (7). Although the mechanism for Ca2+ inhibition of III-AC in vivo has not been established, preliminary evidence suggests that the enzyme may be directly or indirectly regulated by CaM kinase II in vivo (1). To address this issue, we examined Ca2+ inhibition and phosphorylation of III-AC in vivo using an antibody specific to III-AC. The data indicate that III-AC is directly phosphorylated by CaM kinase II in vivo.

EXPERIMENTAL PROCEDURES

Cell Culture-Human embryonic kidney 293 (HEK-293) cells were grown at 37°C in DMEM supplemented with 10% fetal bovine serum in a humidified 95% air, 5% CO2 incubator. Unless otherwise noted, components for cell culture were from Life Technologies, Inc.

Expression of III-AC in HEK-293 Cells-The III-AC cDNA clone (5) was generously provided by R. R.
Reed (The Johns Hopkins University, Baltimore, MD). The coding sequence of III-AC was ligated into CDM-8 for expression in HEK-293 cells. HEK-293 cells stably expressing III-AC have been described previously (1,6).

Site-directed Mutagenesis and cDNA Transient Transfection in HEK-293 Cells-Mutagenesis of the III-AC cDNA was performed using a Stratagene kit (Chameleon™ double-stranded site-directed mutagenesis kit) according to the manufacturer's recommendations (8). Mutant cDNA was cloned into the pCDM-8 expression vector, and mutations were confirmed by sequencing using a DNA sequencing kit from U.S. Biochemical Corp. The wild type III-AC and the mutant III-AC in which Ser-1076 was converted to Ala-1076 (m-III) were transiently transfected into HEK-293 cells. For transfection, HEK-293 cells were plated at a density of 3 × 10^6 cells/100-mm plate and were maintained in DMEM, 10% fetal bovine serum, 100 units/ml penicillin, and 100 μg/ml streptomycin 18-24 h before transfection. On the day of transfection, the medium was aspirated, cells were rinsed with serum-free DMEM, and the medium was replaced with 6.4 ml of serum-free DMEM. Eight μg of DNA (either control CDM-8 alone, CDMIII-AC, or CDMIII-AC(S1076A)) in 800 μl of serum-free DMEM and 64 μl of Lipofectamine (Life Technologies, Inc.) in 800 μl of serum-free DMEM were mixed, and the DNA-lipid complex was allowed to incubate for 30 min. The DNA-lipid mixture was added to each plate to be transfected, and cells were then incubated at 37°C, 5% CO2 for 6 h. The cells were then split into 6-well plates containing DMEM and 10% fetal bovine serum. On day 2, the cells were labeled with DMEM containing [3H]adenine (2.0 μCi/ml; ICN) for 16-20 h. On day 3, the cells were assayed for cAMP accumulation as described below.

cAMP Accumulation-Changes in intracellular cAMP were measured by determining the ratio of [3H]cAMP to the total ATP, ADP, and AMP pool in [3H]adenine-loaded cells as described by Wong et al. (9). This assay system allows rapid and sensitive measurements of relative changes in intracellular cAMP levels in response to various effectors. Although absolute numbers for cAMP accumulation generally show some variation between experiments using different sets of cells (10), relative changes in cAMP were consistent between experiments. Confluent cells in 6-well plates were initially incubated in DMEM containing [3H]adenine and then in medium containing 1.0 mM isobutylmethylxanthine and various effectors as indicated. Reactions were terminated by aspiration, washing cells once with 150 mM NaCl, and adding 1.0 ml of ice-cold 5% trichloroacetic acid containing 1.0 mM cAMP. Culture dishes were maintained at 4°C for 1-4 h, and acid-soluble nucleotides were separated by ion-exchange chromatography as described previously (10). Reported data are the average of triplicate determinations ± S.D.

Membrane Preparation and Immunoprecipitation-The anti-III-AC antibody (Santa Cruz Biotechnology) used for immunoprecipitation of III-AC was a peptide-specific antibody raised against the C-terminal amino acid sequence (amino acids 1125-1144, PAAFPNGSSVTLPHQVVDNP). For [35S]methionine labeling of proteins, the cells were starved in cysteine/methionine-free medium for 2 h and labeled with [35S]methionine (200 μCi/ml for stably transfected cells or 500 μCi/ml for transiently transfected cells; DuPont NEN) for 4 h in the same medium.
For 32P labeling, the cells were starved in phosphate-free medium for 45 min and labeled with [32P]orthophosphate (200 μCi/ml for stably transfected cells or 500 μCi/ml for transiently transfected cells; DuPont NEN) for 3 h in the same medium. After metabolic labeling, cells were washed with cold PBS and harvested in ice-cold homogenization buffer (50 mM Tris, 250 mM sucrose, 5 mM MgCl2, 1 mM EGTA, and 1 mM dithiothreitol) supplemented with a protease inhibitor mixture (1 mM phenylmethylsulfonyl fluoride, 10 μg/ml aprotinin, 5 μg/ml leupeptin, and 10 μg/ml pepstatin). After homogenization in a Dounce homogenizer, cells were centrifuged in Corex tubes at 2,500 rpm for 5 min. The supernatants were collected and centrifuged at 25,000 rpm for 30 min. The pellet was resuspended in solubilization buffer (PBS, 1% Nonidet P-40, 0.5% deoxycholate, 0.1% SDS, 1 mM EDTA and EGTA, 50 mM NaF, 1 mM Na3VO4, and 10 mM sodium pyrophosphate) with a protease inhibitor mixture to a final concentration of 3-5 mg of protein/ml. The suspension was gently shaken at 4°C for 3 h and centrifuged at 40,000 rpm for 30 min. Supernatants were incubated overnight with affinity-purified rabbit polyclonal antibodies directed against the C-terminal sequence of the rat III-AC protein (Santa Cruz). Protein A-agarose beads (Pierce) were then added, and the incubation was continued for 3 h. The Protein A-agarose beads, separated by brief centrifugation, were washed five times with solubilization buffer. For peptide-N-glycosidase F (Boehringer Mannheim) treatment, the beads were incubated with peptide-N-glycosidase F in an incubation buffer made according to the manufacturer's instructions for 1 h at 37°C. Antibody-III-AC complexes were eluted from the beads by heating in SDS-PAGE sample buffer according to the method of Laemmli (11). Immunoprecipitates were resolved by SDS-PAGE (7.5% acrylamide) and subjected to autoradiography.

Immunoblotting-The immunoprecipitation of III-AC from unlabeled cells was performed as described above. After SDS-PAGE, proteins were transferred to polyvinylidene difluoride membranes (Immobilon-P, Millipore) by electroblotting at 50 V for 3 h at room temperature. Blots were blocked overnight in TBS with 0.05% Tween 20, 3% gelatin, and 3% bovine serum albumin at 4°C and then incubated with the anti-III-AC antibody (1:100) for 1.5 h at room temperature in TBS with 1% gelatin and 1% bovine serum albumin. After washing with TBS three times, the blots were incubated with alkaline phosphatase-conjugated goat anti-rabbit IgG (1:1000, Cappel) for 1 h at room temperature. After several washes with Tris-buffered saline with 0.05% Tween 20, immunoreactive proteins were detected with an alkaline phosphatase conjugate substrate kit (Bio-Rad).

Other Procedures-Protein concentrations were determined by the method of Bradford using bovine serum albumin as a standard (12).

RESULTS

Immunoadsorption and Western Analysis of III-AC Expressed in HEK-293 Cells-In vivo phosphorylations were monitored by immunoadsorption of III-AC stably expressed in HEK-293 cells. Cells were prelabeled with [32P]orthophosphate to monitor phosphorylation or with [35S]methionine to label proteins. III-AC was isolated using a rabbit anti-peptide antibody (Santa Cruz), followed by Protein A-agarose. The adsorbed protein was subjected to SDS-PAGE and analyzed by Western analysis using the anti-III-AC antibody.
The specificity of the antibody for III-AC was verified using control cells transfected with the expression vector without an adenylyl cyclase coding sequence or with the expression vector containing the I-AC coding sequence. Although control cells express low levels of III-AC, neither control cells nor I-AC-expressing cells gave a positive Western signal with the III-AC antibody (Fig. 1, lanes 1 and 2). The levels of III-AC expressed in control cells were too low to be detected by the anti-III-AC antibody. The antibody immunoadsorbed two polypeptides with molecular masses of 125 and 195 kDa from III-AC-transfected cells that were detected by Western analysis. Immunoadsorption of both polypeptides was blocked by the control peptide used to generate the anti-III-AC antibody (Fig. 1, lane 5). Because the predicted molecular mass of III-AC is 129 kDa, we suspected that the 195-kDa polypeptide was a glycosylated form of the enzyme. After treatment of the adsorbed protein with peptide-N-glycosidase F, an enzyme that deglycosylates glycoproteins, only the 125-kDa polypeptide was detected. These data suggest that the upper band is the glycosylated form of III-AC, whereas the 125-kDa polypeptide is the nonglycosylated form of the enzyme.

Ca2+-stimulated Phosphorylation of III-AC in Vivo-The anti-III-AC antibody also immunoprecipitated III-AC from III-AC-expressing cells prelabeled with [35S]methionine (Fig. 2A). Two 35S-labeled proteins of 125 and 195 kDa were immunoprecipitated by the antibody. To determine if III-AC is phosphorylated when intracellular Ca2+ is increased, III-AC-expressing cells were preincubated with [32P]orthophosphate and then treated with the Ca2+ ionophore A23187 in the presence of extracellular Ca2+. III-AC phosphorylation was monitored by immunoadsorption and SDS-PAGE as described under "Experimental Procedures." Treatment with A23187 in the presence of extracellular Ca2+ stimulated phosphorylation of III-AC in vivo (Fig. 2B). This phosphorylation was blocked by the CaM-dependent kinase inhibitors KN-93 or KN-62. Because HEK-293 cells express low levels of CaM kinase II but not CaM-dependent protein kinase IV (T. Soderling, personal communication), these data suggest that Ca2+-stimulated phosphorylation of III-AC was most likely due to CaM kinase II. Furthermore, expression of constitutively activated CaM kinase II inhibited hormone stimulation of III-AC in vivo (1). Interestingly, only the 195-kDa glycosylated form of III-AC was phosphorylated when intracellular Ca2+ was increased. Nonglycosylated III-AC may not be accessible to CaM kinase II because it is present in the Golgi or endoplasmic reticulum. Alternatively, the conformation of the nonglycosylated enzyme in membranes may be different from that of the fully processed enzyme.

Because KN-62 and KN-93 inhibit CaM-dependent kinases in vivo but not PKC or PKA (13), the data reported in Fig. 2 suggest that CaM kinase II is responsible for Ca2+-stimulated phosphorylation of III-AC. To determine if PKA or PKC activities are required for phosphorylation of III-AC, the effects of several other protein kinase inhibitors were examined (Fig. 3). Ca2+-stimulated phosphorylation of III-AC was not significantly blocked by the PKC inhibitors bisindolylmaleimide I or chelerythrine chloride. H89, an inhibitor of PKA, also did not inhibit Ca2+-stimulated phosphorylation of III-AC in vivo.
These data indicate that Ca2+-dependent phosphorylation of III-AC in vivo is due primarily to CaM kinase II and is not dependent upon the activities of PKA or PKC.

Mutagenesis of Ser-1076 Ablates Ca2+ Inhibition of III-AC in Vivo-If CaM kinase II directly phosphorylates III-AC in vivo, then mutagenesis of CaM-dependent kinase consensus phosphorylation sites within III-AC should prevent Ca2+-stimulated inhibition and phosphorylation of the enzyme. The most likely CaM-dependent kinase phosphorylation site within an intracellular domain of III-AC is Ser-1076. This putative CaM-dependent kinase phosphorylation domain contains an Arg three residues N-terminal of a Ser (-Arg-Met-Asp-Ser-). To determine if Ser-1076 is a regulatory phosphorylation site, it was mutated to Ala by site-directed mutagenesis. Wild type III-AC and m-III were transiently transfected into HEK-293 cells. The mutation did not affect isoproterenol or forskolin stimulation of the enzyme (Fig. 4, A and B); m-III was stimulated 10.4 ± 0.51- and 102.6 ± 2.41-fold by isoproterenol or forskolin, respectively. III-AC was stimulated 11.02 ± 1.52- and 126.5 ± 10.66-fold by isoproterenol or forskolin. Increases in intracellular Ca2+ inhibited isoproterenol stimulation of III-AC 47.8 ± 6.0% (Fig. 4A). In contrast, increased Ca2+ had no significant effect on isoproterenol stimulation of m-III (10.4 ± 0.51-fold versus 9.2 ± 0.8-fold). Furthermore, Ca2+ inhibited forskolin stimulation of III-AC 56 ± 8.7% but had no effect on forskolin stimulation of m-III (Fig. 4B). These data indicate that inhibition of III-AC by CaM kinase II is very likely due to direct phosphorylation at Ser-1076.

We also coexpressed III-AC and m-III with constitutively active CaM kinase II (CaMKII290) in HEK-293 cells to determine if mutation of Ser-1076 to Ala-1076 affected inhibition of III-AC by exogenously expressed CaM kinase II. CaMKII290 is a truncated form of the kinase that is constitutively active even in the absence of increased Ca2+ (14). Stable transfectants expressing CaMKII290 under the control of a metallothionein promoter were made, and these cells were then transiently transfected with constructs encoding III-AC or m-III. The response of III-AC to CaMKII290 was determined by inducing the expression of the kinase with Zn2+. Zn2+ treatment of cells not expressing CaMKII290 had no effect on basal, isoproterenol-stimulated, or forskolin-stimulated III-AC activities (data not shown). Induction of CaMKII290 in cells expressing III-AC inhibited isoproterenol- or forskolin-stimulated activities 43.1 ± 6.4 and 74.3 ± 7.3% (Fig. 5). However, induction of CaMKII290 in cells expressing m-III did not significantly inhibit isoproterenol or forskolin stimulation of m-III (5 ± 3 and 10 ± 4%). These data strongly suggest that direct phosphorylation of III-AC by CaM kinase II at Ser-1076 inhibits stimulation by β-adrenergic agonists or forskolin in vivo.

Mutagenesis of Ser-1076 Abolishes Ca2+-stimulated Phosphorylation of III-AC in Vivo-To determine if Ser-1076 is the primary site of phosphorylation in vivo, phosphorylation of III-AC and m-III was compared (Fig. 6). Examination of 35S-labeled cells indicated that comparable levels of III-AC or m-III were expressed and immunoadsorbed from HEK-293 cells (Fig. 6B). Mutagenesis of Ser-1076 to Ala-1076 reduced the Ca2+-stimulated phosphorylation of III-AC by at least 95% (Fig. 6A). The residual level of Ca2+-stimulated phosphorylation seen with m-III may be due to other Ca2+-stimulated protein kinases that phosphorylate at other sites. Because the activity of m-III was not inhibited by Ca2+ and its phosphorylation relative to the wild type enzyme was greatly reduced, the major site for Ca2+-stimulated inhibition and phosphorylation is very likely Ser-1076.
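Before turning to the discussion, it may help to see how the fold-stimulation and percent-inhibition figures quoted above are derived from the cAMP accumulation ratio, cAMP/(AMP + ADP + ATP) × 100. The minimal sketch below uses single illustrative ratios taken from the Figure 4 legend; the published means come from triplicate assays, and the exact inhibition convention (with or without basal subtraction) is an assumption here.

```python
def fold_stimulation(stimulated, basal):
    """Fold increase of the cAMP accumulation ratio over basal."""
    return stimulated / basal

def percent_inhibition(without_ca, with_ca):
    """Percent inhibition of stimulated cAMP accumulation by a treatment.

    Convention assumed here: no basal subtraction. The published
    47.8 +/- 6.0% averages triplicates, so this single-point estimate
    only approximates it.
    """
    return 100.0 * (1.0 - with_ca / without_ca)

# Ratios (cAMP/(AMP + ADP + ATP) x 100) from the Figure 4 legend:
basal, iso, iso_ca = 0.21, 2.34, 1.12
print(f"{fold_stimulation(iso, basal):.1f}-fold stimulation")  # ~11.1-fold
print(f"{percent_inhibition(iso, iso_ca):.0f}% inhibition")    # ~52%
```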
DISCUSSION

Inhibition of adenylyl cyclase activity by submicromolar Ca2+ has been reported for a number of tissues and cell lines including heart (15,16), pituitary (17), somatotrophs (18), platelets (19), GH3 cells (20), C6 glioma cells (21), neuroblastoma cells (22), and cardiac myocytes (23). CaM-dependent kinase inhibition of adenylyl cyclases generates cross-talk between the Ca2+ and cAMP signal transduction systems and provides another mechanism, in addition to Gi-coupled receptors, for attenuation of hormone-stimulated cAMP increases. Ca2+-inhibitable adenylyl cyclases also provide a novel mechanism for the generation of Ca2+ oscillations in animal cells (7). All of the mammalian adenylyl cyclases are inhibited by high concentrations of Ca2+ (>100 μM free Ca2+). This has been attributed to Ca2+ competition with Mg2+ for binding to ATP or to a divalent metal ion regulatory site on adenylyl cyclases (24). Although it has been reported that III-AC, type V adenylyl cyclase, and type VI adenylyl cyclase are inhibited by submicromolar intracellular Ca2+ (1,25,26), mechanisms for Ca2+ inhibition have not been defined.

The objectives of this study were to determine if III-AC is directly phosphorylated in vivo when intracellular Ca2+ is raised and to identify the primary site of phosphorylation. The data indicate that increases in intracellular Ca2+ cause phosphorylation of III-AC, that the phosphorylation is blocked by CaM-dependent kinase inhibitors, and that Ser-1076 is the major site of phosphorylation. Because Ser-1076 is within a putative CaM-dependent kinase phosphorylation domain, we conclude that III-AC is directly phosphorylated at this site by CaM kinase II in vivo. The glycosylated form of III-AC (but not the nonglycosylated form) was phosphorylated in vivo, suggesting that only the fully processed enzyme is a physiological substrate for CaM kinase II. III-AC is synergistically stimulated by Gs-coupled receptors and Ca2+ in vitro but inhibited in vivo through the action of CaM kinase II.

Figure 4. A, isoproterenol stimulation was measured in the absence (Isoproterenol) or presence of 10 μM A23187 and 1.8 mM CaCl2 (Isoproterenol + A23187). B, forskolin stimulation was measured in the absence (Forskolin) or in the presence of 10 μM A23187 and 1.8 mM CaCl2 (Forskolin + A23187). When present, isoproterenol and forskolin were 10 and 50 μM, respectively. cAMP accumulations were monitored as described under "Experimental Procedures" and are expressed as the ratio cAMP/(AMP + ADP + ATP) × 100. The basal, isoproterenol, and forskolin activities of III-AC before treatment with A23187 and CaCl2 were 0.21, 2.34, and 27.8, respectively. After treatment with A23187 and CaCl2, the basal, isoproterenol, and forskolin activities of III-AC were 0.21, 1.12, and 12.2, respectively. The basal, isoproterenol, and forskolin activities of m-III before treatment with A23187 and CaCl2 were 0.22, 2.30, and 22.6, respectively. After treatment with A23187 and CaCl2, the basal, isoproterenol, and forskolin activities of m-III were 0.21, 2.18, and 22.4, respectively. The data are the mean ± S.D. of triplicate assays and are plotted as fold increase relative to basal adenylyl cyclase activity.
Figure 5. Ca2+ inhibition of III-AC was abolished when Ser-1076 was mutagenized to Ala-1076 (1). The CaMKII290-expressing cells were transiently transfected with either wild type III-AC or mutant III-AC in which Ser-1076 was mutagenized to Ala-1076 (m-III) and exposed to either isoproterenol or forskolin. cAMP accumulations were then assayed in cells that were either untreated or treated with Zn2+ to induce the expression of CaMKII290. cAMP accumulations were measured as described under "Experimental Procedures" and are expressed as the ratio cAMP/(AMP + ADP + ATP) × 100. The basal, isoproterenol, and forskolin activities of III-AC before induction of CaM kinase II were 0.23, 2.44, and 4.80, respectively. The basal, isoproterenol, and forskolin activities of m-III before induction of CaM kinase II were 0.24, 2.40, and 4.70, respectively. When present, isoproterenol and forskolin were 10 and 50 μM, respectively. The data are the mean ± S.E. of triplicate assays and are presented as the percentage inhibition of cAMP accumulation caused by induction of CaMKII290 expression. Induction of CaMKII290 expression inhibited isoproterenol and forskolin stimulation of III-AC by 43 and 74%, respectively. CaMKII290 had very little effect on isoproterenol or forskolin stimulation of the mutant enzyme.

It might be argued that inhibition of III-AC by CaM kinase II masks Ca2+/CaM stimulation of the adenylyl cyclase in vivo. However, the mutant enzyme lacking the CaM-dependent kinase inhibitory site (m-III) was not inhibited or stimulated by intracellular Ca2+. CaM apparently does not directly modulate the activity of III-AC in vivo.

What is the physiological importance of CaM kinase II inhibition of III-AC? III-AC is expressed in several tissues, including olfactory sensory neurons, brain, retina, and heart (4,5). Furthermore, CaM kinase II is expressed in most mammalian tissues, including heart (27) and olfactory tissue (28). Although CaM kinase II inhibition of III-AC is relatively modest (40-50% inhibition of hormone-stimulated activity), it is comparable to Gi-mediated inhibition. These levels of adenylyl cyclase inhibition are physiologically relevant, and cAMP changes of this magnitude can have significant effects on physiological functions (29). The presence of Ca2+-inhibitable adenylyl cyclases in heart may provide mechanisms for negative-feedback inhibition of cAMP-stimulated Ca2+ increases and for the generation of cAMP and Ca2+ oscillations. In olfactory sensory neurons, odorants stimulate rapid cAMP increases that rise and fall within milliseconds to seconds (30). These increases in cAMP are likely due to stimulation of III-AC and other adenylyl cyclases through Gs- or Go-coupled olfactory receptors. There are several possible mechanisms for the subsequent decreases in cAMP, including the actions of cyclic nucleotide phosphodiesterases (31). Because intracellular Ca2+ is elevated during odorant exposure (32,33), Ca2+ inhibition of III-AC and stimulation of CaM-sensitive phosphodiesterases may both contribute to the transient cAMP response.

In summary, CaM kinase II phosphorylation of III-AC in vivo is a mechanism for attenuation of hormone-stimulated cAMP increases that generates unique patterns of cross-talk between the Ca2+ and cAMP signal transduction systems. This is the only documented mechanism for Ca2+ inhibition of adenylyl cyclases, and it is possible that Ca2+ inhibition of other adenylyl cyclases in vivo may be mediated by the CaM-dependent kinases.
5,161.8
1996-09-27T00:00:00.000
[ "Biology", "Chemistry", "Computer Science" ]
ENNET: inferring large gene regulatory networks from expression data using gradient boosting

Background
The regulation of gene expression by transcription factors is a key determinant of cellular phenotypes. Deciphering genome-wide networks that capture which transcription factors regulate which genes is one of the major efforts towards understanding and accurately modeling living systems. However, reverse-engineering the network from gene expression profiles remains a challenge, because the data are noisy, high dimensional and sparse, and the regulation is often obscured by indirect connections.

Results
We introduce a gene regulatory network inference algorithm, ENNET, which reverse-engineers networks of transcriptional regulation from a variety of expression profiles with superior accuracy compared to the state-of-the-art methods. The proposed method relies on the boosting of regression stumps combined with a relative variable importance measure for the initial scoring of transcription factors with respect to each gene. Then, we propose a technique for using the distribution of the initial scores and information about knockouts to refine the predictions. We evaluated the proposed method on the DREAM3, DREAM4 and DREAM5 data sets and achieved higher accuracy than the winners of those competitions and other established methods.

Conclusions
Superior accuracy achieved on three different benchmark data sets shows that ENNET is a top contender in the task of network inference. It is a versatile method that uses information about which gene was knocked out in which experiment if it is available, but remains the top performer even without such information. ENNET is available for download from https://github.com/slawekj/ennet under the GNU GPLv3 license.

Background
Regulation of gene expression is a key driver of the adaptation of living systems to changes in the environment and to external stimuli. Abnormalities in this highly coordinated process underlie many pathologies. At the transcription level, the control of the amount of mRNA transcripts involves epigenetic factors such as DNA methylation and, in eukaryotes, chromatin remodeling. But the key role in both prokaryotes and eukaryotes is played by transcription factors (TF), that is, proteins that can bind to DNA in the regulatory regions of specific genes and act as repressors or inducers of their expression. Many interactions between transcription factors and the genes they regulate remain unknown, and the expression data from which they must be inferred are noisy, high dimensional, and sparse [3]. Moreover, discovering direct causal relationships between genes in the presence of multiple indirect ones is not a trivial task, given the limited number of knockouts and other controlled experiments. Attempts to solve this problem are motivated from a variety of different perspectives. Most existing computational methods are examples of influence modeling, where the expression of a target transcript is modeled as a function of the expression levels of some selected transcription factors. Such a model does not aim to describe physical interactions between molecules, but instead uses inductive reasoning to find a network of dependencies that could explain the regularities observed in the expression data. In other words, it does not explain mechanistically how transcription factors interact with regulated genes, but indicates candidate interactions with strong evidence in the expression data.
This knowledge is crucial for prioritizing detailed studies of the mechanics of transcriptional regulation.

One group of existing methods describes a GRN as a system of ordinary differential equations, in which the rate of change in the expression of a transcript is given by a function of the concentration levels of the transcription factors that regulate it. Network inference then includes two steps: the selection of a model and the estimation of its parameters. Popular models imply linear functions a priori [4][5][6][7]. Bayesian Best Subset Regression (BBSR) [8] has been proposed as a novel model selection approach, which uses the Bayesian Information Criterion (BIC) to select an optimal model for each target gene. Another group of methods employs probabilistic graphical models that analyze multivariate joint probability distributions over the observations, usually with the use of Bayesian Networks (BN) [9][10][11] or Markov Networks (MN) [12]. Various heuristic search schemes have been proposed in order to find the parameters of the model, such as greedy hill-climbing or the Markov Chain Monte Carlo approach [13]. However, because learning optimal Bayesian networks from expression data is computationally intensive, it remains impractical for genome-wide networks.

Other approaches are motivated from statistics and information theory. TwixTwir [14] uses a double two-way t-test to score transcriptional regulations. The null-mutant z-score algorithm [15] scores interactions based on a z-score-transformed knockout expression matrix. Various algorithms rely on estimating and analyzing the cross-correlation and mutual information (MI) of gene expression in order to construct a GRN [16][17][18][19][20], including the ANOVA η2 method [21]. Improvements aimed at removing indirect edges from triples of genes have been proposed, including techniques such as the Data Processing Inequality in ARACNE [22,23] and the adaptive background correction in CLR [24]. Another method, NARROMI [25], eliminates redundant interactions from the MI matrix by applying ODE-based recursive optimization, which involves solving a standard linear programming model.

Recently, machine-learning theory has been used to formulate the network inference problem as a series of supervised gene selection procedures, where each gene in turn is designated as the target output. One example is MRNET [26], which applies the maximum relevance/minimum redundancy (MRMR) [27] principle to rank the set of transcription factors according to the difference between the mutual information with the target transcript (maximum relevance) and the average mutual information with all the previously ranked transcription factors (minimum redundancy). GENIE3 [28] employs the Random Forest algorithm to score important transcription factors, utilizing the embedded relative importance measure of input variables as a ranking criterion. TIGRESS [29] follows a similar approach but is based on the least angle regression (LARS). Recently, boosting [30,31] was also used to score the importance of transcription factors, in the ADANET [32] and OKVAR-Boost [33] methods.

In this paper, we propose a method that combines gradient boosting with regression stumps, augmented with statistical re-estimation procedures for prioritizing a selected subset of edges based on the results from the machine-learning models. We evaluated our method on the DREAM3, DREAM4 and DREAM5 network inference data sets, and achieved results that in all cases were better than the currently available methods.
The ENNET algorithm

Formulating the gene network inference problem
The proposed algorithm returns a directed graph of regulatory interactions between P genes in the form of a weighted adjacency matrix V ∈ R^{P×P}, where v_{i,j} represents the regulation of gene j by gene i. As an input, it takes gene expression data from a set of experiments, together with the meta-data describing the conditions of the experiments, including which genes were knocked out.

Usually, the raw expression data need to be pre-processed before any inference method can be applied to reverse-engineer a GRN. Pre-processing has a range of meanings; here it is regarded as a process of reducing variations or artifacts that are not of biological origin. It is especially important when the expression is measured with multiple high-density microarrays [34]. Concentration levels of transcripts must be adjusted and the entire distribution of adjusted values aligned with a normal distribution. Methods for the normalization of expression data are outside the scope of our work. The data we used were already normalized using RMA [34,35] by the DREAM challenge organizers. We further normalized the expression data to zero mean and unit standard deviation.

The network inference process relies heavily on the type of expression data provided as an input. The two main groups of expression profiles are those with known, and those with unknown, initial perturbation states of the expression of genes in the underlying network of regulatory interactions. For example, knockout and knockdown data are provided with additional metadata which describe which genes were initially perturbed in each experiment. On the other hand, multifactorial and time series data are usually expression profiles with an unknown initial state of genes. Wild-type, knockout, knockdown, and multifactorial data describe the expression of initially perturbed genes which are, however, in a steady state at the time of measurement, whereas time series data describe the dynamics of the expression levels of initially perturbed genes. The types of data available in popular benchmark data sets are summarized in Table 1.

The variability of possible input scenarios poses a problem of representing and analyzing expression data. Here, we operate on an N × P expression matrix E, where e_{i,j} is the expression value of the j-th gene in the i-th sample. Columns of the matrix E correspond to genes, rows correspond to experiments. We also define a binary perturbation matrix K, where k_{i,j} is a binary value corresponding to the j-th gene in the i-th sample, just as in the matrix E. If k_{i,j} is equal to 1, it means that the j-th gene is known to be initially perturbed, for example knocked out, in the i-th experiment. Otherwise k_{i,j} is equal to 0. If no information is available about knockouts, all values are set to 0.

Decomposing the inference problem into gene selection problems
We decompose the problem of inferring the network of regulatory interactions targeting all P genes into P independent subproblems. In each subproblem, incoming edges from transcription factors to a single gene transcript are discovered. For the k-th decomposed subproblem we create a target expression vector Y_k and a feature expression matrix X_-k. Columns of the X_-k matrix constitute the set of possible transcription factors. The vector Y_k corresponds to the expression of the transcript which is possibly regulated by the transcription factors from X_-k.
In a single gene selection problem we decide which TFs contribute to the target gene expression across all the valid experiments. Columns of X_-k correspond to all the possible TFs, but if the target gene k is also a transcription factor, it is excluded from X_-k; we do not consider a situation in which a transcription factor would have a regulatory interaction with itself. When building the target vector Y_k corresponding to the k-th target gene, k ∈ {1, ..., P}, we consider all the experiments valid except the ones in which the k-th gene was initially perturbed, as specified in the perturbation matrix K. We reason that the expression value of the k-th gene in those experiments is determined not by its TFs but by the external perturbation. Each row of the Y_k vector is aligned with the corresponding row of the X_-k matrix. In order to account for all the possible interactions, we need to solve a gene selection problem for each target gene. For example, if a regulatory network consists of four genes (P = 4), we need to solve four gene selection problems. In the k-th problem, k ∈ {1, 2, 3, 4}, we find which TFs regulate the k-th target gene. In other words, we calculate the k-th column of the output adjacency matrix V.
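As a concrete illustration of this decomposition, the following sketch builds the per-gene subproblem from the E and K matrices defined above. It is a minimal NumPy reading of the text, not the authors' implementation; the function name and argument layout are illustrative.

```python
import numpy as np

def build_subproblem(E, K, k, tf_idx):
    """Build (X_minus_k, Y_k) for target gene k.

    E: (N, P) expression matrix; K: (N, P) binary perturbation matrix;
    tf_idx: column indices of candidate transcription factors.
    Experiments in which gene k itself was perturbed are dropped, and
    gene k is removed from its own candidate-TF list (no self-regulation).
    """
    valid = K[:, k] == 0                 # keep only the valid experiments
    tfs = [j for j in tf_idx if j != k]  # exclude the target gene itself
    X_minus_k = E[np.ix_(valid, tfs)]
    Y_k = E[valid, k]
    return X_minus_k, Y_k, tfs
```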
Solving the gene selection problems
Once the target gene expression vector Y_k and the TF expression matrix X_-k are created for each gene k, we solve each k-th gene selection problem independently, in the following way. We search for the subset of columns of X_-k that are related to the target vector Y_k by an unknown function f_k, as shown in Equation 1:

Y_k = f_k(X_-k) + ε_k,    (1)

where ε_k is a random noise term. The function f_k represents a pattern of regulatory interactions that drive the expression of the k-th gene. We want f_k to rely only on a small number of genes acting as transcription factors, those that are the true regulators of gene k. Essentially, this is a feature selection or gene selection task [28,32,36,37], where the goal is to model the target response Y_k with an optimal small set of important predictor variables, i.e., a subset of columns of the X_-k matrix. A more relaxed objective of gene selection is variable ranking, where the relative relevance of all input columns of the X_-k matrix is obtained with respect to the target vector Y_k. The higher a specific column is in that ranking, the higher the confidence that the corresponding TF is in a regulatory interaction with the target gene k.

Our solution to the variable ranking involves ensemble learning. We use an iterative regression method which in each iteration chooses one transcription factor based on an optimality criterion and adds it to a non-linear regression ensemble. The main body of our method, presented in Figure 1, is based on the Gradient Boosting Machine [38] with a squared error loss function. First, ENNET initializes f_0 to be the optimal constant model, without selecting any transcription factor; in other words, f_0 is initialized to the average of Y_k. At each subsequent step t, the algorithm creates an updated model f_t by fitting a base learner h_t and adding it to the previous model f_{t-1}. The base learner is fitted to a sample of pseudo-residuals, with respect to a sample of transcription factors, and is thus expected to reduce the error of the model. Pseudo-residuals are re-calculated at the beginning of each iteration with respect to the current approximation f_t.

As a base learner, we use regression stumps, which select the single TF that best fits the pseudo-residuals. A regression stump h_t(x) partitions the expression values x of a candidate TF into two disjoint regions R_1t and R_2t, where R_2t = R - R_1t, and returns values γ_1t and γ_2t, respectively, for those regions, as shown in Equation 2:

h_t(x) = γ_1t I(x ∈ R_1t) + γ_2t I(x ∈ R_2t),    (2)

where I is the indicator function returning the numerical 1 for logical true and the numerical 0 for logical false.

Figure 1. The flowchart of the ENNET algorithm. The ENNET algorithm is a modification of the Gradient Boosting Machine algorithm, with a squared error loss function and a regression stump base learner. The algorithm calculates a vector of importance scores of the transcription factors that can possibly regulate a target gene. It is invoked P times in the problem of inferring a P-gene network, i.e., a P-column adjacency matrix V.

Regions R_1t, R_2t are induced such that the least-squares improvement criterion

i_t^2 = (w_1t w_2t / (w_1t + w_2t)) (γ_1t - γ_2t)^2    (3)

is maximized, where w_1t, w_2t are proportional to the number of observations in regions R_1t, R_2t, respectively, and γ_1t, γ_2t are the corresponding response means. That is, γ_1t is the average of the values of the vector of pseudo-residuals for those samples where the expression of the chosen TF falls into the region R_1t. The value of γ_2t is defined in an analogous way. The averages γ_1t and γ_2t are used as the regression output values for regions R_1t and R_2t, respectively, as shown in Equation 2. The criterion in Equation 3 is evaluated for each TF, and the transcription factor with the highest improvement is selected. In each step t, we only use a random portion of the rows and columns of X_-k, sampled according to the observation sampling rate s_s and the TF sampling rate s_f.

The procedure outlined above creates a non-linear regression model of the target gene expression based on the expression of transcription factors. However, in network inference we are interested not in the regression model as a whole, but only in the selected transcription factors. In each step t of the ENNET algorithm, only one TF is selected as the optimal predictor, and the details of the regression model can be used to rank the selected TFs by their importance. Specifically, if a transcription factor φ_t is selected in iteration t, the improvement i_t^2 serves as an importance score for that TF. If the same TF is selected multiple times at different iterations, its final importance score is the sum of the individual scores.

In the training of the regression model, the parameter ν, known as the shrinkage factor, is used to scale the contribution of each tree by a factor ν ∈ (0, 1) when it is added to the current approximation. In other words, ν controls the learning rate of the boosting procedure. Shrinkage techniques are also commonly used in neural networks. Smaller values of ν result in a larger training risk for the same number of iterations T. However, it has been found [38] that smaller values of ν reduce the test error and require correspondingly larger values of T, which results in a higher computational overhead. There is a trade-off between these two parameters.
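The boosting loop described above can be condensed into the following sketch. This is a simplified reading rather than the reference implementation (which is available at the GitHub URL in the abstract): candidate split points are restricted to a few quantiles here, and the default parameter values are illustrative only.

```python
import numpy as np

def ennet_like_scores(X, y, T=500, nu=0.01, s_s=0.7, s_f=0.3, rng=None):
    """Score TFs for one target gene by boosting regression stumps.

    X: (N, F) expression of candidate TFs; y: (N,) target gene expression.
    Returns a length-F vector of accumulated least-squares improvements.
    """
    rng = rng or np.random.default_rng(0)
    N, F = X.shape
    pred = np.full(N, y.mean())           # f_0: optimal constant model
    importance = np.zeros(F)
    for _ in range(T):
        rows = rng.choice(N, size=max(2, int(s_s * N)), replace=False)
        cols = rng.choice(F, size=max(1, int(s_f * F)), replace=False)
        resid = y[rows] - pred[rows]      # pseudo-residuals (squared loss)
        best = None
        for j in cols:                    # evaluate Eq. (3) for each TF
            x = X[rows, j]
            for split in np.quantile(x, (0.25, 0.5, 0.75)):
                left = x <= split
                w1, w2 = left.sum(), (~left).sum()
                if w1 == 0 or w2 == 0:
                    continue
                g1, g2 = resid[left].mean(), resid[~left].mean()
                imp = w1 * w2 / (w1 + w2) * (g1 - g2) ** 2
                if best is None or imp > best[0]:
                    best = (imp, j, split, g1, g2)
        if best is None:
            continue
        imp, j, split, g1, g2 = best
        importance[j] += imp              # accumulate the importance score
        # update the model on all samples with the shrunken stump (Eq. 2)
        pred += nu * np.where(X[:, j] <= split, g1, g2)
    return importance
```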
Each of the solutions constitutes a single column vector, therefore we obtain the adjacency matrix V by binding all the partial solutions column-wise. Then we apply a re-evaluation algorithm to achieve an improved final result.

The first step does not require any additional data other than the previously calculated adjacency matrix V. It exploits the variance of edge probabilities in the rows of V, i.e., edges outgoing from a single transcription factor, as a measure of the effect of transcriptional regulation. We score transcription factors based on their effects on multiple targets. We assume that the effect of transcriptional regulation on a directly regulated transcript is stronger than the effect of regulation on indirectly regulated transcripts, e.g. transcripts regulated through another transcription factor. Otherwise, knocking out a single gene in a strongly connected component in a network of regulatory interactions would cause the same rate of perturbation of the expression level of all the transcripts in that component. As a measure of that effect we use the previously calculated adjacency matrix V and multiply each row of the V matrix by its variance σ_i². The updated adjacency matrix V_1 is given by Equation 4:

V_1[i,j] = σ_i² · V[i,j],    (4)

where σ_i² is the variance of the i-th row of V. Note that the V matrix is built column-wise, i.e., a single column of V contains the relative importance scores of all the transcription factors, averaged over all the base learners, with respect to a single target transcript. On the other hand, the rows of the V matrix are calculated independently in different subproblems of the proposed inference method: each entry of a row of V is a relative importance score with respect to a different target transcript. We reason that if a transcription factor regulates many target transcripts, e.g. the transcription factor is a hub node, the variance in the row of V corresponding to that transcription factor is elevated, and therefore it indicates an important transcription factor.

The second step of refining the network requires knockout expression data. We reason that direct regulation of a transcript by a transcription factor would leave a distinct signature in the expression data if the transcription factor was knocked out. A similar reasoning gave foundations to the null-mutant z-score method [15] of reverse-engineering GRNs. However, in the proposed method this step is only applied if knockout expression profiles are available. In this step we calculate an adjacency matrix V_2, which is an update to the already derived adjacency matrix V_1, as shown in Equation 5:

V_2[i,j] = V_1[i,j] · |(e_α(i),j − e_β(i),j) / σ_j|,    (5)

where e_α(i),j is the average expression value of the j-th transcript in all the experiments α(i) in which the i-th gene was knocked out, as defined by the K matrix, e_β(i),j is the mean expression value of that transcript across all the other knockout experiments β(i), and σ_j is the standard deviation of the expression value of that transcript in all the knockout experiments. The |(e_α(i),j − e_β(i),j) / σ_j| coefficient shows by how many standard deviations the typical expression of the j-th transcript differed from the average expression in the experiments in which its potential i-th transcription factor was knocked out.

Performance evaluation

Considerable attention has been devoted in recent years to the problem of evaluating the performance of inference methods on adequate benchmarks [35,39]. The most popular benchmarks are derived from well-studied in vivo networks of model organisms, such as E. coli [40] and S.
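A minimal sketch of the two refinement updates (Equations 4 and 5), under the assumption that V is stored with TFs in rows and targets in columns as described above; the function name and array shapes are ours.

```python
import numpy as np

def refine(V, E=None, K=None):
    """Apply the two re-evaluation steps to a (P, P) adjacency matrix V.

    Step 1 (Eq. 4): weight each row i of V (edges out of TF i) by the
    variance of that row, boosting hub-like TFs.
    Step 2 (Eq. 5): if knockout data (E, K) are available, multiply
    V_1[i, j] by |e_alpha(i),j - e_beta(i),j| / sigma_j, the shift of
    target j's expression when TF i is knocked out.
    """
    V1 = V * V.var(axis=1, keepdims=True)          # Equation 4, row-wise
    if E is None or K is None:
        return V1                                  # step 2 needs knockouts
    P = V.shape[0]
    ko_any = K.any(axis=1)                         # rows that are knockout experiments
    sigma = E[ko_any].std(axis=0)                  # per-transcript sd over knockouts
    sigma[sigma == 0] = 1.0                        # guard against constant genes
    Z = np.ones_like(V1)
    for i in range(P):
        a = K[:, i] == 1                           # experiments knocking out gene i
        b = ko_any & (K[:, i] == 0)                # all other knockout experiments
        if a.any() and b.any():
            Z[i, :] = np.abs(E[a].mean(axis=0) - E[b].mean(axis=0)) / sigma
    return V1 * Z                                  # Equation 5
```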
cerevisiae [41], as well as artificially simulated in silico networks [39,42-45]. The main disadvantage of in vivo benchmark networks is the fact that experimentally confirmed pathways can never be assumed complete, regardless of how well the model organism is studied. Such networks are assembled from known transcriptional interactions with strong experimental support. As a consequence, gold standard networks are expected to have few false positives. However, they contain only a subset of the true interactions, i.e., they are likely to contain many false negatives. For this reason, artificially simulated in silico networks are most commonly used to evaluate network inference methods. Simulators [39] mimic real biological systems in terms of topological properties observed in biological in vivo networks, such as modularity [46] and occurrences of network motifs [47]. They are also endowed with dynamical models of transcriptional regulation, thanks to the use of non-linear differential equations and other approaches [42,48,49], and consider both transcription and translation processes in their dynamical models [48-50] using a thermodynamic approach. Expression data can be generated deterministically or stochastically, and experimental noise, such as the one observed in microarrays, can be added [51].

Here, we used several popular benchmark GRNs to evaluate the accuracy of our proposed algorithm and compare it with the other inference methods. The data sets we used come from the Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenges and are summarized in Table 1. We evaluated the accuracy of the methods using the Overall Score metric proposed by the authors of the DREAM challenges [35], as shown in Equation 6:

Overall Score = −0.5 · log10(p_aupr · p_auroc),    (6)

where p_aupr and p_auroc are geometric means of the p-values of the networks constituting each DREAM challenge, relating to the area under the Precision-Recall curve (AUPR) and the area under the ROC curve (AUROC), respectively.

Results and discussion

We assessed the performance of the proposed inference algorithm on large, universally recognized benchmark networks of 100 and more genes, and compared it to the state-of-the-art methods. We summarize the results of running the different inference methods in Figure 2. For the comparison we selected a range of established methods from the literature: ARACNE, CLR, and MRNET as implemented in the minet R package [52], GENIE3 and C3NET as implemented by their respective authors, our previously reported method ADANET, and the top three performers in each of the three DREAM challenges as listed on the DREAM web site. Some of the methods were designed for use with knockout data, while others were developed with multifactorial data in mind, where no information is given about the nature of the perturbations. Therefore, depending on the nature of the particular DREAM data set, only the suitable group of methods is used for the comparison.

The accuracy of ENNET

DREAM3 [15,53,54] features in silico networks and expression data simulated using the GeneNetWeaver software. Benchmark networks were derived as subnetworks of a system of regulatory interactions from known model organisms: E. coli and S. cerevisiae. In this study we focus on the DREAM3 size 100 subchallenge, as the largest of the DREAM3 suite. The results of all the competing methods, except those that are aimed at multifactorial problems, are summarized in Table 2. DREAM4 [15,53,54] was posted one year after the DREAM3 challenge.
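Assuming Equation 6 takes the usual DREAM form given above (minus one half times the base-10 logarithm of the product of the two geometric-mean p-values), the score can be computed as follows; the p-values below are purely illustrative.

```python
import numpy as np

def overall_score(p_aupr, p_auroc):
    """DREAM-style Overall Score, a sketch of Equation 6.

    p_aupr / p_auroc: per-network p-values for AUPR and AUROC. Their
    geometric means are combined as -0.5 * log10(p_aupr_gm * p_auroc_gm).
    """
    gm = lambda p: np.exp(np.mean(np.log(np.asarray(p, dtype=float))))
    return -0.5 * np.log10(gm(p_aupr) * gm(p_auroc))

# Five hypothetical networks; smaller p-values give a larger score.
print(overall_score([1e-40, 1e-35, 1e-20, 1e-28, 1e-33],
                    [1e-12, 1e-9, 1e-15, 1e-10, 1e-11]))
```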
It features two large subchallenges: DREAM4 size 100, and DREAM4 size 100 multifactorial. For each subchallenge, the topologies of the benchmark networks were derived from the transcriptional regulatory systems of E. coli and S. cerevisiae. In the DREAM4 size 100 subchallenge all the data types listed in Table 1 were available except multifactorial, therefore the ADANET, GENIE3, CLR, C3NET, MRNET, and ARACNE methods were excluded from the comparison. The results of all the methods are summarized in Table 3. The ENNET method clearly outperformed all the others in terms of the Overall Score and achieved consistently high scores across all the benchmark networks. In the second DREAM4 large subchallenge, DREAM4 size 100 multifactorial, only multifactorial data were available, therefore all the methods were included in the comparison and run as originally designed. The results of all the methods are summarized in Table 4. ENNET achieved the best Overall Score.

The three benchmark networks in DREAM5 [35] were different in size, and structured with respect to different model organisms. However, this time the expression data of only one network were simulated in silico; the two other sets of expression data were measured in real experiments in vivo. As in all DREAM challenges, in silico expression data were simulated using the open-source GeneNetWeaver simulator [54]. However, DREAM5 was the first challenge where participants were asked to infer GRNs on a genomic scale, e.g. for thousands of target genes and hundreds of known transcription factors. Gold standard networks were obtained from two sources: the RegulonDB database [40] and Gene Ontology (GO) annotations [55]. The results of all the inference methods for the DREAM5 expression data are summarized in Table 5. ENNET achieved the best score for the in silico network and the best Overall Score, as well as the best individual AUROC scores for all the networks. Clearly, all the participating methods achieved better scores for the in silico network than for either one of the in vivo networks.

Table 2. Results of the different inference methods on DREAM3 networks, challenge size 100. The area under the ROC curve (AUROC) and the area under the Precision-Recall curve (AUPR) are given for each network respectively. The Overall Score for all the networks is given in the last column. The best results for each column are in bold. Numbers in the "Experimental results" part of the table were collected after running the algorithms with the default sets of parameters on pre-processed data. However, the ADANET, GENIE3, CLR, C3NET, MRNET, and ARACNE methods, as they are originally defined, take a multifactorial matrix as an input, which is unavailable in this challenge; therefore they were excluded from the comparison. Numbers in the "Winner of the challenge" part of the table correspond to the best methods participating in the challenge.

Table 3. Results of the different inference methods on DREAM4 networks, challenge size 100. The area under the ROC curve (AUROC) and the area under the Precision-Recall curve (AUPR) are given for each network respectively. The Overall Score for all the networks is given in the last column. The best results for each column are in bold. Numbers in the "Experimental results" part of the table were collected after running the algorithms with the default sets of parameters on pre-processed data.
However, the ADANET, GENIE3, CLR, C3NET, MRNET, and ARACNE methods, as they are originally defined, take a multifactorial matrix as an input, which is unavailable in this challenge; therefore they were excluded from the comparison. Numbers in the "Winner of the challenge" part of the table correspond to the best methods participating in the challenge.

ENNET shows better in vivo results than the other methods in terms of the area under the ROC curve. Still, predictions for in vivo expression profiles show a low overall accuracy. One of the reasons for the poor performance of the inference methods on such expression profiles is the fact that experimentally confirmed pathways, and consequently the gold standards derived from them, cannot be assumed complete, regardless of how well the model organism is known. Additionally, there are regulators of gene expression other than transcription factors, such as miRNA and siRNA. As shown in this study, in silico expression profiles provide enough information to confidently reverse-engineer their underlying structure, whereas in vivo data hide a much more complex system of regulatory interactions.

Computational complexity of ENNET

The computational complexity of ENNET depends mainly on the computational complexity of the regression stump base learner, which is used in the main loop of the algorithm, as shown in Figure 1. The complexities of ENNET and the other methods are compared in Table 6. Note that the measure for the information-theoretic methods: CLR, MRNET, and ARACNE does not include the calculation of the mutual information matrix.

Table 4. Results of the different inference methods on DREAM4 networks, challenge size 100 multifactorial. The area under the ROC curve (AUROC) and the area under the Precision-Recall curve (AUPR) are given for each network respectively. The Overall Score for all the networks is given in the last column. The best results for each column are in bold. Numbers in the "Experimental results" part of the table were collected after running the algorithms with the default sets of parameters on pre-processed data. Numbers in the "Winner of the challenge" part of the table correspond to the best methods participating in the challenge.

When implementing the ENNET algorithm we took advantage of the fact that the gene selection problems are independent of each other. Our implementation of the algorithm is able to calculate them in parallel if multiple processing units are available. Users can choose from a variety of parallel backends, including the multicore package for a single computer and parallelization based on the Message Passing Interface for a cluster of computers. The biggest data set we provided as input in our tests were the in vivo expression profiles of S. cerevisiae from the DREAM 5 challenge. These are genome-wide expression profiles of 5950 genes (333 of them are known transcription factors) measured in 536 experiments. It took 113 minutes and 30 seconds to calculate the network on a standard desktop workstation with one Intel® Core™ i7-870 processor with 4 cores and two threads per core (in total 8 logical processors) and 16 GB RAM.

Table 6. The computational complexity of ENNET and the other GRN inference methods with respect to the number of genes P and the number of samples N. The computational complexity of CLR, MRNET, and ARACNE is given without calculating the Mutual Information matrix.
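The per-gene independence can be exploited as sketched below. This is an illustrative Python analogue of the parallel scheme (the published package uses R backends such as multicore and MPI); it reuses build_problem and ennet_scores from the earlier sketches, and all names are ours.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
import numpy as np

def _solve_one(E, K, tf_idx, k):
    """Solve the k-th gene selection problem; returns column k of V."""
    X, y, predictors = build_problem(E, K, tf_idx, k)
    scores = ennet_scores(X, y)
    col = np.zeros(E.shape[1])
    col[predictors] = scores
    return k, col

def infer_network(E, K, tf_idx, workers=8):
    """Run the P independent problems in parallel processes.

    On platforms that spawn processes, call this under an
    `if __name__ == "__main__":` guard so workers can import _solve_one.
    """
    P = E.shape[1]
    V = np.zeros((P, P))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for k, col in pool.map(partial(_solve_one, E, K, tf_idx), range(P)):
            V[:, k] = col          # column k: importance of each TF for gene k
    return V
```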
However, it took only 16 minutes and 40 seconds to calculate the same network on a machine with four AMD Opteron™ 6282 SE processors, each with 8 cores and two threads per core (in total 64 logical processors) and 256 GB RAM. All the data sets from the DREAM 3 and DREAM 4 challenges were considerably smaller, up to 100 genes. It took less than one minute to calculate each of these networks on a desktop machine.

Setting parameters of ENNET

The ENNET algorithm is controlled by four parameters: the two sampling rates s_s and s_f, the number of iterations T, and the learning rate ν. The sampling rate of samples s_s and the sampling rate of transcription factors s_f govern the level of randomness when selecting, respectively, rows and columns of the expression matrix to fit a regression model. The default choice of the value of s_s is 1, i.e., at each iteration we select with replacement a bootstrap sample of observations of the same size as the original training set. Because some observations are selected more than once, around 0.37 of the random training samples are out of bag in each iteration. It is more difficult to choose an optimal value of s_f, which governs how many transcription factors are used to fit each base learner. Setting this parameter to a low value forces ENNET to score transcription factors even if their improvement criterion, as shown in Equation 3, would not have promoted them in a pure greedy search, i.e., s_f = 1. However, if the chance of selecting a true transcription factor as a feature is too low, ENNET will suffer from selecting random genes as true regulators.

Even though reverse-engineering of GRNs does not explicitly target the problem of predicting gene expression, we choose the values of the sampling rates such that the squared-error loss of the prediction of the target gene expression, as given by f_T (see Figure 1), is minimal. This is done without looking at the ground truth of regulatory connections. For each benchmark challenge we performed a grid search over (s_s, s_f) ∈ {0.1, 0.3, 0.5, 0.7, 1} × {0.1, 0.3, 0.5, 0.7, 1} with fixed ν = 0.001, T = 5000. For each specific set of parameters we analyzed the average 5-fold cross-validated loss over all the observations (across all gene selection problems). We further analyze our approach with respect to one of the challenges, DREAM4 size 100, as shown in Figure 3. The minimal average loss was achieved for s_s = 1 and s_f = 0.3 (see Figure 3A), which is consistent with the default parameters proposed for the Random Forest algorithm [28]. We also compared the measure based on the average loss with the Overall Score as defined by Equation 6. The results were consistent across the two measures, i.e., a selection of parameters that gave a low average loss also led to accurate network predictions (see Figure 3B). An advantage of the average loss measure is the fact that the gold standard network is not used to tune parameters.

In Figure 4 we present a detailed analysis of the accuracy of the GRN inference across the different networks of the DREAM4 size 100 challenge. Each point in both panels corresponds to a different set of parameters, and the trends are well preserved across the five networks: for each separate network we observe that AUPR and AUROC decrease as a function of the average loss. As the Overall Score is closely related to AUPR and AUROC, the results shown in Figure 4 explain the shape of the surface shown in Figure 3. As ENNET uses boosting, it needs careful tuning of the number of iterations T and the learning rate ν.
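The grid search over (s_s, s_f) with a cross-validated squared-error loss might be organized as below; fit_predict is a hypothetical callable standing in for a predictor built from the earlier boosting sketch, and the fold-splitting scheme is our own simplification.

```python
import numpy as np
from itertools import product

def grid_search(problems, fit_predict, grid=(0.1, 0.3, 0.5, 0.7, 1.0),
                folds=5, seed=0):
    """Pick (s_s, s_f) by minimizing the average cross-validated loss.

    problems    : list of (X, y) gene selection problems
    fit_predict : hypothetical callable (X_tr, y_tr, X_te, s_s, s_f) -> y_hat,
                  with nu and T held fixed inside it (e.g. 0.001 and 5000)
    """
    rng = np.random.default_rng(seed)
    best, best_loss = None, np.inf
    for s_s, s_f in product(grid, grid):
        losses = []
        for X, y in problems:
            idx = rng.permutation(len(y))
            for f in range(folds):
                te = idx[f::folds]                 # every folds-th index is a test point
                tr = np.setdiff1d(idx, te)
                y_hat = fit_predict(X[tr], y[tr], X[te], s_s, s_f)
                losses.append(np.mean((y[te] - y_hat) ** 2))
        loss = np.mean(losses)
        if loss < best_loss:
            best, best_loss = (s_s, s_f), loss
    return best, best_loss
```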
It has been shown [38] that the parameters T and ν are closely coupled. Usually the best prediction results are achieved when ν is fixed to a small positive number, e.g. ν ≤ 0.001, and the optimal value of T is found in a process of cross-validation. As described above, we reason that a choice of parameters which gives a low average loss on a cross-validated test set leads to an accurate network prediction. Therefore in Figure 5 we present how the average loss depends on T ∈ {1, ..., 5000} for different values of ν ∈ {0.001, 0.005, 0.01, 0.05, 0.1}, with fixed s_s = 1, s_f = 0.3. Each of the lines shows how much ENNET overtrains for a given T and ν. Finally, the optimal choice of parameters for the DREAM4 size 100 challenge is s_s = 1, s_f = 0.3, T = 5000, ν = 0.001. Following the same practice, we used this default set of parameters, s_s = 1, s_f = 0.3, T = 5000, ν = 0.001, to evaluate the ENNET algorithm on all the benchmark networks using ground truth, i.e., for calculating the Overall Score and comparing it to the other algorithms.

Stability of ENNET

Because ENNET uses random sampling of samples and features at each iteration of the main loop, as shown in Figure 1, it may calculate two different networks in two different executions on the same expression data. With the default choice of parameters, i.e., s_s = 1, s_f = 0.3, T = 5000, ν = 0.001, we expect numerous random resamplings, and therefore we need to know whether a GRN calculated by ENNET is stable between different executions. We applied ENNET to the 5 networks that form the DREAM 4 size 100 benchmark, repeating the inference calculations independently ten times for each network. Then, for each network, we calculated a Spearman's rank correlation between all pairs among the ten independent runs. The lowest correlation coefficient we obtained was ρ > 0.975, with p-value < 2.2e−16, indicating that the networks that result from independent runs are very similar. This demonstrates that ENNET, despite being a randomized algorithm, finds a stable solution to the inference problem.

Conclusions

We have proposed the ENNET algorithm for reverse-engineering of Gene Regulatory Networks. ENNET accepts a variety of types of expression data as input, and shows robust performance across different benchmark networks. Moreover, it does not assume any specific model of a regulatory interaction and does not require fine-tuning of its parameters, i.e., we define a default set of parameters, which promises accurate predictions for future networks. Nevertheless, together with the algorithm, we propose a procedure for tuning the parameters of ENNET towards minimizing the empirical loss. Processing genome-scale expression profiles is feasible with ENNET: including up to a few hundred transcription factors, and up to a few thousand regulated genes. As shown in this study, the proposed method compares favorably to the state-of-the-art algorithms on universally recognized benchmark data sets.
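The stability check described above reduces to a rank correlation between the edge-score vectors of independent runs; a minimal sketch, with all names ours:

```python
import numpy as np
from scipy.stats import spearmanr

def stability(run_a, run_b):
    """Spearman rank correlation between two inferred networks.

    run_a, run_b: adjacency matrices from two independent executions on the
    same data; their flattened edge scores are rank-correlated. Values of
    rho close to 1 (e.g. > 0.975, as reported above) indicate that the two
    runs rank the candidate edges nearly identically.
    """
    rho, pval = spearmanr(run_a.ravel(), run_b.ravel())
    return rho, pval
```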
Transcriptional Adaptation to Cystic Fibrosis Transmembrane Conductance Regulator Deficiency Cystic fibrosis, the most commonly inherited lethal pulmonary disorder in Caucasians, is caused by mutations in the cystic fibrosis transmembrane conductance regulator gene (CFTR). To identify genomic responses to the presence or absence of CFTR in pulmonary tissues in vivo, microarray analyses of lung mRNAs were performed on whole lung tissue from mice lacking (CFTR(−)) or expressing mouse CFTR (CFTR(+)). Whereas the histology of lungs from CFTR(−) and CFTR(+) mice was indistinguishable, statistically significant increases in the relative abundance of 29 and decreases in 25 RNAs were identified by RNA microarray analysis. Of RNAs whose expression was consistently altered by the absence of CFTR, functional classes of genes influencing gene transcription, inflammation, intracellular trafficking, signal transduction, and ion transport were identified. RNAs encoding the transcription factor CCAAT enhancer-binding protein (CEBP) δ and interleukin (IL) 1β, both known to regulate CFTR expression, were induced, perhaps indicating adaptation to the lack of CFTR. RNAs mediating lung inflammation, including calgranulin-S100 family members, IL-1β and IL-4, were increased. Likewise, expression of several membrane transport proteins that interact directly with CFTR was increased, suggesting that CFTR-protein complexes initiate genomic responses. Absence of CFTR influenced the expression of genes modulating diverse pulmonary cell functions that may ameliorate or contribute to the pathogenesis of CF.

Cystic fibrosis (CF) is characterized by mucous accumulation, recurrent infections, and excessive inflammation in the lung. Whereas the pathogenesis of CF is not fully understood, abnormalities in cyclic AMP-dependent chloride secretion and excessive sodium reuptake by epithelial cells related to CFTR deficiency are thought to alter fluid homeostasis in the airway surface liquid, leading to its dehydration, impaired mucociliary clearance, and infection (see Ref. 3 for review). Since the elucidation of the primary structure of CFTR, a myriad of functions and numerous interactions with other cellular proteins have been ascribed to CFTR. Thus, in addition to the role of CFTR in the regulation of cAMP-dependent chloride transport, this protein may play pleiotropic roles in many cellular processes by interacting with the cytoskeleton and membrane transport proteins, as well as receptors and protein routing and degradation machinery (2). A number of studies support the concept that excessive inflammatory responses occur in the CF lung, but the mechanisms underlying these abnormalities have not been clarified. Changes in levels of IL-8 and other proteins mediating inflammatory signaling, including NFκB and iNOS, have been associated with CF, in the presence or absence of infection, raising the possibility that abnormalities in CFTR may constitutively alter pathways mediating inflammation (4-6). In the lung, CFTR is distributed primarily in apical regions of airway and submucosal gland epithelial cells (7). Abundance and cellular sites of expression of CFTR are strongly influenced by developmental, spatial, and humoral factors, supporting the concept that the expression and function of CFTR are regulated at both transcriptional and post-transcriptional levels. Despite extensive study, the precise role of CFTR in the pathogenesis of CF disease remains poorly understood.
At the clinical level, severity of CF disease is highly variable even among individuals bearing identical mutations, supporting the concept that environmental and hereditary factors may influence the severity of the disorder (2). These clinical observations, and observations demonstrating strain differences in the severity of CF phenotype after CFTR gene targeting or mutation in mice (8), support the concept that the expression of CFTR and its function in cellular processes may be influenced by many genes or pathways intensifying or mollifying CF disease in various organs. Morbidity and mortality in patients with CF are strongly associated with pulmonary disease caused by mucous accumulation, inflammation, and infection; however, deletion of CFTR in mice does not cause significant pulmonary disease, suggesting that expression of alternative channels or other complementary genes maintains pulmonary homeostasis in the mouse. Whereas numerous in vitro and in vivo models have been developed for study of CFTR, analysis of genomic responses to the presence or absence of CFTR is complicated by heterogeneity of cell models, and by culture conditions that may influence cell function and gene expression independently of CFTR. Direct RNA analysis of pulmonary tissue from humans with CF is complicated by the nearly ubiquitous, severe pulmonary infections that may secondarily modify cellular responses and gene expression, complicating identification of responses to CFTR in vivo. In the present study, we undertook experiments to identify RNAs influenced by the presence and absence of CFTR in vivo, seeking to identify genes and pathways that interact with or compensate for CFTR to maintain pulmonary function. In this study, stereotypic genomic responses to the lack of CFTR were observed in pulmonary tissues in the absence of infection or disease.

MATERIALS AND METHODS

Transgenic mice bearing a null mutation in CFTR (CFTR(−)) generally succumb to intestinal disease in the weanling period (9). To generate healthy mice deficient in CFTR, the human CFTR cDNA was expressed in the intestinal epithelium under control of the intestinal fatty acid-binding protein gene promoter (iFABP), fully correcting small intestinal pathology and supporting normal postnatal survival of CFTR(−) mice (10). The iFABP-hCFTR, mCFTR(−) mice have been maintained in a mixed FVB/N, C57BL/6 background without evidence of gastrointestinal or pulmonary disease for nearly a decade in our laboratory. Histological and biochemical studies identified no overt pathology in lung tissue from these mice compared with CFTR-expressing littermate controls (10,11). Mice were maintained in filtered microisolator cages. Sentinel mice were free of mouse pathogens. Lungs of adult iFABP-hCFTR, mCFTR(−), and control mice were free of bacterial pathogens or colonization as assessed by quantitative culture of lung homogenates on blood agar plates. RNA Microarray and Data Analysis-Total RNA was subjected to reverse transcription using oligo(dT) with T7 promoter sequences attached, followed by second strand cDNA synthesis. Antisense cRNA was then amplified and biotinylated using T7 RNA polymerase, prior to hybridization to the Affymetrix genechip mouse U74aV2 using the Affymetrix recommended protocol (12,13). Affymetrix MicroArray Suite version 5.0 was used to scan and quantitate the genechips using default scan settings.
Intensity data were collected from each chip, scaled to a target intensity of 1500, and the results were analyzed using both MicroArray Suite and GeneSpring 5.0 (Silicon Genetics, Inc., Redwood City, CA). cDNAs were hybridized to U74aV2 chips (Affymetrix Inc.). Hybridization data were normalized in a two-step process to remove or minimize systemic sources of variation at both the chip and gene level. Specifically, each chip was normalized to the distribution of all genes on the chip to control for variation between samples. Each RNA from mCFTR(−) mice was normalized to its specific control (i.e. sex- and age-matched mCFTR(+) littermates). Data were further transformed into log ratios for analysis and symmetry of distribution. Changes in mRNAs were identified by the combination of a distribution analysis (JMP4, SAS Institute, Inc.) and the Welch analysis of variance. Outlier box and quartile box plots were used to identify outliers, with the definition of up-outlier > upper quartile + 1.5 (interquartile range), and down-outlier < lower quartile − 1.5 (interquartile range). Significant changes were calculated by the Welch t test at p < 0.05. Adjusted p values were calculated by Westfall and Young permutation for correction of false positives (GeneSpring 4.2.1, Silicon Genetics). Comparisons between genotype and age groups were performed using one-way analysis of variance. To identify genes that were differentially expressed because of CFTR genotype regardless of age, hierarchical and k-means clustering were used to identify consistent changes in gene expression in response to the lack of CFTR at all three time points. Candidate RNAs were further filtered on the basis of reproducibility and absolute intensity. Mean, standard deviation, and coefficient of variation were calculated for each set of replicates. Replicates with coefficients of variation > 50% were deleted from the analysis. Genes whose expression was below the level of detection were eliminated as experimental noise. Pathway and Literature Analysis-Selected genes were subjected to intensive search to identify biological function and associated regulatory pathways. A U74Av2 annotation database with system identifiers was constructed for all the array elements and their associated GenBank™ accession numbers. Gene description, functional categories, biological processes, molecular functions, cellular components, protein domains, and literature information were identified. Information resources included NetAffy (www.affymetrix.com), Source Search (genome-www5.stanford.edu/cgi-bin/SMD/source/), BLAST NCBI, Locus Link, mouse-human homolog search (www.ncbi.nlm.nih.gov), and the Gene Ontology database (www.godatabase.org/chi-bin/go.cgi). Differentially expressed genes were classified into functional categories based on the gene ontology definition. To determine which functional category is overrepresented in the selected gene list, the binomial probability was calculated for each category using the entire U74Av2 (containing 12,488 mouse genes) as a reference data set. The binomial probability is defined as

P(k; n, p) = C(n, k) · p^k · (1 − p)^(n−k);

it returns the probability of getting k successes in n trials if the success probability is p in the given population (U74Av2). Potential protein/protein or protein/DNA interactions were identified using the published literature information. Lung Histology and Immunohistochemistry-Lungs from postnatal animals were inflation fixed with 4% paraformaldehyde at 25 cm H2O pressure via a tracheal cannula.
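The gene-level screen combines the quartile-box outlier rule with Welch's unequal-variance t-test. A simplified sketch follows; the Westfall-Young permutation correction is omitted, and the array shapes and function name are assumptions of ours.

```python
import numpy as np
from scipy.stats import ttest_ind

def flag_genes(log_ratios_ko, log_ratios_wt, alpha=0.05):
    """Flag candidate genes by the outlier-box rule plus Welch's t-test.

    log_ratios_ko, log_ratios_wt: (replicates, genes) arrays of normalized
    log ratios for CFTR(-) and control samples. A gene is flagged if its
    mean difference is a quartile-box outlier AND Welch's t-test (unequal
    variances) rejects equality of means at level alpha.
    """
    mean_diff = log_ratios_ko.mean(axis=0) - log_ratios_wt.mean(axis=0)
    q1, q3 = np.percentile(mean_diff, [25, 75])
    iqr = q3 - q1
    outlier = (mean_diff > q3 + 1.5 * iqr) | (mean_diff < q1 - 1.5 * iqr)
    # Welch's t-test, gene by gene (axis=0 runs over replicates).
    _, p = ttest_ind(log_ratios_ko, log_ratios_wt, equal_var=False, axis=0)
    return outlier & (p < alpha)
```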
Lung tissue was processed according to standard methods and embedded in paraffin. Procedures for immunostaining were previously described (14). Rabbit monoclonal antibody against the 110-kDa Mac-3 antigen was used at 1:40,000 to identify alveolar macrophages (BD Pharmingen, San Diego, CA). Numbers and histology of alveolar macrophages were not altered by mCFTR.

RESULTS

Identification of Gene Responses to the Lung CFTR Deletion-To identify genes responsive to CFTR, lung RNAs from iFABP-hCFTR, mCFTR(−), iFABP-hCFTR, mCFTR(+), mCFTR(−) and mCFTR(+) littermates at 3, 6, or 11 weeks of age were compared. Microarray analyses were performed in duplicate from RNA isolated at 3 and 6 weeks of age. Data from 10 Affymetrix Murine Genome U74Av2 chips were normalized, and statistical differences between CFTR-deficient (CFTR(−)) and control (CFTR(+)) mice were identified. Differences related to age were identified by outlier analysis and/or unpaired t test. After normalization, normal distributions were observed in the intensity data from lung tissue obtained at all ages. Lung RNA data from 3-week-old mCFTR(+) and mCFTR(−) mice (lacking the iFABP-hCFTR transgene) were similarly distributed to those bearing the iFABP-hCFTR gene and were, therefore, included in the analysis. To identify RNAs that were differentially expressed in response to CFTR regardless of age, mCFTR(−) and mCFTR(+) data were separated into two groups. The log-ratio distribution and outlier plot of the combined data set are shown in Fig. 2. A total of 1977 outliers were identified from the 12442 genes/expressed sequence tags analyzed. The abundance of 848 RNAs was increased; 1129 were decreased. The Welch t test together with the Westfall and Young step-down permutation further narrowed the number of differentially expressed RNAs to 315. Hierarchical clustering was used to visualize and classify the data set (Fig. 3). Data are shown in a two-dimensional matrix to identify groups of genes with similar expression patterns and show remarkably ordered gene expression profiles of the 315 selected genes. At the chip level (top dendrogram), RNAs influenced by CFTR formed two distinct groups. Within each group, the samples collected from age-matched pairs were more closely related than those from different ages, suggesting that age also influenced gene expression. At the RNA level (the dendrogram at the left side), genes were clearly separated into two major groups: those mRNAs increased or decreased in mCFTR(−) mice. Genes were further filtered for the consistency of differences in expression levels across all time points (coefficient of variation < 50%) and for their absolute intensity above 243 (90% of genes called absent by the Affymetrix software were <243 for this data set). These additional filters reduced the number of RNAs to 54, of which 29 were consistently increased and 25 were decreased in mCFTR(−) mice compared with their mCFTR(+) littermates (Tables I and II). The expression profiles of these 54 genes are shown in Fig. 4, demonstrating consistent patterns of expression of the CFTR-responsive RNAs regardless of age. Differentially expressed genes were further classified according to their known or predicted functions. Each gene was annotated and assigned to a functional category. To simplify the calculation, we assumed that genes in each category could be fit to a binomial distribution. The binomial probability was calculated for each category using the entire U74Av2 as the reference data set.
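The reproducibility and intensity filters described above can be expressed compactly; the thresholds follow the text (CV below 50%, intensity above 243), while the array layout and function name are our assumptions.

```python
import numpy as np

def filter_candidates(intensity, min_intensity=243.0, max_cv=0.5):
    """Keep genes passing the reproducibility and detection filters.

    intensity: (replicates, genes) array of chip intensities. Genes with a
    coefficient of variation above 50% across replicates, or mean intensity
    below the detection threshold (243 for this data set), are dropped.
    """
    mean = intensity.mean(axis=0)
    cv = np.divide(intensity.std(axis=0), mean,
                   out=np.full(mean.shape, np.inf), where=mean != 0)
    return (cv <= max_cv) & (mean >= min_intensity)
```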
"Inflammatory Response" was the most represented category of those RNAs increased in mCFTR(Ϫ) mice. Among RNAs whose abundance was increased by the lack of CFTR, those influencing inflammation, transcription, and transport were most highly represented and consisted of a group of functional categories quite distinct from those whose expression was decreased in mCFTR(Ϫ) mice (Tables I-III). The potential influence of the FABP-hCFTR transgene on RNA expression was also assessed using the Welch t test at the three ages. Differentially regulated RNAs identified in analysis of gastrointestinal-corrected mice were similarly affected in mCFTR(Ϫ) mice, demonstrating a lack of effect of iFABP-hCFTR on this subset of genes. Genes whose expression was independently altered by the iFABP-hCFTR transgene included 7 RNAs decreased and 11 increased. Differences in their levels of expression were modest (less than 1.5-fold) ( Table IV). Validation of Selected mRNAs-To validate the responsive RNAs identified by microarray analyses, real time RT-PCR was performed. mRNA levels were normalized using ␤-actin or glyceraldehyde-3-phosphate dehydrogenase. Kir 4.2 (Kcnj15), CEBP␦, TNF-AIP-3, and Grin 2d mRNA were significantly increased in CFTR(Ϫ) mice compared with control littermates (Fig. 5). As expected, murine CFTR was not detectable by RT-PCR in mCFTR(Ϫ) mice, nor was hCFTR mRNA detected in lung from the iFABP-hCFTR bearing mice. DISCUSSION The absence of CFTR caused stereotypic changes in gene expression in the lungs of CFTR(Ϫ) mice in the absence of detectable infection or inflammation. Cellular responses to CFTR included enhanced expression of transcription factors and signaling pathways known to influence CFTR gene expression (IL-1␤ and CEBP␦). Likewise, expression of RNAs modulating inflammation, ion transport, protein trafficking, and degradation were altered, indicating that a number of cellular pathways may compensate for or mediate CFTR-dependent functions that, in turn, may maintain normal pulmonary homeostasis in the CFTR(Ϫ) mice or alternatively, influence CF phenotype following response to pathogens. The absence of CFTR initiates reproducible changes in lung gene expression that may modify inflammation, host defense, and other cellular functions, that likely contribute to or mollify the pathogenesis of pulmonary disease in CF. Because strain is known to influence pulmonary findings in CF mice (8), and the transgenic mice presently studied were generated in a mixed FVB/N, C57 Bl/6 background, lung RNA was compared from sex-matched littermates at various ages. An extensive data set was utilized to identify CFTR-dependent changes in gene expression that were present throughout development and in the absence of inflammation, seeking to identify pathways influenced primarily by CFTR, rather than sex, strain, age, or other secondary phenomena. As expected, expression of mCFTR was not detected in the gastrointestinalcorrected mCFTR(Ϫ) mice, consistent with previous findings (10). Histologic analyses, performed presently and previously, demonstrated no structural abnormalities, infection or inflammation in the lungs of these mice as maintained in our vivarium (10, 11), supporting the concept that changes in gene expression were related to CFTR and not to age or lung disease. 
Analysis of arrays prepared from pairs of mCFTR(−) mice and mCFTR(+) littermates (those lacking the iFABP-hCFTR transgene) confirmed the microarray findings, demonstrating both the lack of mCFTR mRNA in lungs of the mCFTR(−) mice and that the RNA changes were independent of the iFABP-hCFTR transgene. Expression of Genes Modulating CFTR-Expression of a number of genes known to influence CFTR expression was enhanced in the lungs of CFTR(−) mice, including CEBPδ and IL-1β, suggesting that CFTR(−) cells responded by enhancing levels of RNAs encoding transcription factors or pathways that may compensate for the lack of CFTR. IL-1β increased CFTR gene transcription in epithelial cells in vitro (15), and cis-acting elements binding CEBPδ (to CCAAT enhancer sites) and c-Fos (to AP-1 elements) are present in the promoter-enhancer regions of the mouse CFTR gene (16,17). Previous studies demonstrated that CEBPδ directly enhanced CFTR gene transcription in vitro (18); thus, the increased CEBPδ expression may represent a potential compensatory response to CFTR deficiency that, in turn, may influence expression of genes unrelated to CFTR. Expression of c-fos was also increased in the mCFTR(−) mice. Whereas AP-1 sites have been identified in the hCFTR promoter, the precise role of c-fos in regulation of CFTR has not been clarified, although treatment of various cell types with phorbol esters decreased CFTR gene transcription in vitro (18). Because CEBPδ and c-fos activate or inhibit the expression of numerous genes that share cis-active elements with the CFTR gene, changes in their activity may broadly influence gene expression, perhaps inadvertently linking CFTR deficiency to the expression of genes whose activities are not directly related to CFTR protein function. RNA encoding TNF-AIP-3, a zinc finger transcription factor, was also increased in the CFTR(−) mice, further linking transcriptional responses to the lack of CFTR. Of considerable interest, TNF-AIP-3 RNA was induced by either IL-1β or TNF. TNF-AIP-3 inhibited NFκB translocation in vitro, and may represent a compensatory response to the increased expression of IL-1β seen in the CFTR(−) mice (19). JAK-3, nuclear receptor subfamily 2, and interferon regulatory factor-1 RNAs were decreased in CFTR(−) mice. These RNAs encode transcription factors that regulate various pathways involved in inflammation and may, therefore, represent responses to the proinflammatory proteins induced in the CFTR(−) mice, in essence being secondarily responsive to initial compensatory changes. Enhanced Expression of Genes Modulating Inflammation-Genes involved in inflammation were overly represented among the RNAs induced and were distinct from those that were decreased in the lungs from CFTR(−) mice (Table III). Most prominent among genes whose expression was increased was a family of calcium-binding proteins termed the calgranulins (S100A8, S100A9, and calbindin D9K). S100A8 is expressed primarily by macrophages and monocytes, and its expression is enhanced by various cytokines including TNF-α, IL-1β, and interferon γ (20,21). This family of peptides is expressed by various cell types and shares potent chemoattractant activities, stimulating inflammatory cell trafficking. Increased expression of S100A8 (calgranulin A) was previously demonstrated in alveolar macrophages from CFTR mutant mice, with the authors suggesting that increased S100A8 may contribute to the enhanced inflammatory responses seen in the absence of CFTR (22).
Expression of S100A8 by alveolar macrophages was induced by TNF-α, interferon γ and IL-1β, mediated at least in part by AP-1-dependent pathways (20). Chitinase A mRNA was also consistently increased in the lungs of CFTR(−) mice. The acidic chitinase A is a small peptide containing a mammalian lectin domain that binds complex carbohydrates on surfaces of microbial pathogens including fungi. Like the S100 family of proteins, chitinase family members are also expressed by alveolar macrophages in the lung (23). Thus, decreased mCFTR initiated changes in the expression of genes expressed primarily in alveolar macrophages. It remains unclear whether these responses are mediated by direct effects of CFTR in alveolar macrophages or by altered cellular signaling initiated by the lack of CFTR in epithelial or other pulmonary cells that secondarily alters macrophage activity. Intense neutrophilic infiltration and increased IL-8 production are strongly associated with CF lung disease in humans (2). Increased IL-8 and neutrophilic infiltrates were observed in bronchoalveolar lavage fluid from CF patients in the absence of documented pulmonary infection (4). Although it remains possible that antecedent, but resolved, infections may have contributed to the increased inflammation observed in CF, these observations support the concept that CF is associated with increased susceptibility to pulmonary inflammation. In the present study, IL-1β, IL-4, and CSF-1 receptor RNAs were increased, and each may contribute to the proinflammatory milieu. IL-1β enhances CFTR gene transcription, induces inflammation, and is known to stimulate production of the S100-calgranulins, perhaps indicating a network of genes influenced by CFTR through IL-1β. IL-4 is a potent inflammatory mediator that enhances inflammation and mucous production in airway epithelia. Transgenic animals expressing IL-4, or animal models in which IL-4 is induced, developed severe goblet cell hyperplasia, increased mucous production, and inflammatory cell infiltrates (24), findings typically found in patients with cystic fibrosis. CSF-3r RNA, encoding a receptor that mediates monocytic cell migration, proliferation, and activity in response to CSF-3 (G-CSF), was increased 3-4-fold in the CFTR(−) mice (25). Thus, taken together, a number of genes, many influenced by IL-1β and mediating inflammation, were induced in the lungs of CFTR(−) mice. Despite the increased expression of proinflammatory molecules, there is no evidence of inflammation in the lung of the CFTR(−) mice, perhaps indicating that normal homeostasis is maintained by the complex responses of the lung to the lack of CFTR. The presence of stereotypic changes in the expression of many genes suggests that the presence of a single ameliorating gene, for example an alternative chloride channel, does not fully explain the physiologic adjustment of the lung in the CFTR(−) mice. At present, it is unclear whether these proinflammatory responses are secondary to changes in the expression and function of CFTR in the epithelial cells that, in turn, modulate cell signaling and cytokine production in the lung. Alternatively, the absence of CFTR in the alveolar macrophages may alter expression of genes mediating inflammation in those cells.
It is of considerable interest that RNAs modifying inflammation were altered in the lungs of mCFTR(−) mice in the absence of detectable bacterial infection or inflammation, supporting the concept that the transcriptional adjustment to CFTR deficiency suffices to maintain normal pulmonary homeostasis in the mouse in vivo. Alternatively, the levels of expression of the proinflammatory molecules may not be adequate to cause histologically detectable inflammation. It remains unclear whether these adjustments in gene expression may, in turn, render the CFTR(−) mice susceptible to inflammation following infection or injury. Changes in NFκB- and TNF-α-dependent Pathways-RNAs encoding a number of proteins involved in TNF signaling and NFκB activation were also induced in the CFTR(−) mice. The abundance of TNF-AIP-3 mRNA, a zinc finger transcription protein whose expression is induced by both TNF-α and IL-1β, was increased in the CFTR(−) mice. TNF-AIP-3 inhibits NFκB activity at target genes (19) and may represent a response to the proinflammatory milieu established in the CFTR(−) lung. PEG-3, a protein that regulates the induction of NFκB following TNF stimulation, was also increased, providing further support for transcriptional relationships between CFTR deficiency and the activity of NFκB (26). Recent studies of Schroeder et al. (27) support the concept that CFTR is required for regulation of NFκB, serving as a pattern-recognition molecule, following pulmonary exposure to Pseudomonas aeruginosa. In that study, NFκB activation and its nuclear trafficking were deficient in CF cells. Taken together with the observation that IL-1β-mediated CFTR gene transcription is dependent upon NFκB, this important pathway mediating inflammation appears to be influenced by CFTR. In the present microarray analysis, CFTR deficiency was associated with increased claudin 8 and decreased gap junction connexin 37. Decreased expression of gap junction proteins (Cx43, -40, -37, and -32) and decreased gap junction communication were observed in various in vitro cell systems after exposure to proinflammatory cytokines (28).

FIG. 4. Expression profile chart (A) and hierarchical clustering (B) of 54 selected RNAs that were consistently altered in response to the lack of CFTR regardless of age. Hierarchical clustering was performed by UPGMA. Data were normalized using Trimmed Mean and Z-score calculations. In the profile chart, data were normalized by pairwise control. The y axis is normalized intensity (log scale) and the x axis represents experimental ages. Red lines represent the profiles of lung RNAs increased in CFTR(−) mice. Green lines represent the profiles of down-regulated RNAs.

The observed changes in RNAs mediating cell adhesion and the increased expression of IL-1β seen in the CFTR(−) mice are consistent with findings that in the absence of CFTR, IL-1β and TNF-α failed to inhibit cell communication via gap junctions. Previous studies demonstrated that CFTR is required for the uncoupling of gap junctions between epithelial cells during inflammation, a process that may restrict the spread of pathogens or signaling among adjacent cells. It will be of interest to test whether cell adhesion mediated by the claudins is also linked to the pathogenesis of CF. Changes in Protein Degradation Pathways-Several genes involved in protein degradation were altered in the absence of CFTR compared with normal.
Proteasome 26 S subunit (PSMC3) is the major proteolytic component of the ubiquitin-dependent proteasome. The 26 S proteasome regulates degradation of proteins influencing cell cycle, oncogenesis, transcription, and immunity, including CFTR itself (29,30). The proteasome is composed of two subcomplexes, the 20 S proteasome and PA700. PSMC3 expression was modestly, but significantly, increased (1.5-fold) in the absence of CFTR. In contrast, PA28γ (PSME3), an activator of the 20 S proteasome, was decreased 1.6-fold. These observations are consistent with previous findings that proinflammatory cytokines, including TNF-α or interferon-γ, increased expression of the 26 S proteasome and its activators PA28 α and β (31), whereas expression of PA28γ was decreased by interferon-γ (32). RNA encoding adaptor protein complex AP-2, α1 subunit (Ap2a1), a protein involved in the formation of intracellular transport vesicles, was also decreased in the absence of CFTR. CFTR co-precipitates with α-adaptin (33). Recent studies demonstrated that a C-terminal domain of CFTR binds to the AP-2 adaptor complex to form clathrin-coated vesicles that mediate CFTR internalization (34). ADP-ribosylation factor 5 belongs to a family of GTP-binding proteins that play important roles in the control of membrane trafficking, including formation of secretory vesicles at the trans-Golgi network, and endosomal and vesicle-plasma membrane fusion (35). Recent studies support the concept that CFTR regulates endosomal fusion and vesicular trafficking (36,37), indicating potential relationships between CFTR and the actions of ADP-ribosylation factor 5. Another gene in this functional category is represented by kinesin3α, an mRNA that was decreased in the lungs of CFTR(−) mice. Transport Proteins Influenced by CFTR-RNAs encoding several transmembrane transport proteins and receptors were also altered in the lungs of CFTR(−) mice, including solute carrier 38 (member 4), the potassium inwardly rectifying channel (Kir 4.2 or Kcnj15), the glutamate receptor (Grin 2d), the natriuretic peptide receptor 3 (Npr-3), and the β3-adrenergic receptor (ADRB-3). Thus, expression of a number of membrane transport proteins was influenced by CFTR, perhaps representing compensatory responses to defects in CFTR-mediated transport activity. Kir 4.2 and Grin 2d RNAs were increased 2-3-fold in CFTR(−) mice. Kir 4.2 is expressed in respiratory epithelial cells at sites similar to those of CFTR (38). Kir 4.2 regulates cation transport, upon which chloride transport via CFTR or other chloride channels may depend. Surprisingly, there is evidence for interactions between CFTR and Kir family members, because both are known to bind via PDZ binding domains through interactions with channel-interacting PDZ domain protein (39). Likewise, there is precedence for PDZ-dependent interactions among glutamate receptors, CFTR, and Kir family members (40). Thus, the lack of CFTR enhanced the expression of a number of membrane proteins that may interact with CFTR via PDZ domains, perhaps indicating that CFTR-protein complexes may initiate changes in gene expression. These findings support the concept that CFTR interacts with numerous membrane transport proteins, and do not support a model in which the activity of an alternative Cl− transporter alone suffices to compensate for the lack of CFTR in pulmonary cells in the CFTR(−) mice. Regulation of Cell Receptors by CFTR-Natriuretic peptide receptor C (Npr-3 or NprC) was increased more than 2-fold in CFTR(−) mice.
Natriuretic peptides comprise a family of 3 structurally related molecules: atrial (ANP), brain (BNP), and C-type (CNP), whose functions are cGMP-dependent (41,42). Among them, CNP increased ciliary beating and mucociliary clearance in airway epithelial cells, and activated CFTR-dependent chloride transport (42). Natriuretic peptides regulate cytokine-stimulated NO production via the binding of Npr-3 (43). Because deficient NO production was observed in respiratory epithelial cells of the iFABP-hCFTR, CFTR(−) mice, the increased expression of Npr-3 in CFTR(−) mice may represent a compensatory response influencing airway clearance and nitric oxide production via cGMP (44). In contrast to the transport proteins/receptors that were induced, ADRB3, a G protein-coupled transmembrane protein mediating cAMP production, was decreased in CFTR(−) mice. ADRB3 is co-expressed with CFTR in airway epithelium (45) and may be functionally coupled to CFTR via cAMP-independent pathways (46). β2-Adrenergic receptors directly interact with CFTR via the Na(+)/H(+) exchanger regulatory factor to form a signal transduction complex (47). Co-regulation of ADRB3 and CFTR may indicate that these proteins interact closely at both structural and functional levels, a finding that may be linked to the important role of β-adrenergic stimulation and cAMP in the activation of chloride transport mediated by CFTR. Altered Expression of RNAs Encoding CFTR Interacting Proteins-Surprisingly, analysis of the RNAs influenced by CFTR identified a number of proteins that directly or indirectly interact with CFTR via protein-protein interactions. This list included proteins involved in protein trafficking and degradation (proteasome 26 S and PA28 subunits and α-adaptin), ion transport (Kcnj15 and Grin 2d), and receptors (Npr-3 and ADRB3). Interaction of CFTR with many proteins occurs via PDZ binding domains that mediate protein-protein complex formation. The finding that expression of CFTR-interacting proteins is altered in the lungs of CFTR(−) mice suggests that CFTR influences networks of signaling and transport activities in the cell, and that cells respond to CFTR deficiency via transcriptional responses to CFTR-protein complexes, rather than to CFTR per se. Are Changes in Gene Expression Bystander Effects?-Whereas genomic responses may compensate for the lack of CFTR mRNA and represent compensation for CFTR function, some responses may be secondary and mediated by pathways not directly related to the action of CFTR per se. Maintenance of pulmonary homeostasis in the mCFTR(−) mouse was associated with complex adaptive responses in gene expression. CFTR influenced RNAs encoding transcription factors, ion channels, membrane receptors, cytokines, and intracellular trafficking proteins. Finally, CFTR altered the expression of a number of proteins that interact with CFTR via protein-protein interactions, perhaps representing transcriptional responses to functions mediated by CFTR-protein complexes (Fig. 6).

FIG. 6. Molecular pathways and networks influenced by CFTR. A model is proposed by which the lack of CFTR initiates expression of genes that are known to enhance CFTR expression, IL-1β and CEBPδ, secondarily modulating inflammation (1,2). Increased expression of the cytokines IL-4 and IL-1β, and the cytokine receptor CSF-3R, may influence inflammation (2) as well as expression of genes modulating protein trafficking, degradation, and cell-cell communication (3-5).
Abundance of RNAs of chemoattractant peptides of the calgranulin family was enhanced, perhaps contributing to a proinflammatory environment within the lung (2). CFTR altered the expression of RNAs encoding transport proteins and membrane receptors that likely interact with CFTR via PDZ domains (6,7), supporting the concept that pulmonary cells respond to the lack of CFTR or CFTR complexes by regulating the expression of CFTR partners. Alterations in pathways regulating second messengers, including cAMP, via the β3-adrenergic receptor (Adrb3), were observed (7). Absence of CFTR also influenced expression of genes mediating endocytosis, membrane recycling, and regulated secretion (4,8,9).

The diversity of genes whose expression was altered by CFTR supports the concept that, in addition to regulation of Cl− transport, CFTR plays diverse roles in multiple cellular functions. The present findings support the hypothesis that pulmonary homeostasis in the CFTR(−) mouse is maintained by complex genomic responses to the lack of CFTR rather than by the action of a single alternative Cl− channel. Finally, the genes and pathways identified in this study provide new links between CFTR and cellular processes that may influence the pathogenesis of CF lung disease.
Thyroid hormone replacement one day before 131I therapy in patients with well-differentiated thyroid cancer Objective: The current study aimed to determine the efficacy of radioiodine-131 (131I) ablation therapy with thyroid hormone replacement one day before 131I administration in patients with well-differentiated thyroid cancer (DTC). Methods: This retrospective study included 29 patients who underwent 131I therapy twice for DTC within 6-12 months. Since all the patients obviously had residual lesions, as shown by their serum thyroglobulin levels or their scintigrams at the first therapy, they underwent the second 131I therapy without diagnostic scintigraphy after the first therapy. After confirming sufficient elevation of the TSH concentration, thyroid hormone replacement was resumed one day before 131I administration (3.7-7.4 GBq). The ablation rate of the thyroid remnant at the first 131I therapy was evaluated by comparing the 131I post-therapeutic images of the two treatments. Results: Three patients were administered thyroid hormone after 131I therapy because of insufficient TSH concentrations under thyroid hormone withdrawal. In the remaining 26 patients, 41 thyroid remnant accumulations were detected in all 26 patients at the first 131I therapy. Based on the second 131I post-therapeutic images, successful ablation was confirmed in 24 of 26 patients (92.3%) and 38 of 41 sites (92.7%), which is comparable with historically reported ablation rates. Conclusion: Thyroid hormone replacement one day before 131I therapy could provide a sufficiently high ablation rate in patients with DTC.

Introduction

Radioiodine-131 (131I) therapy has been commonly used for well-differentiated thyroid cancer (DTC). This procedure is clinically beneficial in reducing recurrence and increasing the sensitivity of serum thyroglobulin, which reflects the tumor activity of DTC (1-3). To achieve sufficient 131I uptake in residual thyroid tissues and tumors, 131I therapy for DTC requires thyroid-stimulating hormone (TSH) elevation. Thyroid hormone must be withheld for a certain amount of time in order to permit an adequate rise in TSH, ideally higher than 30 mIU/L on the day of 131I administration. The duration of thyroid hormone withdrawal is at least 2 weeks for triiodothyronine (T3) and 4 to 6 weeks for thyroxine (T4) (4,5). Meanwhile, thyroid hormone replacement is conventionally resumed several days after 131I administration. The ATA and EANM guidelines recommend that thyroid hormone replacement be resumed 2 or 3 days after 131I administration (1,4). Under thyroid hormone discontinuation, most patients suffer from hypothyroid symptoms, such as fatigue, lethargy, cold intolerance, weight gain and non-pitting edema, and their quality of life (QOL) is impaired. Moreover, elevated TSH may stimulate the growth of residual lesions. To palliate hypothyroid symptoms and eliminate unnecessary TSH stimulation of residual lesions, shortening the term of thyroid hormone withdrawal is desirable. If thyroid hormone replacement were started earlier than usual, patients could maintain a better QOL during their preparation for 131I therapy. After the initiation of thyroid hormone replacement, the TSH concentration gradually declines. In our experience, the TSH concentration mostly remained higher than 30 mIU/L for one or two days after the initiation of thyroid hormone replacement.
For this reason, at our institution, patients whose TSH levels had risen sufficiently routinely resumed thyroid hormone one day before 131I administration. The current study determined the efficacy of 131I ablation therapy with thyroid hormone replacement one day before 131I administration. Patients Twenty-nine consecutive patients who underwent 131I therapy twice within 6-12 months for DTC between June 2008 and November 2010 were studied. Since all patients clearly had residual lesions, as indicated by their serum thyroglobulin levels or scintigrams at the first therapy, they underwent the second 131I therapy without diagnostic 131I scintigraphy after the first. The patients comprised 10 males and 19 females, and the age range was 24 to 74 years (mean = 52.3 years). Twenty-eight had papillary carcinoma (three with a predominant follicular pattern) and one had follicular carcinoma. All patients underwent total or near-total thyroidectomy by experienced surgeons. No diagnostic 131I scintigraphy was performed before the first 131I therapy. All patients gave their informed consent for their 131I therapies. Preparation for 131I therapy (Figure 1) All patients were prepared by switching from T4 to T3 four weeks before 131I therapy. T3 withdrawal and a low-iodine diet were started 2 weeks before 131I therapy. Serum TSH was measured twice: at 3-6 days before (TSH1) and on the day of 131I therapy (TSH2). After confirming sufficient elevation of the TSH1 concentration, T3 replacement was resumed one day before 131I administration. To ensure a TSH2 of more than 30 mIU/L, patients whose TSH1 concentrations were less than 40 mIU/L received T3 only after 131I therapy. 131I therapy and post-therapeutic scintigraphy Therapeutic doses of 131I were administered. 131I doses were classified according to the patient's condition: 3.7 GBq for patients with lymph node metastases or without metastases, 5.55 GBq for patients with lung metastases and 7.4 GBq for patients with bone metastases. Post-therapeutic scintigrams were acquired 3 days after administration, using a dual-head gamma camera equipped with high-energy collimators and 3/8-inch NaI crystals, combined with a low-dose spiral CT in the same gantry (Symbia®, Siemens Medical Solutions). Whole-body planar images were acquired 3 days after 131I therapy at a scanning speed of 15 cm/min. Following planar imaging, SPECT images of the neck and chest were obtained. Additional SPECT images were acquired to cover areas suspected of abnormal tracer accumulation in the whole-body planar images. SPECT data were acquired from 60 projections (20 seconds per view) with a 128 × 128 matrix and reconstructed using a 3-dimensional ordered-subset expectation-maximization algorithm. As soon as SPECT data acquisition was finished, patients underwent CT transmission scans for tomography. SPECT and CT data were analyzed and co-registered using an e-soft workstation (Siemens Medical Solutions). Image interpretation Two experienced nuclear medicine physicians, who were blinded to the findings of the other imaging modalities, assessed thyroid remnant accumulation at the first and second post-therapeutic scintigraphy. The ablation rate of the first 131I therapy was determined by comparing the scintigraphic findings of the first therapy with those of the second therapy. When their interpretations were discordant, they reached consensus in conference. The Student's t-test was employed to compare the continuous variables.
The Fisher exact test was used to compare the ablation rates between the two groups. A p value of less than 0.05 was considered significant. Results At the first 131I therapy, TSH1 values ranged from 16.56 to 283.30 mIU/L (mean = 98.98 mIU/L) in the 29 patients. T3 replacement was started one day before 131I therapy in the twenty-six patients with TSH1 values of more than 40 mIU/L (TSH1 ≥ 40 mIU/L). T3 was administered only after 131I therapy to the remaining three patients with TSH1 values of less than 40 mIU/L (TSH1 < 40 mIU/L) (16.56, 20.21, 35.16 mIU/L). TSH changes between TSH1 and TSH2 were analyzed. In addition, the ablation rate was evaluated in the 26 patients with T3 replacement one day before the first 131I therapy compared to the 3 patients with T3 replacement after the first 131I therapy. Table 1 shows TSH1 and TSH2 at the first and the second therapies in all patients. The mean time periods from T3 discontinuation to TSH1 measurement in patients with TSH1 ≥ 40 mIU/L (n=50) and in patients with TSH1 < 40 mIU/L (n=8) were 10.2 days (8 to 11 days) and 10.4 days (10 to 11 days), respectively. There was no significant difference between the two periods. Figure 2 shows the changes between TSH1 and TSH2 in patients with TSH1 < 40 mIU/L at the first (n = 3) and the second (n = 5) therapies. TSH2 was significantly higher than TSH1 (p < 0.05), because T3 replacement was started after the measurement of TSH2 and 131I therapy. Figure 3 shows the changes between TSH1 and TSH2 in patients with TSH1 ≥ 40 mIU/L at the first (n = 26) and the second (n = 24) therapies. TSH1 and TSH2 ranged from 40.82 to 288.55 mIU/L (mean = 100.76 mIU/L) and 44.93 to 271.04 mIU/L (mean = 114.74 mIU/L), respectively. There was no significant difference between TSH1 and TSH2 in the 50 paired TSH measurements. However, in 40 of 50 measurements, TSH values increased in spite of T3 replacement one day before the measurement of TSH2 and 131I administration. In the remaining 10 measurements, although TSH values decreased, no TSH level on the day of 131I therapy was less than 30 mIU/L. Figure 4 and Figure 5 show representative scans of successful and unsuccessful ablation cases. Table 3 shows the number of thyroid remnants in the two treatments in patients with T3 replacement after the first 131I therapy because of TSH1 values < 40 mIU/L at the first therapy. Seven thyroid remnant accumulations were detected in all 3 patients at the first therapy and one thyroid remnant accumulation was detected in one patient at the second therapy. Successful ablation at the first 131I therapy was confirmed in 2 of 3 patients (66.7%) and 6 of 7 sites (85.7%). Table 2 shows the number of thyroid remnants at the first and the second 131I therapies in patients with T3 replacement one day before the first 131I therapy. Forty-one thyroid remnant accumulations were detected in all 26 patients at the first therapy; based on the second post-therapeutic images, successful ablation was confirmed in 24 of 26 patients (92.3%) and 38 of 41 sites (92.7%). Figure 4: A vertically long accumulation in the neck (wide arrow), a point-like accumulation in the left head (arrow head) and multiple accumulations in the lungs (narrow arrows) are seen on the whole-body image obtained after the first therapy. On the SPECT/CT, the neck accumulation (wide arrows) is considered a thyroid remnant and the left head accumulation (arrow heads) is suspected of being a skin metastasis. The latter was surgically resected and proved to be a skin metastasis. The post-therapeutic scintigraphy of the second therapy detects no accumulation in the neck and verifies successful ablation. Only two faint accumulations considered lung metastases are seen in the right lung (narrow arrows).
The successful ablation rate was higher in patients with T3 replacement one day before 131I therapy than in patients with T3 replacement after 131I therapy. However, there was no significant difference between the ablation rates. Discussion We demonstrated that T3 replacement one day before 131I therapy could achieve sufficient TSH concentrations and a high successful ablation rate in patients with DTC. The successful ablation rate in patients with T3 replacement one day before 131I therapy, more than 90%, was comparable with historically reported ablation rates (6)(7)(8)(9)(10). The current study results indicate that T3 replacement one day before 131I therapy could be a clinically useful method in patients with DTC. Therapeutic use of 131I is widely accepted for DTC; however, some problems arise, such as a decrease in the patient's QOL and stimulation of residual lesions by elevated TSH during thyroid hormone withdrawal (11)(12). Recombinant human TSH (rhTSH) is available for thyroid remnant ablation and can resolve these problems (12)(13)(14)(15). In the future, rhTSH may become the main modality of choice for 131I therapy. However, thyroid hormone withdrawal will remain the mainstream method for a while because of insufficient evidence for rhTSH in DTC patients with metastases or recurrence. In addition, especially in developing countries, thyroid hormone withdrawal is an indispensable method because of the high price of rhTSH. To shorten the period of the hypothyroid state and to raise the TSH level, various studies have reported on thyroid hormone withdrawal, investigating, for example, its duration and timing, using at times T4 only and at other times T3 and T4 together (16)(17)(18)(19)(20). In one study (19), TSH values were repeatedly measured after total thyroidectomy or after withdrawal of suppressive T4 therapy in preparation for 131I therapy. The time required for TSH levels to reach more than 30 mIU/L was 8-26 days after thyroidectomy and 9-29 days after T4 withdrawal. Another study (18) investigated the TSH concentration with a conventional, widely used regimen of substituting T3 for T4 six weeks prior to 131I therapy and then discontinuing it 2 weeks prior to therapy. In this report, 6 (11.5%) of 52 patients did not achieve a TSH level of 30 mIU/L two weeks after T3 withdrawal. These reports indicated the difficulty of changing the initiation time of thyroid hormone withdrawal because of the wide inter-individual variation in TSH concentration. In this study, the time periods between discontinuation of T3 and TSH1 measurement were 8 to 11 days (mean = 10.2 days). Although the period of T3 discontinuation was short, 93.1% of the patients at the first therapy and 96.6% of the patients at the second therapy had TSH1 ≥ 30 mIU/L under a normal low-iodine diet (Table 1). These results indicate that the hypothyroid state during thyroid hormone withdrawal could be shortened further. Further study is needed on this subject. As for the initiation time of thyroid hormone replacement, ATA guidelines and EANM guidelines state that thyroid hormone replacement should be resumed 2 or 3 days after 131I therapy for DTC (1,4). The current study demonstrated that the initiation time of T3 replacement could be moved forward by several days compared with the conventional method.
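The group comparison reported above is easy to reproduce from the published counts (24 of 26 vs. 2 of 3 patients ablated). The following is a minimal sketch assuming SciPy, not the original analysis code:

```python
# Fisher exact comparison of per-patient ablation rates reported above.
# Rows: T3 resumed one day before vs. only after 131I therapy.
# Columns: successful ablation, persistent remnant.
from scipy.stats import fisher_exact

table = [[24, 2],   # T3 one day before 131I therapy
         [2, 1]]    # T3 only after 131I therapy
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3f}")
# p is about 0.29, consistent with "no significant difference" at the 0.05 level.
```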
T3 replacement one day before 131I therapy would shorten the period of the hypothyroid state and decrease unnecessary TSH stimulation of residual lesions during patients' preparation for 131I therapy. Considering the difference in pharmacokinetics between T4 and T3, the initiation time of T4 replacement might be made even earlier than that of T3. In the current study, to ensure that TSH values on the day of 131I administration (TSH2) were more than 30 mIU/L, patients whose TSH values at 3-6 days before 131I therapy (TSH1) were less than 40 mIU/L (3 patients at the first therapy and 5 patients at the second therapy) were excluded from the TSH-change analysis, and T3 was started in them only after 131I therapy. In the 50 measurements with TSH1 values of more than 40 mIU/L, 10 (20%) showed a decrease in TSH2 values because of T3 replacement one day before 131I therapy. However, TSH2 values did not decrease to less than 30 mIU/L in any patient on the day of 131I therapy. These results would validate the exclusion criterion in this study. However, the exclusion criterion should be further evaluated in a large cohort. An issue may arise as to whether thyroid hormone interferes with 131I uptake in thyroid remnants and residual tissues, since thyroid hormone releases iodine when metabolized. However, many reports on the utility of rhTSH under thyroid hormone continuation for 131I therapy suggest that early thyroid hormone replacement would not interfere with 131I accumulation (14)(15). Figure 5: Three accumulations are detected in the neck on the SPECT/CT obtained after the first therapy. One of them is located in the left upper neck and is suspected of being a lymph node metastasis (narrow arrows). The others are considered thyroid remnants because of their locations (wide arrows). On the SPECT/CT obtained after the second therapy, no accumulation is seen in the left upper neck. However, two accumulations considered thyroid remnants still exist (wide arrows). (Panels show anterior views of the first and second therapies.) There were some limitations in the current study. The study was a retrospective study with a small population. It did not evaluate patients' symptoms and QOL after 131I therapy. To resolve these problems, further studies are needed. Conclusion Thyroid hormone replacement one day before 131I therapy could provide a sufficiently high ablation rate in patients with DTC. Compared with thyroid hormone replacement several days after 131I therapy, this alternative method would be beneficial in shortening hypothyroid periods and eliminating unnecessary TSH stimulation of residual lesions. Conflicts of interest There are no conflicts of interest.
Drug Interactions Involving the Cytochrome P450 Enzymes: Analysis of Common Combinations of Antibiotics and Pain Relieving Drugs Objective: For clinicians it is challenging to oversee the complex drug interactions of multi-drug administration. Rheumatoid arthritis (RA) patients are frequently under long-term medication with multiple anti-inflammatory and pain-relieving drugs, which are mainly metabolized by the cytochrome P450 enzymes (CYPs). Additionally, treatment of co-morbidities, such as inflammatory periodontal disease (PD), may require further drug administration. The aim of this investigation was to analyze drug interactions in the therapy of RA and PD and to provide a resource for health professionals to easily check interactions and avoid potential side effects. Introduction Rheumatoid arthritis (RA) is the most frequent inflammatory joint disease, affecting more than 50 million people worldwide [1]. RA patients are frequently treated with pain-relieving and anti-inflammatory drugs (NSAIDs). Furthermore, corticosteroids, disease-modifying anti-rheumatic drugs (DMARDs) and biologics are administered depending on RA severity and progression [2]. In 2008, a worldwide group of rheumatologists developed a set of recommendations for RA treatment, which is updated at regular intervals [3]. The recommendations are target-based, built on evidence and expert opinion. The primary treatment aim is clinical disease remission. Also, the individual drug therapy is adjusted at least every three months, which requires frequent drug anamnesis and adaptation by health professionals besides rheumatologists. The RA etiology is unclear; however, besides genetic and environmental factors such as age, gender, HLA genotype and smoking, bacterial infections seem to play an important role [4]. It has been proposed that RA results from a failed immune response attacking an unknown antigen, such as a hidden viral or bacterial infection; diseases preceding RA may likewise cause a failed immune response to viral or bacterial antigens [5]. Periodontal disease (PD) is a bacterial infection affecting the periodontium, which can cause increasing degradation of tooth-supporting soft and hard tissues, ultimately resulting in tooth loss [6]. Gram-negative anaerobic bacteria, organized as a structured biofilm on the tooth surface, are the primary cause involved in the initiation and progression of PD [7]. The best described periodontal pathogens are Aggregatibacter actinomycetemcomitans, Tannerella forsythensis and Porphyromonas gingivalis (P. gingivalis) [8]. P. gingivalis, one of the major periodontal pathogens, is able to invade endothelial cells and human chondrocytes [9]. It is the only known bacterium expressing the peptidylarginine deiminase (PAD) enzyme, which is responsible for the post-translational conversion of arginine to citrulline [5]. Citrulline modifications lead to the production of anti-CCP antibodies, which are found most frequently in RA patients [10]. Furthermore, aggressive periodontitis, affecting young individuals, is characterized by severe periodontal attachment loss and bone destruction. In comparison to adult periodontitis, aggressive periodontitis shows a more rapid disease onset and a faster progression. It was shown that a combination of mechanical and antibiotic treatment effectively provides favorable clinical results for periodontal and systemic health in generalized aggressive periodontitis patients [11].
In general, the selection of the antibiotic is adapted to the spectrum of bacteria (Table 2). Increasing evidence shows that patients with RA have an increased prevalence of periodontal attachment loss compared to healthy individuals [12]. Evidence from epidemiological studies suggests a bidirectional association [13]. In both diseases, dysregulated immune responses seem to be the crucial factor facilitating tissue degradation and loss of function [14]. Intervention studies indicate a causal relationship by showing that periodontal therapy has beneficial systemic effects on RA disease activity [15]. PD and RA are prevalent chronic inflammatory diseases associated with significant morbidity and mortality and therefore have an immense impact upon the economy, health and quality of life. RA and PD are associated with increased mortality due to a number of co-morbidities. Both are chronic inflammatory diseases associated with soft- and hard-tissue damage, a dysregulation of the immune response and common genetic and lifestyle factors influencing the diseases [16]. A number of cross-sectional studies reported an increased incidence of PD in RA patients [17] and a higher prevalence of RA in patients suffering from PD. An increased risk for systemic diseases such as cardiovascular disorders, diabetes and osteoporosis was described for both diseases [18]. Therefore, besides the drug therapy of RA and PD, additional drugs may have to be administered for the treatment of the co-morbidities, and further undesired drug-drug interactions may be introduced by health professionals treating RA patients. Drug metabolism is a complex biochemical network, which consists of many different parts and reactions in the human organism. Some drugs are excreted in urine and feces without undergoing any metabolic modification in the liver. However, most systemic drugs have a multi-step metabolism (typically oxidation and conjugation). The oxidation reactions are mainly catalyzed by the cytochrome P450 (CYP) family of enzymes [19]; this family of monooxygenases has been the focus of pharmaceutical research for decades. CYPs catalyze a large number of chemical reactions, such as alcohol oxidations, dehydrogenations and isomerizations. It is a difficult task of medical science and daily clinical practice to find effective and safe combinations of drugs that do not affect each other's metabolic pathways. If this is not taken into account, severe adverse effects, including death, can occur. The Human Genome Project discovered 57 human CYPs [20]. Due to many polymorphisms and their inducibility, the biological activities of the CYPs vary noticeably among humans, which is an important issue for researchers as well as clinicians. Knowledge of the level and the catalytic activity of the specific CYP, as well as the effect on drug metabolism, could and should lead to personalized drug dosages to optimize the therapeutic effect and minimize harmful side effects. If a drug induces a specific CYP that is also active in another drug's metabolism, the dosage of the co-metabolized drug may need to be increased to achieve the same therapeutic effect [21]. In the case of a CYP inhibition, the dosage can be reduced, which lowers side effects. Due to multi-drug administration, adverse side effects, such as fatal acute renal failure [31], are discussed intensely in pharmaceutical research [22].
Frequently occurring problems, which we address here, are, firstly, adverse side effects because of enzyme overload and, secondly, ineffective therapy because of enzyme induction or inhibition. Therefore, drug interactions in the therapy of RA and PD were analyzed in the present study. Textmining Information on drug metabolism is spread over 100,000 articles in PubMed. To collect relevant articles, a specific search tool was developed. Abstracts from the PubMed database were automatically filtered for relevant articles using specific keywords. Medical subject headings (MeSH) represent the National Library of Medicine's vocabulary thesaurus and were used for disease definitions and synonyms. The abstracts were screened for WHO drugs and their synonyms, as well as a set of human CYPs with synonyms, and the papers found in PubMed were manually processed. Each drug was attributed to those CYPs that are involved in its metabolism as a substrate, an inhibitor or an inducer. Treatment schemes Information on drug administration in the therapy of RA and PD was collected from the scientific literature. Additionally, for RA, international recommendations [23] and, for PD, different national guidelines [24] could be taken into account. Web resources provided further information on drug metabolism, e.g. Nelson's homepage [25], Flockhart's interaction table [26], the University of Maryland's Drug Checker, PubChem [27], the Protein Data Bank [28] and FDA files. Drug classification The recommendations of the WHO Expert Committee for updating the WHO Model List of Essential Medicines are updated annually [29]. In 2004, a list of all items according to their 5-level Anatomical Therapeutic Chemical (ATC) classification code was published. The ATC code classifies drugs into different groups according to anatomic site of action, therapeutic effect and chemical structure. The therapeutic subgroup, which is determined by the second level, was used to find drug alternatives. Expression data Affymetrix data were used to compare the CYP mRNA expression of human body tissues. The series of datasets taken from GEO (Gene Expression Omnibus, http://www.ncbi.nlm.nih.gov/geo/) were generated from ten donors and represent normal human bodies (Series GSE3526, [30]). It contains seven different tissues: oral, pharyngeal, esophageal and intestinal mucosa, as well as skeletal tissue and bone. All probe sets related to cytochromes were normalized and condensed to 40 types of CYPs. To assess differences in expression, a heat-map was built with Genesis [31]. Database and web-server Two CYP interaction tables were generated for the therapy of RA and PD. Numerous problems, such as enzyme overload or enzyme induction and inhibition, could occur in the combined therapy of RA and PD. Some of these drug-drug interactions are rather unnecessary because the choice of another antibiotic could already circumvent the problem. In the present study, a web interface for clinicians to check drug-drug interactions was generated to overcome CYP-based problems. The database provides information on drug metabolism including PubMed references. Based on the WHO classification system (ATC), the database provides drug alternatives. The present database is designed as a relational database on a MySQL server. For chemical functionality, the MyChem package is included, which aims to provide a complete set of functions for handling chemical data within MySQL. The website is built with PHP and JavaScript, and web access is enabled via an Apache web server (2.2).
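At its core, the conflict check behind such a web interface reduces to set operations over the substrate/inhibitor/inducer annotations. The following is a minimal sketch of that logic, not the actual server code; the toy entries mirror only the Aspirin/Amoxicillin/Ciprofloxacin interactions discussed in the Discussion below, and a real check would query the full database.

```python
# Flag a drug pair when one induces or inhibits a CYP the other depends on as a
# substrate, or when both share a substrate pathway (enzyme overload).
DRUGS = {
    "Aspirin":       {"substrate": {"2C8", "2C9"}, "induces": {"2C19"}, "inhibits": set()},
    "Amoxicillin":   {"substrate": {"2C19"},       "induces": set(),    "inhibits": set()},
    "Ciprofloxacin": {"substrate": set(),          "induces": set(),    "inhibits": set()},
}

def cyp_conflicts(drug_a: str, drug_b: str) -> list[str]:
    """Return human-readable CYP conflicts between two drugs."""
    a, b = DRUGS[drug_a], DRUGS[drug_b]
    conflicts = []
    for x, y, xn, yn in ((a, b, drug_a, drug_b), (b, a, drug_b, drug_a)):
        for cyp in x["induces"] & y["substrate"]:
            conflicts.append(f"{xn} induces CYP {cyp}, a substrate pathway of {yn} (risk: inactivation)")
        for cyp in x["inhibits"] & y["substrate"]:
            conflicts.append(f"{xn} inhibits CYP {cyp}, a substrate pathway of {yn} (risk: accumulation)")
    for cyp in a["substrate"] & b["substrate"]:
        conflicts.append(f"{drug_a} and {drug_b} share CYP {cyp} as substrates (risk: enzyme overload)")
    return conflicts

print(cyp_conflicts("Aspirin", "Amoxicillin"))    # flags the CYP 2C19 induction conflict
print(cyp_conflicts("Aspirin", "Ciprofloxacin"))  # no conflict in this toy table
```

Checking both directions of induction and inhibition plus shared substrate pathways covers the two key problems named above: ineffective therapy and enzyme overload.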
Results The results of the present literature analysis are summarized in Tables 1 and 2, respectively. Table 1 shows which CYPs, especially CYP 2C8, are involved. Expression data The resulting heat-map lists the seven tissues involved in RA and PD and the expression of several CYPs therein (Figure 1). Expression ranges from 2.24-fold lower to 2.24-fold higher values. The CYP expression in target tissues has not been taken into account so far, but is an interesting issue because it is a major factor for the effective retention period. For example, CYP 3A7, which was formerly known as a fetal enzyme, was recently shown to be upregulated in bone [32]. This means that the function of the CYP 3A family is significantly increased there, which leads to a shorter duration of action of drugs like Paracetamol, Diclofenac, Prednisone and Fentanyl. Discussion In an aging society with increasing morbidities and co-morbidities, drug interactions have to be recognized and prevented by health professionals. One of the most difficult tasks of the decision-making process is to find combinations of drugs that do not affect each other's metabolic pathways. Despite the large amount of information on CYPs, optimizing multiple drug prescriptions using CYP metabolism is still complicated [33]. Drug-drug interactions are complex, and information on drug metabolism is spread over 100,000 articles in PubMed, which may be overwhelming and impossible for the clinician to handle. Information on CYP structures [34], binding sites [35], interactions and different genotypes [36] must be combined to reduce undesired side effects and to determine correct dosages of medicine when prescribing more than one drug [37]. To overcome this problem, a tool for medical and dental clinicians was generated to identify and examine drug-drug interactions online. The SuperCYP database [38] contains information on 1,170 drugs with more than 3,800 interactions, including scientific references. This comprehensive resource is freely available at http://bioinformatics.charite.de/perio, is also usable on smartphones and tablet PCs, and could be used as a basis for personalized medicine. Evidence of an association between RA and PD, two of the most common inflammatory diseases in humans, is increasing [39]. Additional administration of antibiotics in the therapy of PD could influence the metabolism of the other drugs administered for RA therapy. Potent antibiotic agents against periodontal bacterial pathogens are listed in Table 2. Table 1: Drugs in the therapy of RA with CYP metabolism. Involved CYPs are ordered by mode of action (substrate, inhibitor, inducer); references are given in the supplementary material. Table 2: Effectiveness and CYP metabolism of antibiotic agents used in the therapy of PD. References are given in parentheses. "Aa" means Aggregatibacter actinomycetemcomitans, "Tf" Tannerella forsythensis and "Pg" Porphyromonas gingivalis. +: 10-fold increased, ++: 10²-fold increased concentration of antibiotic in gingival fluid, expressed in multiples of the in-vitro measured minimal inhibitory concentration [32]. In addition, both diseases are associated with systemic chronic inflammatory co-morbidities such as cardiovascular disease.
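For illustration, the heat-map construction described in the Results above can be sketched as follows. The random matrix is only a placeholder standing in for the normalized, condensed GSE3526 expression values (reported range: 2.24-fold down to 2.24-fold up), and the tissue list follows the text.

```python
# Placeholder sketch of the Figure 1 heat-map; real input would be the normalized
# CYP-by-tissue expression matrix condensed from GEO series GSE3526.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
cyps = [f"CYP {i}" for i in range(1, 41)]          # 40 condensed CYP types
tissues = ["oral mucosa", "pharyngeal mucosa", "esophageal mucosa",
           "intestinal mucosa", "skeletal tissue", "bone"]
fold_change = rng.uniform(-2.24, 2.24, size=(len(cyps), len(tissues)))  # placeholder

fig, ax = plt.subplots(figsize=(5, 9))
im = ax.imshow(fold_change, cmap="RdBu_r", vmin=-2.24, vmax=2.24, aspect="auto")
ax.set_xticks(range(len(tissues)), labels=tissues, rotation=90)
ax.set_yticks(range(len(cyps)), labels=cyps, fontsize=6)
fig.colorbar(im, ax=ax, label="fold change")
fig.tight_layout()
plt.show()
```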
Based on the fact that the medication of pain-relieving and disease-modifying drugs can hardly be modified, it is primarily the dentist's task to choose an antimicrobial agent for adjunctive periodontal treatment that is, on the one hand, most effective in its antibacterial efficacy and, on the other hand, does not negatively affect the therapy and its side effects in RA patients. The present data on CYP metabolism suggest two key problems of drug-drug interactions in the treatment of PD in RA patients, discussed below (Tables 3 and 4). First, Aspirin, a commonly used NSAID in the therapy of RA, is metabolized by CYP 2C8 and 2C9 and induces 2C19, which is also the substrate of Amoxicillin, a ß-lactam antibiotic often prescribed as antimicrobial therapy adjunctive to mechanical debridement in oral infections such as aggressive periodontitis. Due to induction of CYP 2C19, accelerated metabolism and inactivation of Amoxicillin may be possible. Therefore, replacement by an antibiotic agent of another group, such as Ciprofloxacin, which is also effective against periodontal pathogens, would be less harmful with respect to CYP metabolism and could easily bypass this problem (Table 3). The table shows drug interactions of the NSAID Aspirin with the antimicrobial drugs Amoxicillin (red line because of the conflict regarding CYP 2C19 [orange cells]) and Ciprofloxacin. Ciprofloxacin avoids the CYP 2C19 conflict (green). "S" means substrate, "Ind" means inducer and "Inh" means inhibitor. Suggestions like this are automatically generated by the web server using the classification and metabolic information stored on the server for the drug cocktail entered by the user. In addition, in the therapy of RA, NSAIDs and DMARDs are often combined with each other and drug-drug interactions often occur. If an antibiotic agent with the same metabolic pathway is administered, side effects because of enzyme overload are possible; they could be avoided by choosing agents with different metabolic pathways. The NSAID Oxaprozin and the DMARD Leflunomide share the same metabolic pathway via CYP 2C9. Additionally, Leflunomide inhibits CYP 2C8 and 2C9. Administration of Amoxicillin in combination with the antimicrobial drug Metronidazole, which uses the same metabolic pathway as Oxaprozin and Leflunomide and inhibits the CYP as well, could lead to adverse side effects because of enzyme overload. Clindamycin, which is also potent against periodontal pathogens, might be a good alternative (Table 4) [40,41]. Advances in genetic research have enabled genotyping and the analysis of individual data on the expression of target genes and metabolic enzymes. Such expression data in target tissues should be considered in the selection of drugs. The web server presented in this study provides a user-friendly platform enabling medical and dental health professionals to optimize drug choice and combinations regarding the degree of CYP capacity utilization. With respect to increasing evidence of associations between oral and systemic chronic inflammatory diseases, such as PD and RA, knowledge about drug interactions becomes crucial to optimize health care. Conflict of interest There is no actual or potential conflict of interest.
Continuum Surface Energy from a Lattice Model We investigate connections between the continuum and atomistic descriptions of deformable crystals, using certain interesting results from number theory. The energy of a deformed crystal is calculated in the context of a lattice model with general binary interactions in two dimensions. A new bond counting approach is used, which reduces the problem to the lattice point problem of number theory. The main contribution is an explicit formula for the surface energy density as a function of the deformation gradient and boundary normal. The result is valid for a large class of domains, including faceted (polygonal) shapes and regions with piecewise smooth boundaries. Introduction This article is concerned with the derivation of continuum surface energy from a standard lattice model, by exploiting results related to certain lattice point problems of number theory, e.g. [BL, BR, Hu, IKM, Pi]. We study the energy of a crystal, modelled as the part of a Bravais lattice L contained in a reference region Ω ⊂ R^d, with atoms (elements of Ω ∩ L) interacting through a pair potential ϕ. The potential may have unrestricted range but must decay fast enough. The crystal is subjected to a smooth deformation y : Ω → R^d. The energy under consideration is

E{Ω, y} = \sum_{x ∈ Ω ∩ L} \; \sum_{z ∈ (Ω ∩ L) \setminus \{x\}} ϕ(|y(z) − y(x)|).

To approach the continuum limit, one may scale the lattice, i.e., replace L by εL and rescale the potential to ϕ_ε = ϕ(·/ε), then study asymptotics of the energy as ε → 0 [BBL, Mo]. Equivalently, one can rescale the region to rΩ and the deformation to y_r = r y(·/r), with r = 1/ε, but leave L and ϕ unscaled. The emphasis of the present work is on the dependence of the energy on the geometry of the boundary ∂Ω. Our main result is as follows. For the case d = 2, suppose Ω is a convex region with piecewise smooth boundary subject to certain restrictions (spelled out in Proposition 5.2) with outward unit normal n, and that the deformation is homogeneous, y(x) = Fx, x ∈ Ω, for some F ∈ M^{2×2}_+, the set of 2 × 2 matrices with positive determinant. Then the energy satisfies the expansion (1.1) as k → ∞, k ∈ Z. In the right-hand side of (1.1), Ω(k) is a suitably defined region containing the same lattice points as the dilated region kΩ = {z : z = kx, x ∈ Ω}, namely Ω(k) ∩ L = kΩ ∩ L. Our main contribution is the explicit formula (1.2) for the surface energy density γ• : M^{2×2}_+ × S¹ → R. Specifically, when Ω is a lattice polygon, Ω(k) is a certain rational polygon containing kΩ, with sides parallel to those of kΩ, and such that Ω(k) ∩ L = kΩ ∩ L. However, Ω(k) is not a dilation of Ω in general. In this case, the main contribution to the o(k) term in (1.1) is an O(1) corner energy that is obtained exactly. On the other hand, if Ω is a smooth C² strictly convex region, one may choose Ω(k) = kΩ; moreover the result then holds for any real (not only integer) sequence k → ∞; see Proposition 5.2. Our results show that for a large class of regions Ω, if one replaces Ω(k) by kΩ, (1.1) does not hold unless γ• is replaced by another function γ̂ (see (1.7) below), whose dependence on the normal n involves a dense set of discontinuities. The hypotheses of the standard surface energy minimization theorem yielding the Wulff shape (see Dacorogna and Pfister [DP] for the two-dimensional case and Fonseca [Fo] for the three-dimensional version) may not be fulfilled in general. In contrast, γ•(F, n) from (1.2) is Lipschitz in n.
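The energy E{Ω, y} above is straightforward to evaluate numerically for small crystals, which is useful as a sanity check on the expansions that follow. Below is a minimal brute-force sketch for a dilated lattice square under a homogeneous deformation; the Lennard-Jones-type potential and the specific F are illustrative assumptions, since the analysis allows any sufficiently fast-decaying ϕ.

```python
# Brute-force evaluation of E{Omega, y}: sum over ordered pairs x != z in Omega ∩ L
# of phi(|y(z) - y(x)|), for Omega = k*[0,1]^2, L = Z^2, and y(x) = F x.
import itertools
import numpy as np

def phi(r: float) -> float:
    # Assumed Lennard-Jones-type pair potential with minimum at r = 1.
    return r**-12 - 2.0 * r**-6

def energy(points: np.ndarray, F: np.ndarray) -> float:
    y = points @ F.T                        # homogeneous deformation y(x) = F x
    total = 0.0
    for i, j in itertools.permutations(range(len(y)), 2):
        total += phi(np.linalg.norm(y[j] - y[i]))
    return total

k = 10                                      # dilation factor
lattice = np.array(list(itertools.product(range(k + 1), repeat=2)), dtype=float)
F = np.array([[1.0, 0.1],                   # simple shear, det F = 1 > 0
              [0.0, 1.0]])
print(energy(lattice, F))
# Repeating this for several k exhibits the bulk O(k^2), surface O(k) and O(1) terms.
```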
The formula (1.2) for the surface energy density is analogous to the well-known Cauchy-Born formula (1.3) for the stored energy function in the first term of (1.1). In more than one dimension, the first rigorous derivation of continuum energy functions from atomistic models is due to Blanc, Le Bris and Lions [BBL], who study (among other problems) the asymptotics of the energy of a crystal Ω ∩ εL, subject to a prescribed smooth deformation y : Ω → R³ as ε → 0. (Their energy is divided by #(Ω ∩ εL) and uses the rescaled potential ϕ_ε.) The dominant term is the usual elastic energy ∫_Ω W(∇y(x)) dx with W given by (1.3). The next term, of order ε in Theorem 3 of [BBL], is a surface integral over ∂Ω that involves values of the deformation gradient and unit normal. The form of this surface energy is not explicit and it is not clear to what extent it can be expressed as a function of those variables. Terms of order ε² include a volume integral of an explicitly determined higher-gradient energy, but also surface terms; the latter are left unspecified. As shown in one dimension by Mora-Corral [Mo], the higher-order terms in the asymptotic expansion of the energy in powers of ε depend on the choice of the sequence of ε → 0. In Theorem 3 of [BBL], this choice is restricted by the hypothesis that there exists a sequence ε = ε_k → 0 as k → ∞, such that #(Ω ∩ ε_k L) = |Ω|/ε_k^d (in dimension d). Letting r = 1/ε and scaling Ω instead of the lattice, this means that for some sequence r_k → ∞,

#(r_k Ω ∩ L) = |r_k Ω|. (1.4)

In the present work we rely on bond counting arguments instead of asymptotics to a large extent. A byproduct of this approach is an explanation of this sequential dependence issue. Our method hinges on finding, for each w ∈ L, the w-bond number of Ω, defined in (1.5). For large r, the dominant contribution to N_w(rΩ) is #(rΩ ∩ L). Finding the asymptotics of this number as r → ∞ is the lattice point problem of number theory [BR, IKM, Ts]. This reduces to studying the asymptotics of the lattice point remainder R(r) = #(rΩ ∩ L) − |rΩ|, the difference of the two sides of (1.4). In two dimensions, the problem is open for general domains with piecewise C¹ boundary, while even the Gauss circle problem (Ω the unit disk, L = Z²) is not completely settled [Hu]. Through (1.5), the lattice point remainder enters our estimates for the energy E{rΩ, y_r}, whose asymptotic form thus depends on the sequence r_k through R(r_k). This can be problematic as R is discontinuous and highly oscillatory. In general, the behavior of R(r) depends strongly on the shape of ∂Ω. For Ω a lattice polygon (one whose vertices are lattice points), R(r) is of the same order as the surface energy, R(r) = O(r^{d−1}) in dimension d, and can be characterized explicitly; see, e.g., Lemma 2.2 below. For smooth convex domains, as shown by van der Corput [Co], R(r) = O(r^{2/3}), between the orders of the surface and the gradient energy of [BBL], but difficult to characterize [Hu]. Hypothesis (1.4) made by [BBL] is equivalent to the existence of a sequence r_k such that R(r_k) = 0, thus it eliminates certain undesirable higher-order terms from a Riemann sum of the elastic energy. Unfortunately, however, it is not known for which choices of domain Ω such a sequence exists. We adopt an alternative approach that avoids making such a hypothesis. Crystals typically occur in faceted form in their natural state (for instance, the Wulff shape, e.g., [He, DP, Fo]).
This is because of surface energetics affecting crystal growth, but also because cleavage fracture creates new surfaces along special crystallographic planes. This means that they can be modelled as crystallographic polyhedra, whose facets inhabit crystallographic planes (planes that contain a two-dimensional sublattice of L). In Section 2 we assume that Ω is a lattice polytope, i.e., one whose vertices are lattice points. This does not sacrifice too much generality over crystallographic polytopes. Indeed, if Ω is a crystallographic polytope, then kΩ is a lattice polytope for some k ∈ Z. In addition, there is a lattice polytope Ω′ containing the same lattice points as Ω; if Ω is convex, then Ω′ = conv{Ω ∩ L}. In view of Theorem 3 of [BBL], one expects that the dominant surface energy term does not involve higher gradients of the deformation. Accordingly, it suffices to assume that the deformation is homogeneous (affine). To keep the geometry simple, we confine our analysis to two dimensions. Unlike [BBL, Mo], initially we do not employ a limit process, but rather a bond counting technique. The computation of the energy is reduced to that of (1.5). We then show that this calculation reduces to a number of lattice point problems. The solution of the latter for lattice polygons is furnished by Pick's Theorem [Pi]. The lattice point remainder R(k) is known exactly (for k ∈ Z) and contributes to the surface energy explicitly, being of the same order. In Section 3 we compute the energy of polygonal crystals. For an interatomic potential of finite but arbitrary range, we obtain the energy of essentially any convex lattice polygon exactly (Proposition 3.1). This result is not asymptotic and does not suffer from the sequential dependence issue explored in [Mo]. Let the deformation be y(x) = Fx, x ∈ Ω. The energy equals the exact sum of the elastic energy ∫_Ω W(F) dx, plus the surface energy ∫_{∂Ω} γ⋄(F, n̄) ds, plus the corner energy Σ_{i=1}^N τ(F, n_i, n_{i−1}), summed over the N vertices of Ω. The surface energy density is explicitly obtained in (1.6), where n̄ is a normal to ∂Ω whose components on each facet are the Miller indices (irreducible integers) of the corresponding lattice plane, and ϕ is the interatomic potential. The corner energy τ(F, n_i, n_{i−1}) is also explicit but more complicated; apart from F, it depends on the two unit normals of the facets meeting at the ith vertex. For an infinite-range potential this result retains only asymptotic validity for a lattice polygon kΩ as k → ∞; the three energies just mentioned are the first three terms of the asymptotic expansion of the energy for large k (Proposition 3.4). In Section 4, we consider regions with smooth boundaries. Because of its construction based on lattice polygons, the surface energy density (1.6) is only defined for "rational" directions of the surface normal; n = (ν₁, ν₂) ∈ S¹ is called rational if ν₂/ν₁ is a rational number or ν₁ = 0, and irrational otherwise. It is natural to ask how (1.6) can be extended to irrational normals. When Ω is strictly convex and ∂Ω is smooth, for example, the normal is irrational almost everywhere on ∂Ω. We start by letting ∂Ω be of class C² with positive curvature. The key observation is that the convex hull of all lattice points contained in such an Ω is a lattice polygon. This allows us to use number-theoretic results on the asymptotic properties of such hulls due to Bárány and Larman [BL]; see also the survey [IKM]. Perhaps surprisingly, the surface energy density for smooth strictly convex regions (Proposition 4.1) is different from (1.6).
It is given by (1.2), where n is the unit normal to ∂Ω and can take on irrational values. The difference is due to the lattice point remainder [Co, Hu], which is now of lower order than the surface energy. As a result, the asymptotic expression for the energy of inflated regions rΩ is sequence-independent; the sequence of r → ∞ is not restricted to be integer but arbitrary. We then consider more general regions with piecewise C¹ boundary that comprises flat facets as well as curves with positive curvature. For such regions, the surface energy density function, now defined for all n ∈ S¹, is obtained in Proposition 4.4 as (1.7), with γ⋄ from (1.6) and γ• from (1.2). The dependence of the surface energy density on the normal is rather pathological. Specifically, γ̂(F, ·) : S¹ → R is continuous at irrational n, discontinuous at rational n, and almost nowhere differentiable (Proposition 4.6). Because of this, the surface energy density need not satisfy the usual hypotheses of the Wulff theorem (determining the domain that minimizes the surface energy under fixed measure); see e.g. [DP, Fo], but also Remark 5.1. In Section 5, we resolve the difficulties due to the discontinuous dependence of the surface energy on the unit normal. This dependence is due to the behaviour of the lattice point remainder of regions with rational boundary normal. We then alter the region Ω so as to change its measure, but not the lattice points it contains. The goal is that the lattice point remainder of the modified region should be of lower order than the surface energy. For example, if Ω, hence kΩ, is a lattice polygon, translate each side of kΩ outwards by half the distance to the next crystallographic plane with the same normal. This results in a rational polygon Ω(k) that contains the same lattice points as kΩ. Note, however, that Ω(k) is not a rescaling of Ω in general. The lattice point remainder of Ω(k) is O(1) as k → ∞, of lower order than the surface energy. This allows us to write the latter in the form ∫_{∂Ω(k)} γ•(F, n) ds. The associated surface energy density γ•, given by (1.2), is Lipschitz continuous in the unit normal. This and additional considerations discussed in Section 5 show that γ• is the appropriate density for the determination of the Wulff shape that minimizes the surface energy ∫_{∂Ω} γ•(F, n) ds over a suitable class of regions Ω with fixed measure [He, Fo]. A more realistic approach to surface energy would allow for "relaxation" of atomic positions from the macroscopic deformation near the boundary. Such deviations might be determined by minimization of the atomistic energy. This is a formidable problem in the present setting (more than one dimension, general boundary geometry, arbitrary interaction range, nonconvex potentials). One of the few results in this direction is due to Braides and Cicalese [BC]; they obtain the relaxed surface energy in one dimension using Γ-convergence. The result is not explicit and seems difficult to compare quantitatively with the explicit "constrained" energy of Mora-Corral [Mo]. In two dimensions, Theil [Th] calculates the relaxed surface energy of a crystal with quadratic short-range potentials; the result is in the form of a perturbation of the constrained surface energy. In order to obtain quantitative information on the difference between the relaxed and constrained surface energies, numerical optimization of the atomistic energy was recently performed for a completely unconstrained, Lennard-Jones two-dimensional crystal [Ro].
Atomic positions were allowed to relax from initial positions forming a lattice triangle or hexagon with a low-Miller-index boundary. The constrained energy was obtained by minimizing over the deformation gradient matrix of a homogeneous deformation that the atoms are constrained to follow. It was found that the difference between the relaxed and constrained surface energies is typically less than three percent (after the appropriate scaling, and once the bulk energy is accounted for). This suggests that in some situations the relaxed and constrained surface energies may be quite close. In analogous one-dimensional computations, the results agree qualitatively with the conclusions of [BC], while the difference between the relaxed and constrained surface energies is less than one percent. Values of this difference computed in three dimensions using density-functional theory for low-Miller-index surfaces in various metals are usually less than three percent; see, e.g., [EHF]. Many of the results presented here, in particular expressions (1.6) through (1.7) for the surface energy density, are valid for three-dimensional crystals as well [Ro]. The Bond Counting Approach For subsets P, Q of Rⁿ, define the Minkowski sum P ⊕ Q = {p + q : p ∈ P, q ∈ Q}. The lattice is L = Z² unless otherwise noted. Remark 2.1. All of our results can be immediately adapted to any Bravais lattice L* by incorporating the linear mapping from L onto L* into the deformation. Then expressions like (1.2) remain valid if L is replaced by L*, provided the linear mapping from L onto L* has unit Jacobian determinant. We assume that the reference region Ω ⊂ R² is a convex body, that is, a compact convex set with nonempty interior. B_w(Ω), defined in (2.1), is the set of all w-bonds of Ω (bonds with bond vector w); since Ω is convex, a bond is contained in Ω precisely when Ω contains both its endpoints. We write b(x, w) for the w-bond starting at x. The energy of the homogeneous deformation y(x) = Fx can then be written as (2.2). The factor of 1/2 occurs since b(x, w) = b(x + w, −w) and the potential ϕ is even in w. Interchanging the order of summation above, we obtain (2.3). Evidently, in order to determine the energy, it suffices to calculate, for each w ∈ L, the w-bond number of Ω, i.e., N_w(Ω) = #B_w(Ω); see (2.1). Clearly the number of w-bonds "starting" in Ω equals the number of lattice points of Ω, cf. (2.4). Some of these bonds are not contained in B_w(Ω). Define S⁺_w by (2.5), so that S⁺_w is the part of ∂Ω through which w points outwards. Denote by T†_w(Ω) the set of all w-bonds that intersect S⁺_w and terminate outside Ω, cf. (2.6). Some of these bonds "straddle" Ω, that is, have both endpoints outside Ω but intersect ∂Ω; the set T‡_w(Ω) of such bonds is specified in (2.7). Then, in view of (2.4), one obtains the splitting (2.8). Roughly speaking, the number of w-bonds in Ω equals the number of lattice points in it, minus the number of bonds that traverse the boundary at least once, plus the number of bonds that traverse the boundary twice. The reason for the splitting (2.8) is that each term can be evaluated using results from geometric number theory. One important case we will consider is when Ω ⊂ R² is a convex lattice polygon. The number of lattice points in Ω, #(Ω ∩ L), is addressed by Pick's Theorem [Pi, Re, BR], a variant of which is the following Lemma 2.2. Let Ω be a simple closed lattice polygon with facets S_i and outward Miller normals n̄_i; then (2.9) holds. Equivalently, letting θ_i be the (dihedral) angle between the normals of facets meeting at the ith vertex, (2.10) holds. Proof. Pick's Theorem [Pi, Re] states (2.11) (since Ω is closed). Two neighbouring lattice points in a facet S_i differ by some m̄_i ∈ L with relatively prime components.
Now the Miller normal n̄_i = m̄_i^⊥, so that |n̄_i| = |m̄_i|, and (2.9) follows. Also, (2.10) is a trivial consequence of (2.9), given that the sum in (2.10) equals 1. Remark 2.3. The shape of naturally occurring crystals is very often faceted (polyhedral). Thus one might start by assuming that Ω is a polygon, though not necessarily a lattice polygon. If Ω is a crystallographic polygon, so that its facets are contained in crystallographic lines, then its vertices need not be lattice points. However, one can then show that there is some integer k such that kΩ is a lattice polygon. Remark 2.4. Eq. (2.10) has an interesting interpretation. It exactly equates a discrete quantity (the number of atoms in Ω) with a continuum expression: the "volume" integral of a bulk density, plus the "surface" integral of a surface density, plus contributions of corners. We will show in the sequel that both the w-bond number N_w(Ω) and the energy admit analogous representations. The first term in (2.8) is given by (2.9). Turning to the second term, let P_i(w) be the parallelogram b₀ ⊕ S_i, with two parallel sides S_i and w + S_i, if w · n_i > 0, and P_i(w) = ∅ otherwise. Then it is easy to see that (2.14) holds. In general, P(w) is not convex. However, if one defines Ω_w as in (2.15), then Ω_w is a convex lattice polygon, being the Minkowski sum of two such sets. In fact, (2.16) holds; also P(w) = Ω_w \ Ω, while Ω ⊂ Ω_w. This and (2.14) imply (2.17). The right-hand side can be evaluated using Lemma 2.2 for each term. Note that ∂Ω_w comprises ∂Ω \ S⁺_w, w + S⁺_w and two w-bonds joining these two pieces. In view of (2.12) the result is (2.18), where ⟨x⟩ = (x + |x|)/2 for x ∈ R and n = n_i on S_i is the unit outward normal to ∂Ω. Here |b₀|/|w̄| = gcd(w). We will show next that for |w| small enough compared to the facets of Ω, Q(w) consists of one or two triangles, each having a vertex at one of the two ends of the simple polygonal line S⁺_w. For example, if Ω = [0, 3]² and w = (1, 1), Q(w) consists of the triangle with vertices (0, 3), (1, 4) and (1, 3) and its image under reflection about the (1, 1)-axis. Any b ∈ T‡_w(Ω) intersects two different facets of ∂Ω by (2.7). Let δ be defined by (2.20), where v_i ∈ Z² are the vertices of Ω. The shortest line segment with endpoints on non-adjacent facets has length δ. If |w| < δ, then b ∈ T‡_w(Ω) necessarily intersects two adjacent facets, say S_i and S_{i−1}, meeting at some vertex v_i, with outward normals n_i, n_{i−1} (where n₀ = n_N). Since both endpoints of b are outside Ω, w · n_i and w · n_{i−1} must have opposite signs. In case w · n_i > 0 and w · n_{i−1} < 0, x + w is in the triangle with vertices v_i, v_i + w and the intersection of S_i and w + S_{i−1}, which is therefore part of Q(w). If the reverse inequality holds, the triangle with vertices v_i, v_i + w and the intersection of w + S_i and S_{i−1} is part of Q(w). Regarding the lattice point count, both cases reduce to the triangle of (2.21), with base b₀ and sides normal to n_i and n_{i−1}. In addition, the relative interior of the base b(v_i, w) of the triangle with endpoints v_i, v_i + w is also part of Q(w) and contains gcd(w) − 1 lattice points. Consequently, if |w| < δ, (2.22) holds. Unfortunately, T(w, n_i, n_{i−1}) is not a lattice polygon in general, since q need not have integer coordinates, and Lemma 2.2 does not apply. Instead, we count the lattice points inside the triangle more directly: Lemma 2.5. Suppose (w · n_i)(w · n_{i−1}) < 0 and let T = T(w, n_i, n_{i−1}) ⊂ R² be the triangle of (2.21). Let u ∈ Z² be such that {u, w̄} is a lattice basis for Z².
Then #(T ∩ L) = N_T(w, n_i, n_{i−1}), given by (2.23), where, for w ∈ Z² and unit n, m ∈ R² with (w · n)(w · m) < 0, the auxiliary count s is defined in (2.24). Proof. Let n = n_i, m = n_{i−1}. Since in (2.21) q · n = 0, q = λn^⊥ for some λ ∈ R. Then solving (q − w) · m = 0 for λ gives q as in the second of (2.23). Let w̄ = w/gcd(w) = (w̄₁, w̄₂) and suppose u = (u₁, u₂) ∈ Z² solves u · w̄^⊥ = 1, or w̄₂u₁ − w̄₁u₂ = 1. This is solvable by Bézout's Lemma since gcd(w̄₁, w̄₂) = 1. Then the matrix A = col(u, w̄) has unit determinant u · w̄^⊥ = 1 and integer entries, hence so does A⁻¹ = row(w̄^⊥, u^⊥). As a result {u, w̄} is a lattice basis for Z², while the linear transformation with matrix A⁻¹ is lattice invariant. Now T′ = A⁻¹T has vertices 0, (0, k) ∈ Z² and p = (α, β), where k = gcd(w), (α, β) = (q · w̄^⊥, q · u^⊥); (2.25) in general p is not a lattice point. Suppose for the moment that α > 0. For x ∈ R let ⌊x⌋′ be the greatest integer strictly less than x and ⌈x⌉′ the least integer strictly greater than x. Then the number of lattice points on a segment {(x₁, x₂) : x₁ = j, µ < x₂ < ν}, where j ∈ Z and µ < ν ∈ R, equals ⌊ν⌋′ − ⌈µ⌉′ + 1. Hence #(T′ ∩ L) is a sum of such column counts. Since ⌊x⌋′ = ⌈x⌉ − 1 and ⌈x⌉′ = ⌊x⌋ + 1, the above reduces to s(α, β, k) in (2.24). It then follows from (2.25) and (2.23) that #(T′ ∩ L) = N_T(w, n, m). The linear transformation with matrix A is lattice invariant and thus #(AT′ ∩ L) = #(T′ ∩ L) [BP], while AT′ = T. In case α < 0, reflect T′ by replacing α by |α|. If α = 0 then T′ = ∅. This together with (2.22) gives (2.26). To obtain an expression for the w-bond number of Ω, merely substitute (2.9), (2.18) and (2.26) into (2.8) and rearrange. Observe that for a given bond vector w, N_w(Ω) is completely determined by the area |Ω|, the lengths |S_i| of the facets, and their orientations through the Miller normals n̄_i: Lemma 2.6. Suppose |w| < δ, cf. (2.20). Then the w-bond number of Ω is given by (2.27). Remark 2.7. The above can readily be written in a form similar to (2.10): as a bulk integral, plus a "surface" integral, plus corner contributions, for suitable normal-dependent densities g and h. See also Remark 2.4. The present approach of counting bonds has certain similarities with the bond density lemma of Shapeev [Sh]. Surface Energy of Lattice Polygons We are now in a position to compute the energy. Consider first a finite-range potential that only involves bonds within a bounded set. Let the bond range R ⊂ L \ {0} be symmetric, so that w ∈ R ⟹ −w ∈ R. Allow the interatomic potential ϕ_w(·) to depend explicitly on w, require (3.1), and define the energy (3.2) of the homogeneous deformation y(x) = Fx, x ∈ Ω, where ϕ_w : (0, ∞) → R is not restricted to be regular in any way. Proposition 3.1. For F ∈ M^{2×2}_+, m̄ ∈ Z² and unit n, m ∈ R², define the stored energy function (3.3), the surface energy density function (3.4) and the vertex energy function (3.5), where H_{n,m} is the sector step function and θ(n, m) is the angle between n and m, while N_T is defined in Lemma 2.5. Suppose the bond range R is bounded with max_{w∈R} |w| < δ, cf. (2.20). Let n̄ = n̄_i on S_i. Then the resulting expression for the energy, the sum of bulk, surface and vertex terms, is exact. Proof. As in the argument leading to (2.3), one can write (3.2) as a sum over bond vectors w. By the hypothesis on R, Lemma 2.6 holds for all w ∈ R. Multiply (2.27) by ϕ_w(|Fw|) and sum the result over w ∈ R.
Interchange the order of summations, noting that Σ_{w∈R} ⟨w · n̄_i⟩ ϕ_w(|Fw|) = (1/2) Σ_{w∈R} |w · n̄_i| ϕ_w(|Fw|) by the symmetry of R and the first of (3.1); also that the sum of the (dihedral) angles between the normals of facets meeting at vertices satisfies Σ_{i=1}^N θ(n_i, n_{i−1}) = 1; and finally that summation over w in the sector of R where (w · n)(w · m) < 0 can be replaced by summation over R provided the summand is multiplied by H_{n,m}(w). Remark 3.2. The above result is not asymptotic but exact, since we have made no use of asymptotics so far. It applies to convex lattice polygons that are arbitrary apart from the restriction that the bond range is smaller than the characteristic size δ of (2.20). Next we consider infinite-range potentials, where R = L \ {0}. We seek the energy of the kth dilation kΩ of the region Ω, k ∈ Z₊. Here we have no choice but to let k be an integer; otherwise kΩ is not a lattice polygon in general. The following will be useful. For convenience we suppose that the interatomic potential ϕ_w(·) = ϕ(·) (it does not explicitly depend on w), although this is not essential. Proposition 3.4. Suppose the interatomic potential ϕ : (0, ∞) → R satisfies the following: for each r₀ > 0 and for some constants C = C(r₀) and d > 2, |ϕ(r)| < Cr^{−(2+d)} for r ∈ [r₀, ∞) (3.7). Then the expansion (3.8) holds as k → ∞. Proof. Note that δ(kΩ) = kδ(Ω) = kδ in (2.20), so that Lemma 2.6 for kΩ holds provided w ∈ R_k, cf. (3.9). Split the energy as in (3.10). Now it is clear that for any w ∈ L and k ∈ Z₊, N_w(kΩ) < Ck² for some constant C > 0, since all bonds within kΩ start in kΩ and by Lemma 2.2 applied to kΩ (the dominant term in (2.9) would be |kΩ| = k²|Ω|). This provides a bound (3.11) for the second term in (3.10), where we invoked (3.7); here α > 0 is such that |Fz| > α|z| for all z ∈ R², we used Lemma 3.3 with ρ = kδ and p = d, and C is a generic constant with possibly different values each time it appears. The first term in (3.10) is covered by Proposition 3.1 applied to kΩ, since w ∈ R_k means |w| < kδ = δ(kΩ). Noting that |kΩ| = k²|Ω| and |kS_i| = k|S_i|, Proposition 3.1 implies (3.12), where W_k, γ_k and τ_k are given by (3.3), (3.4) and (3.5) with R_k in place of R; see (3.9). Recalling that W, γ⋄ and τ are defined by the same equations with R = L \ {0}, using Lemma 3.3 with M = 2, we may estimate the differences (3.13) (omitting arguments). We only demonstrate the third of these, the others being easier. Recall that in (3.5), N_T is the number of lattice points in the interior of a certain triangle T whose area is bounded above by C|w|², cf. Lemma 2.5. By Pick's Theorem (2.11) (applied to the lattice parallelogram of smallest area A containing T and having the same base), the area A exceeds N_T, hence N_T < C|w|². Also gcd(w) ≤ |w| and |H_{n,m}| ≤ 1, hence we have from (3.5), Σ_{w∈Z², |w|>kδ} C|w|² |ϕ(|Fw|)| < C Σ_{w∈Z², |w|>kδ} |w|^{−d} < Ck^{2−d}, proceeding as in (3.11). By (3.13), replacing W, γ⋄ and τ by W_k, γ_k and τ_k in (3.12) produces an error of O(k^{2−d}). Combine this with (3.11) and (3.10) to obtain (3.8). Surface Energy for More General Boundaries We examine the surface energy density function γ⋄ in (3.4) more closely, paying attention to its dependence on the surface normal. Due to its construction, γ⋄(F, ·) : M̄ → R is defined only for "rational directions", that is, on the set of Miller normals M̄ = {n̄ = (ν₁, ν₂) ∈ Z² : gcd(ν₁, ν₂) = 1} of (4.1). The first term (involving the sum) reduces to a function of the unit normal n, and trivially admits a unique continuous extension onto the whole of the unit circle S¹. There is no such extension for the second term.
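For a rational direction given by its integer components, the associated Miller normal n̄ ∈ M̄ is obtained simply by dividing out the gcd. A small sketch (the function name is ours, and the input is assumed to be a nonzero integer vector):

```python
# Reduce an integer normal (p, q) != (0, 0) to Miller form: components with gcd 1.
from math import gcd

def miller_normal(p: int, q: int) -> tuple[int, int]:
    g = gcd(abs(p), abs(q))
    return (p // g, q // g)

n_bar = miller_normal(4, 6)
print(n_bar)   # (2, 3) -- an element of the set of Miller normals, cf. (4.1)
```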
Surface Energy for More General Boundaries

We examine the surface energy density function γ⋄ in (3.4) more closely, paying attention to its dependence on the surface normal. Due to its construction, γ⋄(F, ·) : M̂ → R is defined only for "rational directions", that is, on the set of Miller normals M̂ = {n̂ : n̂ = (ν₁, ν₂) ∈ Z², gcd(ν₁, ν₂) = 1}. The first term (involving the sum) reduces to a function of the unit normal n, and trivially admits a unique continuous extension onto the whole of the unit circle S¹. There is no such extension for the second term.

Define the rational and irrational direction sets as

S¹_R = {n ∈ S¹ : n = n̂/|n̂| for some n̂ ∈ M̂}, S¹_I = S¹ \ S¹_R, (4.3)

respectively, where M̂ is defined in (4.1). Thus a vector is rational (irrational) if the tangent of the angle it makes with the usual basis vectors is rational (irrational). Since facets of lattice polygons have rational normals, the surface energy density γ⋄ is defined only for such directions. Note that for each n ∈ S¹_R there is a unique n̂ = n̂(n) ∈ M̂ with n̂/|n̂| = n. The question arises as to how one can extend the definition of γ̃⋄(F, n) = γ⋄(F, |n̂(n)|n), n ∈ S¹_R, to the whole of S¹. This is related to another question: what is the surface energy when ∂Ω is smooth, for example ∂Ω = S¹? It turns out that this question can be answered, at least partially, using the present approach. The basic idea is that even if ∂Ω is not polygonal, but smooth, the convex hull of all lattice points inside Ω is a convex lattice polygon.

Proposition 4.1. Let Ω ⊂ R² be strictly convex and ∂Ω be C² with positive curvature. Suppose ϕ is as in Proposition 3.4, but with d > 3. Define the reduced surface energy density γ• by (4.4). Then for any sequence r = r_k → ∞ as k → ∞ (r_k ∈ R⁺, k ∈ Z⁺),

E{rΩ, y} = r² ∫_Ω W(F) dx + r ∫_∂Ω γ•(F, n) ds + O(r^{2/3}), (4.5)

where n : ∂Ω → S¹ is the unit outward normal to ∂Ω.

Remark 4.2. This asymptotic result for inflated regions rΩ is sequence-independent; that is, the sequence r → ∞ is not restricted to be integer but is arbitrary. This occurs because the lattice point remainder R(r) = O(r^{2/3}) [Co, Hu] is of lower order than the surface energy. In contrast, the surface energy for lattice polygons, or for the more general regions considered in Proposition 4.4, depends on the sequence of dilation factors. The dependence on the dilation sequence is thoroughly studied in one dimension in [Mo].

Proof. For each r > 0, let Ω_r = conv(rΩ ∩ L). Then Ω_r ⊂ rΩ is a convex lattice polygon, while rΩ ∩ L = Ω_r ∩ L. Hence, in view of (2.2), E{rΩ, y} = E{Ω_r, y}, where y(x) = Fx for x ∈ rΩ. The calculation of E{Ω_r, y} proceeds as above with one exception. Since Ω_r ⊂ rΩ, and rΩ \ Ω_r contains no lattice points, (2.19) and (2.16) imply

Q_w(Ω_r) ⊂ Q_w(rΩ). (4.7)

Let q, q′ ∈ ∂(rΩ) be the two points of ∂(rΩ) where the tangent vector is parallel to w, and let B_rρ ⊂ rΩ be a disk with ∂B_rρ tangent to ∂(rΩ) at q, where ρ is the smallest radius of curvature of ∂Ω. Also let B′_rρ ⊂ rΩ be a similar disk tangent to ∂(rΩ) at q′. Then for r large enough it is easy to see that (4.8) holds. The connected component of Q_w(B_rρ) containing q is contained inside an isosceles triangle with base a w-bond (of length |w|) and height the distance from the base midpoint to the intersection of the two circles ∂B_rρ and w + ∂B_rρ; these are tangent to the base at its endpoints. The triangle height is thus bounded by C(r)|w|, where C(r) approaches zero for large r. A crude but sufficient upper bound on the lattice point count of this set, hence also on the right-hand side of (4.8), is C|w|², with C independent of r. In view of (4.7), #T‡_w(Ω_r) is also bounded by C|w|². This estimate replaces the sum over vertices (the second sum) in (2.27). Since Σ_{w∈L\{0}} |w|^p ϕ(|Fw|) is absolutely convergent for p = 0, 1, 2, as one infers from Lemma 3.3, it follows that (4.9) holds. Here we have used (4.2) and (4.4), then (2.10), in which the last term (the sum) equals 1, together with the fact that rΩ ∩ L = Ω_r ∩ L. We turn to ∫_{∂Ω_r} γ•(F, n) ds. Recalling (4.4), a typical term involves

∫_{∂Ω_r} |w · n| ds = 2|Proj_{w⊥} ∂Ω_r| |w|, (4.10)

|Proj_{w⊥} ∂Ω_r| being the length of the projection of ∂Ω_r onto a line perpendicular to w.
This follows after splitting ∂Ω_r into two pieces, on which w · n ≥ 0 and w · n ≤ 0 respectively, and using the Divergence Theorem on each. Next, we show that

|Proj_{w⊥} ∂(rΩ)| − |Proj_{w⊥} ∂Ω_r| ≤ C|w|², (4.11)

where the constant C is independent of r > 1 and w. There are lattice points z−, z+ ∈ ∂Ω_r ∩ L such that Ω_r lies entirely between lattice lines l−, l+ with normal w⊥ and containing z−, z+, respectively. Consider the part of ∂(rΩ) that lies outside the strip bounded by l+ and l−. It consists of two disjoint arcs, one to the "right" of l+ and the other to the "left" of l−. The length of the projections of these two arcs onto the w⊥ axis equals the difference in (4.11). Let c+ be the arc to the right of l+ (with endpoints in l+). Let s be the region bounded by c+ and l+. The only lattice points it contains are in l+; this is true since rΩ \ Ω_r is free of lattice points. By the strict convexity of rΩ, there is a unique q ∈ c+ where the normal to c+ is w⊥. Consider the osculating circle of c+ at q. Let s′ ⊂ s be the portion of the osculating disk contained in s; it is a circular segment whose height (in the direction w⊥) equals the thickness of s (the length of its projection onto a line along w⊥). The radius of the circle is rρ for some ρ > 0. There are two possibilities: either s′ lies between l+ and the next lattice line l′ with normal w⊥ to the right of l+, or it extends beyond l′ to the right. In the first case the height of the segment s′ is at most 1/|w|, the distance between adjacent lattice lines with normal w⊥. In the second case, let s′′ be the portion of s′ to the right of l′. Then s′′ is also a circular segment and free of lattice points. Suppose its chord length is c and its height is h. Since the radius of the circular arc is rρ, we have h² − 2rρh + c²/4 = 0. Solving this for h/(rρ) and using the inequality 1 − √(1 − x) < x for 0 < x < 1 yields h < c²/(4rρ). Now since the circular segment s′′ is free of lattice points and its chord is in l′, the chord length satisfies c < |w̃| ≤ |w|, since the distance between adjacent lattice points in l′ is |w̃|, where w̃ = w/gcd(w). Hence h < |w|²/(4rρ). The total height of the larger circular segment s′ is at most h + 1/|w|, which is thus bounded by C|w|² for r ≥ 1. The thickness of s in the direction normal to w is the same as this height. This shows (4.11).

According to Proposition 4.1, when ∂Ω is smooth and strictly convex, so that the normal vector is irrational almost everywhere on ∂Ω, the surface energy density is given by (4.4); in contrast, for lattice polygons (with rational normal a.e. on ∂Ω), the surface energy density is given by (4.2). This suggests combining the two expressions in defining a surface energy density for all values of the unit normal. That will allow us to treat a more general case with Ω a (not necessarily strictly) convex body. We do place some restrictions on ∂Ω: flat parts of ∂Ω must be lattice segments (with rational normals), and corners have to be lattice points. We now state the main result of this section:

Proposition 4.4. Assume that Ω is a convex body with ∂Ω Lipschitz, and that there is a finite set of corner points {v₁, ..., v_N} ⊂ ∂Ω ∩ L such that each piece S_i of ∂Ω between consecutive corners is a C² curve and one of the following two alternatives holds: (i) S_i has strictly positive curvature, or (ii) S_i is a straight lattice segment. Suppose ϕ is as in Proposition 3.4, but with d > 3. Define the extended surface energy density γ̂(F, ·) : S¹ → R as in (4.15), with γ• defined in (4.4) and S¹_R, S¹_I defined in (4.3). Then, as k → ∞, k ∈ Z⁺,

E{kΩ, y} = k² ∫_Ω W(F) dx + k ∫_∂Ω γ̂(F, n) ds + O(k^{2/3}), (4.16)

where n : ∂Ω → S¹ is the unit outward normal to ∂Ω.

Proof. We now choose r = k ∈ Z⁺ and let Ω_k = conv(kΩ ∩ L).
The part of the proof of Proposition 4.1 prior to (4.9) is easily adapted to the present setting, so that once again, as k → ∞, (4.17) holds with γ⋄ as in (4.2). Let ∂Ω_f be the union of those S_i that are straight segments and ∂Ω_c the union of the S_i with positive curvature, so that ∂Ω = ∂Ω_f ∪ ∂Ω_c. By hypothesis, for k ∈ Z⁺ we have kv_i ∈ ∂(kΩ) ∩ L, hence (4.18) holds. Our hypotheses regarding ∂Ω_c, specifically alternative (i), ensure that n ∈ S¹_I a.e. on k∂Ω_c, while (ii) implies that n ∈ S¹_R a.e. on k∂Ω_f. Using (4.15), rewrite the above as (4.19). It remains to show that R̂(k) = O(k^{2/3}) as k → ∞, k ∈ Z⁺. Let i ∈ J_c, so that S_i satisfies alternative (i) in the statement of Proposition 4.4. Let S^k_i be the portion of ∂Ω_k between kv_i and kv_{i+1}, i.e., terminating at these two points and containing no other kv_j. Let the strictly convex body D_i be such that ∂D_i = Γ_i. Let G^k_i be the bounded region whose boundary is kS_i ∪ S^k_i; this is well defined since both curves terminate at kv_i and kv_{i+1}. Then (4.20) holds in view of (4.14) applied to D_i for r = k ∈ Z⁺. Next, note that S^k_i ⊂ ∂D^k_i. As a result, the estimate (4.21) holds by Lemma 4.3 with D = D_i. Next, we turn to the difference of the last two integrals in (4.19). Recalling (4.4), we write this as a double sum over i ∈ J_c and w ∈ L \ {0}, where n is the outward unit normal to k∂Ω and ∂Ω_k in the first two integrals, while ñ is the outward normal in the remaining one; the estimate for each term follows from (4.12) by replacing the Ω of Proposition 4.1 by D_i, with the constant C independent of k. Since the sum Σ_{w∈L\{0}} |w|³ ϕ(|Fw|) converges absolutely by hypothesis, so does the double sum in the previous equation; therefore the difference of the last two integrals in (4.19) is controlled as well. This together with (4.20) and (4.21) shows that R̂(k) = O(k^{2/3}). The normal is irrational a.e. on ∂Ω_c. Consequently ∫_{k∂Ω_c} γ• ds = ∫_{k∂Ω_c} γ̂ ds = k ∫_{∂Ω_c} γ̂ ds, and (4.16) follows from (4.18), since (4.6) holds.

Remark 4.5. It is interesting that in cases where the normal is rational on a subset of ∂Ω of positive measure, the dilation factors are required to be integers. In contrast, the result of Proposition 4.1 (where the normal is irrational almost everywhere on ∂Ω) is independent of the sequence of dilation factors. In one dimension it is known [Mo] that the coefficients in the asymptotic expansion of the energy depend on this sequence. It should be kept in mind that there is no counterpart in one dimension of an irrational surface, which is a purely higher-dimensional occurrence. The reason for the difference between the rational and irrational cases is the order of the lattice point remainder term.
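The contrast described in Remark 4.5 is easy to observe numerically. The sketch below (ours, not from the paper) compares the lattice point remainder #(rΩ ∩ L) − |rΩ| for a disk, whose normal is irrational a.e. and whose remainder is O(r^{2/3}) by the classical bounds cited as [Co, Hu], with that of the diamond |x₁| + |x₂| ≤ r, whose facets have rational normals and whose remainder is of order r.

```python
import numpy as np

def remainder_disk(r):
    # #(rB ∩ Z^2) - area(rB) for the unit disk B
    n = int(np.floor(r))
    count = sum(2 * int(np.floor(np.sqrt(r * r - x * x))) + 1 for x in range(-n, n + 1))
    return count - np.pi * r * r

def remainder_diamond(r):
    # #(rD ∩ Z^2) - area(rD) for the diamond D = {|x1| + |x2| <= 1}
    n = int(np.floor(r))
    count = sum(2 * int(np.floor(r - abs(x))) + 1 for x in range(-n, n + 1))
    return count - 2 * r * r

for r in [10.0, 100.0, 1000.0]:
    print(f"r = {r:6.0f}:  disk R/r^(2/3) = {remainder_disk(r)/r**(2/3):7.3f},"
          f"  diamond R/r = {remainder_diamond(r)/r:7.3f}")
# The disk ratio stays bounded, while the diamond remainder grows like 2r,
# reflecting the rational normals of its facets.
```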
Proofs of the Wulff theorem associated with surface energy minimisation [Fo] over domains of given measure typically rely on continuity of the surface energy density with respect to the unit normal; see [DP] and Remark 5.1 for a weaker alternative. Perhaps surprisingly, the extended surface energy density γ̂(F, ·) : S¹ → R exhibits a dense set of discontinuities, as we show next.

Proposition 4.6. Suppose ϕ is as in Proposition 3.4 and fix F ∈ M²ˣ²₊. Then (i) γ•(F, ·) : S¹ → R is Lipschitz continuous, and (ii) γ̂(F, ·) : S¹ → R defined in (4.15) is continuous at n ∈ S¹_I, discontinuous at n ∈ S¹_R, and differentiable at most on a subset of S¹_I of measure zero.

Proof. Arrange the elements of L \ {0} in a sequence {w_j}, j = 1, 2, ..., such that |w_{j+1}| ≥ |w_j|, and define g_j(n) = (−1/4)ϕ(|Fw_j|)|w_j · n| for n ∈ S¹. Then clearly g_j : S¹ → R is Lipschitz on S¹ and (formally for the moment) γ•(F, n) = Σ_{j=1}^∞ g_j(n). Now since |g_j| ≤ M_j = |ϕ(|Fw_j|)||w_j| on S¹ and the series Σ_{j=1}^∞ M_j = Σ_{w∈L\{0}} |ϕ(|Fw|)||w| converges in view of Lemma 3.3, the partial sums G_k(n) = Σ_{j=1}^k g_j(n) converge uniformly as k → ∞ to γ•(F, n) on S¹ by the Weierstrass M-test. Since n ↦ |w · n|, n ∈ S¹, is Lipschitz with constant |w|, the Lipschitz constant of G_k is bounded uniformly in k. The uniform convergence of the G_k, together with the uniform bound on their Lipschitz constants, guarantees that the limit function γ•(F, ·) is also Lipschitz on S¹, and (i) holds.

To show (ii), consider the function h : S¹ → R that assigns to each rational direction the reciprocal of twice the length of its Miller normal and vanishes elsewhere. In other words, letting n = (ν₁, ν₂) ∈ S¹,

h(n) = 1/(2|n̂(n)|) if n ∈ S¹_R, h(n) = 0 otherwise. (4.22)

Then one has

γ̂(F, n) = γ•(F, n) + W(F) h(n). (4.23)

By (i), it suffices to prove that h is continuous at irrational n and discontinuous at rational n to show the continuity part of (ii). In fact, h is very similar to the Thomae function T(x) = 1/q for x = p/q, p, q coprime integers (x rational), and zero for x irrational; see e.g. Proposition 4.1 in [Sa]. Adapting these results to h is trivial in view of (4.22). Thus h is continuous at irrational n and discontinuous at rational n, and so is γ̂(F, ·). Also h is nowhere differentiable, by a simple adaptation of Proposition 6.1 of [Sa]. Since γ• is Lipschitz by part (i), it is differentiable a.e. on S¹ by the Rademacher theorem. Then γ̂(F, ·) fails a.e. to be differentiable, by (4.23). Also it is not differentiable at rational n, as it is not continuous there.
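A quick numerical illustration of Proposition 4.6 (ours; a truncated lattice sum stands in for the infinite one, and a Lennard-Jones ϕ is used as a stand-in potential): γ•(F, n) = −(1/4) Σ ϕ(|Fw|)|w · n| varies smoothly with the direction, while γ̂ jumps by W(F)/(2|n̂|) at each rational direction n̂/|n̂|.

```python
import numpy as np
from math import gcd, cos, sin

def phi(r):
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # stand-in potential

F = np.eye(2)
M = 30                                                 # lattice-sum truncation radius
L = [np.array((i, j)) for i in range(-M, M + 1) for j in range(-M, M + 1) if (i, j) != (0, 0)]
W = 0.5 * sum(phi(np.linalg.norm(F @ w)) for w in L)   # bulk density, cf. (3.3)

def gamma_red(n):
    # reduced surface energy density, cf. (4.4): Lipschitz in n
    return -0.25 * sum(phi(np.linalg.norm(F @ w)) * abs(float(w @ n)) for w in L)

def gamma_ext(nhat=None, theta=None):
    """Extended density (4.15): pass a primitive integer normal nhat for a rational
    direction, or an angle theta for an (effectively) irrational one."""
    if nhat is not None:
        g = gcd(abs(nhat[0]), abs(nhat[1]))
        nhat = np.array(nhat) // g
        n = nhat / np.linalg.norm(nhat)
        return gamma_red(n) + W / (2.0 * np.linalg.norm(nhat))   # Thomae-like jump term
    return gamma_red(np.array((cos(theta), sin(theta))))

print("gamma_ext at n = (1,0):          ", gamma_ext(nhat=(1, 0)))
print("gamma_ext just off (1,0):        ", gamma_ext(theta=1e-3))
print("gamma_ext at n = (1,1)/sqrt(2):  ", gamma_ext(nhat=(1, 1)))
print("jump at (1,0) should be W/2 =    ", W / 2.0)
```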
A Continuous Surface Energy Density

There are two issues associated with the surface energy density γ̂. The first is the lack of continuity of γ̂(F, ·). This suggests that the surface energy minimisation problem, that of minimising the integral ∫_∂Ω γ̂(F, n) ds over a suitable class of regions Ω with |Ω| fixed, may actually be ill posed.

Remark 5.1. The standard hypothesis for surface energy minimisation in three dimensions is continuity of γ̂(F, ·) [Fo]. However, in two dimensions, as shown by Dacorogna and Pfister [DP], lower semicontinuity of γ̂(F, ·) suffices. It is easy to show from (4.15) and Proposition 4.6 that γ̂(F, ·) is indeed lower semicontinuous provided W(F) < 0. The latter inequality is not unreasonable; for example, it is satisfied for values of F near the minimum of W(F) when the latter is given by (1.3) with ϕ a standard Lennard-Jones potential.

The second issue is that the surface energy minimisation problem with density γ̂(F, ·) is not physically appropriate, since fixing |Ω| is not the same as fixing the total mass, or equivalently, the number #(Ω ∩ L) of lattice points of Ω. If the minimisation were over the class of lattice polygons with fixed lattice point number, the appropriate constraint would fix |Ω| + ∫_∂Ω 1/(2|n̂|) ds instead of |Ω|, by virtue of Lemma 2.2. For a lattice polygon, the lattice point remainder R(k) = #(kΩ ∩ L) − |kΩ| can be written as

R(k) = k ∫_∂Ω 1/(2|n̂|) ds + 1, (5.1)

using Lemma 2.2. It seems that R is implicated in both issues raised above. Being O(k), it contributes to the surface energy and gives rise to the term (1/(2|n̂|))W(F) in (4.15), (4.23), which is the one responsible for the lack of continuity of γ̂. Also, surface energy minimisation over domains of fixed measure would seem to make physical sense only if their lattice point remainder #(Ω ∩ L) − |Ω| vanishes, so that constraining |Ω| fixes the lattice point number, hence the mass. One way to ensure this might be to seek a sequence of dilation factors r_k ∈ R satisfying condition (1.4) imposed by [BBL], i.e., R(r_k) = 0. It is not clear for what choices of Ω this is possible, and we modify this approach in two ways.

First, we relax the condition R(r_k) = 0 and require instead that there is a sequence r_k such that

R(r_k) = o(r_k), (5.2)

so that the lattice point remainder is of lower order than the surface energy, which is O(r_k). This is satisfied for the smooth regions with positive boundary curvature of Section 4, where the fact that R(r) = O(r^{2/3}) for any real sequence r → ∞ was exploited in proving Proposition 4.1. As a result, the density γ• in (4.4) is continuous in the unit normal upon extension to the whole of S¹; see Proposition 4.6(i).

Second, in case Ω is a lattice polygon, or the "mixed" region of Proposition 4.4, we rewrite the energy in terms of an "equivalent" region Ω(k) containing the same lattice points as the scaled region kΩ. Accordingly, from (2.2) it is clear that E{kΩ, y} = E{Ω(k), y}. Observe that, given the set of atoms within a convex region kΩ, there is some freedom in choosing an alternative convex region Ω(k) containing precisely the same atoms. By choosing Ω(k) in a specific way, we can ensure that the lattice point remainder of Ω(k) is of lower order than the surface energy. For lattice polygons this can be done as follows. The "interplanar" distance between adjacent parallel lattice lines with Miller normal n̂ is 1/|n̂|. If Ω is a lattice polygon, construct Ω′ by moving each side of ∂Ω with Miller normal n̂_i outward by 1/(2|n̂_i|), half the interplanar distance. Then extend the translated sides so that they once more intersect in the same order as before. Thus Ω′ is a rational polygon [BR] (not a lattice polygon) that contains the same atoms as Ω, with sides parallel to those of Ω and vertex angles the same as those of Ω. In general, though, it is not a dilation of Ω, although Ω ⊂ Ω′. Performing the same operation on kΩ for each k ∈ Z yields Ω(k). Since the layers added to kΩ have measure equal to k ∫_∂Ω 1/(2|n̂|) ds to dominant order, it follows from (5.1) that the lattice point remainder #(Ω(k) ∩ L) − |Ω(k)| = o(k), so that (5.2) is satisfied for r_k = k ∈ Z. Writing the energy in terms of the modified region Ω(k), one arrives at the following representation:

E{kΩ, y} = ∫_{Ω(k)} W(F) dx + ∫_{∂Ω(k)} γ•(F, n) ds + O(1), (5.4)

together with the intermediate identities (5.5). The O(1) term is a correction due to intersection, in the neighbourhood of corners, of layers corresponding to adjacent sides, since the directions and thicknesses of the layers are k-independent. The second equality in (5.5) follows from (2.10) of Lemma 2.2. The O(1) terms in (5.5) are actually constant (they depend only on Ω and not on k), as is easily shown. This establishes the middle assertion in (5.3). Since the distance between adjacent lattice lines with normal n̂_i is 1/|n̂_i|, the added layers (whose thickness is half that distance) contain no new lattice points; thus the first assertion of (5.3) holds true, while the last is trivial. The first of (5.3) ensures that E{kΩ, y} = E{Ω(k), y}. Now (5.4) follows immediately from Proposition 3.4, (5.5) and the definitions (3.4) and (4.4).

Case (b): Suppose Ω is a smooth region as in Proposition 4.1. Then choose Ω(k) = kΩ, so that (5.3) follows from [Hu], and note that (5.4) is the same as (4.5) with k = r ∈ R⁺.

Case (c): Let Ω comply with Proposition 4.4. For each k, let Ω(k) be the set obtained by moving only the flat sides kS_i ⊂ ∂Ω_f, i ∈ J_f, of kΩ outwards by 1/(2|n̂_i|) (discarding the portions of the added layers that lie outside the curves Γ_j near the endpoints where the S_i join curved sides of ∂Ω).
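The effect of the half-interplanar offset is easy to verify in a small computation (ours, not from the paper). For the diamond kΩ with vertices (±k, 0), (0, ±k), every side has Miller normal n̂ = (±1, ±1), so each side moves out by 1/(2√2) and Ω(k) = {|x₁| + |x₂| ≤ k + 1/2}. The offset polygon gains area k ∫_∂Ω ds/(2|n̂|) + O(1) = 2k + O(1) but gains no lattice points, so its remainder is constant while that of kΩ is 2k + 1.

```python
import numpy as np

def diamond_count(t):
    # lattice points with |x1| + |x2| <= t
    n = int(np.floor(t))
    return sum(2 * int(np.floor(t - abs(x))) + 1 for x in range(-n, n + 1))

for k in [5, 20, 80]:
    # original lattice polygon kOmega: vertices (+-k, 0), (0, +-k), area 2k^2
    R_poly = diamond_count(k) - 2 * k * k
    # offset region Omega(k): each side moved out by half the interplanar
    # distance 1/(2|nhat|) = 1/(2*sqrt(2)), i.e. |x1| + |x2| <= k + 1/2
    t = k + 0.5
    R_offset = diamond_count(t) - 2 * t * t
    same_atoms = diamond_count(t) == diamond_count(k)   # no lattice points gained
    print(f"k = {k:3d}:  R(kOmega) = {R_poly:5.1f} = 2k+1,"
          f"  R(Omega(k)) = {R_offset:6.2f},  same atoms: {same_atoms}")
```

With these vertices the remainder of Ω(k) works out to exactly 1/2 for every k, a concrete instance of the O(1) terms being constant.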
Remark 5.3. Provided that a sequence of dilation factors r_k exists such that (1.4) holds (as assumed in Theorem 3 of [BBL]), it is possible to modify the results of the previous section to show that, for the types of regions considered here,

E{r_k Ω, y} = r_k² ∫_Ω W(F) dx + r_k ∫_∂Ω γ•(F, n) ds + O(1).

Thus the non-explicit surface energy density of Theorem 3 in [BBL] is now determined to be γ• defined in (4.4). The order of the error is due to the vanishing of the lattice remainder R(r_k). Apparently, it does not seem to be known for which regions Ω such a sequence of dilation factors r_k exists. We are thus led to the construction of the regions Ω(k) of Proposition 5.2, which are not dilations of Ω of the form rΩ, since they involve different translations of different facets.

Remark 5.4. Proposition 5.2 indicates that the appropriate problem of surface energy minimisation over regions of fixed mass involves minimising ∫_∂Ω′ γ•(F, n) ds over a suitable class of domains Ω′ with |Ω′| fixed. The integrand γ• is now Lipschitz continuous in the unit normal, as guaranteed by Proposition 4.6. Thus γ• can be used to determine the Wulff shape of the crystal. We must remark, however, that while (5.4) has the aforementioned advantages as regards surface energy minimisation, it is not appropriate as an asymptotic series in k, since the domains of integration depend on that variable. The appropriate asymptotic series remains (4.16).
12,885.6
2012-01-03T00:00:00.000
[ "Mathematics" ]
Localization of Blockchain and E-Currency Model for E-Government Services

Blockchain can reduce bureaucracy and increase the efficiency and performance of administrative processes if used in government public service delivery, through a platform possessing features such as the storage and exchange of electronic messages in a decentralized environment, a high level of transaction security, and transparency. Many scholars believe that this distributed technology can bring new applications to a variety of industries and fields, including finance and banking, economics, supply chains, and authentication, and can dramatically increase economic productivity and efficiency by transforming many industries in the context of today's economy. The present study presents the characteristics of a localized blockchain and e-currency conceptual model for the evolution of e-government services. It also examines the impact of the blockchain and e-currency model on the economy and on electronic financial transactions as a viable, practical and constructive solution (rather than the blocking and filtering of e-currency and blockchain). Ultimately, designing a localized blockchain and e-currency model plays an effective role in exploiting its high potential to speed up administrative processes, reduce the costs of electronic transactions and payments in e-government, and increase e-government revenues; it can thereby speed up customer service delivery and increase customer satisfaction with the government.

1-Introduction

Today, society and the global economy are governed by the trust we place in intermediaries such as banks, governments, and big Internet companies like Google and Facebook. Some of the largest corporations and the greatest wealth come from becoming an intermediary in the business world. Intermediaries accomplish a transaction and take their share. Such intermediaries do a great job but have their own limitations: "they are costly and slow down everything. Anything that becomes central is vulnerable." But above all, they "make a disproportionate profit" from what they have provided. Simply put: for the little value they add, they make a lot of money. Their best product is trust, and that trust is built on the notion of their perpetual existence [1]. Decentralized systems face major problems, including scalability and issues related to privacy and multi-identity. Experts are therefore trying to design decentralized protocols that are attack-resistant in addition to being scalable and optimized. Analyzing such protocols requires extensive knowledge in areas such as distributed systems, cryptography, game theory, and information theory. Blockchain technology can basically be regarded as a public ledger in which all committed transactions or digital events are stored in a list of blocks. This chain grows continuously as new blocks are appended to it. Asymmetric cryptography and distributed consensus algorithms are employed for user security and ledger consistency. Blockchain technology generally has the key characteristics of decentralization, persistency, anonymity and auditability. With these traits, blockchain can greatly reduce costs and improve efficiency [2], [3] and facilitate the move to a more equitable and flat society. Today, more than 100 blockchain projects created to transform government systems are being conducted in more than 30 countries. What leads countries to initiate blockchain projects so rapidly?
It is because blockchain is a technology directly related to social organization; unlike other technologies, a consensus mechanism forms the core of blockchain. Traditionally, consensus is not the domain of machines but rather of humankind. However, blockchain operates through a consensus algorithm without human intervention; once that consensus is reached, it cannot be modified or forged [4]. There are two different perspectives for governments in relation to the rise of blockchain architectures and applications. On the one hand is the perspective of governance by blockchain, in which public organizations adopt blockchain technology for their own processes, such as service provisioning, and in which blockchain technology is used to govern transactions. The other perspective is termed governance of blockchain, or blockchain governance, which determines what blockchain should look like and how it should adapt to changes, and which should ensure that public values and societal needs are fulfilled. Both require in-depth knowledge of the blockchain technology and the situation at hand [5]. In the traditional e-government model, even a common transaction needs to pass through several government audits and the corresponding archives: a subsequent department must query the database of the leading department through an internal data platform, combine the result with the matter at hand, process the data submitted to the department, and offer an audit opinion. Although the whole process has been computerized, the system has shortcomings such as long processing times between departments, low efficiency, a lack of multi-level permissions grounded in laws and regulations, serious data redundancy among databases, a lack of unified update management, failure to ensure data security, and high cost. By contrast, an e-government system based on blockchain has advantages such as high efficiency, a multi-level authority structure grounded in laws and regulations, a unified database, and guaranteed data security. The application of blockchain technology to e-government can also reconfigure public resources, improve government efficiency, save cost, improve the basic income of people, and promote the construction of harmonious social relations [6]. With the increasing number of e-users in Iran and the increasing use of e-government services throughout the country, traffic on banking networks as well as at e-service centers is gradually increasing due to the use of traditional protocols and methods for money transfer and service provision. The use of blockchain and e-currency services can play a very effective role in accelerating and reducing the cost of financial transactions and electronic payment transfers in e-government, increasing e-government revenues, reducing banking traffic, and speeding up the delivery of services to customers. Providing a localized blockchain and e-currency model can also ensure the security of service delivery and largely prevent subversive attacks and the theft of customer information and accounts. Obviously, to achieve this, a culture of using e-currency and e-services must be created and developed in society, and the necessary trust and infrastructure must be built.
Finally, the problem addressed by this research is the burden placed on decision-makers of e-currency and e-government services by the combination of several concerns: reducing e-government relocation expenditure, enhancing the security of e-government services, using blockchain capabilities to reduce breakdowns of the consensus protocol, increasing the e-revenues of the government, and using e-currency based on consensus algorithms, all in the service of improving the payment of e-government services. In addition, an intelligent system is needed in order to increase confidence and reliability in decision making, and multiple kinds of expertise must be brought together by simultaneously utilizing specialists from different fields to solve the research problems.

2-Related Work

According to a study that analyses seven pilot blockchain deployments in the public sector in Europe, significant incremental benefits can be realized in some areas through the utilization of blockchain technologies for the provision of public services. The two main groups of benefits are increased security (enhancement of data integrity, immutability, and data consistency between organizations) and efficiency gains (such as reduced processing time and lower costs). At this stage of the technology life cycle, the continuation of experimentation with different technical designs is vital. Prior to scale-up, technical and governance standards need to be developed in order to ensure the interoperability of different designs and to facilitate operative services. Incompatibility between blockchain-based solutions and existing legal and organizational frameworks is a major barrier to unlocking the transformative potential of blockchain. Hence, the major policy objective should be to increase the technological and ecosystem maturity of distributed ledgers. Policy actions should aim not only at adaptation of the technology to existing ecosystems but also at the transformation of existing processes, organizations and structures using the disruptive potential of blockchain [7]. There are several working groups and pilot projects (in all stages of work, ranging from proposed, to under development, to deployed) focused on applying blockchain within the U.S. government. The most common trends evaluated by federal agencies include: financial management, procurement, supply chain management, smart contracts, government-issued credentials, federal personnel workforce data, federal assistance programs, foreign aid delivery, health records, and biometric data. However, the following must also be considered: increasing the technical understanding of blockchain within government by developing familiarity with the decentralized and distributed paradigms of DLT (distributed ledger technology); developing an internal workforce of blockchain subject matter experts; participating in the stewardship of blockchain and DLT by entering collaborative relationships with institutions like the World Economic Forum's Center for the Fourth Industrial Revolution; increasing government awareness of malign crypto-financial activity; considering and studying the privacy and legality implications, especially regarding the intentional "right to be forgotten" and accidental private key destruction; and amplifying knowledge of potential blockchain-based national security threats, particularly in intelligence, critical infrastructure, and the Internet of Things [8]. Blockchain technology, as a type of decentralized transaction and data management technology, provides trust, anonymity, security and data integrity without the use of any third-party controlling organization.
The literature review identifies three groups of factors, namely institutional (norms and culture, regulations and legislation, governance), market (market structure, contracts and agreements, business processes) and technical (information exchange and transactions, distributed ledgers, shared infrastructure), that are needed for the organizational adoption of blockchain. The factors presented in this framework (institutional, market and technical) interact and mutually influence each other. How the different factors interact depends on the context in which blockchain is adopted. Additionally, the factors that influence the adoption of blockchain technologies depend on its intended use [9].

2-1-Theoretical Framework and Variables

Based on evaluations and critical reviews of the books and articles related to the research model, the variables, indices, and measures are identified first. The initial research model, the relationships between variables, and the localization of the blockchain and e-currency model for e-government payment via these relationships are specified as follows:

2-1-1-E-government component [4], [10], [11], [12]:
2-1-1-1-Reducing e-government relocation expenditure:
- Reducing e-government expenditure on hardware
- Reducing e-government expenditure on software
- Reducing e-government expenditure on human resources
2-1-1-2-Enhancing the security of e-payment services of e-government:
- Protecting user information of personal accounts
- Improving the security level of users' accounts

2-1-2-E-currency component:
- The volume of currency available to individuals
- The volume of currency available to businesses
- The volume of currency available to governments
2-1-2-3-E-currency rules:
- Managing the implementation of contracts related to the selection and qualification of supervisory and operating parties
- Developing and communicating the technical architecture of e-services through blockchain and e-currency
- Management of providing consulting, educational, and cultural services to the executives

2-1-3-Blockchain component:
2-1-3-1-Decentralization capability:
- Distributed data logging
- Distributed data storage
- Distributed data updating
2-1-3-2-Open-source capability:
- Development of applications by the public
- Public evaluation of the data
- Transparency of data and applications for the public
2-1-3-3-Anonymity capability:
- Anonymity of data transfer
- Anonymity of transactions
- Increased trust between nodes
2-1-3-4-Independence capability:
- Independent data transfer
- Independent data updating
- Protection and immutability of all records forever

Figure 1 shows the initial model of the research and the relationships between the variables. After reviewing the theoretical foundations and investigating the literature, it was determined that, considering the research gaps in the knowledge domains of "reducing e-government relocation expenditure, enhancing the security of e-government services, blockchain capabilities to reduce breakdowns of the consensus protocol, increasing the e-revenues of the government, and the use of e-currency based on consensus algorithms in improving the payment of e-government services", as well as the lack of an intelligent system to guide managers' decision making, the research innovations lie in localizing the model and addressing those gaps.
2-2-Data Analysis Method

After reviewing the research background and theoretical foundations, it was found that no similar research has been carried out to localize the blockchain and e-currency model for e-government services based on the documentation of the country's e-government services, employing a combined methodology of statistical analysis and artificial intelligence in MATLAB. Figure 2 illustrates the research steps, listed below. A descriptive-modeling and exploratory (qualitative and quantitative) type of research was conducted. Because articles and documentation related to the research subject were drawn from a variety of sources, the method of data collection in this research is a "case study of documentation". To evaluate the rules of the artificial-intelligence system based on the artificial neural network and to localize the model extracted from expert opinions, the tools used were the determination of the variables of the decision-making model and interviews. The study population consisted of professors, specialists, and experts working in the Iranian Blockchain Association and the Blockchain Laboratory of the Sharif University of Technology in Iran, or in similar positions. The sampling method is a combination of two methods: nonprobability purposive (judgmental) sampling and snowball sampling. Due to the nature of the sampling method, the sample size of the study equals the number of available and collaborative experts. Consensus algorithms (especially Proof-of-Work (PoW) and Proof-of-Stake (PoS)) are used to present and evaluate the localized model. Breakdown analysis was used to investigate intrusions into, and weaknesses of, the localized model. In addition, to investigate the security of the proposed model against hackers and subversive attacks, the method of analyzing randomly chosen nodes in the network is used to create blocks, exploiting both the Nakamoto and voting methods. The research steps shown in Figure 2 are:

- Modeling: modeling of e-government concepts to identify input and output variables and to draw the relationships between them (with an input-output approach).
- Defining Variables: defining qualitative variables using linguistic constraints and assigning them numbers, fuzzy sets, and membership functions (using triangular and trapezoidal fuzzy numbers).
- Intelligent System Design: introducing the intelligent inference system using the artificial neural network toolkit of the MATLAB programming environment; this step involves extracting expert rules, having them evaluated by experts, creating an intelligent rule base, and designing an inference engine with access to the rules.
- User Interface: designing the user interface, displaying options, and specifying how to use the intelligent inference system designed in MATLAB.
- Defuzzification: selecting a method for defuzzification to convert fuzzy numbers and sets to a crisp value in order to check actual system performance (using the MATLAB toolkit).
- Conclusion: analysis of the intelligent inference system outputs for the localization of the blockchain and e-currency model for e-government payment (with a system-analysis approach).

The relationships between these concepts and rules are assessed and evaluated by the experts.
Indeed, statistical analysis is the study of the effect of input variables on the output variable in a statistical model. The research technique of this article is artificial neural networks in MATLAB software. One of the most important reasons for using artificial neural networks and fuzzy systems in this research is that real-world issues typically have a complex structure, which implies ambiguity and uncertainty in their definition and understanding [17], [18]. Ever since humankind has been able to think, it has faced ambiguity in various social, technical and economic issues. The human brain defines and evaluates statements by considering various factors based on inferential thinking, a pattern which, if not impossible, is very complicated to capture in mathematical language and formulas [17], [19]. Linguistic variables are expressed on the basis of language (spoken) values drawn from a set of words or terms, and linguistic expressions serve as attributes for these variables. Here, linguistic variables are variables whose acceptable values are words and sentences of human and machine languages instead of numbers. A fuzzy number is a special fuzzy set in which x denotes real values on the real line R and whose membership function maps R to [0, 1], as in Eq. (1) [17], [18]. In fact, the classification below describes how the relationship between fuzzy logic and the artificial neural network can be expressed [17], [18], [19]:

- Symmetric neuro-fuzzy models: the neural network and the fuzzy system perform a single operation but do not affect each other; neither is used to determine the parameters of the other. Usually, in this model, the neural network is used to pre-process the input or post-process the output of the fuzzy system.
- Artificial neural network based fuzzy inference systems: some of these systems are considered cooperative models. These models are used to expand fuzzy rules.
- Combined neuro-fuzzy models: the artificial neural network and the fuzzy system combine into one coordinated structure. This pattern can be considered a neural network with fuzzy parameters, or a distributed-learning fuzzy system. ANFIS and ANNBFIS are examples of this model.

Finally, the five steps of designing an intelligent system based on artificial neural networks to localize the blockchain and e-currency model for the payment of e-government services, according to Figure 3, are as follows:
Step 1: Modeling the field concepts to identify input and output variables and to draw the relationships between them.
Step 2: Defining qualitative variables, exploiting linguistic constraints and assigning them numbers, fuzzy sets, and membership functions.
Step 3: Designing an intelligent system based on artificial neural networks, including the extraction of expert rules, their evaluation by experts, the creation of a fuzzy rule database, and the design of an inference engine with access to the fuzzy rules.
Step 4: Designing the user interface, displaying options, and using the designed intelligent system.
Step 5: Selecting a method for defuzzification to convert fuzzy numbers to a crisp value in order to verify the actual performance of the system.

The sample and population of this research can be divided into two general groups: the first group consists of university professors (academic experts); the second group includes experts working in e-government services or similar positions (industrial experts).
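To make the fuzzy machinery concrete, here is a minimal Python sketch (ours; the paper itself works in MATLAB's fuzzy and neural network toolboxes) of the triangular and trapezoidal membership functions mentioned above, evaluated for a three-level linguistic variable such as "weak / medium / good". The breakpoints are illustrative assumptions, not values from the paper.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Illustrative three-level linguistic variable on [0, 1] (breakpoints assumed):
levels = {
    "weak":   lambda x: trapmf(x, -0.1, 0.0, 0.2, 0.4),
    "medium": lambda x: trimf(x, 0.2, 0.5, 0.8),
    "good":   lambda x: trapmf(x, 0.6, 0.8, 1.0, 1.1),
}

x = 0.813   # e.g. a "good" score for one of the input variables
for name, mf in levels.items():
    print(f"membership of {x} in '{name}': {float(mf(x)):.3f}")
```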
4-Results

After distributing 100 questionnaires, the sample of this study comprised 96 available and cooperative experts, selected by a combination of two methods: nonprobability purposive (judgmental) sampling and snowball sampling. The data related to measure 1 (the localization tool for the blockchain and e-currency model for paying for e-government services) and measure 2 (the validation tool for the intelligent system) were collected in the fall of 2019. Table 1 shows the descriptive information for the research variables and indicators, based on the number of data points, the mean, and the standard deviation, indicating that the data in this study are in good condition in terms of symmetry and aggregation. The most important criteria for the variables are outlined in the table. Figure 3 shows the structure of the system: a Sugeno neuro-fuzzy inference engine connected to a user interface and a knowledge base, with fuzzification of the inputs and defuzzification of the outputs.

The five steps of designing and implementing the intelligent system for localizing the model are as follows.

Step one: the input and output variables are defined. The input variables of the intelligent system are reducing e-government relocation expenditure (X1), enhancing the security of e-government services (X2), blockchain capabilities to reduce breakdowns of the consensus protocol (X3), increasing the e-revenues of the government (X4), and the use of e-currency based on consensus algorithms (X5); the output variable of the intelligent system is the status of "improving the payment of e-government services".

Step two: the qualitative variables are defined by linguistic constraints, assigning them numbers, fuzzy sets, and membership functions. Table 2 illustrates the linguistic variables, fuzzy values, and membership functions of the triangular and trapezoidal numbers associated with the input and output variables of the intelligent system, within three- and five-level spectra.

Step three: the knowledge base of the intelligent system is designed, which involves extracting expert rules, having them evaluated by experts, and creating the fuzzy rule database. The starting point for building a rule-based knowledge base is to obtain a set of rules from expert knowledge of the field being examined; the subsequent step is the combination of these rules into a single system. The number of fuzzy rules in the module "improving the payment of e-government services in the country" is 243, because there are five main variables, each with three states (3⁵ = 243). Figure 4 shows how the fuzzy rules are generated within the knowledge base.

Step four: the inference engine of the intelligent system is designed (Figure 5). In this step, the wtaver (weighted average) method is used for defuzzification to convert fuzzy numbers and sets to a crisp value for the actual evaluation of system performance. The mean error on the test data in the inference engine of the intelligent system for the "localization of the blockchain and e-currency model for the payment of e-government services" was calculated as 0.0085 (less than 1%), which shows the high accuracy of the calculations of the research's artificial neural networks. The defuzzification in the intelligent system converts the fuzzy output to a crisp number.

Step five: this step explains how to exploit the intelligent system and analyze its outputs both numerically (precisely) and linguistically in order to analyze the behavior of the system's output variable.
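The following Python sketch (ours, standing in for the MATLAB toolbox used in the paper) shows the shape of such a rule base: all 3⁵ = 243 antecedent combinations are enumerated, each rule fires with the min t-norm over the five input memberships, and the output is the wtaver-style weighted average of the rule consequents. The membership breakpoints and the consequent values are illustrative assumptions, not the paper's fitted rules.

```python
import numpy as np
from itertools import product

def trimf(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Three linguistic states per input (breakpoints assumed for illustration).
states = {"weak": (-0.5, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "good": (0.5, 1.0, 1.5)}
score = {"weak": 0.0, "medium": 0.5, "good": 1.0}

# 3^5 = 243 rules; each consequent is assumed to be the mean state score.
rules = [(combo, float(np.mean([score[s] for s in combo])))
         for combo in product(states, repeat=5)]

def infer(x):
    """x: the five inputs X1..X5 in [0, 1]; returns the crisp output (wtaver)."""
    num = den = 0.0
    for combo, consequent in rules:
        w = min(trimf(x[i], *states[s]) for i, s in enumerate(combo))  # min t-norm
        num += w * consequent
        den += w
    return num / den if den > 0 else float("nan")

# The "ideal" input levels reported in the paper for the five variables:
x = [0.813, 0.824, 0.819, 0.812, 0.815]
print(f"crisp output for X = {x}: {infer(x):.3f}")   # a high crisp score for these inputs
```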
In order to determine the weights of the input values of the system, information about the ideal and functional weight of each of the main variables of the research is presented in Table 3. According to the rules of the knowledge base of the main module of the intelligent system, based on the calculation of the weight of each main variable using expert opinions, and also taking advantage of the intelligent system based on the designed artificial neural networks, it is possible to examine the status of "improving the payment of e-government services in the country" numerically and more precisely. The main finding of the research based on the localized model is that, utilizing the intelligent system outputs, the status of "improving the payment of e-government services of the country" can be analyzed on the basis of the variables "blockchain capabilities to reduce breakdowns of the consensus protocol (X3)", "increasing the e-revenues of the government (X4)", "enhancing the security of e-government services (X2)", "use of e-currency based on consensus algorithms (X5)", and "reducing e-government relocation expenditure (X1)", because, as outlined in the E-Government Foresight Framework, the services should at least be provided through a single portal; or, in the most optimistic case, log-in should occur via one portal and the user transfer operation should then be carried out. In fact, according to the rules of the knowledge base of the main module of the intelligent system, based on the calculated weight of each main variable from expert opinions and utilizing the intelligent system designed in this research, the status of "improving the payment of e-government services of the country" can be investigated numerically and more precisely: ideally, if the status of "reducing e-government relocation expenditure (X1)" is good, i.e. exactly 0.813, and "enhancing the security of e-government services (X2)" is good, i.e. exactly 0.824, and "blockchain capabilities to reduce breakdowns of the consensus protocol (X3)" is good, i.e. exactly 0.819, and "increasing the e-revenues of the government (X4)" is good, i.e. exactly 0.812, and "use of e-currency based on consensus algorithms (X5)" is good, i.e. exactly 0.815, then the "improvement of e-government services payment" is excellent (fifth level). After the design, the outputs and responses of the intelligent system were compared, in a separate measurement tool, with the opinions of 18 experts; the results can be seen in Table 4, based on the intelligent system rules and the mean of the expert responses. Since the experts' opinions are expressed on the spectrum of five membership functions, the hypothesis above can be tested using the discrepancy percentage between the outputs of this research's intelligent system and the mean of the expert opinions. The final difference between the outputs of the intelligent system and the mean of expert opinions was not significant, at 0.065. There is therefore no significant difference between the mean of the expert opinions and the outputs of the intelligent system.

5-Conclusions

One of the most important results of the research is that, in improving the payment of e-government, blockchain technology has the ability to facilitate direct interaction between government agencies, citizens, and economic actors, which at the basic level means improving public services in registration and information-exchange processes.
Finally, utilizing the results of the present study may contribute to the removal of existing barriers to improving the payment of e-government services. The following solutions can facilitate the achievement of e-government's goals:

- Making citizens' digital authentication more secure, to enhance the security of e-government services.
- Increasing citizens' ownership of and control over economic processes, to increase the e-revenues of the government.
- Using smart contracts for process automation and for registering economic documents in official government offices, to reduce e-government relocation expenditure.
- Implementing the blockchain capabilities that reduce breakdowns of the consensus protocol, to support the use of e-currency based on consensus algorithms.
- Supporting administrative agencies in accelerating the electronic delivery of their services.
- Using the potential of private enterprises to increase citizens' satisfaction with services.
- Reducing the duties of the government and transferring them to non-state sectors through the capabilities of information technology.
- Accelerating and facilitating citizens' and businesses' receipt of services from the executive apparatus.
5,765.2
2020-11-14T00:00:00.000
[ "Computer Science", "Economics", "Business" ]
Atomic Layer Deposition of Insulating AlF3/Polyimide Nanolaminate Films

This article describes the deposition of AlF3/polyimide nanolaminate films by inorganic-organic atomic layer deposition (ALD) at 170 °C. AlCl3 and TiF4 were used as precursors for AlF3. Polyimide layers were deposited from PMDA (pyromellitic dianhydride, benzene-1,2,4,5-tetracarboxylic dianhydride) and DAH (1,6-diaminohexane). With field-emission scanning electron microscopy (FESEM) and X-ray reflectivity (XRR) analysis, it was found that the topmost layer (nominally 10 nm in thickness) of the nanolaminate film (100 nm total thickness) changed when exposed to the atmosphere. Even so, the effect on roughness was minimal. The length of the delay time between the AlF3 and polyimide depositions was found to affect the sharpness of the nanolaminate structure. The electrical properties of the AlF3/polyimide nanolaminate films were measured, indicating an increase in dielectric constant compared to single AlF3 films and a decrease in leakage current compared to polyimide films, respectively.

Introduction

In the past decades, the growth of the global microelectronics industry has mainly relied on the demand for electronic devices such as computers and smartphones, as well as the expansion of technology applications such as the Internet of Things and cloud computing [1]. The growth trend of the global microelectronics industry is expected to continue into the next decade [2]. To maximize transistor density, the feature size of microelectronic devices is further reduced [3], and the density of wires on the chip is increased. However, the higher resistance of the wires and their capacitive coupling cause the signal delay of the circuit itself (the RC delay) to become increasingly serious [4]. The challenge is the transmission of power and the distribution of clock signals to control timing and synchronize operations; this challenge involves material properties, technology, and system architecture [5,6]. RC delay, power consumption, and crosstalk between wires can be reduced by lowering the dielectric constant (k) of the interlayer dielectric (ILD) [7]. Compared with the Al/SiO2 technology, the adoption of copper and low-k dielectrics has reduced the capacitance and the resistivity between wires [8]. The dielectric constant k (relative permittivity εr) is the ratio of the original applied electric field (in vacuum) to the electric field in the final medium. There are two ways to reduce k: one is to reduce the number of dipoles in the material, the other is to reduce the polarizability of the material [9]. This means that materials with less polarizable chemical bonds than Si-O, or with lower density, can be considered low-k substitutes for SiO2 [10,11]. By using almost completely non-polar bonds (such as C-C) in materials such as organic polymers, the dielectric constant of the material is further reduced. A challenge with the polymers is, however, reaching sufficiently low leakage characteristics. Aluminum fluoride has a low refractive index (1.36-1.40 [12]) and a wide bandgap of >10 eV [13], but reports on its dielectric constant range from 2.8 [14] to 6 [15]. This variation might be due to the different history, preparation, and physical properties (crystalline vs. amorphous) of the samples. In lithium-ion batteries, aluminum fluoride is used as a solid electrolyte interface layer [12]. Polyimide (PI) is an organic polymer material with intriguing properties; its long-term use temperature range is 200-300 °C.
PI also has good insulating properties and is used in the field of microelectronics [16][17][18][19]. For example, PI has been used as an insulating interlayer material [20]. Dielectric constants of common polyimides have been reported to range between 2.8 and 3.5, generally being ~3 [19,21]. In the IC industry, there are two main methods for depositing low-dielectric-constant materials: spin coating and chemical vapor deposition (CVD) [22]. CVD is mainly used for k > 2.5, and spin coating is mainly used for porous films with k < 2.5 [23]. Atomic layer deposition (ALD) is a method in which precursor gases or vapors are pulsed alternately onto the substrate surface [24][25][26]. Surface reactions in ALD are all self-limiting [27]. While ALD is often considered to be limited to inorganic coatings, molecular layer deposition (MLD) is a corresponding technique for the vapor deposition of organic and hybrid films, which is also based on sequential self-limiting surface reactions [28][29][30][31]. This paper attempts to combine aluminum fluoride with polyimide to prepare new inorganic-organic low-k materials by using ALD and MLD, or more shortly ALD. Both AlF3 [12] and PI [20] have been deposited earlier by ALD, and therefore the main focus here is on the combination of AlF3 and PI into a nanolaminate structure and the characterisation of these films. Also, while the ALD AlF3 films were reported to have a low refractive index of 1.36-1.40, no electrical measurements on them were done prior to this work.

Materials and Methods

ALD depositions were carried out using an ASM Microchemistry F120 reactor. Nitrogen (99.999%) was used as the carrier and purging gas. The halide precursors AlCl3 (99%, Acros Organics, Morris Plains, NJ, USA) and TiF4 (98%, Sigma-Aldrich, Saint Louis, MO, USA) were used for the AlF3 deposition, as reported earlier [12]. Polyimide layers were deposited from benzene-1,2,4,5-tetracarboxylic dianhydride (97%, pyromellitic dianhydride, PMDA, Sigma-Aldrich) and 1,6-diaminohexane (98%, DAH, Sigma-Aldrich) (Figure 1), as described earlier [20]. The source temperatures were 79 °C for AlCl3, 135 °C for TiF4, 160 °C for PMDA, and 40 °C for DAH. The substrates were either 5 cm × 5 cm Si wafer pieces or 5 cm × 5 cm ITO (indium tin oxide) covered glass. PI films have reasonable deposition rates below 200 °C [20], and AlF3 thin films can be deposited in the range of 160-340 °C [12]. At 170 °C, the deposition rate of AlF3 on a Si substrate is ~2.75 Å/cycle, while the deposition rate of PI is ~5.4 Å/cycle; both are close to their maximum deposition rates. In addition, AlF3 films deposited at 170 °C are amorphous and thus relatively smooth, which avoids extensive roughening of the nanolaminate stack structures. Therefore, 170 °C was chosen as the deposition temperature for the AlF3 and PI nanolaminates. Two kinds of nanolaminates with different bilayer orders were prepared (Figure 2), starting the depositions with either AlF3 or PI. In total, 5 PI/AlF3 bilayers were deposited, where the nominal single-layer thicknesses were 10 nm. The samples therefore had the structures 5 × (10 nm PI + 10 nm AlF3)/substrate and 5 × (10 nm AlF3 + 10 nm PI)/substrate. The films with AlF3 as the bottom layer and PI as the top layer are denoted PI-AlF3, and the films with PI as the bottom layer and AlF3 as the top layer are denoted AlF3-PI. The total thicknesses of these nanolaminates were approximately 100 nm.
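A small Python sketch (ours) of the bookkeeping behind this stack design: the cycle counts used in the paper (35 for AlF3 and 20 for PI per nominal 10 nm layer, as given in the Results section) multiplied by the growth-per-cycle values quoted above give the nominal layer and stack thicknesses.

```python
GPC = {"AlF3": 2.75, "PI": 5.4}        # growth per cycle at 170 C, in angstroms
cycles = {"AlF3": 35, "PI": 20}        # cycle counts used per nominal 10 nm layer

for mat in GPC:
    t = cycles[mat] * GPC[mat] / 10.0  # thickness in nm
    print(f"{mat}: {cycles[mat]} cycles x {GPC[mat]} A/cycle = {t:.2f} nm (nominal 10 nm)")

bilayers = 5
total = bilayers * sum(cycles[m] * GPC[m] / 10.0 for m in GPC)
print(f"{bilayers} bilayers -> ~{total:.1f} nm total (targeted ~100 nm)")
```

The slight overshoot of the PI layers and undershoot of the AlF3 layers relative to 10 nm is consistent with the layer thicknesses later resolved by XRR.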
In order to compare differences in the electrical properties, thinner 60 nm nanolaminate films with three bilayers were also prepared, as 3 × (10 nm PI + 10 nm AlF3)/substrate. Based on the earlier experiments [12,20], the pulsing sequence for the AlF3 deposition was selected as: 0.5 s pulse and 1.0 s N2 purge for AlCl3, and 1.0 s pulse and 1.5 s N2 purge for TiF4. At the deposition temperature of 170 °C, a deposition rate of 2.75 Å/cycle was measured for AlF3. Uniform PI films were obtained when the PMDA pulsing time was 1.5-7.0 s and the DAH pulsing time was 1.0-5.0 s at 170 °C. Considering the uniformity and integrity of the PI films, a 2.0 s pulse and 3.0 s N2 purge for DAH and a 5.0 s pulse and 5.0 s N2 purge for PMDA were selected, resulting in a deposition rate of 5.4 Å/cycle. A Hitachi S-4800 (Hitachi High-Technologies Corporation, Tokyo, Japan) field emission scanning electron microscope (FESEM) and an Oxford INCA 350 (Oxford Instruments, Abingdon, UK) energy dispersive X-ray spectrometer (EDX) were used to image the nanolaminate films and analyze their composition. Approximately 2 nm of Au-Pd was sputtered onto the samples using a Cressington 208HR High Resolution Sputter Coater (Cressington Scientific Instruments, Watford, UK) to obtain clearer cross-section images of the nanolaminates. X-ray reflectivity (XRR) was measured with a PANalytical X'Pert Pro MPD X-ray diffractometer (Malvern Panalytical, Malvern, UK) to analyze the true thicknesses of the single layers in the nanolaminate stacks. The measured data were fitted using Reflex v44 [32]. The overall thickness of the nanolaminates was measured with an FS-1™ Multi-Wavelength Ellipsometer from Film-Sense (Kurt J. Lesker Company, Frankfurt, Germany). Atomic force microscopy (AFM) images for analyzing the surface roughness and morphology were recorded using a Veeco Multimode V instrument (Veeco Instruments, Plainview, NY, USA). A silicon probe with a nominal tip radius of 10 nm and a nominal spring constant of 3 N/m (Bruker RFESP-75, Billerica, MA, USA) was used to capture images in air. The images were flattened to remove artifacts caused by sample tilt and scanner bow. Roughnesses were calculated as root-mean-square values (Rq), as an average of 3 to 5 images per sample. The final images were obtained by scanning at a frequency of 0.5 Hz over a scanning area of 500 nm × 500 nm, without any other image processing. For the electrical measurements, capacitors were made with the nanolaminate as the dielectric and ITO and Al films as the electrodes. The nanolaminate films were deposited on ITO films on glass, and Al electrodes were patterned on top by evaporating aluminum through a shadow mask with an Electron Beam Evaporator IM9912 (Telemark, Battle Ground, WA, USA). A contact to the bottom ITO electrode was made in the corner of the sample by scratching through the nanolaminate and soldering a wire. The capacitance C of the nanolaminate film was measured at zero to ±2 V bias with a 4284A Precision LCR Meter from Hewlett Packard (Hewlett Packard, Palo Alto, CA, USA). From the measured capacitance, the dielectric constant εr (also called k) was calculated as

εr = C d / (ε0 A),

where d is the thickness of the entire nanolaminate film, ε0 is the permittivity of vacuum, and A is the area of the top Al electrode (2.04 × 10⁻⁷ m²). Leakage measurements were carried out with a Keithley 2450 Source Meter (Keithley Instruments, Cleveland, OH, USA), with ±50 V as the measurement voltage range for the 5-bilayer films and ±25 V for the 3-bilayer films.
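As a quick worked example of the parallel-plate relation above (ours; the capacitance value is a hypothetical placeholder, not a measured result from the paper): with the paper's electrode area A = 2.04 × 10⁻⁷ m² and a 100 nm film, a measured capacitance converts to εr as follows.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def eps_r(C, d, A=2.04e-7):
    """Dielectric constant from capacitance C (F), thickness d (m), electrode area A (m^2)."""
    return C * d / (EPS0 * A)

C_meas = 80e-12           # hypothetical measured capacitance, 80 pF (placeholder value)
d = 100e-9                # total nanolaminate thickness, 100 nm
print(f"eps_r = {eps_r(C_meas, d):.2f}")
# With these placeholder numbers eps_r ~ 4.4; for a 50/50 stack the series-capacitor
# rule 2/eps_lam = 1/eps_AlF3 + 1/eps_PI gives a feel for the expected laminate value.
```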
Film Deposition

First, it was verified that AlF3 and PI can be deposited on top of each other at 170 °C with the same growth rates as they grow on Si substrates. On this basis, it was calculated that 35 cycles of the AlF3 process and 20 cycles of the PI process would result in a nanolaminate where the individual layer thicknesses would be close to 10 nm (35 × 2.75 Å ≈ 9.6 nm and 20 × 5.4 Å ≈ 10.8 nm). Such bilayers were repeated five times (Figure 2), and it was verified with an ellipsometer that the total thicknesses of the nanolaminates were close to the targeted 100 nm. It was observed with FESEM that the films were of lower quality if no delay time was introduced between the processes. Therefore, we tested 0, 1, 3, and 5 min breaks between the AlF3 and PI film depositions, which allows a comparison of the effect of the delay time on the properties of the nanolaminate film, as described in the following.

Film Structure and Morphology Analysis

Because of the low deposition temperature, all the films were amorphous. In the previous study on the ALD of AlF3, the first small signs of crystallization were observed only at 280 °C [12]. FESEM was used to examine the surface morphology of the nanolaminate films. Figures 3 and 4 show the effect of the ambient atmosphere on the film surface. Right after the deposition, the nanolaminate film has a featureless surface, as is characteristic of amorphous films, but with prolonged exposure to air the film surface becomes uneven as lines appear on the surface. This was not observed for 100 nm thick AlF3 and PI films alone, but in the nanolaminates it occurred regardless of whether the top layer was AlF3 or PI. The effect of the air exposure on the film morphology can also be seen in the cross-sectional images (Figure 5). In the sample exposed to the ambient atmosphere for 3 min, all the layers, including the top PI film, are smooth, while pronounced buckling of the layers and discontinuity of the topmost layer can be seen after one day of exposure. However, all the layers except the topmost one remain continuous, which is crucial for the insulating properties, as seen later. Presumably, when the stacks were exposed to air, the topmost 10 nm film was too thin to resist the compressive stress. Each layer can be distinguished clearly in most of the SEM cross-section images (Figure 6). The pictures show the effect of the delay time on the nanolaminate structure. Only when there was no delay between the AlF3 and PI depositions is the multilayer structure hard to resolve. When the delay time was increased from 0 min to 5 min, the interfaces between the AlF3 and PI layers became clearer. EDX measurements revealed that both the PI-AlF3 and AlF3-PI films contained Al, F, C, and O as major constituents, as expected. Chlorine impurities were also detected in the nanolaminate films deposited with short delay times between the PI and AlF3 processes. The most probable source of the chlorine is unreacted or only partially reacted AlCl3 precursor, or byproducts originating from AlCl3. The amount of chlorine decreased with increasing delay time, which appears to be linked to the improved morphology and purity of the films. However, the growth rates remained the same even when sufficient delay times were applied between the processes. As shown in Figure 6, the number of bilayers (3 or 5) does not affect the cross-section structure. AFM images (Figure 7) reveal that the nanolaminate films are smooth when measured from 500 nm × 500 nm areas between the buckle lines.
Roughnesses of all the ~100 nm thick films were in the range of 0.3-0.5 nm. The deposition sequence of AlF3 and PI was found to affect the roughness of the films only slightly. Generally, the AlF3-PI nanolaminates were slightly rougher than their PI-AlF3 counterparts. The roughest films (Rq ≈ 0.5 nm) were AlF3-PI deposited without any delay between the processes, and the smoothest film (Rq ≈ 0.3 nm) was PI-AlF3 deposited with a 5 min delay. Although the deposition temperature of 170 °C is below the AlF3 crystallization temperature of 280 °C [12], AlF3 appears to have a grainier structure than PI. The delay time between the processes also affected the roughness of the film surface: as the delay time was increased, the film surface became smoother. The total thickness of the film affects the roughness of the surface only slightly; the film with three bilayers is only slightly smoother than the films with five bilayers. Despite the buckle lines, the amorphous nanolaminates had low enough roughness for XRR to resolve the nanolaminate stack structure (Figure 8). In the patterns, the high-frequency oscillation comes from the total thickness, whereas the lower-frequency oscillation arises from the single layers. As expected from the FESEM images, clear differences were seen in the XRR curves. A 5 min delay time between the depositions of the layers resulted in more regular structures. Overall, when shorter delay times were applied, the XRR curves became more irregular, indicating that the layered structure is not as well defined. Comparing the PI-AlF3 and AlF3-PI structures, it can be observed that a sharper nanolaminate structure was obtained when the deposition was initiated with AlF3 (PI-AlF3). With the optimized process parameters, the nanolaminate structure was retained regardless of the number of AlF3/PI bilayers. However, there were always some imperfections, apparently due to less sharp interfaces and film buckling. The XRR curve of the PI-AlF3 nanolaminate deposited with a 5 min delay between the PI and AlF3 processes was analyzed in detail (Figure 9 and Table 1). The total thickness of the nanolaminate film was 95.2 nm, which is only slightly less than the targeted 100 nm. It can be seen from Table 1 that as the number of deposited layers increases, the roughnesses of the interfaces generally increase. The scattering length density (SLD) of AlF3 is 2.2-2.3 × 10−5 Å−2, and for PI the SLD likewise varies only within a narrow range of 1.2-1.4 × 10−5 Å−2. These correspond to mass densities of 2.7-2.8 g/cm3 for the AlF3 stoichiometry and 1.4-1.5 g/cm3 for PI when a repeating monomeric unit of C16H14N2O4 [20] is used for the calculation (the conversion is sketched at the end of this paragraph). The thicknesses of the first AlF3 and PI layers on the Si substrate were somewhat less than expected from the growth rates measured under steady-state growth conditions. Further layers deposited on top of each other had constant thicknesses and hence constant deposition rates. These thicknesses were not exactly 10 nm, however, indicating slight differences from the growth on silicon. When evaluating the nanolaminate stack structure as a function of the location of a given bilayer, it was seen that the thickness of the AlF3 layers tends to stabilize at 9.5 nm (~0.5 nm below the expected 10 nm) regardless of the layer position in the stack. On the other hand, each PI layer was slightly thicker (by 0.3-1 nm) than the preceding PI layer in the stack, with the exception of the topmost layer, which was much thinner (2.3 nm).
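For reference, the X-ray SLD quoted above relates to mass density through the classical electron radius. The hedged sketch below reproduces this standard conversion for AlF3 and the C16H14N2O4 repeat unit; it uses textbook constants and is not part of the fitting procedure of this work.

# Minimal sketch: convert an X-ray scattering length density (SLD) to a mass
# density via SLD = r_e * N_A * rho * Z / M, i.e. rho = SLD * M / (r_e * N_A * Z).
R_E = 2.818e-13   # classical electron radius, cm
N_A = 6.022e23    # Avogadro's number, 1/mol

def mass_density(sld_per_A2: float, molar_mass: float, electrons: float) -> float:
    """Mass density (g/cm^3) from SLD (Å^-2), molar mass (g/mol), electrons per formula unit."""
    sld_per_cm2 = sld_per_A2 * 1e16  # 1 Å^-2 = 1e16 cm^-2
    return sld_per_cm2 * molar_mass / (R_E * N_A * electrons)

# AlF3: M = 83.98 g/mol, Z = 13 + 3*9 = 40 electrons per formula unit
print(mass_density(2.25e-5, 83.98, 40))    # ~2.8 g/cm^3
# PI repeat unit C16H14N2O4: M = 298.3 g/mol, Z = 16*6 + 14*1 + 2*7 + 4*8 = 156
print(mass_density(1.3e-5, 298.3, 156))    # ~1.5 g/cm^3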
The much thinner topmost PI layer is due to shrinkage when the sample is exposed to air for a long time, as also seen in the FESEM images (Figures 3 and 4).

Figure 9. Measured (blue) and fitted (red) XRR curves of the nanolaminate with 5 × (10 nm PI + 10 nm AlF3)/Si, 5 min delay. The inset shows the electron density profile giving the best fit.

Table 1. Scattering length densities (SLD), roughnesses, and thicknesses determined for each layer, in order from the top to the substrate, by fitting the XRR curve of the 5 × (10 nm PI + 10 nm AlF3)/Si, 5 min delay nanolaminate (Figure 9).

Electric Properties

Dielectric constants and leakage properties of selected structures were measured by depositing the nanolaminates onto ITO films on glass substrates. Al evaporated through a shadow mask was used as the top electrode of the capacitor. The total thicknesses and dielectric constants of the different structures are shown in Table 2. For a bare 75 nm AlF3 film, a dielectric constant of 3.4 was measured, and a bare PI film had a dielectric constant of 3.8. Thicknesses of the measured five-bilayer nanolaminate stacks were 90-100 nm. Interestingly, the dielectric constants of the nanolaminate films are within 3.8-4.8, i.e., higher than those of the AlF3 film (3.4) and the PI film (3.8) alone, whereas an intermediate value would be expected as a first approximation. A plausible explanation for the higher-than-expected dielectric constant is interface polarization within the nanolaminate structures, which in turn may arise from the leaky nature of the PI films (see below). Generally, the dielectric constants of the nanolaminate stacks with PI as the top layer are smaller than those of the AlF3-PI stacks. As the delay times between the AlF3 and PI processes were increased, the dielectric constant showed an increasing trend, most likely attributable to the improved layered structure of the nanolaminates. Leakage currents and breakdown voltages of the nanolaminates deposited with a 5 min delay were measured from the samples deposited onto ITO films and completed with the Al top electrodes. Bare AlF3 and PI films were also measured for comparison, as shown in Figure 10. The bare 75 nm thick AlF3 exhibited good insulating properties: a high breakdown voltage of 96 V and a low leakage current density (<10−6 A/cm2) up to the breakdown voltage, which are characteristics of good inorganic insulating materials. The leakage current density of the bare PI, by contrast, was very high and its breakdown voltage low. All the nanolaminates exhibit leakage properties that are closer to AlF3 than to PI, i.e., low leakage current densities and high breakdown voltages. For such thin films deposited at low temperature and containing an organic constituent, leakage current densities of less than 10−5 A/cm2 and breakdown voltages of more than 50 V are excellent results. As can be expected, the thinner nanolaminate film (PI-AlF3 5 min, with three bilayers) has a lower breakdown voltage and a higher leakage current density than the thicker nanolaminates. Replotting the results as a function of electric field instead of absolute voltage would, however, bring it together with the other nanolaminates (see the sketch at the end of this paragraph). The leakage current density through the bare AlF3 film was more stable than through the nanolaminate films, which is most obvious within the 0-10 V range (Figure 11). In this regard, we conducted several sets of comparison experiments in which the leakage measurement was repeated several times. The five- and three-bilayer nanolaminates were measured repeatedly within ±40 V and ±25 V, respectively.
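The rescaling mentioned above is a one-line conversion, sketched here for completeness (E = V/d and J = I/A); the numbers are illustrative assumptions, not measured data.

# Minimal sketch: rescale a leakage measurement from (voltage, current) to
# (electric field, current density) so films of different thickness can be compared.
def to_field_and_density(volts: float, amps: float, thickness_m: float, area_m2: float):
    """Return (E in MV/cm, J in A/cm^2) for a bias V and leakage current I."""
    e_mv_per_cm = volts / thickness_m / 1e8       # V/m -> MV/cm
    j_a_per_cm2 = amps / (area_m2 * 1e4)          # A/m^2 -> A/cm^2
    return e_mv_per_cm, j_a_per_cm2

# Illustrative: 25 V across a 60 nm stack and 50 V across a 100 nm stack
# correspond to comparable fields (~4.2 vs ~5.0 MV/cm).
print(to_field_and_density(25, 1e-9, 60e-9, 2.04e-7))
print(to_field_and_density(50, 1e-9, 100e-9, 2.04e-7))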
Upon repetition of the measurements, the leakage level increased and the noise decreased. This points to a growth and stabilization of leakage paths, which in the first measurements cause the noisy behavior. Given the high leakage of PI, the low leakage of AlF3, and the absence of noisy leakage in bare AlF3, it is natural to attribute the instability to the PI layers in the nanolaminates.

Conclusions

AlF3/PI nanolaminate films were successfully deposited by ALD at a low temperature of 170 °C. AlCl3 and TiF4 were used as the AlF3 precursors, and PMDA and DAH as the precursors for the PI deposition. It was observed that without an elongated purge of 5 min when changing from one material to the other, these processes interfered with each other, destroying the controlled formation of the nanolaminate film structure. The introduction of the elongated purge also reduced the chlorine content of the deposited films. When exposed to the ambient air, the topmost layer of a laminate film shrank; therefore, protective layers should be used for detailed analysis. Dielectric constants of the nanolaminates were 3.8 and higher, thereby matching or exceeding the dielectric constants of AlF3 (3.4) and PI (3.8) alone. This was explained in terms of interface polarization within the nanolaminates, enabled by the leaky characteristics of PI. The AlF3/PI nanolaminates showed low but noisy leakage in the first measurements, and upon repetition of the measurement the leakage level increased and stabilized. The bare AlF3 turned out to be a very attractive low-k candidate material: the k-value was 3.4, and the leakage current density remained below 10−6 A/cm2 up to the breakdown voltage of 96 V. While the combination of AlF3 and PI did not result in the targeted low-k properties, the study adds to our understanding of the characteristics of inorganic/organic nanolaminates and provides reference material for further studies on these.

Conflicts of Interest: The authors declare no conflict of interest.
Survey on Identity and Access Management for Internet of Things

The Internet of Things (IoT) encompasses a large number of connected devices, generating and sharing different types of data among themselves. These data enable the creation of society-changing applications, such as health monitoring and autonomous vehicles. In this context, protecting the access of these connected devices and their data is critical to IoT applications' success, since a single data breach can trigger a cascade effect that leads to devastating consequences. Identity and Access Management (IAM) systems provide mechanisms to identify individuals on the network and determine their access privileges, thus avoiding inappropriate access. However, current IAM systems struggle to provide the scale or manage the complex relationships that IoT brings. In this work, we present a comprehensive state-of-the-art survey of IAMs and the main concepts and challenges when applied to IoT. First, we overview IoT technology, giving its essential characteristics, communication architectures, and main applications. Then, we present the state of the art in IAM and its main concepts, existing architectures, and challenges. Finally, we focus on current IAMs that aim to tackle the IoT complexity, along with an in-depth analysis of their proposals and future directions in the field.

Introduction

The paradigm of the Internet of Things (IoT) has gained ground in the scenario of wireless communications [1]. The idea of having millions of objects interconnected under the control of human beings has paved the way for a diverse set of applications that promise to solve problems that are currently challenging in society, such as autonomous driving, assisted living, and e-health. These are only a few examples of IoT-based applications [2] among a myriad of others. Today, this hyper-connected digital world is quickly becoming a reality. The entire network ecosystem is poised for a massive explosion in the number of connected objects worldwide. For instance, there are more than 25 billion devices connected around the world, and the forecast for 2025 is that this number will surpass the mark of 75 billion [3]. In this scenario, security and accessibility are must-have foundations [4]. The damage that a malicious user can achieve by compromising a connected vehicle, a copier connected to a corporate network, or even a medical device is vast, since IoT devices can act as a "back door" for launching significant attacks, putting a network's infrastructure and people's lives at risk [5]. The security of IoT devices hinges on managing their identities. Access control is critical to the success of the IoT, since most of the information contained in the IoT environment may be personal or sensitive [6]. By definition, an Identity and Access Management (IAM) system plays a fundamental role in assisting the network infrastructure by providing information about user profiles, service features, and access policies. Today, IAM stands as a consistent way to enforce business and security policies, regardless of the users' network entry point. It is also a way to manage users' identities and to identify users and devices in order to control their access to network resources [7]. In short, the core of an IAM comprises policies defining which devices and users are allowed on the network and what a user can accomplish, depending on device type, location, and other factors [8].
To keep pace in this hyper-connected context proposed by IoT, an IAM system must manage more identities than ever before. Instead of managing users' identities and a few devices, an IAM in the IoT era must address all the entities on the network, from devices and systems to individual network users. Thus, an IAM infrastructure must be designed to scale to the number of devices, with each device having a multi-faceted relationship with other network entities. A device needs to communicate with other devices, end-users, and applications, meaning that at every second there is a considerable number of different relationships that have to be controlled, managed, and secured. Since this context deals with a massive number of devices spread globally, we need to quickly provide new identities to devices with the correct access rights, but we also need to be able to update and revoke them just as effectively [9]. In this work, we present a comprehensive survey of IAMs for the IoT. We analyze the main concepts and challenges of this promising area, including new IoT applications and the particular challenges they introduce to current IAM systems. We also point to future directions considering recent work found in the literature. To summarize, we provide the following contributions:

• We explore the fundamentals of IoT and provide an overview of representative applications, main characteristics, and architectures;
• We provide a comprehensive insight into what comprises a digital identity;
• We give an overview of IAM and its architecture, including operations, models, and challenges. We also explain how IoT is influencing the future of IAMs;
• We delve into an analysis of IAM proposals for IoT;
• We give a perspective on the future of IAMs for IoT and a broader perspective of the area considering the recent advancements in the field.

The remainder of this paper is organized as follows: In Section 2, we overview the main characteristics of IoT, together with typical applications and communication architectures. In Section 3, we present the definition and concepts of digital identities, together with an explanation of their lifecycle. In Section 4, we give an overview of IAM, presenting the definition, evolution, and classification of identity management models. We analyze the requirements of IAM for IoT and several proposals designed for it in Section 5. In Section 6, we give a future perspective on IAM challenges in the IoT. Finally, in Section 7, we present the final discussions, as well as a broader perspective of the area.

The term Internet of Things (IoT) has been around for quite a few years and has attracted the attention of researchers worldwide. IoT refers to a type of network that connects "everything" (for example, devices, applications and services, human users, data, communication endpoints, and locations) through some communication technology, such as Bluetooth, Zigbee, WiFi, Cellular, and LoRaWAN [10]. In this network, "everything" is uniquely identifiable and addressable, meaning that each entity can be individually targeted and found on the network. Therefore, we define IoT as a network that allows persons, real-world objects (devices, communication endpoints), and virtual entities (applications, services) to interact with each other over the Internet through their unique identifiers to achieve some goal [11].
In the following, we further characterize and describe IoT in terms of (i) applications, (ii) its main characteristics, and (iii) the main architectures used in IoT systems.

IoT Applications

IoT has gained popularity in recent days due to its capacity to interconnect "everything" and its potential to offer creative applications that deliver substantial benefits to society. It is already possible to find application proposals for almost everything, serving different purposes, from simple home control to autonomous vehicles. In this section, we present and discuss three examples among the variety of IoT-based applications: Smart Home, Healthcare, and Connected Vehicles. These applications illustrate the particularities of IoT and, at some level, are already deployed in today's world or are planned to appear in the coming years.

Smart Home: Initially, the term smart home was used to define an environmental control system, such as lighting and heating control. However, with the introduction of IoT, this term has expanded to cover any device within the house, now including smart TVs, smart thermostats, smart security cameras, and other similar devices. The main idea of a smart home is to have these devices operating together to provide homeowners security, comfort, and convenience [12]. Smart homes demonstrate the potential for the development of a wide range of applications due to the variety of household objects. Burglary prevention, for example, allows homeowners to enhance their house security, performing monitoring even when they are away. A smart motion sensor within the house captures any suspicious movement and sends a notification directly to the homeowner's smartphone. Once notified, homeowners connect their smartphones to the house's smart security camera system to obtain real-time video streams of their home, providing evidence to confirm the suspicious activity. Regarding comfort and convenience, a Smart Gardening application takes advantage of smart sensors to decide whether to increase or decrease the water supply, or even collects data on incoming weather patterns to determine the most suitable course of action [13].

Healthcare IoT: The current healthcare system emphasizes physical visits to hospitals, with the procedures for patient monitoring, care, management, and supervision manually executed by the nursing staff. These constant physical visits may represent a bottleneck that burdens the nursing staff, leading to tragic errors in practice. With the arrival of IoT technology, healthcare providers use IoT medical devices to reduce costs and improve the efficiency of patient care [14]. Given that healthcare is a vast ecosystem, the applications of IoT in this scenario seem to be endless. Diabetes treatment, for example, can be supported by IoT devices. A patient wears a continuous glucose monitor, which monitors the patient's blood glucose levels for several days and takes readings at regular intervals. Those readings feed an insulin delivery device that automatically adjusts the amount of insulin injected into the patient's body, helping them keep blood glucose within the safe range and preventing human errors such as extremely high or low doses [15].

Connected Vehicles: Connected vehicle refers to applications, services, and technologies that connect a vehicle to its surroundings. Thus, each vehicle has several devices connected to other internal or external devices, networks, applications, and services [16].
This connectivity allows the vehicle to share a high volume of transportation-related data, enabling plenty of new applications, such as traffic management, urban and suburban mobility, accident prevention, motorist advisories and warnings, and even autonomous self-driving vehicles [17,18].

IoT Characteristics

IoT creates opportunities for a wide range of applications. Those applications may have their own domain-specific requirements and characteristics. However, some IoT characteristics are generic enough to arise in a variety of domains [19,20]. Next, we discuss the most significant IoT characteristics described by Patel et al. [19].

Scalability: IoT devices generate data that feed several applications. For example, several connected vehicles track road-level information and deliver it to a government server. This information feeds an application that allows traffic-efficiency visualization, helping traffic engineers plan safer and more efficient traffic flow. As the number of connected devices composing the IoT increases each day, the amount of data IoT devices generate tends to grow. Hence, an IoT architecture must be capable of handling this massive expansion of devices and data [20].

Heterogeneity: IoT connects different devices with different capabilities, complexity, and vendors. Devices in IoT rely on various hardware platforms and networks, interacting with other devices or service platforms through different networks. Hence, an IoT architecture must support network connectivity between heterogeneous devices and networks [20].

Connectivity: From an idealistic viewpoint, IoT comprises a global-scale network, with thousands or even millions of devices simultaneously connected to the Internet. However, the Internet is composed of heterogeneous networks and communication technologies. Emerging IoT applications, such as healthcare and connected vehicles, demand ubiquitously available Internet connectivity. Thus, to enable this connectivity, IoT must consider a wide range of protocols and communication technologies, such as LPWANs, Bluetooth, Cellular (3G/4G/5G), Zigbee, 802.11, and RFID, among others [19,20].

Dynamicity: IoT devices are exposed to rapidly changing surroundings, experiencing different connections and disconnections throughout their lifetime. In general, many IoT devices are low-energy and lightweight, meaning that most of them rely on batteries as their primary power resource. Thus, if a device's battery dies, the network experiences a disconnection. Furthermore, mobile devices are fundamental to creating several IoT applications, such as self-driving vehicles. These devices are subject to rapid network changes, experiencing connections and disconnections with several devices while on the move. Thus, an IoT device must dynamically adapt in response to constant changes, meaning that it needs to reconfigure itself to avoid disruptions to applications [21,19], ensuring seamless connectivity and management without human interaction.

Privacy and Safety: In several applications, such as self-driving vehicles and personal glucose monitors, IoT devices have a role in generating, processing, and sharing private and safety-critical data. However, most of them lack battery and computational power. Consequently, the devices spend most of their resources executing their primary function, leaving only a few resources for privacy and safety functions.
This lack of resources makes establishing defense techniques against attacks in IoT much more complicated than in conventional information systems [22], since traditional security methods tend to be expensive for IoT in terms of energy consumption and processing overhead [23]. Besides, most traditional security approaches demand a highly centralized architecture, which may not be suited for IoT due to the difficulty of scaling and the mobility pattern of several IoT devices [23].

IoT Architectures

There is no single architecture of IoT that is widely accepted by the scientific community; hence, several works focus on proposing new ones. However, the most accepted architectures in the literature are the 3-layer architecture and the 5-layer architecture [24].

3-Layer Architecture: Introduced in the early stages of IoT research, this architecture defines three layers: the perception, network, and application layers. The perception layer is the physical layer, whose purpose is to collect data, to identify the interacting entities, and to perform actions using specific equipment such as sensors, actuators, and readers [25]. The network layer is the core layer of the IoT, and it is responsible for transmitting the information gathered by the perception layer with the use of wired/wireless networks and the Internet. Finally, the application layer is responsible for delivering application-specific services to the user. It defines various applications for IoT deployment [25].

5-Layer Architecture: Several works present new proposals for IoT architectures [26]. One of the most accepted is the five-layer architecture, which expands the traditional three-layer architecture by including two new layers: the processing and business layers. The five layers are the perception, transport, processing, application, and business layers. The roles of the perception and application layers remain the same as in the three-layer architecture. The transport layer is responsible for transferring the collected data from the perception layer through wired/wireless networks. However, instead of transferring data to the application layer, the data are sent to the processing layer. The processing layer analyzes, stores, and processes the data that come from the transport layer and can also manage and/or provide a diverse set of services to the lower layers. It employs many technologies, such as databases, cloud/fog computing, and big data processing modules. Finally, the business layer has the purpose of managing applications, business, and profit models of IoT [27,25].

Identities for the Internet of Things

IoT introduces the concept of a world where everything becomes interconnected, allowing "things" to communicate. This communication creates opportunities for a wide range of applications; however, to be securely connected, "everything" first needs to be identified. Therefore, we can define identity as the digital representation of the information known about a specific person, device, or service in IoT. Since IoT covers this wide variety of "things", we define humans, devices, and services as subjects of identities. Thus, an identity is a digital representation of a subject made by itself or by another subject [28]. We can use the identity information for different purposes, ranging from allowing a subject to prove his, her, or its claim to an identity to establishing permissions that enable interaction with other subjects [7,29,30].
In this section, we first overview these identities, presenting the basics of the notion of identity and its components. Second, we discuss the life cycle of these identities, describing what happens from the creation until the revocation of a subject's identity.

Components of an Identity

An identity is a digital representation of the subject, where this representation is done through a set of claims [28]. These claims are referred to as identity attributes and present the characteristic elements of their subject. Each identity is exclusively associated with a subject; however, the opposite does not hold. A subject can have more than one mapped identity, where each identity encompasses the attributes valued within an application context. We call these multiple identities partial identities of the subject and denote the complete identity as the set of combinations of these partial identities that are used for specific application contexts [30]. Furthermore, identities can be permanent or temporary, depending upon the application's context [31]. For example, a subject may have a permanent identity and another as a company's interim accountant. Figure 1 shows an example of a subject (Alice) with more than one identity, each representing her in a different context. As an employee, Alice's identity consists of a series of attributes that indicate her role inside a company, describing her name, job title, job category, and any other attribute that the company needs. As a user of music streaming applications, Alice is represented by other types of attributes, such as name, gender, and favorite music genre. According to the ITU-T recommendation (Y.2720) [32], an identity is composed of three different components: identifiers, attributes, and credentials. An identifier is a series of digits, characters, symbols, or any other form of data used to index one identity in some context. Attributes are pieces of data bound to an identity that specify a characteristic of the subject owning that identity, such as a condition, quality, or other information associated with that subject. Credentials are a set of data that the subject can use to claim its identity. Therefore, credentials link the subject with their identity in a process called authentication, whose details we discuss in Section 4.

Identity Lifecycle

We define an application as any service or group of services available for registered subjects. These applications can utilize the Internet or other network hardware infrastructure to perform useful functions, such as data sharing. When a subject is registered in some application, we refer to it as a user of that application. A lifecycle is the definition of phases that identify an object's status over a period. Therefore, an identity lifecycle determines the status of a user's identity within an application context. As shown in Figure 2, identities have a generic lifecycle framework that is applicable regardless of the application, and it is composed of five phases: provision, propagation, usage, maintenance, and de-provisioning. We describe all phases in the following subsections, defining their role in the identity life cycle. Each application context may have its own identity lifecycle, and planning each phase is essential to building an identity architecture; a minimal sketch of the identity record and its lifecycle states follows.
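As a concrete, non-normative illustration, the sketch below models the ITU-T Y.2720 components (identifier, attributes, credentials) and the five lifecycle phases described above. It is a minimal Python sketch; all class and field names are illustrative and not taken from any standard's API.

from dataclasses import dataclass, field
from enum import Enum, auto

class LifecyclePhase(Enum):
    """The five generic phases of an identity lifecycle."""
    PROVISIONED = auto()    # identity created for a subject
    PROPAGATED = auto()     # replicated to the applications that rely on it
    IN_USE = auto()         # used for verification and access control
    MAINTAINED = auto()     # attributes/credentials updated, then re-propagated
    DEPROVISIONED = auto()  # disabled or deleted at the end of life

@dataclass
class Identity:
    """An identity per ITU-T Y.2720: identifier, attributes, credentials."""
    identifier: str                                   # indexes the identity in some context
    attributes: dict = field(default_factory=dict)    # claims about the subject
    credentials: dict = field(default_factory=dict)   # data used to prove ownership
    phase: LifecyclePhase = LifecyclePhase.PROVISIONED

# A subject may hold several partial identities, one per application context:
alice_employee = Identity("emp-1042", {"name": "Alice", "job_title": "Accountant"})
alice_streaming = Identity("user-77", {"name": "Alice", "favorite_genre": "Jazz"})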
Briefly, an identity starts out being provisioned, or created, for a subject. Once created, the subject becomes a user of that application, and its identity is propagated to any application that utilizes this user's identity information (details in Section 4). Once propagated, applications utilize the identity, and occasionally some identity changes, such as credential changes or attribute additions, may occur. In these cases, identity updates force the identity to propagate again. Finally, when this identity has served its purpose and is no longer needed, it is deprovisioned or destroyed [33].

Identity Provision: This term represents the creation of an identity for a subject, a step before becoming a user of the application. Identity provision therefore creates a unique identifier, credentials, and the record of the subject's attributes. These attributes can be, for example, location, email, and attributes specific to an application context. Some applications require an explicit identity provision for their users. In those cases, the users, who are usually persons, register themselves in an application, sending the attributes and the identity proof to be used as credentials. The application checks the authenticity, validity, and accuracy of these attributes before establishing a link between subject and identity [34]. However, there are cases where the identity of the subject is not explicitly established. In those cases, it is possible to construct a digital identity of the users based on the collection of various network attributes used in various contexts. Most of the time, the subject is not aware of this implicit way of constructing a digital identity [35].

Identity Propagation: Some applications require that pieces of identity propagate to other systems. The objective of this replication is simple: applications may replicate the identity for better performance, lower cost, or as a simple failure-defense mechanism. More complex applications may require a unified identity directory, where an identity created by one application may be used in another application. Ideally, a propagation must occur after each change in an identity, and the propagation must occur in a reliable way to avoid problems of safety and consistency [33].

Identity Usage: This is the most straightforward phase of the identity lifecycle. During this phase, several applications and users use this identity to perform identity verification, which can determine whether a user's identity is legitimate, and access control operations, which allow a user to perform actions over resources [33].

Identity Maintenance: In general, identities are not static, and attributes and credentials may experience several changes during the identity lifecycle. For example, a base user characteristic may change over time, and the identity must follow up on this change; or, as another example, the application must support new business opportunities, requiring a complete change of the identity by adding new attributes. Independent of the factor that motivates this change, after its completion, the results must be propagated to all affected applications [36]. Due to the dynamic nature of IoT, this is one of the most costly phases of the identity life cycle.

Identity Deprovision: Removing identities at the end of their lifecycle is just as crucial as provisioning them. Deprovisioning is the process that enables the application to know which users are no longer valid [37]. The most straightforward approach to identity deprovision is the complete removal of a user's identity.
However, deleting an identity means removing all tied information, which might still be necessary for auditing. Thus, applications opt to disable identities instead of deleting them. In those cases, the applications revoke the identity's credentials, meaning that the identity still exists but is no longer linked to a user and does not have any access rights associated with it. Hence, the application minimizes the loss of information and stays secure; however, the cost of maintaining the information of disabled identities may be burdensome. For this reason, some applications utilize a hybrid approach: they implement a mechanism that disables an identity first but deletes it only after a time interval. This approach aims to combine the benefits of both previously mentioned approaches; however, it keeps the drawback of the first one after the deletion [38].

The State-of-the-art in Identity and Access Management

Identity and Access Management (IAM) is a system for managing the life cycle of users' digital identities [39]. It encompasses provisioning and de-provisioning identities, identity authentication, and authorization of a user to access services. In a nutshell, the main goal of identity and access management is to ensure that only authenticated users have access to specific services [40]. Figure 3 gives a complete overview of the identity and access management topics that we address in this survey. We start by providing a brief history of these systems, followed by the authentication and authorization methods and, finally, the identity and access management models most common today.

A Brief History of Identity and Access Management

IAM started as a simple security solution to check the identity of persons accessing a specific application service [41]. In this early era of IAM, the group of users of an application was mainly composed of persons, meaning that IAMs focused on a human's identification. When accessing an application service, a person must first register herself on it. When she completes registration, the IAM creates her identity in that service. Later, to prove her identity, the user must memorize a username and password. This identity represents that person on the service and is used to determine what she can do. When a user wants to use that service again, she proves her identity through login, inputting her username and password [7]. In this early period, the IAM system was exclusively attached to one specific service, and this exclusivity meant that each service had to include its own isolated IAM. If a user wanted to use another service, she had to register herself at that service and go through the login again, repeating the whole process of inputting a username and password. Although sufficient to guarantee service access security, the increasing number of services may become a burden to the user, since, with several IAMs, the user must memorize a username and password for each service, making this task difficult and annoying over time [42]. To solve this problem, IAMs were detached from a single service and began to serve multiple services. Hence, a user has a single identity to represent her across a wide range of services. From the user's point of view, she has to log in only once to access various services, which makes the password memorization process much more straightforward than before [43]. Under the same IAM, a user's identity may be the same no matter which service she logs in to.
This feature enables identities to carry valuable information, such as preferences, location, and history of activities. As a result, services have gained the ability to verify the identity of users and make personalized decisions based on their behavior [44]. However, not only is the user experience enhanced, but service security also benefits, since IAMs can instantly spot fraudsters based on their past activities. By detecting anomalies, an IAM can, for instance, identify scammers who are impersonating legitimate users [45]. With the deployment of IoT, IAM and its concepts are once more put to the proof. In this context, the term "user" does not cover just persons interacting with services; it covers a wide variety of things, ranging from objects to individuals communicating and exchanging data among themselves. Therefore, having an identity is very important to allow IoT devices to determine "who or what they are communicating with" and whether "she or it has the right to communicate" [46].

Authentication, Authorization, and Auditing

IAM must integrate policies and technologies to enable managing user information and controlling user access to online services. Since identities contain attributes of a user (both humans and non-humans), IAM must provide access for those users while preserving confidential personal and business information from unauthorized users [47]. In a simplified way, Identity Management is about managing identifiers and attributes related to identity, while Access Management is about evaluating these attributes based on policies and making decisions. Therefore, considering an IAM, three main operations are enumerated: Authentication, Authorization, and Auditing [48]. Figure 4 illustrates a coherent picture of these three operations and their interactions, showing a situation in which a user wants to access an available network service. To access this service, the user must go through the authentication and authorization operations. The auditing operation, in turn, supervises this whole process, creating a log for each output of the authentication and authorization operations. In Figure 4, the first barrier in controlling access to network services consists of verifying that a user who wants to access a service is who she claims to be. The main idea of the authentication operation is that each user has some unique information that sets him or her apart from other users. During authentication, a user provides credentials for the claimed identity, ideally known only by herself. As a result, the IAM matches the identity's credential to the credential offered by the user and concludes whether that user is the rightful owner of the chosen identity [48]. Once the system finalizes user authentication, her access still depends on the rights that the user has. Therefore, the authorization operation is a process of granting or denying the user access to some services based on a set of rules. In most systems, not all users should have the same rights to perform specific actions. Therefore, the access control model is crucial to protect specific services from unauthorized access. An access control model determines which user, and under which conditions/policies, can access a service [49]. Auditing means monitoring user activity, recording every action taken, from the authentication to the events that follow granted user authorization. With auditing, the system keeps track of identity activity, such as authentication results and which services have been accessed. Keeping track of users and their activities serves many purposes. For example, tracing back the events leading up to a security incident can prove very valuable to a forensic analysis and investigation case [50]. The overall flow of these three operations is sketched below.
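The following minimal sketch illustrates the authenticate-authorize-audit flow of Figure 4. All function and variable names are illustrative, and the credential check is deliberately simplified (a user-id-prefixed hash; real systems use salted, slow password hashes).

import hashlib

AUDIT_LOG = []  # auditing: one record per authentication/authorization outcome

def authenticate(identities: dict, user_id: str, credential: str) -> bool:
    """Match the offered credential against the stored one."""
    stored = identities.get(user_id)
    offered = hashlib.sha256((user_id + credential).encode()).hexdigest()
    ok = stored is not None and stored["credential_hash"] == offered
    AUDIT_LOG.append(("authn", user_id, ok))
    return ok

def authorize(identities: dict, user_id: str, service: str) -> bool:
    """Grant or deny access based on the rights attached to the identity."""
    ok = service in identities[user_id]["allowed_services"]
    AUDIT_LOG.append(("authz", user_id, service, ok))
    return ok

def access_service(identities: dict, user_id: str, credential: str, service: str) -> bool:
    """Figure-4 style gatekeeper: authentication first, then authorization."""
    return authenticate(identities, user_id, credential) and \
           authorize(identities, user_id, service)

ids = {"alice": {"credential_hash": hashlib.sha256(b"alices3cret").hexdigest(),
                 "allowed_services": {"music"}}}
print(access_service(ids, "alice", "s3cret", "music"))    # True
print(access_service(ids, "alice", "s3cret", "banking"))  # False, logged in AUDIT_LOG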
In a nutshell, it is essential to point out the clear distinction between authentication and authorization. Recognizing the identity of a user is the responsibility of the authentication operation. Authorization assumes that authentication has been done correctly and applies the access control model. Therefore, the effectiveness of authorization rests on proper authentication and on the correctness of the access control model. Auditing concerns the post-analysis of all the requests and activities of both the authentication and authorization operations requested by a user on the system [49]. Therefore, auditing is useful to discourage attempted violations, analyze user behavior, and track possible flaws in the access control model.

Authentication Methods

Authentication describes the process of verifying a user's ownership of an identity. However, such proof comes in different ways, and it is associated with the identity's credentials. There are various types of credentials, usually classified as knowledge, possession, inherence, and context-aware factors [51,52]. Except for the inherence factor, all authentication methods are valid for both human and non-human users.

Knowledge factor credentials describe a piece of information that the user knows. Usually referred to as "something that the user knows", this method is widely used today. For persons, the standard username and password authentication process is a typical example of this category of credential. Despite its popularity, this authentication method has the drawback of depending on the user's memorization capacity, meaning that the user should never forget the passwords, since without them, the user loses access to her identity. For IoT devices, the device stores the password, which raises a series of security concerns, for example, the challenges for a human user when managing a large number of device usernames and passwords, and the complexity of securing the passwords stored on the devices.

Possession factor credentials are described as "something that the user must have in their possession" to proceed with the authentication. When the user is a person, the most common possession credentials include a one-time password (OTP) generator, an ID card, or a smartphone. Taking into account that possession factor credentials can be lost, this type of credential faces the same challenges and problems as the knowledge factor credential. For IoT devices, secrets stored in the device prove the possession factor credential. In this case, a device stores a piece of information that the system recognizes as reliable proof of identity. For example, an IoT device can establish a symmetric key cipher with another party, meaning that both need to have the same key acting as a proof of identity.

Inherence factor credentials are exclusive to human users and consist of biological characteristics, such as voice and fingerprint, used in the authentication process.
Referred to as "something that the user is", this factor has the benefit that it cannot be lost; however, it can be temporarily unavailable due to, for example, damaged fingerprints or even common throat problems like hoarseness. In general, most biometrics constitute public information, making this factor prone to replication by malicious users, for example, via a fake fingerprint or a voice recording. As a result, this kind of credential is not secure to use alone, and several works point out that it must be used in conjunction with other factors.

Context-aware factor credentials are often suggested as complementary credentials that increase the robustness of the authentication method [53]. For human users, they consist, for example, of verifying the user's location using GPS devices combined with the time, which enables an accurate confirmation of the user's identity. For devices, the system confirms the identity using behavior or characteristics, such as geographic location and communication technology [53]. For device authentication based on a shared symmetric key, a typical realization is a challenge-response exchange, sketched below.
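As an illustration of the possession factor for devices mentioned above, the sketch below shows a standard HMAC-based challenge-response over a pre-shared symmetric key. It is a generic textbook pattern, not a protocol prescribed by the surveyed works, and all names are illustrative.

import hmac, hashlib, os, secrets

SHARED_KEY = os.urandom(32)  # pre-shared symmetric key, provisioned on both sides

def issue_challenge() -> bytes:
    """Verifier side: a fresh random nonce prevents replay of old responses."""
    return secrets.token_bytes(16)

def respond(key: bytes, challenge: bytes) -> bytes:
    """Device side: prove possession of the key without ever sending it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier side: recompute the expected tag and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_challenge()
print(verify(SHARED_KEY, nonce, respond(SHARED_KEY, nonce)))  # True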
Authorization Methods

The success of an authorization method is related to the access control model, which determines a set of requirements that a user must meet before accessing a service. Diverse access control models have been proposed for use in IoT systems, such as Discretionary Access Control (DAC), Mandatory Access Control (MAC), Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Capability-Based Access Control (Cap-BAC). Each of them has its upsides and downsides.

Discretionary Access Control (DAC) [54] is an access control mechanism that was initially developed for operating systems and later transposed to the IoT context. In DAC, the owner of an IoT device specifies which users can access it and defines access rules, such as which operations are valid and at which hours other users can access it. Several approaches have been proposed to implement DAC: the access matrix, the authorization table, and the access control list (ACL). In general, for a single device, this approach gives the owner full control, identifying who can access it and under which conditions the operations are accessible. However, if the user owns a high number of devices, the lack of centralized administration can make the design of access conditions complex and the auditing process complicated [55].

Mandatory Access Control (MAC) [56] is an access control model based on the classification of all entities in the IAM. In this model, each user (both human and non-human) and service has a security label, which reflects the sensitivity of the information that they can access or generate. The security label reflects the user's trustworthiness not to disclose sensitive information. To function correctly, MAC models put several restrictions in place to limit label changes, allowing only a limited set of human users to modify object security labels. For this reason, MAC models are difficult and expensive to implement and maintain, particularly in dynamic scenarios that require more flexibility; for example, in a patient's emergency, a healthcare application must lower the security of the user data to provide a faster response. However, if the application utilizes MAC, only a few persons can change the security label, which can put the patient's life at risk, since the healthcare professionals cannot receive this information in time [55].

Role-Based Access Control (RBAC) [57] is one of the most used access control models. Each user of the application has a role, and this role determines which services and operations she can access. Thus, users are assigned to roles and inherit the permissions assigned to those roles. Roles can also be organized in a role hierarchy, allowing a role to inherit the permissions of other roles. In general, this model provides effective access management, but it defines only predefined static roles. The definition of these roles is highly dependent on a centralized entity, and depending on the complexity of the application, the number of roles can rapidly increase, turning this method unfeasible and potentially cumbersome for the IAM [58].

Attribute-Based Access Control (ABAC) [59] is similar to RBAC but more flexible. Instead of defining a role, a set of policies tests attribute conditions, allowing or denying access to some service. This strategy provides a fine-grained access control model. However, there are open questions around the ideal number of policies and their evaluation [58]. These questions become more complicated when the attributes can come from multiple resources (for example, access that depends on two users' identities) and can change over time, leading to safety and consistency problems [36].

Capability-Based Access Control (Cap-BAC) [60] is an access control model based on tokens that contain the rights granted to the user who holds them [61]. Thus, during the authentication process, tokens are created by the IAM and sent to the user. Each token directly identifies the target services, the user to whom the system has granted the rights, and the operations allowed. Consequently, the user needs to show this token to the service before requesting an operation. The main disadvantage is the need for the IAM to create and maintain all tokens, which also involves determining the policies for token creation [62]. A minimal sketch contrasting the RBAC and ABAC checks follows.
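To make the contrast concrete, here is a minimal, hedged sketch of an RBAC check versus an ABAC policy evaluation; the roles, attributes, and policies are invented for illustration only.

# RBAC: permissions hang off roles; a user inherits what her role allows.
ROLE_PERMISSIONS = {
    "nurse":  {"read_vitals"},
    "doctor": {"read_vitals", "adjust_insulin_pump"},
}

def rbac_allows(user_role: str, operation: str) -> bool:
    return operation in ROLE_PERMISSIONS.get(user_role, set())

# ABAC: policies are predicates over attributes of the user, resource, and context.
ABAC_POLICIES = [
    lambda a: a["department"] == a["resource_department"],  # same-department rule
    lambda a: a["hour"] in range(6, 22),                    # daytime-only rule
]

def abac_allows(attributes: dict) -> bool:
    return all(policy(attributes) for policy in ABAC_POLICIES)

print(rbac_allows("nurse", "adjust_insulin_pump"))  # False: not in the nurse role
print(abac_allows({"department": "cardiology",
                   "resource_department": "cardiology", "hour": 14}))  # True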
Classification of Identity and Access Management

Identity and Access Management is continuously reshaping itself to follow technology changes. In the first generation of IAMs, there was no separation between the entities offering services and those offering identity information; therefore, each service managed the identity of its users in total isolation [63]. However, over time, this model became outdated for some applications, and new models emerged. In this survey, we define that an IAM falls into one of five basic models: the isolated, centralized, federated, user-centric, and self-sovereign models. In the isolated, centralized, federated, and user-centric models, the user always relies on third-party identity providers to store and share her data. In the self-sovereign model, by contrast, a user has all her digital identity data stored and managed by herself, allowing her to share her information selectively with selected SPs. Figure 5 illustrates this classification, showing the evolution of IAM models across time, from the isolated to the self-sovereign model. For each model, we present in the following subsections the main features, benefits, and limitations.

Isolated Model: This model evolved strongly with centralized computing. Under this model, a user who wishes to access a service offered by an SP must first prove her identity. This identity holds the user's attributes, and the SP manages these attributes in isolation. Therefore, the main characteristic of this model is that the SP assumes the responsibility of an IdP, managing and storing all of its users' identities and attributes [64]. Figure 6 shows an example of an isolated model. As shown, a user registers herself on two SPs from which she wants to get services. For each SP, the user has a unique identity with her attributes, and each identity has unique credentials. Since there is no cooperation between these two SPs, each SP assumes the role of its own IdP, managing the user's identity in total isolation from the other. The simplicity of this model comes with some drawbacks. Managing identities in the local scope is straightforward, which allows simple authentication, authorization, and accounting. However, scalability problems become apparent as the number of identities grows. Since a user has a different identity on each SP where she is registered, this model can induce reused credentials or make some of them be forgotten [65]. Besides, this model can wound the privacy of the registered user, because every service holds her identity with all required attributes. Finally, due to the isolation, changing an attribute shared among two or more identities compromises the propagation of this change, since, in this model, identities are spread across different SPs.

Centralized Model: This model does not confine the scope of identity to a single service; instead, the network of services that create the identity is responsible for the binding. Therefore, this model introduces a central IdP, which is the identity authority that centralizes IAM. Once an identity is established, a user can use any SP attached to that IdP without having to engage in the authentication process explicitly. This concept, known as Single Sign-On (SSO), allows a user with one unique identity to access multiple services [64]. Figure 7 shows one example of the centralized model. Two SPs agree to delegate this task to a centralized IdP instead of each being its own IdP. Hence, when the user registers herself with that central IdP, she has a unique identity that grants her access to all SPs depending on that same IdP. Then, differently from the isolated model, the user can access both SPs using the same identity. With this centralization, the user only needs to memorize one identity and its credentials instead of multiple identities and credentials. However, this centralization is a doubtful advantage, because if one identifier with its associated credential is compromised, all services that the identity can access are also compromised. Furthermore, the centralized aspect of this model does not solve the problem of scalability for a large number of users or services.

Federated Model: The concept of federation represents the relationship between two or more organizations that have identity infrastructure capabilities [63]. In this model, a group of IdPs and SPs is bound together to form a federation.
In this case, governed by a set of commercial agreements and a common technology platform, a user participating in one organization can directly access SPs at another organization. As a result, a user participating in the federation gains an extended set of services without needing to manage her identity at the other organizations. In other words, this model allows several identity authorities to divide the power of a single one [66]. As in the centralized model, SSO is also available within the federation: a user authenticates a single time with a single IdP, all IdPs of the federation consider that user authenticated, and the user can access all SPs that are members of the federation.

In its pure form, in an identity federation, a user only needs one identity profile, generally at her home organization. In practice, however, an identity ends up spanning multiple IdPs participating in the federation. This spanning occurs because, as much as avoiding redundancy is at the core of federation design, an IdP sometimes still needs to replicate an identity for its internal management, for better performance, or to reduce costs and the risk of failures [63,36]. Since identities can carry valuable information, several regulations concern privacy protection and identity disclosure. Performing identity replication carelessly therefore leads to security and privacy problems, and it is essential to state the purpose of each identity replication. One approach to fulfilling privacy and security requirements is the use of partial identities with pseudonyms [67]. In this approach, a user's identity is replicated only with the essential attributes of the original identity. The pseudonyms are new identifiers for this replicated identity, which helps to achieve something close to anonymity for the new identity. For this to work, it is necessary to enforce proper management of these pseudonyms, keeping the linkage between an identity and its pseudonym private [68]. However, current federated models lack an effective mechanism to keep users' information consistent when an identity is modified or revoked [69].

Figure 8 presents a typical federated model, where two organizations establish a federation, meaning that they agree on a set of agreements, standards, and technologies that enables each of them to recognize an identity from the other organization. Thus, the user identity, which was previously registered by organization 1, is now accepted by organization 2. The user can now authenticate a single time with her IdP and gain access to all SPs that are members of the federation. In this example, if the user wants to invoke a service from an SP in organization 2, the IdP of organization 1 authenticates her and sends a claim-like message to the IdP of organization 2: "I am the IdP of organization 1, and I authenticate that user". The IdP of organization 1 then creates a pseudonym identifier linked to the true identity and shares this identifier with the IdP of organization 2, ensuring that the user may use the SP of organization 2 without disclosing her true identity. Through this pseudonym, both IdPs agree that they are referring to the same user.
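One plausible way to derive such pseudonyms (our own sketch, not a mechanism mandated by any federation standard) is to compute a keyed hash over the user identifier and the relying organization, keyed by a secret held only by the home IdP:

```python
import hashlib
import hmac

# Hypothetical pairwise-pseudonym derivation at the home IdP.
# Only the home IdP knows IDP_SECRET, so only it can link a pseudonym
# back to the real identity; each relying organization sees a stable
# but different identifier for the same user.
IDP_SECRET = b"secret known only to organization 1's IdP"

def pairwise_pseudonym(user_id: str, relying_org: str) -> str:
    msg = f"{user_id}|{relying_org}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()

print(pairwise_pseudonym("alice@org1", "org2"))  # stable for (alice, org2)
print(pairwise_pseudonym("alice@org1", "org3"))  # unlinkable to the above
```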
In all cases, only the IdP of organization 1, which assigned the pseudonym, stores the user's real identity with all attributes associated with it.

The federation model introduces the idea of offering a set of SPs to the user with only a single identity. However, it is unrealistic to assume a global federation encompassing all SPs. Thus, the number of identities that a user needs to manage continues to rise, since the user may have an identity in several federations. Several federated standards are available that enable IdPs to exchange identity information. Among the most popular are the Security Assertion Markup Language (SAML), Open Authorization (OAuth), and OpenID.

Security Assertion Markup Language (SAML) [70] was developed by the Security Services Technical Committee of OASIS. It is an XML-based framework that enables IdPs to transmit user authentication, entitlement, and identity attributes. In a nutshell, this standard allows two organizations to select and share identity attributes expressed in XML. In a typical SAML scenario, the user accesses an SP outside her organization; this SP creates an authentication request and redirects the user to the IdP where she is initially registered. This IdP authenticates the user and returns a SAML response attesting to the user's authenticity. The SP verifies the SAML response and, finally, authenticates the user.

OpenID [71] is a decentralized framework for federated (and user-centric, explained in the following subsection) IAM, in which a user can access different SPs over the Internet through a single digital identity. To access an SP, the user must first have an account with any OpenID IdP. After the authentication process, the IdP sends the user a global identifier in URL format. The user can then use this URL to request services from any SP compatible with OpenID. Under the OpenID framework, the SP is entirely dependent on the IdP for user authentication, which means the SP has no authentication method of its own to verify the user's identity. In particular, the SP is not able to generate a new unique URL to replace the current one whenever a service is requested. Unfortunately, since the user's URL is sent across different SP requests, someone could acquire the authentication URL through a man-in-the-middle attack and gain unauthorized access to an SP. To make things worse, the URL used to identify the user on OpenID is recyclable, meaning that one identifier may become associated with multiple users over time, again opening the possibility of unauthorized access to an SP.

OAuth [72], unlike OpenID and SAML, is made exclusively for authorization purposes. In a nutshell, with OAuth, the user can delegate authority to a third-party service (a service hosted at another organization) through tokens, which allows that service to accomplish authorized tasks on behalf of the user. To achieve this, OAuth defines four roles: resource server, resource owner, consumer, and authorization server. The resource server is the host where the user's identity is protected. The resource owner is the user who owns the data and authorizes a service to access her identity, within the limits imposed by the granted authorization.
The consumer is a service that wants to access the resource owner's identity; however, it must be authorized by the user, and this authorization must be validated. The authorization server is the entity that authorizes the consumer to access the resources available on the resource server. In a typical use case, a service requests authorization to access some information at the resource server. If the resource owner authorizes this request, the service receives an authorization token, which determines its authorization limits. The service then requests access from the authorization server by sending its identity and the authorization token. If both are valid, the authorization server creates an access token for the service, completing the authorization. This token conveys the authorization and, when sent to the resource server and found valid, yields the resource to the service.

User-centric model: This model is the first to introduce a system that supports identity management on the user side. Instead of managing several identities, the user has a personal tamper-proof device that stores several identifiers and credentials, together with the IdPs that provide them. This device acts as an IdP selector and contains a portfolio of identifiers and credentials from different IdPs. This approach opens up the possibility that a user only needs to manage her identity through her personal IdP selector. Once authenticated with the IdP selector, the user lets the selector handle the authentication with external IdPs. In this model, the user needs to explicitly approve each usage of her identity, meaning that information cannot be disclosed to a third party without her permission. In general, the centralized, federated, and user-centric models place trust in the IdP, transferring to it control over the identity. An IdP therefore becomes a large datastore of personal information, holding all types of data about users. Figure 9 illustrates a user-centric model. In this example, both IdPs register the user, who consequently has access to both SPs. Instead of memorizing the identifier and credential for each IdP, this task is delegated to the IdP selector, which manages the authentication process automatically. Once the user authenticates herself to the IdP selector, she can enjoy an SSO experience without any agreement between the two IdPs.

Self-sovereign model: Self-sovereign IAMs rely on distributed ledger technology (DLT), which, in essence, is a technological infrastructure and set of protocols that allow the recording and sharing of data across a distributed network of participants. This data can be recorded, shared, and synchronized in an immutable manner across the network, without the need for a central coordinator [73]. In short, DLT provides control over the evolution of data between users through a peer-to-peer network, usually using consensus algorithms to ensure replication among the nodes of the network [74]. The DLT data structure allows the creation of a tamper-proof ledger of transactions, and the ledger remains consistent across the network. All participants can view all data recorded on the ledger, which is composed of cryptographically linked "blocks", digital pieces of information.
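To illustrate what "cryptographically linked" means in practice, the following minimal, self-contained sketch shows a hash-linked ledger. It is a toy model of the general idea, not the data structure of any specific DLT; real ledgers add consensus, signatures, and Merkle trees on top.

```python
import hashlib
import json

# Toy hash-linked ledger: each block commits to the previous block's
# hash, so altering any past block breaks every link after it.

def block_hash(block: dict) -> str:
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify_chain(chain: list) -> bool:
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, "attestation: claim #1")
append_block(chain, "attestation: claim #2")
print(verify_chain(chain))       # True
chain[0]["data"] = "tampered"    # try to rewrite history...
print(verify_chain(chain))       # False: the link to block 0 breaks
```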
The security of the DLT comes from the fact that, once a block is created and appended to the blockchain, it is not possible to change or revert the transactions in that block [75,76]. Initially focused on the financial sector [77], DLT quickly spread to several fields, eventually becoming the key enabler of the self-sovereign IAM [74]. This model emerges from the idea of letting users store their own identity data, removing any centralized control by an identity authority. Instead of depending on an IdP, users become their own IdP, meaning that they store and manage their own attributes. In the self-sovereign model, users are in control of their identity and do not rely on a central authority for this purpose. For this to work, identity information must be provided efficiently to the services that need to validate it, must reside in a trusted environment, and must not be owned or controlled by anyone [78].

For security and privacy reasons, putting any personal data on the ledger is not a good approach, since the ledger is immutable: it is not possible to alter or delete any data written to it. Therefore, instead of sharing raw attributes, this model uses the DLT to share a set of claims, proofs, and attestations. The model operates through zero-knowledge proof methods, which allow one user to prove to another that they know specific information or meet a specific requirement without exposing the actual information supporting that proof [78]. In the self-sovereign model, three entities are necessary: the Identity Owner, the Identity Proofer, and the Identity Verifier. The Identity Owner is a user with complete control over her identity; when she wishes to share some data with someone else, she makes public the appropriate information. The Identity Proofer is responsible for attesting to the validity of the data claimed by the Identity Owner. The Identity Verifier validates the entity that makes a specific claim and the entity that attested to this claim. It is essential to point out that claims made by identity owners can be self-asserted or asserted by another entity whose authenticity can be independently verified by a relying party [78]. We believe that, with the popularization of the self-sovereign identity model, the organizations that currently act as IdPs will have their role redefined as "Identity Proofers": instead of storing and managing identities, they will be used only to identify an identity owner and to attest to any claim that the identity owner makes. Through attested data, both Identity Owner and Identity Verifier experience a safer way to check attributes, since the Identity Owner does not share any unnecessary data and the Verifier does not store sensitive data [79].

Figure 10 illustrates the entities of this model. Initially, the user registers herself on the DLT, creating a self-generated identification number. This identification number is unique, and no other user knows it. To access some services, the user must make claims about specific values she knows. In this example, to access the SP, the user must have the value of "Attribute 1" greater than X. Both SP and user agree to treat the IdP as a trusted party.
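The exchange that follows rests on ordinary digital signatures. As a sketch under our own simplifying assumptions (the field names are hypothetical, the predicate is sent in the clear rather than wrapped in real zero-knowledge machinery, and the third-party `cryptography` package is assumed to be installed), the proofer signs the claim and the verifier checks it against the proofer's public key:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical signed attestation: the proofer signs a predicate over
# the owner's attribute ("attribute_1 > X") rather than the raw value.
proofer_key = Ed25519PrivateKey.generate()
proofer_pub = proofer_key.public_key()

claim = {
    "claimer": "did:example:owner-123",   # self-generated identifier
    "predicate": "attribute_1 > X",       # what is claimed, not the value
    "proofer": "did:example:proofer-org",
    "revoked": False,
}
payload = json.dumps(claim, sort_keys=True).encode()
attestation = proofer_key.sign(payload)   # "who attested it"

# Verifier side: check the signature against the proofer's public key.
try:
    proofer_pub.verify(attestation, payload)
    print("attestation valid: claim accepted without seeing attribute_1")
except InvalidSignature:
    print("attestation rejected")
```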
Given this trust, instead of sending "Attribute 1" directly to the SP, the user claims that her "Attribute 1" is greater than X and, along with her identity number, sends the claim to the Identity Proofer (1). The Identity Proofer returns an attestation with its digital signature (2), establishing who the claimer is, what the claim is, who attested to it, whether it has been altered, and whether the claimer has revoked it. The user then stores this attestation on the DLT (3), which guarantees that it cannot be altered or deleted. The user requests the service (4) and presents the attestation, leaving to the Identity Verifier the task of checking the digitally signed claim and determining whether it came from a relevant authority (5). At this moment, the SP can establish a direct, encrypted connection with the user (6), providing the requested service.

The Seven Laws of Identity and Current IAM Models
Introduced in 2004 by Kim Cameron, the Seven Laws of Identity [80] are a set of principles to which an IAM must conform in order to offer a universally adopted and sustainable identity system. A metasystem, or system of systems, is the concept of an IAM that leverages the strengths of all constituent IAM models and provides interoperability among them. This metasystem concept aims to create a consistent IAM for all users, resulting in improvements that benefit all applications, solving several identity-system challenges and making the Internet a safer place. In short, this identity metasystem aims to give the user control over her identity when accessing services over the Internet, allowing her to select a digital identity and use it to access the services of her choice. Furthermore, the metasystem should enable identities based on different technologies to operate together, with a trusted intermediary that understands both technologies and translates from one to the other. (A table in this subsection compares how the isolated, centralized, federated, user-centric, and self-sovereign models fulfill each law.)

Law of Control: An IAM must first offer a convenient and straightforward way to manage users' identities. However, to endure, the IAM must earn the user's trust; it must therefore put the user in control of the digital identities used and the information released. This law is the logic behind the user-centric and self-sovereign models, although the user-centric model does not fulfill it completely: a user still needs a third-party IdP to store identity information, and that IdP retains some degree of control over her identity.

Law of Minimal Disclosure: An IAM always carries the risk of a data breach. The best practice to mitigate this risk is to acquire only the information that a service needs to know and to retain only the information it needs to retain; by following these practices, it is possible to ensure the least possible damage in the event of a breach [80]. In short, except for the centralized model, all models have their own way of dealing with minimal disclosure. In the isolated model, all of an identity's information is contained within one service, so it is not shared with other parties [64], and any data breach is confined to a single service. The federated model implements pseudonyms, which keep data from being directly identifiable at the third-party IdPs in the federation; user data is untraceable without access to the IdP that generated the identity.
The user-centric model aims to limit this kind of data breach; however, since the different identities remain attached to a personal IdP, the identity information still resides at third-party IdPs exposed to breaches. The self-sovereign model, under the assumption that users never put identity data on the ledger, uses zero-knowledge techniques, which minimize the amount of data exposed to breaches, since the exact identity data are never disclosed.

Law of Justifiable Parties: An IAM must make users aware of the party or parties with whom they are interacting while sharing information. This is a central premise of the user-centric and self-sovereign models, which place users in the middle of the identity process, preserving their freedom to pick their preferred IdPs and to decide what to share.

Law of Directed Identity: To manage identities in a hyperconnected world, an IAM must create relationships between identities, establishing a context for a given situation. The IAM must therefore support two types of identity relationships: "omnidirectional" and "unidirectional". Public entities (IdPs and SPs, for example) should have identifiers that are invariant and well known. These public identifiers can be thought of as beacons, emitting identity to anyone who shows up and relating to anyone in an "omnidirectional" way. On the other hand, when a user wants to share information with other entities under the law of minimal disclosure, she must create a short-lived relation revealing the least possible identifying information, that is, a "unidirectional" identity relationship.

Law of Pluralism: An IAM must enable the inter-working of multiple identity technologies run by multiple IdPs; it must allow the coexistence of multiple technologies. Except for the centralized and isolated models, all models have tools to operate across different domains. In the federated model, when two or more organizations establish a federation, they define a set of rules and agreements that enables identity sharing across multiple models, with each organization implementing its own technology. In the user-centric model, the personal IdP offers interoperability across multiple IdPs; Koshutanski et al. [81], for example, implement a module on the personal IdP that allows users to transform authentication messages from one authentication method to another. In the self-sovereign model, the possibility of sharing identity claims accepted across different services fulfills the law of pluralism.

Law of Human Integration: An IAM must define the human user as a system component and must therefore offer security protection for the human-device communication channel against identity attacks. Beyond initial identity verification, the user must have other ways to prove her identity. This human-integration aspect is tied to the authentication mechanism and can be achieved, for example, by integrating multi-factor authentication, in which user authentication only occurs when more than one form of identity verification is presented. All models can employ multi-factor authentication, meaning they can all fulfill human integration to some degree; a minimal sketch of one common second factor follows.
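As a concrete illustration of one common second factor, the sketch below hand-rolls a time-based one-time password (TOTP) in the spirit of RFC 6238. It is illustrative only; a production system would use a vetted library and a securely provisioned secret.

```python
import base64
import hashlib
import hmac
import struct
import time

# Time-based one-time password in the spirit of RFC 6238 (illustrative
# only). The shared secret would be provisioned to the user's device,
# e.g. via a QR code; both sides derive the same code per 30 s window.
SECRET = base64.b32decode("JBSWY3DPEHPK3PXP")  # demo value, not a real secret

def totp(secret, timestamp=None, step=30, digits=6):
    counter = int((time.time() if timestamp is None else timestamp) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# A second factor on top of a password: knowledge plus possession.
print(totp(SECRET))
```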
Law of Contexts: In an IAM, a user must be able to take part in an identity relationship and decide which identity elements to share; the IAM must incorporate this decision-making interaction with users. This approval of identity use is a key feature of the user-centric and self-sovereign models. In the user-centric model, a user can decide to share an identity from one IdP with another, picking which identity information to share. In the self-sovereign model, since the user has full control over her identity, she determines which claims to share.

IoT on the future of Identity and Access Management Systems
The connected nature of IoT introduces a series of security challenges that stand as a barrier to its wide adoption. Since IoT encompasses a huge number of connected devices, only a few of which were designed with security in mind [5], data breaches can have a cascading effect with devastating consequences. In an autonomous self-driving vehicle, for example, the injection of bogus data can cause a fatal accident. In a diabetes treatment application, a false glucose reading can lead the insulin delivery device to wrongly adjust the amount of insulin, threatening the patient's life. These examples highlight the need for a mechanism to identify devices, sensors, and monitors, and to manage their access to sensitive and non-sensitive data before they send or receive information. In IoT, "things" can play the role of both users and SPs, meaning that they are valuable resources requiring management that encompasses control and audit [11].

Requirements of the Identity and Access Management System for IoT
IAM for IoT must deal with a massive number of users, each of whom must have at least one unique identity. Concerning accessibility and usability, people are used to almost instant results, so the user experience must be taken into account when planning an IAM for IoT. When providing an identity for all "things", identity provisioning must be as fast as possible, with correct access rights, and de-provisioning must be just as effective, to prevent malicious users from seizing old identities to launch an attack. The authentication scheme of the IAM must support multi-tiered authentication, in which users have relationships and require different authentication methods, because IoT devices are generally designed to be simple and to conduct a specific set of tasks. As a consequence, most of them lack security features and computational power, which makes authentication and cryptographic operations challenging. Also, due to their simplicity, most IoT devices do not have a proper user interface, meaning that traditional authentication methods, such as password-based ones, may not be directly applicable to IoT. Moreover, at the scale of IoT, manually authenticating on each device cannot be the main form of authentication. For access control, the challenge is not granting the right access levels; rather, the main issue is when and why to grant access. The challenge thus lies in setting limits for IoT devices and determining what is appropriate in dynamic, large-scale scenarios. To address this problem, several works [82,83,84,41] focus on understanding the context of a device's regular operation and making the access control predict, or act on, subtle variations of this behavior.
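One simple way to operationalize such context checks, sketched here with invented weights and thresholds rather than the scoring of any surveyed work, is to combine contextual signals into a risk score that either admits the request, demands step-up authentication, or denies it:

```python
# Illustrative contextual risk scoring (weights and thresholds invented).
# Each anomalous signal adds to the risk; the decision escalates from
# plain allow, to step-up authentication, to outright denial.

RISK_WEIGHTS = {
    "unknown_location": 0.4,
    "unusual_hour": 0.2,
    "new_device_profile": 0.3,
    "behavior_anomaly": 0.5,
}

def risk_score(signals):
    return sum(RISK_WEIGHTS.get(s, 0.0) for s in signals)

def access_decision(signals):
    score = risk_score(signals)
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up authentication"  # e.g. request a second factor
    return "deny"

print(access_decision(set()))                                     # allow
print(access_decision({"unusual_hour", "new_device_profile"}))    # step-up
print(access_decision({"unknown_location", "behavior_anomaly"}))  # deny
```

Because the score can be re-evaluated on every request, the same mechanism supports continuous security for already-authenticated users, not just the initial login.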
Overall, these challenges show how IoT is pushing current IAMs to adapt and offer an identity platform tailored to these devices. In a nutshell, managing the identities of persons and devices brings a variety of potential problems due to the nature of IoT, and existing IAM solutions might not fit the IoT domain. In the following, we present a list of requirements for designing an IAM for IoT, based on the seven laws of identity [80] and on IoT characteristics.

Scalability (Req.1): IoT comprises countless connected things, which means that an IAM for IoT must operate on a massive scale. However, current legacy IAM platforms are not capable of handling such scenarios; in fact, most of them are isolated, inflexible, and unable to scale. When developing an IAM for IoT, the system needs to handle hundreds of millions of identities and access-validation actions per second [11].

Mobility (Req.2): IoT and mobility are strongly bonded to each other, even if not all IoT devices are mobile by nature; the IAM must therefore support users and devices that move across locations, networks, and administrative domains.

Easy Device Registration, Revocation, and Authorization (Req.3): users must be able to self-provision essential identity services, such as registering and revoking their devices, recovering credentials, and managing authorizations, without depending on heavyweight administrative processes.

Flexible Architecture (Req.4): IoT encompasses a wide variety of devices, from sensors, mobile devices, and cars to machines with substantial computational resources. Consequently, these devices operate across a wide range of programming languages and platforms. Hence, the IAM must be compatible with all devices, providing the flexibility to build applications on any platform or language [86].

Adaptive Authentication (Req.5): As pointed out before, IoT devices are not homogeneous; they vary in computational power, connectivity, and power requirements. Each IoT device must support at least one authentication method, and an IAM for IoT should therefore be flexible enough to support adaptive authentication for a wide range of devices, in different scenarios, and with different levels of complexity and security requirements [87,88,86].

Continuous & Contextual Security (Req.6): IoT dismantles connectivity barriers and opens the door to a fully connected ecosystem of devices. This feature sparks interest in IoT because of the new opportunities that emerge; however, hackers and malicious users can also exploit this connected ecosystem. Among the new attack techniques, DDoS botnets built from IoT devices have become a reality, as have attacks on pacemaker implants and on blood pumps. These examples alone show that compromised IoT devices can cause great harm. Context-based security for users and devices is therefore critical in securing IoT: from contextual information such as geographic location, time, device profile, and device behavior, the IAM should generate a risk score that relaxes or intensifies the authentication process [88]. In IoT, things are interconnected, exchanging information and context all the time. Even if a device successfully authenticates to a system, how can one guarantee that it remains valid over time? Most legacy IAM solutions protect only the initial authentication; for this reason, IoT requires stronger guarantees. Applying contextual identity and adaptive risk at the time of authentication, and at any point during a communication, increases security for all users, since the continuous-security approach ensures the authenticity of users at all times and can mitigate risk whenever an anomaly is detected, even for previously authenticated users [89,90].

Relationship Management (Req.7): IoT introduces an environment with high connectivity, allowing each device to have a series of associated relationships.
During its life cycle, an IoT device may change hands numerous times, making it necessary to store information such as the device manufacturer, the current owner, previous owners, significant components, special privacy considerations, and security provisions. This information is not only crucial for identifying connected devices; it is also critical for security and for dealing with the complexities of device ownership and access. Therefore, the IAM must be able to create complex and dynamic relationships among IoT entities, including the respective security ramifications.

Privacy & Consent (Req.8): IoT devices are the primary agents gathering massive amounts of data from users. These data can be very personal (user history, preferences, health status), meaning that if they become publicly available they can expose the user to many risks, so there is an interest in hiding them. However, data collected by IoT devices might be necessary for statistics and optimization on behalf of that user, making users concerned about how their data are shared and used. To deliver this personalized data-sharing experience, privacy must be prioritized: an IAM must give users the ability to manage privacy preferences and to consent to data sharing in a way that lets them control access to their data [91,92].

Analysis of Identity and Access Management Systems for IoT
In this subsection, we offer a comprehensive analysis of IAM proposals for the IoT context, showing, for each work, the basic concepts, application context, and technical particularities.

Isolated IAMs
• Identity management framework for cloud-based internet of things: Horrow et al. [93] propose an IAM framework based on the isolated IAM model. The authors assume a scenario where human users and IoT devices can communicate with each other indirectly through a service hosted in the cloud. They argue that the cloud, due to its processing and storage capabilities, is a suitable technology to host an SP and to centralize the IdP functions. The proposed IAM framework has two modules: the identity manager and the service manager. The first is responsible for the authentication operations of IoT devices, human users, and services; the second defines the authorization functions of IoT devices and human users on the services. In short, the proposed framework follows a publisher-subscriber approach, assuming that IoT devices gather information from different kinds of sensors and publish this information to the services to which they are subscribed. When a human user accesses a service to obtain the information collected by IoT devices, she must first authenticate to the cloud and then pass the authorization process, which verifies her subscription status. The framework considers the mobility of both IoT devices and human users as contextual information to avoid illegitimate access to the services, meaning that the location and the network to which the IoT device or human user is connected are valuable information for authorization.
In short, if an IoT device is connected from a valid location and network, it is allowed to publish the information it collects; similarly, if a human user is connected from a valid location and network, she is allowed to read the information collected by the IoT devices. Accordingly, every location and network also has a unique identification. Although the framework describes the identity manager as the module responsible for authentication, the authors do not define any specific method for it. Authorization, on the other hand, is well defined and follows Discretionary Access Control (implemented as an ACL), with each service holding a list of subscribed IoT devices and human users, together with the networks and locations where it is available. The authors define the identity life cycle, determining that, during the identity-provisioning phase, each IoT device and human user of the IAM is created and allocated to one location, resulting in an identity composed of the tuple: a unique identifier, the type (human user or device), and a location identifier. When a human user or IoT device changes location, identity maintenance updates the location identifier. Finally, when any human user or device leaves the IAM, the authors specify that this must be reflected in the identity manager module by deprovisioning the identity. The paper includes no auditing module, which leads us to assume that the cloud takes this responsibility. In other words, the cloud can see any activity among human users or devices, which exposes the privacy of identity information to a third party. This work assumes a scheme in which one universal IAM maintains the identities of all IoT devices, human users, and services. Even if cloud technology solves the problem of computational capacity, building a scalable solution for communication with the cloud at this universal scale is unrealistic in the IoT context. Besides, the ACL model is another drawback of this work, since it presupposes a large centralized system; the rapid growth of services and IoT devices renders this strategy obsolete, given the need for ever more complex relationships among human users, IoT devices, and services.

Centralized IAMs
• Authentication and Access Control in the Internet of Things: Liu et al. [94] describe an authentication and authorization method for a centralized IAM, in a scenario where IoT devices and human users can communicate with each other. An identification-key procedure establishes secure communication between IoT devices and their identities: through an Elliptic Curve Cryptography-based authentication, both parties establish a key for their communication. They then use this key in their communication, ensuring that it is secure while also serving as a possession-factor credential. The proposed work assumes scenarios with a massive number of devices, and the centralization comes from the assumption that every user is pre-registered with a nearby trustworthy access point, denoted the registration authority. This authority plays the role of the IdP and provides computational and storage capacity, acting as the trusted third party during authentication. Moreover, this authority is also able to keep a historical record of all accesses for auditing purposes.
Authorization happens through hierarchical RBAC, meaning that an inheritance relationship exists among roles: a superior role inherits all permissions of the roles below it. This work does not include contextual information or consent mechanisms; several works, such as [95,96], act on this requirement.

• Security architecture for mobile e-health applications in medication control: Gonçalves et al. [97] propose a framework for the authentication and authorization of users and devices in healthcare applications, focusing on a remote medication-control system for Ambient Assisted Living. Their framework assumes a mobile application in which physicians fill out the medical prescription, including the dosage and the time to take the medication. A centralized e-health system database stores this information, linking it with each patient's RFID. An electronic personal health record, a mobile device carried by the patient, can retrieve this information. Authorization follows an RBAC model with two well-defined roles, patients and physicians, the former having read-only permission and the latter read-write permission. Due to the sensitive nature of the medical environment, the authors argue that, in an emergency, if physicians cannot reach the needed information, patients' lives can be endangered. To solve this problem, they present an adaptive authentication scheme with two types of authentication. When the user performs read-only operations, such as consulting a prescription, authentication occurs through a simple login-and-password method. When physicians perform read-write operations, such as writing a prescription, the authors propose a robust authentication protocol based on public-key certificates stored in smart cards. Both authentication methods rely on a centralized server application, which acts as an IdP and verifies the validity of the received password or certificate. All accesses must be auditable at a centralized server, which maintains a robust log identifying the user, the time of occurrence, and the operation performed. In this work, contextual security is out of scope, and the system does not offer any form of privacy or consent regarding the patient's data.

• Non-Intrusive User Identity Provisioning in the Internet of Things: Al-Karkhi et al. [98] propose authentication and authorization methods based on user behavior. The authors argue that, as IoT devices become part of people's lives, the way devices interact with other devices and services, such as smartphones and connected vehicles, follows the user's behavior. Since people's time and attention are limited, this work assumes that traditional authentication methods, such as login and biometric scanning, are inadequate for user identification in the dynamic IoT. If users have mobility and several relationships with services, confirming the user's identity on every interaction is inappropriate for IoT devices. Therefore, to obtain the user's identity, the proposed work tracks users' behavior and maps it to their identities.
The authors argue that this solution helps users avoid and control intrusive interactions, reducing user disturbance in dynamic environments. For example, the IAM can assert a user's identity by tracking her accesses from one service to another while she is connected at a university. This work therefore proposes a centralized IAM that, at any time, maintains an implicit identity attached to a confidence level for each user. Authorization occurs through the MAC model, with the IAM determining, for each service, the minimum required confidence level. The user authenticates on her devices by any authentication method, and every time she accesses a service, the IAM verifies the respective context and recent activities, checking whether that interaction has occurred recently. According to the result of this check, the IAM updates the user's confidence level; for example, when a user performs a recognized activity, such as accessing a particular service at a specific time, her confidence level increases. The authors claim to identify and record only the minimum context information needed to determine the user's confidence level, thus preserving privacy. However, the user has no form of consent over the recorded data. The main limitations of this proposal are the time required to build the IAM's knowledge of the user and the inflexibility inherent in the MAC authorization model, which would encourage users to keep the same behavior.

• A flexible authorization architecture for systems of interoperable medical devices: Tasali et al. [96] propose an authentication and authorization framework for the medical environment. This work assumes a scenario in which patients' vital signs are continuously monitored, creating a real-time stream of their health conditions. The monitoring system sends the information gathered by these sensors to a centralized application, which clinicians can access at any time. In short, this application offers real-time data on a set of patients, reducing the need for clinicians to visit each patient physically. The authors argue that both clinicians and the application must have restricted access to the patients' devices, thus ensuring privacy. The authorization system follows both the RBAC and ABAC models: each application has a role defined by RBAC, while each clinician has permissions set by ABAC, since the latter is more flexible and can carry context information such as location and relationships with patients. Since the application and the clinician may have different permissions, the authors propose a flexible authorization model in which, in emergencies, the application can inherit the attributes of the clinician. If the clinician needs to access some device to which the application does not have access, the application can temporarily expand its permissions by inheriting those of the clinician, allowing it to function correctly in emergencies. Since this work does not implement any form of consent on the patient side, patients are very vulnerable to social engineering attacks.
A malicious user can trick patients into granting the attacker permissions on their devices, resulting in a device that gives away sensitive information about the patient without any form of consent.

• Identity Management for the Internet of Things: A Software-Defined Networking Approach: Sadique et al. [99] propose an IAM architecture based on concepts from software-defined networking. The authors argue that IoT is composed of diverse objects that should be able to travel among different networks, regardless of their locations, network providers, and manufacturers. To achieve this, the authors claim the need for a collective identity shared among different networks. In their proposal, they design location-based IdPs and spread identities over several locations of the IoT network. Every device implicitly registers with the closest IdP, and this IdP defines the device's identity context. For every new registration, the local IdP forwards the new identity to a global IdP hosted in the cloud. This global IdP knows the origin IdP of every identity and keeps track of context changes. When a device accesses a service, authentication occurs at the closest IdP. If this IdP has the identity information, authentication occurs instantly; if not, the IdP forwards the authentication request to the global IdP, which responds to the request and updates the identity context in its own database and at the origin IdP, pointing to the new IdP context. The new IdP replicates the device's identity information and responds to the authentication request. The centralization at the global IdP is a significant drawback in this context: since the proposal inherits concepts from software-defined networking, several works show that control actions, such as rule installation, have surprisingly high latency [100]. Therefore, in this proposal, every time a device changes location, the authentication process is exposed to high latency.

Federated IAMs
• A federated architecture approach for Internet of Things security: Leo et al. [101] propose a federated IAM model in the context of smart homes. The authors argue that, as IoT becomes a critical element of people's lives, adequate security for IoT must span multiple domains. However, due to the heterogeneity of connected devices, an entity is needed to mediate the communication among them. The authors therefore introduce the Secure Mediation Gateway (SMGW), a hub that overcomes this heterogeneity and provides secure communication among the IoT devices in a domain. In their approach, they partition IoT devices into two groups: intraSMGW and interSMGW. The first, the intraSMGW group, is the internal set of devices belonging to a security domain accessible by a single SMGW; the remaining devices constitute the second group. Each SMGW acts as a centralized IAM for its domain, and among SMGWs there is a federated network that enables secure remote access to devices within a domain supervised by a single SMGW. The SMGW is thus the boundary between intradomain and interdomain. The proposed architecture shows the importance, for a federated IAM, of having an internal autonomous or centralized unit to overcome the heterogeneity of devices.
However, the architecture does not specify any authentication, authorization, or auditing functionality.

• Consolidate the identity management systems to identify the effective actor based on the actors' relationship for the Internet of Things: Majeed et al. [102] present an IAM framework that focuses on the relationship between users and devices. The authors argue that, since IoT is an interconnected network of users and devices, the traditional "owner" and "subscriber" interaction does not fulfill IoT requirements. IoT is a complex network, with several users interconnected through devices other than those of their legal owner. The authors therefore argue that IoT requires establishing the identity of the actual user, called the effective actor, behind any communicating device. Such identification is a challenge, since the IoT environment is not static: a user can dynamically establish interactions, change them, or disconnect. The framework proposed by the authors can establish the effective-actor identity of mobile objects that may belong to different IAMs in the IoT. When a user engages in a relationship with some device, the framework creates an identity composed of the user and device identities and their origin IAMs. Then, when the user requests a service from an SP, this relationship identity is presented, containing the attributes of both user and device in addition to their origin IdPs. The SP sends this information to its IdP, which checks the attributes with the origin IdPs of the user and device. Assuming this IdP has a trust relationship with the IdPs presented by the user and device, it authenticates the user-device relationship, and authorization occurs through the ABAC model, with attributes of both user and device. In this work, auditing occurs separately in each IAM, which limits a malicious user's ability to corrupt the log files as a whole. However, the authors do not present any consent mechanism for users and devices.

• A Federated Lightweight Authentication Protocol for the Internet of Things: Santos et al. [103] propose a federated identity authentication protocol for IoT. This work argues that current federated IAMs are mostly ill-suited to IoT devices, since most of them are built upon login/password schemes and have heavyweight protocols. To address this problem, the authors present a federated identity authentication protocol tailored to IoT, based on the assumption that IoT devices are generally resource-constrained. The work adapts traditional IAM authentication to achieve federated lightweight authentication, replacing heavyweight cryptosystems with Elliptic Curve Cryptography, which is more suitable for IoT and reduces the message load on the IoT device. This work presents an authentication solution for IoT; authorization and auditing are out of its scope.

User-Centric IAMs
• A user-centric identity management for internet of things: Butkus et al. [95] propose a user-centric IAM focused on the mobility of human users and IoT devices. This work assumes a scenario where IoT devices interact dynamically based on the identities of their owners, and the relationship between human users determines which IoT devices one user can access from another.
In sum, each human user owns a set of IoT devices, and each relationship between humans has a role that determines which devices one human user can access from another. For example, when a user visits her friends, their relationship determines whether her devices are allowed to engage in a service enabling communication and collaboration with the devices at the visited place. To address the mobility issue, the authors assume that when a human user visits a place, the local IAM returns a list of trusted IdPs to the user. When the user picks her origin IdP, the local IAM redirects the authentication request to that origin IdP to validate the visitor. The authors do not define any specific method for this authentication, leaving the choice of the most suitable method to the origin IdP. Upon successful authentication, the origin IdP sends a token to the visitor, who then forwards it to the service to obtain access. With this token, authorization relies on the RBAC method. Auditing is performed at both the local IdP and the visited IdP, and contextual security is given by the human relationships.

• Identity Management in E-Health: A Case Study of Web of Things application using OpenID Connect: Domenech et al. [104] propose a user-centric and federated IAM focused on healthcare applications. The proposed architecture assumes a scenario in which a patient wears several medical devices that continuously gather data about her health condition. The authors assume these devices lack the resources to share data directly on the Internet, so they employ a smart gateway, located at the network layer of the three-layer IoT architecture, to act as a bridge between a device and the Internet. In the proposed solution, the OpenID Connect framework authenticates users and devices and establishes trust relationships among users and other entities. When a device or user tries to publish data to the SP, the SP indicates the OpenID Connect Provider for the authentication process. Once the user or device is authenticated, the OpenID Connect Provider issues and forwards a token to the SP, which then grants the required access based on the ABAC authorization model. Since this work uses OpenID, it inherits the well-known challenges of that platform. For example, considering network latency and throughput and the devices' computational resources, OpenID contributes to high-load environments, resulting in performance loss [105]. Furthermore, since the access-token mechanism of the OpenID Connect protocol uses the same token across different requests, a malicious user can acquire this token in a man-in-the-middle attack and gain unauthorized access to data. Such a data leak can compromise other systems and be used for follow-up attacks.

• Cloud-based federated identity for the Internet of Things: Freemantle et al. [106] propose a model called OAuthing, which aims to provide federated IAM and consent management for IoT systems. The authors argue that one of the critical issues in IoT is that a device usually supports only a specific manufacturer's web system: the manufacturer is the one who manages identity, stores data, and provides the user web interface, among other functions.
Since that service can be hacked or the manufacturer may go out of business, the authors consider this model untrustworthy. To address this problem, they propose a model that reduces the amount of information a manufacturer stores by separating the device IdP from the user IdP. The user IdP is where the authentication method occurs, usually via a login page where users present their credentials. The device IdP is where the manufacturer stores a secure identity (a pseudonym of the device's true identity) for each device. The manufacturer issues each device a default client identity token at production time. When a user buys the device, she presents the secure device identity to the device IdP, which offers her a choice of user IdPs. Once the system authorizes the user with her existing user IdP, the device IdP refreshes the token stored on the device, which now represents the logical owner of the device. The device IdP thus acts as an identity broker: when a user wants to access a device, she authenticates with her user IdP using some federated identity protocol such as OAuth, OpenID, or SAML. Once she is authenticated, the device IdP creates a pseudonym to provide privacy for the user. In this proposal, the anonymous identity is not shared with anyone, and all authorization follows the Cap-BAC model, with random tokens that grant permission to perform specific actions but do not identify users. When a service requests access to some user's device information, the user's pseudonym is associated with a token that routes the request to an instance specific to that user. In this model, the manufacturer therefore only knows the original device identity (e.g., the MAC address) and the client identity issued by default. While this model limits the amount of information the manufacturer stores, the device IdP has access to all information in the IAM except the data and the device's true identity. Moreover, this work supports only the ownership relationship, with the device inheriting all authorizations from its owner, which may not satisfy the relationship requirements of complex IoT scenarios.

Self-Sovereign IAMs
• Blockchain for IoT Security and Privacy: The Case Study of a Smart Home: Dorri et al. [23] propose a solution in which smart homes have a local DLT for controlling and auditing device communications privately and securely. The authors argue that DLT overcomes security and privacy challenges in IoT, since creating and appending a "block" to the blockchain is a permanent operation. However, they also argue that adopting DLT in the IoT context is not straightforward. The creation of blocks demands a proof-of-work challenge, which is how the DLT establishes a "decentralized consensus". This proof-of-work demands high computational and energy resources; transaction confirmation suffers from long latencies due to consensus; and broadcasting blocks to the whole network does not scale.
To solve this problem, the authors assume that each smart home is equipped with an always-online, high-end computing device, known as the "miner", with functions similar to a centralized IdP: authenticating, authorizing, and auditing device transactions. Since consensus implies hard proof-of-work challenges, long latencies, and solutions that do not scale, this work discards these operations by maintaining a private DLT on a single host. The "miner" thus maintains a secure ledger containing the devices' activity log, including all requests and their results, without the proof-of-work drawback. In short, this work turns accountability into a very hard-to-forge local public log. While this approach looks adequate for a single IoT system, the lack of scalability makes it inadequate for large IoT systems: by abolishing proof-of-work through a single server, the system completely isolates itself into a silo-like solution, which may not be the right approach for IoT, since most IoT systems comprise a network of systems.

• Improving the privacy of IoT with decentralized identifiers (DIDs): Kortesniemi et al. [107] propose the use of self-sovereign identity for IoT devices. They note that, in less critical applications, a device's identity can be its IP address or a hardware identifier such as an RFID tag. In critical applications, however, a device must be able to prove the claimed identity. If the device is used only by its owner, permanent unique identifiers present no privacy problem; but when the owner enables the device to operate with third parties, a permanent unique identifier becomes a privacy risk, since the system can potentially track information that reveals the device's owner. Also, if the device is at some stage sold or borrowed, keeping the same identifier would put both the old and the new owner's privacy at risk. To solve these privacy problems, the authors propose that the device identity be changeable, and one way to implement this is through self-sovereign identity, since it allows identities to be created, managed, and discarded as seems fitting. The authors show that deploying DIDs directly on the lower IoT layers (perception and communication) may not always be possible due to limited resources and security risks; the DID must therefore reside in devices with acceptable computational capabilities, such as gateways or hubs. In short, this work shows how to introduce DIDs as a complementary function to OAuth-based authentication and authorization operations. Devices authenticate with the hub through a possession factor, using pre-shared secret keys. When a device communicates with a service, the hub promotes the use of anonymous or pseudonymous identifiers for each service, and even switches identifiers at suitable intervals, making it hard for a malicious user to track and correlate the legitimate user's activities across services, thus protecting privacy. In short, this paper shows that DIDs are a suitable solution for privacy-enhancing identifiers of IoT devices; however, not all devices can implement them, making proxy approaches necessary for the more constrained devices.

• Secure Open Federation of IoT Platforms Through Interledger Technologies - The SOFIE Approach: Lagutin et al. [108] present SOFIE, a solution for federating existing IoT platforms openly and securely using distributed ledger technologies.
• Improving the privacy of IoT with decentralized identifiers (DIDs): Kortesniemi et al. [107] propose the use of self-sovereign identity for IoT devices. They determine that, in less critical applications, a device's identity can be its IP address or a hardware identifier such as an RFID tag. In critical applications, however, a device must be able to prove the claimed identity. If the device is used only by its owner, permanent unique identifiers present no privacy problem. However, when the owner enables the device to operate with third parties, a permanent unique identifier becomes a privacy risk, since the system can potentially track information that reveals the device's owner. Also, if the device is at some stage sold or borrowed, keeping the same identifier would put both the old and the new owner's privacy at risk. To solve these privacy problems, the authors propose that the device identity be changeable, and one way to implement this is through self-sovereign identity, since it allows identities to be created, managed, and discarded as seen fit. The authors show that deploying DIDs directly on the lower IoT layers (perception and communication) may not always be possible due to limited resources and security risks; thus, the DID must reside in devices with acceptable computational capabilities, such as gateways or hubs. In short, this work shows how to introduce DIDs as a complementary function to OAuth-based authentication and authorization operations. Devices authenticate with the hub through a possession factor, using pre-shared secret keys. When a device communicates with a service, the hub promotes the use of anonymous or pseudonymous identifiers for each service, and even switching identifiers at suitable intervals, making it harder for a malicious user to track and correlate the legitimate user's activities across different services, thus protecting privacy. In short, this paper shows that DIDs are a suitable privacy-enhancing identifier solution for IoT devices; however, not all devices can implement them, making proxy approaches necessary for the more constrained devices.

• Secure Open Federation of IoT Platforms Through Interledger Technologies - The SOFIE Approach: Lagutin et al. [108] present SOFIE, a solution for federating existing IoT platforms openly and securely using Distributed Ledger Technologies. The authors determine that most IoT platforms and systems are centralized and unable to exchange data among themselves or perform actions across each other. They argue that several types of DLT can be used for IoT platforms, each offering different trade-offs in terms of latency, throughput, and consensus algorithm. Thus, in complex systems, the idea of a single DLT for everything is unfeasible, highlighting the need to tackle interoperability. To achieve this interoperability, the authors present an interledger approach that allows different DLTs to exchange data with each other privately. In their solution, each device is a participant in a private ledger that stores all authentication, authorization, and auditing transactions. When a device wants to share some information with other IoT platforms, it stores only a subset of the data in the main ledger used for collaboration with other ledgers. While this work addresses interoperability, interledger operations may take minutes or even hours, which might not be suitable for IoT applications with real-time restrictions [109].

Discussion

Table 2 summarizes the works presented above, according to the requirements of an Identity and Access Management system for IoT (see Section 5.1). In the following, we further discuss each requirement.

Scalability (Req.1) is demanded of any work aiming to present an IAM for IoT, as it is one of the most evident requirements. Some IAM models, such as the isolated and centralized ones, have more difficulty offering this scalability than the user-centric and self-sovereign models. Therefore, instead of analyzing only the model, we also consider the technology employed to provide this scalability. Some works, like [93,106], use the cloud to offer this scalability due to its, on paper, limitless computing and storage capacity. However, as pointed out by [110], the centrality of such solutions may not be ideal for IAMs. Some works, like [99], avoid this centrality by employing fog computing nodes in their IAM system, handling IAM functions in a distributed manner, which can be more appropriate for IoT. Works like [107,23,108] take this one level further and employ DLT as the main technology behind their IAM. While this may offer maximum scalability and decentralization, the computational power needed for this technology may not fit all IoT devices. As a result, we see several works, like [107], that use fog nodes as a local centralized unit interacting with the DLT, creating a locally centralized, globally distributed solution.

Mobility (Req.2) is another requirement present in a significant share of the analyzed works, since IoT and mobility are strongly bound to one another: even if not all IoT devices are mobile by nature, most allow some kind of mobility. The work of [93] uses the isolated model, which in theory struggles with mobility; however, since it is deployed on cloud technology, we still consider it mobile. In fact, mobility is so vital that some works, like [95,99], take it as the main factor when developing their proposals and use it to their advantage. It is important to point out that the work of [23] uses the self-sovereign model, but since it only creates a local DLT that cannot interact with the outside world, we consider it immobile.
Easy Device Registration, Revocation, and Authorization (Req.3) is a gray area: it is not the main topic of most works, yet it is present in several of them to some degree. To offer easy device registration and revocation, we consider that users must be allowed to self-provision some essential services, such as password recovery and self-service authorization. Several works, like [95,96], address this requirement; we highlight [95], where offering easy authorization is the main point of the work. Self-sovereign models, in turn, shine on this topic because DLT technology allows users to register and revoke their devices in a more natural manner, and the transparency of the DLT enables easy and secure authorization.

In this paper, to evaluate the Flexible Architecture requirement (Req.4), we must investigate both the IAM model and the technology employed. Isolated models are highly tied to the application by nature, which makes them very inflexible; thus, we consider the work [93] not flexible enough for IoT. The other models (centralized, federated, user-centric, and self-sovereign) decouple the IAM from the application and gain flexibility. However, when we observe the technology employed by those IAMs, some particularities emerge. The ones that use cloud computing rely on the concept that resources can be purchased on demand to satisfy whatever needs the IAM has, both in terms of scalability and flexibility. However, the work of [36] points out that, even with the unlimited expansion capacity of the cloud, the distance between the IoT devices and the IAM significantly impacts the latency of authentication and authorization. To properly answer these flexibility problems, the works [111,112], for example, argue that the cloud has limited flexibility to support IAMs and point to fog computing as the solution to the flexibility and latency problems originating from the cloud. In [99], the authors use fog devices of different platforms to perform authentication and authorization, which can significantly reduce the delay of those functions and increase the flexibility of the IAM as a whole. Lastly, DLT-based IAMs [107,23,108] are still in their early stages, with several works published only recently. However, DLT has already shown its potential across a wide range of applications and data; we believe that its transparency, security, and distributed way of storing information and transactions are enough to prove its capacity to offer a flexible and decentralized architecture.

Adaptive Authentication (Req.5) and Continuous & Contextual Security (Req.6) are highly tied to one another. The first implies that the IAM system must modify the authentication operation in response to changes in its operating environment, whereas the second implies that the IAM system must consider contextual information to provide better security. Regarding the first requirement, some works take it as a priority and develop the IAM system around adaptation mechanisms in their authentication operation; the trigger of this adaptation, on the other hand, varies a lot from one work to another.
Previous works such as [98], for example, take the time and location of a user within a university campus to adapt the authentication operation based on the safety level required for that place and time. In turn, [102] uses the trust relationships of the device's owner to decrease the safety level of the authentication operation for friends visiting the owner's home. However, as the authors of [98] themselves point out, most of the current contextual information is very simple and has major flaws. The work of [98] concludes that it takes some time to learn the user's mobility pattern inside the university before the authentication mechanism becomes adequate. If the user changes their routine for some reason, the adaptive authentication operation must learn the pattern again; meanwhile, until the new pattern is learned, the user will experience several authentication denials, even though they are legitimate. The work of [102], on the other hand, is based on a single feature: the owner's trust. While this may sound great from the user's perspective, several flaws arise when the human factor is placed inside adaptive authentication: a single human error, such as a mistakenly over-granted trust level, could lead to several data leaks, as observed in the literature [113]. In short, we see that most works employing adaptive authentication use only a few characteristics to adapt, or rely on humans to make the adaptation. We believe there is a lack of richer solutions that exploit the large amount of context information that IoT nodes could gather.

The Relationship Management requirement (Req.7) is the most neglected in our analysis, with only four works addressing it clearly. Even in the works that implement relationships, only simple relations are implemented. In [93], the authors reduce all IoT relationships and interactions to a simple publisher-subscriber approach. The proposals [98,106] are the ones that address IoT relationships most directly: the first creates a friend list that changes the authentication based on friendship and trust, while the second creates some ownership relationships. While both works address these relationships, their relationship management is quite simple and unable to implement more complex relationships such as the ones described in Section 2.1. In a healthcare IoT scenario, for example, if there is a need to implement fine-grained access to a patient's IoT devices, the simple publisher-subscriber approach cannot deliver it. However, if the whole IAM system already maintains a net of relationships among devices, users, and applications, enabling fine-grained access becomes much more straightforward. We believe that this lack of proposals modeling complex IoT relationships not only hampers fine-grained access but also hinders the creation of adaptive authentication mechanisms capable of handling complex situations.

The Privacy & Consent requirement (Req.8) is another one highly tied to the IAM model. We observe that works using the isolated, centralized, and federated models in their pure forms do not offer any mechanism to guarantee the consent of their users.
This problem can be partially dealt with in user-centric models by allowing users to choose their IdPs, as observed in some works [95,104,106]. However, even if users can choose their IdP, that IdP still represents a third party with some control over the users' identities. In theory, the self-sovereign model should offer the strongest consent mechanism by allowing users to choose exactly which information they want to share. Nevertheless, works like [107] that use fog nodes (local centralization) show that some third-party entities can be necessary even in the self-sovereign model.

6 Challenges in Identity and Access Management for IoT

During the past decade, IAMs went from a very narrow and limited research field to an academic and commercial promise for the future of the Internet. With the popularization of IoT, several challenges arise from the complexity and scale it brings to IAMs. This scenario has boosted research, and several works are aware of the new challenges that IoT introduces. In this section, we present some directions for the future of IAMs.

6.1 Improving the relationship and context identity

In this survey, we show that IoT refers to a hyper-connected world, interconnecting heterogeneous IoT devices in several areas with and without human interaction. This characteristic implies that IoT creates a complex set of relationships, which are not well explored in IAM. The discussions above clearly show that, although much research is going on, this topic still has many open issues. Except for the work of Majeed et al. [102], all other works we analyze treat the IdP as a flat-file database of users and devices, with simple relationships such as "owner", "borrowed", and "network". We believe that this approach does not address the dynamicity of IoT and only allows the creation of simple authentication and authorization methods, which may not fit all applications. With more complex relationships and contextual information, IAMs can better comprehend device and user behavior, enabling the IAM to use more information to differentiate normal and abnormal behaviour [114]. We suggest that IAMs move towards graph databases, which use graph structures to represent and store data [115]. These techniques should enable the representation of complex relationships and increase IAM flexibility for IoT (a minimal sketch follows below). Additionally, this richer data structure, when combined with machine learning, statistical modeling, and predictive analytics, should be able to detect abnormal behavior or even predict when a security breach is about to happen [116].
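To make the graph-database direction concrete, the sketch below models a few IoT relationships as a labeled graph and answers a fine-grained access question. The entity names and the permission rule are hypothetical illustrations, not a prescription from the surveyed works:

```python
from collections import defaultdict

# Labeled edges: (subject) -[relation]-> (object)
graph = defaultdict(list)

def relate(subject: str, relation: str, obj: str) -> None:
    graph[subject].append((relation, obj))

relate("alice", "owns", "glucose-monitor")
relate("alice", "patient_of", "dr-bob")
relate("dr-bob", "works_at", "clinic-1")

def can_read(requester: str, device: str) -> bool:
    """Hypothetical rule: a device owner, or a clinician that the owner
    has a patient_of relationship with, may read the device."""
    for relation, obj in graph[requester]:
        if relation == "owns" and obj == device:
            return True
    # Walk one hop: find owners of the device who trust the requester.
    for person, edges in graph.items():
        if ("owns", device) in edges and ("patient_of", requester) in edges:
            return True
    return False

print(can_read("alice", "glucose-monitor"))    # True: ownership edge
print(can_read("dr-bob", "glucose-monitor"))   # True: patient_of relationship
print(can_read("mallory", "glucose-monitor"))  # False: no relationship path
```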
6.2 Moving towards adaptive security

In this survey, we show that IoT is growing in popularity and connecting various devices, users, and services into a dynamic network. However, the lack of dynamic security mechanisms is still an open issue that impacts the scalability of IAMs [117]. Except for the works of Gonçalves et al. [97] and Majeed et al. [102], the security mechanisms, such as authentication and authorization, are static, with limited context visibility. In short, all these mechanisms have a predefined rule that defines how a user authenticates and what access they have to a given service. In IoT, such rules may not be practical to implement, since they lack efficient response strategies in dynamic scenarios [118]. This dynamicity causes several security problems for IoT, such as inappropriate access or invalid authentication denials. We suggest that adaptive security mechanisms should enable the IAM to change its security flow based on the context and relationships of the user (both human and non-human). With this adaptability, the IAM should be able to observe, analyze, and react to IoT dynamics on the fly, adjusting the complexity of security mechanisms to adequate levels based on the users' relationships and context [118].

6.3 Self-sovereign IAMs in IoT

The self-sovereign IAM model comes with the promise of making users the owners of their identities by providing a unified, interoperable, and tamper-proof data structure. In this survey, we show that a distributed ledger can provide secure connections to support identity operations such as authentication and authorization. If we take the seven laws of identity into account, we can conclude that the self-sovereign identity model is, in fact, the future of IAM. The distributed ledger enables a massive append-only data structure: once the system adds data, it can no longer be removed. Due to this immutability and the consensus algorithms, the DLT provides trust where there is no trust; once the system publishes a transaction on the DLT, it cannot be refuted. The main benefit of the self-sovereign IAM is that no one besides the user is responsible for protecting their identity data, meaning that the user creates, manages, and uses their own identities. However, in practice, DLT demands high computational and energy resources, which makes it unfitting for IoT devices, since most of them are limited in both aspects. Many works rely on centralized devices acting as central authorities for IoT devices, such as the "miner" [23] and the private DLT host [108]. Therefore, while the self-sovereign model aims to remove any authority from the IAM system, in practice it ends up creating local centralization hosts, which goes against the main idea of self-sovereign IAM. Even so, we believe that showing DLT is not a silver-bullet solution for IAMs in the IoT context is not enough to invalidate the benefits this technology enables, and further research must be done in this direction.

Conclusion

In this survey, we have presented the concepts, applications, and characteristics of IoT and the concepts and models of IAM systems. To conclude, we present a list of research projects and discuss open research issues. We have explored the literature in an extensive and comprehensive way, discussing the existing works and envisioning the future of IoT and IAM. We observe a high demand, driven by the popularization of IoT, for IAM systems to be more scalable and more dynamic. We observe that in IoT, access management is more complicated, since it needs to understand not only the access rights of a device but also the device's context and why it is making the request.
Among the alternatives proposed recently to complement current IAM systems, we suggest that if the system is able to apply machine learning, statistical modeling, and predictive analytics, access management can be improved in terms of scalability. We also show that blockchain has the potential to revolutionize IAMs; however, it is not a silver bullet, since it has serious performance problems, which can end up producing isolated self-sovereign IAM networks that work well for their own purpose but do not scale to an IAM that connects all IoT devices.
27,291.6
2020-09-08T00:00:00.000
[ "Computer Science", "Engineering" ]
Hidden Markov Model Based Visual Perception Filtering in Robotic Soccer

Autonomous robots can initiate their mission plans only after gathering sufficient information about the environment. Therefore, reliable perception information plays a major role in the overall success of an autonomous robot. The Hidden Markov Model based post-perception filtering module proposed in this paper aims to identify and remove spurious perception information in a given perception sequence using the generic meta-pose definition. This method allows representing uncertainty in more abstract terms compared to the common physical representations. Our experiments with the four-legged AIBO robot indicated that the proposed module improved perception and localization performance significantly.

Introduction

High-level planning modules of autonomous robots have to rely on the perception capabilities to make sensible decisions. Without consistent perception information, autonomous robots cannot act at all, since the available information can never be precise enough to allow accomplishing any goals in dynamic environments. A specific instance of this problem may be found in the Standard Platform League (SPL) (www.tzi.de/spl) of the RoboCup organization (www.robocup.org). In SPL, robots are only equipped with a monocular color camera with a limited field of view. In addition to this limited perception capability, onboard computing power is also a limiting factor in the robots' performance. Together these factors increase the overall uncertainty and pose many challenges to researchers. In SPL, teams of autonomous robots play soccer without obtaining any external help from human operators or an overhead camera. The robots typically use visually perceived landmarks, such as goal posts, beacons, and corners formed by white field lines, to locate themselves on the field shown in Figure 1. Figure 2 shows a diagram of the core software modules commonly used by SPL teams. The visual perception module generates perception information from the images received by the camera. Next, the localization module locates the robot on the field and stores its findings in a global memory location. Given the current world model, the planner module generates a decision, which is carried out by lower-level control algorithms. The planning modules of the robots can generate the most robust decisions only after obtaining consistent low-level perception information. When the information generated by the perception module is spurious, localization precision degenerates, condemning the planning module to generate only suboptimal plans. Most SPL teams (Chown, C. et al., 2008) (Akin, H. L. et al., 2008) (Stone, P.; Hester, T. & Quinlan, M., 2008) (Röfer, T. et al., 2009) have used heuristic approaches to filter out the spurious landmarks, including sanity checks for size and dimensions of perceived objects. This work proposes a novel probabilistic visual filtering technique based on the Hidden Markov Model infrastructure to remove any spurious or unexpected perception information. Using the proposed method, it is possible to develop a prior belief over the visual state space of an autonomous soccer robot. Using this prior estimate, the robot can distinguish between correct and spurious perception information without utilizing any manually coded sanity-checking algorithms. This filter can be implemented as a post-perception module as shown in Fig. 2.
The rest of this paper is organized as follows. Some background information on current visual perception filtering techniques is provided in Section 2, followed by the detailed explanation of the proposed method in Section 3. Real-world experiment results are presented in Section 4 and Section 5. Finally, Section 6 concludes with an overview of the findings and some ideas for further studies.

Related work

Most SPL teams have employed heuristic approaches (Chown, C. et al., 2008) to filter out spurious landmarks, including sanity checks for size and dimensions of perceived objects. For instance, the Cerberus team (Akin, H. L. et al., 2008) has used a ball perception module solely based on sanity checks. In this approach, it is not possible to handle all possible cases, since maintaining such a large set of constraints is a tough programming task. Furthermore, such a module is not general at all; even the slightest alteration in the environment (e.g., changing the size of the ball) is sufficient for the module to fail completely. Some sanity checks have used more elaborate heuristics than simple size- or ratio-based checks. Using the internal sensors of the robot, it is possible to design heuristic sanity-checking algorithms which take into account the robot's current physical posture (Stone, P.; Hester, T. & Quinlan, M., 2008). For instance, the Cerberus team (Akin, H. L. et al., 2008) has implemented a flying-ball sanity check, which projects the candidate ball perceptions onto the ground plane using a camera matrix transformation calculated from the robot's internal sensors. In fact, some teams rely only on the projected values for distance perception of objects (Stone, P.; Hester, T. & Quinlan, M., 2008). Similarly, the B-Human team of SPL has used projected lines to block out the robot's own view in the image input (Röfer, T. et al., 2009). The perception module does not process regions marked with these lines, hence any misperceptions that could occur in these regions are eliminated. In addition, visual processing takes less processing time, since only a smaller region of the image is processed. All of these heuristic solutions are aimed at removing spurious perceptions. However, such hand-coded approaches can never guarantee completeness due to the immense size of the input space. Consider a pixel on an image, which can display 256^3 different values in the commonly used RGB color model. If the image has 320×240 of these pixels, then there are (256^3)^(320×240) ≈ 3.07 × 10^554,858 possible numerically distinct images. Enumerating through the possible images to test the heuristic methods is a non-trivial task. Consider the next available higher resolution, (256^3)^(640×480) ≈ 8.95 × 10^2,219,433, which shows that enumeration is quickly out of the question as the resolution increases. In fact, there are methods to reduce the size of the space by using a classification step (Akin, H. L. et al., 2008), which can reduce the number of possible colors in a pixel from 256^3 to around 10. As a result, we end up with 10^(320×240) = 1 × 10^76,800 distinct states to check, and further methods might be introduced to provide even more reduction. However, all of these reductions will introduce numerous assumptions with side effects involving systematic errors. For instance, if the classification method is not working as expected due to lighting conditions, then the colors may be misperceived, leading to another kind of perception problem.
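As a quick sanity check of the combinatorial figures above, the exponents can be reproduced in a few lines of Python (illustrative only):

```python
import math

def distinct_images_log10(colors_per_pixel: int, width: int, height: int) -> float:
    """log10 of the number of numerically distinct images of a given size."""
    return width * height * math.log10(colors_per_pixel)

print(distinct_images_log10(256**3, 320, 240))  # ~554858.5, i.e. ~3e554858 as cited
print(distinct_images_log10(256**3, 640, 480))  # ~2219434, i.e. ~9e2219433
print(distinct_images_log10(10, 320, 240))      # 76800.0, i.e. 10^76800
```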
Proposed method

Typically, the perception capabilities of an agent are expected to increase as the amount of perception data received increases. However, there is no free lunch in perception processing, as in any other processing system: as the perception capabilities increase, the processing power requirements also increase, which might not always be available due to the limitations of mobile robotics platforms. One of the best ways of handling large amounts of data with limited processing power is to employ probabilistic methods (Fox, D.; Thrun, S. & Burgard, W., 2005). Visual perception filtering is an instance of such problems: there are large amounts of data coming from a very large visual state space that need to be processed, and the processing cycle is expected to be at most on the order of tens of milliseconds on a limited mobile platform. In this section, the underlying probabilistic framework is first described briefly. Next, a description of the proposed method is provided in accordance with the probabilistic framework.

Probabilistic Infrastructure

The Hidden Markov Model (HMM) (Alpaydin, E., 2004) (Cemgil, A. T., 2008) is essentially a probabilistic filtering method, which can be used to track various possible paths in a state space. Given a state space definition, a transition model, and an observation model, an HMM can track the incoming signal of the observed state and provide an expectation for the next observation state. At any time point t in the received observation sequence, an HMM maintains a probability distribution over the defined state space. The maximal point of the probability distribution represents the most likely state. There can be states with very similar expectation values depending on the characteristics of the received sequence. However, the model becomes more effective in tracking the incoming signal as time passes. The sections below provide further information on the basic design questions of the HMM components.

States

An HMM typically works with a state vector representing all possible states of the system. It is important to design the state vector at the right level of abstraction. A too-specific state vector with too many states would be intractable to process. Similarly, a too-general state vector might not provide enough detail about the environment. Thus, the goal of state definition design is to come up with a concise and efficient state definition.

Transition Model

Once the state definition is set, the next step in designing an HMM is to formulate a transition model to provide an idea about the successor state of the system given the current state. According to the Markovian assumption of the HMM, a single state of the system defines the system completely, independent of any past states. Thus, our prediction about the current state should be sufficient to make predictions about the next possible states. In order to predict the next state, we essentially need a vector of the same size as the state vector representing the next state given the current state. Once we have this definition, we can calculate the expected distribution over next states as

$\hat{b}(x') = \sum_{x} p(x' \mid x)\, b(x)$

for any given state $x$, where $b$ is the current belief over states and $p(x' \mid x)$ is the transition probability. We therefore define a probability distribution for each state, represented by a discretized probability vector. These vectors are used to form a matrix called the transition model matrix.
Observation Model

An observation may not necessarily belong to its corresponding landmark, due to the uncertainties associated with the observation generation procedure. For instance, in robotic soccer a goal bar may be falsely perceived as a beacon, or a goal bar may be perceived where nothing should be observed. To handle such uncertainties, the observation model of an HMM defines another set of vectors, specifying a probability distribution over all possible states for each observation. Similar to the transition matrix, an observation model matrix can be generated using the observation probability vectors for each state.

Visual Perception Filtering using an HMM

The filtering algorithm is presented in the corresponding figure. In this representation, a small set of discrete values is used to represent all of the possible physical conditions in which the corresponding landmark might be observed. This definition removes the higher-level module dependencies, including localization information, since we are no longer interested in the specific position of a robot in the environment. Instead, the module only requires an indication of a possible meta-pose. Thus, all that the system requires as input is reduced to the output of the lower-level perception modules. It is possible to define high-level maps using the meta-pose as the state definition of the proposed HMM implementation. Commonly, such maps are constructed based on specific landmarks that indicate particular positions in a given environment, whereas the use of meta-pose definitions allows us to define more abstract maps. For instance, two goals on the opposite sides of the field can be considered as landmarks in a robotic soccer field. The meta-pose definitions of these landmarks provide us with all the possible physical configurations of a robot in which the corresponding goal might be seen. In this case, the high-level map will contain two landmarks, each representing a physical goal. These definitions enable us to apply further reasoning to the received perception information. For example, two landmarks cannot be observed simultaneously, due to the physical limitations of the environment and the robot's narrow visual field of view. One benefit of meta-pose definitions is that they allow the implementation of the previously mentioned high-level reasoning using a simple HMM filtering implementation without any need to specify sanity-checking rules explicitly.

In the 2008 version of the AIBO soccer field in the Standard Platform League (Fig. 1), the landmarks selected for the experiments were the two beacons and the four vertical bars of the goals, making a total of six landmarks. A seventh state was used to represent the meta-pose in which no observations were made. The state vector was initialized with the uniform distribution, since no observations were available at the beginning of processing.

Transition Model Definition

For the columns of the transition matrix, a Gaussian transition probability distribution is a reasonable assumption, derived from the observation of physical constraints of the environment. When a landmark is perceived, observing that particular landmark and the landmarks around it becomes more probable in the next state. This assumption can be used in other domains as well, where the states are expected to be observed in an ordered fashion.
Having no observation in an image indicates the current state to be the seventh state. In such cases, it is not easy to make a guess about the next state; therefore, the seventh column of the transition matrix has a uniform distribution. The cells with the value zero are taken to be 0.0001 in the implementation of all matrices, so that the probabilities will not converge to zero.

Observation Model Definition

Table 3 shows the observation matrix used in the experiments. The rows indicate observations received in the corresponding states. For example, the second row shows information about the second meta-pose, which corresponds to the right yellow goal bar on the robot soccer field. The value in the second column of the second row of the matrix indicates the probability of being in meta-pose number 2 when a right yellow goal bar is observed. As can be seen in the table, the diagonal values are all the same. The matrix contains all combinations of possible perceptions and possible states. The values were formulated using empirical observations and prior expertise on the subject.

Table 3. Expert-coded observation matrix parameters (rows: expected meta-pose; columns: observed meta-pose).

           1     2     3     4     5     6     7
    1    0.50  0.12  0.12  0.01  0.12  0.12  0.01
    2    0.15  0.50  0.15  0.01  0.01  0.01  0.03
    3    0.15  0.15  0.50  0.01  0.01  0.01  0.03
    4    0.01  0.12  0.12  0.12  0.12  0.12  0.01
    5    0.15  0.01  0.01  0.50  0.50  0.15  0.03
    6    0.15  0.01  0.01  0.15  0.15  0.50  0.03
    7    0.12  0.12  0.12  0.12  0.12  0.12  0.25

Our goal perception module rarely perceives goal bars on beacons; thus, all such values are given a rather low value of 0.15. Other misperception expectations are defined similarly. Just like the transition matrix, the seventh state also requires special treatment. Perceptions may or may not be correct when the system is in the seventh state, since the robot is not in any one of the physical meta-poses that indicate a prior position of the robot. The value at column 7, row 7 is lower than the rest of the diagonal values. The reason for this difference is that the proposed HMM implementation is slower to believe that the absence of observations indicates the seventh meta-pose than in other cases, in order not to waste the valuable effects of the received perception information. The final parameter of our HMM implementation is the unexpected-state threshold, which defines how much belief is required for a state to be an expected state. Considering the above parameters, the value of 0.1 was found to be appropriate empirically.

Computational Complexity

The probabilistic solution consists of a single HMM update for each received observation, which requires multiplication of a state vector and two matrices. If the state space is of size N, then the state vector is also of size N and the transition/observation matrices are of size N×N. Since the values of the vectors and matrices are known at compile time, there is much room for optimization. In the most extreme case, all possible values may be calculated beforehand to implement a lookup table for the HMM update procedure. Therefore, it is possible to perform an HMM update in our implementation in constant time, which makes the computational complexity of the system O(1).
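A minimal sketch of the resulting filtering update in Python is given below. The matrices are illustrative placeholders (the paper's Gaussian-banded transition columns and the Table 3 values would be substituted), and the 0.1 threshold follows the text:

```python
import numpy as np

N = 7  # six landmark meta-poses plus one "no observation" state

# Placeholder transition model: mass on the current state and its neighbours
# (a crude stand-in for the Gaussian columns); column 7 is uniform, as in the text.
T = np.full((N, N), 0.0001)
for j in range(N - 1):
    T[j, j] = 0.6
    T[(j - 1) % (N - 1), j] = 0.2
    T[(j + 1) % (N - 1), j] = 0.2
T[:, N - 1] = 1.0 / N
T /= T.sum(axis=0)  # column j gives P(next state | current state j)

# Placeholder observation model: O[s, o] = P(observe o | true meta-pose s).
O = np.full((N, N), 0.05)
np.fill_diagonal(O, 0.5)
O /= O.sum(axis=1, keepdims=True)

belief = np.full(N, 1.0 / N)   # uniform prior, as in the paper
UNEXPECTED_THRESHOLD = 0.1     # empirical value from the text

def hmm_filter_step(belief, obs):
    """One predict-update cycle; flags the observation as spurious when the
    predicted belief in the observed meta-pose falls below the threshold."""
    predicted = T @ belief                    # time update (prediction)
    spurious = predicted[obs] < UNEXPECTED_THRESHOLD
    updated = O[:, obs] * predicted           # measurement update
    return updated / updated.sum(), spurious

belief, spurious = hmm_filter_step(belief, obs=2)
print(spurious, belief.round(3))
```

The update is two matrix-vector products per frame, which matches the constant-time lookup argument above once the matrices are fixed.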
Experimental setup

Two different sets of experiments in our study were conducted with the four-legged AIBO robot (http://tinyurl.com/yb5zeej) on a scaled SPL field (version 2008), shown in Figure 4. In the first set of experiments, the performance of the proposed module was tested according to its effects on the number of misperceptions. Figure 3a shows the path the robot follows on the field during the misperception experiments. The data collected in each of the five runs were manually analyzed to calculate the false positive and true positive responses of the system. In the second set of experiments, the effects of the proposed module on our current localization algorithm (Monte Carlo based localization) (Kaplan, K. et al., 2005) were tested. Since in most robotic systems the localization module is a mission-critical module, it was considered to be a good modality for our experiments. Figure 3b shows the path the robots follow during the localization performance experiments. This path was particularly selected to be towards the misplaced goal landmark, since the effects of the proposed module could only be observed in the presence of misperceptions. In the localization performance experiments, the robot went to its target position from its starting point. The robot was stopped when it reached its target, and logs were recorded. This experiment was repeated five times starting from each of the two initial locations. An overhead camera system was used in these experiments to provide the ground-truth location of the robot as supervision input. The output of the localization algorithm was compared with the ground-truth value. In both experiments, the AIBO robot went to a specific point on the field (Figure 4) using its localization module. Some additional landmarks were placed around the field at unexpected positions to generate spurious observations. For instance, a yellow goal was specifically placed on top of the blue goal, and two additional beacons were misplaced at opposite corners of the field, as shown in Figure 5.

Misperception Elimination Experiment

The results of the misperception elimination experiment are presented in Table 4. About 80 percent of the misperceptions were removed in a total of 4089 frames. Along with these misperceptions, some of the legitimate perception information was also removed as a side effect; this amounted to 22 percent of the otherwise available perception information. When frames with no observation were considered valuable, the ratio of false positives (legitimate perception information) decreased to 10 percent.

Localization Performance Experiment

The primary effect of our proposed post-perception module was the removal of misperceptions from the localization input. Particles of the Monte Carlo localization algorithm (Kaplan, K. et al., 2005) diverged in cases where misperceptions appeared consistently in the input of localization. In such cases, our algorithm only used the odometry information to update the pose estimate, delaying the effects of the divergence for a limited period of time. Figures 6-8 show the progress of particles during a typical experiment. The red dot shows the pose estimate of the robot, and the pink dot shows the ground truth provided by the overhead camera system. The robot started from a corner of the field facing the misplaced yellow goal. At the beginning, when the robot was at a distant point on the field, there were no misperceptions observed, and the particles converged satisfactorily, as seen in Figure 6.
Figure 7 shows that one of the bars of the yellow goal was falsely perceived, and the particles diverge after nine frames (around 0.27 seconds), as shown in Figure 8. A numeric representation of the proposed module's effects on the localization algorithm was the particle-based error metric, calculated as the average of the errors made by each particle compared to the ground-truth position:

$e_t = \frac{1}{N} \sum_{i=1}^{N} \lVert \mathbf{x}_i^{(t)} - \mathbf{x}_{\mathrm{gt}}^{(t)} \rVert$

where $\mathbf{x}_i^{(t)}$ is the pose of particle $i$ at frame $t$, $\mathbf{x}_{\mathrm{gt}}^{(t)}$ is the ground-truth pose, and $N$ is the number of particles. This metric showed a rapid increase or decrease depending on the received perception information. Since the unexpected perceptions were removed by the proposed module, sudden changes in particle errors were less common in the corresponding graphs. When the proposed module was active, the spurious perception information was not sent to the localization module, avoiding divergence of the particles. Figures 9-11 show the results of the localization performance experiments, where the error rate is calculated according to the error metric defined above. Blue curves in these figures represent error rates in the standard run of the localization algorithm. The red curves show the reduced error rate that we obtained by introducing the proposed post-perception filtering module into the robot control system. For instance, in Figure 9, the standard run of the present localization module diverged four times, whereas the proposed module handled these divergences by removing the misperceptions, as shown by the red curve. Figure 10 also shows a similar filtering case, where the proposed module was able to remove the disastrous effects of misperceptions. Figure 11 represents a typical situation, where the proposed module was able to filter the first three divergences by removing erroneous perception information. However, if misperceptions are quite persistent and correct perception information is no longer available, then the post-perception module starts to believe in the spurious perception, after about 400 frames or 12 seconds. On the other hand, this degeneration is necessary to handle kidnapping situations, where the robot is instantly moved to a new location in the environment.
Conclusions

Commonly used perception algorithms on mobile robots have many assumptions and are thus bound to be suboptimal, primarily due to time and/or space complexity problems associated with the size of the input space. Most of the problems encountered in these methods may be traced to inaccurate perception of the landmarks, either by lack of their perception or, worse, by their misperception. In this study, we proposed a Hidden Markov Model based approach, which creates an expectation of the landmarks to be perceived. It has been possible to detect unexpected landmarks using this probabilistic approach. The experiments we conducted in a real-world environment, namely on the Standard Platform League setup of the RoboCup, clearly demonstrated the benefits of our proposed method. Results of the misperception elimination experiments indicated that 80 percent of the misperceptions were eliminated on average. The second set of experiments in our study provided even more conclusive findings. Initially, the localization module failed in the presence of spurious landmarks. However, the results of our second experiment showed that the localization module worked much more successfully when misperceptions were filtered by the proposed module. Consequently, our study has demonstrated the critical role of Hidden Markov Models in filtering misperceptions in visual input. Further studies should be performed to extend our proposed module with online learning methods, to be used for learning the effects of misperceptions in dynamic environments, and with sensor fusion techniques for better use of simpler perception methods.

Fig. 4. Misperception (a) and localization (b) experiment paths. The circles indicate the starting points; the plus signs show the targets.
5,164.6
2009-12-01T00:00:00.000
[ "Computer Science" ]
Numerical Study of a Large Liquid Rocket Plume Flow Field

Aiming at the complexity of the plume field structure caused by plume collision in the ascent phase of a multi-nozzle parallel rocket, this paper establishes an analytical model of the ascent phase of a nine-nozzle rocket. The plume field phenomena are studied at different altitudes through numerical simulation. The reliability of the numerical method is verified by comparing simulation results against wind tunnel test data. The analytical results show that violent collisions occur between the jets in the ascent phase of a multi-nozzle rocket, and many phenomena, such as circulating vortices, gas reflux, and reflux back-splash, occur at different altitudes. When the altitude is less than 25 km, gas reflux and a circulating vortex appear at the rocket base. With increasing altitude, the jet collision impacts the base. After the altitude reaches 45 km, back-splash appears in some areas of the rocket base. There is an obvious flow diversion phenomenon on the side wall surface. The higher the altitude, the greater the expansion angle of the jet after the gas exits the nozzle.

Introduction

To enhance the capacity of large rockets, the United States, Russia, the European Union, and Japan have used power schemes with multiple nozzles in parallel, such as the United States' Saturn and Delta IV rockets [1], Russia's Proton-M rocket, the EU's Ariane 5 series of rockets [2], and Japan's H-IIA, among others. During the ascent of multi-nozzle parallel rockets, the complex interactions between the high-speed, high-temperature, high-pressure jets at the engine exits, and between the jets and the incoming flow, lead to an extremely harsh thermal environment at the bottom of the rocket body. The prediction of the thermal environment at the base of large multi-nozzle parallel rockets therefore has a direct impact on the direction and validity of the rocket's thermal protection program. However, due to the complexity of the flow field environment at the base of multi-nozzle rockets, there is a large gap between theoretical analysis and wind tunnel test and telemetry data, which makes theoretical analysis difficult to use directly for predicting the flow scenario at the base of large rockets [3]. Therefore, it is necessary to design and establish a numerical analysis model for large rockets to predict the plume flow field and the base flow field environment during launch, providing a reference for the thermal protection program of large multi-nozzle rockets.
At present, many foreign scholars have conducted relevant studies on the plume and base flow field during the launch of large multi-nozzle rockets. Zhou Z T [4] simulated the reactive and non-reactive flow of three-nozzle configurations; as the flight altitude increases, the temperature of the rocket rises accordingly, and the base heat flux shows a trend of first increasing and then decreasing. Whitmore S [5] analyzed the effect of radiative heating on the oxidizer-to-fuel ratio of additively manufactured hybrid rocket fuel and concluded that the emerging thermoplastic-material combustion anomaly is due to radiative heat transfer. Cross P [6] investigated the effects of complex refractive index, particle size distribution, and exit-plane radiation boundary conditions on radiative heat transfer within solid propellant rocket motors, improving the accuracy of radiative heat flux predictions for rocket nozzles. Maxim [7] analyzed radiative heat transfer for the Mars atmosphere, containing carbon dioxide and nitrogen, using the discrete ordinates (DO) radiation model with different spatial discretization angles. George F [8] analyzed measured data from a Delta rocket strapped with six solid boosters and concluded that solid-propellant booster exhaust plume radiation, together with turbine exhaust backflow and reignition, are the main factors affecting the heating rate of the rocket thrust structure.

In recent years, many scholars in China have carried out a series of studies on rocket plume and base thermal environment problems through numerical analysis. Yang Y [9] carried out numerical simulations of the plume and thermal environment of a multi-engine parallel rocket and analyzed the influence of the engine plume on the heat flow distribution of the rocket body at different altitudes. The influence of the external flow field on the base thermal environment was analyzed by Yan Z J [10] by modeling the core stage with four boosters. Zhou Z T [11] studied the thermal environment at the base of a liquid rocket and found that the radiant heat decreases as the altitude rises, while the convective heat at the rocket base first increases and then decreases. Yang X J [12] combined theoretical and numerical analysis methods to analyze the thermal environment of a solid rocket aft section, found that there are significant differences between ground and flight conditions, and noted that the convective and radiant heat calculations under the flight environment should be fully considered.
Up to now, scholars at home and abroad have analyzed the plume and thermal environment of large rockets mainly by means of numerical simulation, and the research focus has mainly been the heat flow distribution of single- or double-nozzle configurations. However, large rockets often use power schemes with multiple nozzles in parallel, whose base flow field characteristics are significantly different and whose thermal environment follows different laws. At the same time, large multi-nozzle rockets cross widely different altitudes during ascent, so their environmental conditions vary greatly. The plume flow field of multi-nozzle parallel rockets and the heat flow to the body are strongly affected by the environment, and the flow state at the rocket base, its governing mechanisms, its development laws, and its influencing factors are still unclear. Research on multi-nozzle configurations is therefore urgently needed to provide a reference for the body and base thermal protection of new-generation large rockets.

This paper takes a nine-nozzle liquid oxygen/kerosene rocket as the research object and mainly studies the structural characteristics of the plume flow field around the rocket body at different altitudes and their influencing factors. Based on the results, a reference is provided for the nozzle layout at the base of multi-nozzle liquid rockets.

2.1. Physical Model

The geometric model of the nine-nozzle liquid oxygen rocket is shown in Figure 1. The length of the core-stage rocket is L, and the diameter of the base is D. The layout of the nozzles at the base of the rocket is shown in Figure 2, with no deflection of the center nozzle. The nozzle outlet distance from the rocket base is $L_1$, and the outward deflection angle of the engine nozzle axis is $\gamma$. During the ascent of a liquid rocket, the base flow region is mainly affected by the engine jets and the high-speed incoming flow, which is a typical compressible flow [13]. Based on the Navier-Stokes equations, the governing equations for continuity, momentum, and energy can be expressed in conservation form, where $\rho$ is the density, $\mathbf{U}$ is the velocity vector, and $E$ is the total energy, including the kinetic energy, the internal energy $i$, and the potential energy $P$, as in Eq. (4); $p$ is the pressure, $\sum_j h_j \mathbf{J}_j$ is the energy transport due to diffusion of components, $S_r$ is the radiative energy source term, and $\bar{\bar{\tau}}$ is the viscous stress tensor.
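The displayed governing equations did not survive extraction; with the variables as defined above, the standard compressible conservation forms the text describes read as follows (our reconstruction, not a verbatim copy of the paper's Eqs. (1)-(4)):

```latex
% Continuity, momentum, energy, and total-energy definition
% (standard conservation forms; S_r is the radiative source term).
\begin{align}
  &\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{U}) = 0, \\
  &\frac{\partial (\rho \mathbf{U})}{\partial t}
     + \nabla \cdot (\rho \mathbf{U} \otimes \mathbf{U})
     = -\nabla p + \nabla \cdot \bar{\bar{\tau}}, \\
  &\frac{\partial (\rho E)}{\partial t}
     + \nabla \cdot \left[ \mathbf{U} (\rho E + p) \right]
     = \nabla \cdot \Bigl( k_{\mathrm{eff}} \nabla T
       - \sum_j h_j \mathbf{J}_j
       + \bar{\bar{\tau}} \cdot \mathbf{U} \Bigr) + S_r, \\
  &E = i + \tfrac{1}{2} \lvert \mathbf{U} \rvert^{2} + P.
\end{align}
```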
In this paper, the fuel/oxidizer combination is liquid oxygen and kerosene. The combustion products contain water vapor, carbon dioxide, carbon monoxide, and other gas components, forming a gas mixture whose specific heat is determined by the mass-fraction percentages of the constituent gases. According to the literature [14], the finite-rate chemical reaction model can be replaced by an equivalent air-gas component mixing model for calculating the heat flow distribution of a liquid-fuel rocket, with little difference in the calculated results. At the same time, combustion in a liquid oxygen/kerosene rocket motor is relatively complete, the percentage of re-ignitable gases in the products is very low, and the effect of afterburning on the rocket wall is relatively small, especially under the oxygen-thin conditions at high altitude. Therefore, the equivalent air-gas component mixing finite chemical reaction model can be used for engineering calculations.

For each gas component $i$ in the flow, the transport equation for its mass fraction $Y_i$ is

$$\frac{\partial}{\partial t}(\rho Y_i) + \nabla \cdot (\rho \mathbf{U} Y_i) = -\nabla \cdot \mathbf{J}_i + R_i,$$

where $R_i$ is the chemical reaction rate of gas $i$ and $\mathbf{J}_i$ is the component diffusion flux. For turbulent flow, the component diffusion can be expressed as

$$\mathbf{J}_i = -\left(\rho D_{i,m} + \frac{\mu_t}{Sc_t}\right)\nabla Y_i - D_{T,i}\,\frac{\nabla T}{T},$$

where $D_{T,i}$ is the thermal diffusion coefficient, $D_{i,m}$ is the mass diffusion coefficient, $\mu_t$ is the turbulent viscosity, and $Sc_t$ is the turbulent Schmidt number.

2.2.2. Spatial discretization scheme

In this paper, the finite volume method is used to discretize the governing equations, given the strong disturbances around the nozzles and the rocket body in the flow field. The second-order upwind scheme has fast convergence and high numerical stability, so it is used together with a second-order TVD scheme. The diffusive terms use the central difference scheme, gradients are computed with the least-squares method, and cell-face fluxes are discretized with the Roe FDS scheme [15]. The Gauss-Seidel iterative method is used [16].

Turbulence model

The SST k-ω turbulence model is adopted, which is more realistic for the collision of multiple gas jets at the base of the rocket as well as for simulating the high-Reynolds-number turbulent flow between the jets and the air; the heat flux density distributions obtained with it agree better with experimental results [17][18][19]. The SST (Shear Stress Transport) k-ω model, first proposed by Menter [20], is a turbulence model for solving shear flow problems subject to wall-function constraints. The transport equations of the SST k-ω model are

$$\frac{\partial}{\partial t}(\rho k) + \nabla \cdot (\rho \mathbf{U} k) = \nabla \cdot (\Gamma_k \nabla k) + G_k - Y_k,$$

$$\frac{\partial}{\partial t}(\rho \omega) + \nabla \cdot (\rho \mathbf{U} \omega) = \nabla \cdot (\Gamma_\omega \nabla \omega) + G_\omega - Y_\omega + D_\omega,$$

where $k$ is the turbulent kinetic energy, $\omega$ is the turbulent dissipation rate, $\Gamma_k$ and $\Gamma_\omega$ are the equivalent diffusivities, $G_k$ and $G_\omega$ are the generalized generation source terms, $Y_k$ and $Y_\omega$ are the dissipation source terms, and $D_\omega$ is the cross-diffusion source term.

Radiation model
The combustion products in the engine combustion chamber include CO2 and other strongly radiating gases such as water vapor during the flight of a liquid rocket [21]. Meanwhile, solid particles result from the rapid cooling of the incompletely combusted portion of the fuel. In the high-temperature environment, the strongly radiating gases and solid particles work together to transfer part of the radiative energy to the solid base of the rocket, where it is absorbed, heating the base wall. The DO (Discrete Ordinates) model [7,22,23] discretizes space into a finite number of solid angles to compute the radiation problem, and its accuracy can generally be improved by a denser angular discretization. The DO radiation model can handle radiation across all optical depth intervals, especially radiative heat transfer through a participating medium; its theory is mature and its numerical results are stable, so it can be used to solve the base radiant-heating problem of large liquid rockets. The spectral radiative transfer equation of the DO model is

$$\frac{dI_\lambda(\mathbf{r},\mathbf{s})}{ds} = -(a_\lambda + \sigma_s)\, I_\lambda(\mathbf{r},\mathbf{s}) + a_\lambda I_{b\lambda} + \frac{\sigma_s}{4\pi}\int_{4\pi} I_\lambda(\mathbf{r},\mathbf{s}')\,\Phi(\mathbf{s}\cdot\mathbf{s}')\,d\Omega',$$

where $\lambda$ is the wavelength, $a_\lambda$ is the spectral absorption coefficient, $I_\lambda$ is the spectral radiant intensity, $I_{b\lambda}$ is the blackbody intensity given by Planck's law, $s$ is the path length, $\mathbf{r}$ and $\mathbf{s}$ are the position vector and direction vector respectively, $\sigma_s$ is the scattering coefficient, and $\Omega$ is the solid angle.

When the liquid rocket rises to high altitude, the water vapor and carbon dioxide content in the flow field is high. The Planck-mean absorption coefficient is used to fit the absorption coefficient of the medium, which can more efficiently and accurately capture the weakening of the radiation intensity caused by absorption of the radiant energy along the transmission path. With the grey-body model [24], the absorption coefficient is calculated using the Planck-mean absorption coefficient [13,25], expressed as Eq. (10):

$$\kappa_P = \frac{\int_0^\infty \kappa_\eta I_{\eta b}\, d\eta}{\int_0^\infty I_{\eta b}\, d\eta},$$

where $\kappa_\eta$ is the absorption coefficient corresponding to wave number $\eta$, and $I_{\eta b}$ is the blackbody intensity corresponding to wave number $\eta$. Table 1 gives the molar percentage distribution of the different gas components in the environment.

Grid division and boundary conditions

The geometric model is symmetric, so a half model is used for the calculations in this paper. During high-altitude flight, the flow at the base of the rocket is the most intense and complex; thus, the mesh is refined in the region of the base plate of the rocket body to ensure the accuracy of the numerical calculations. The whole fluid domain is a cylinder with a radius of d and a length of $L_e$. Meanwhile, to capture the flow near the wall more accurately, boundary layer theory is used to analyze the flow separation at the wall, with a total of 18 boundary layers and an initial thickness of $h_{in}$. The incoming flow direction uses a pressure far-field boundary condition, and the back end uses a pressure outlet boundary condition. The rocket wall is a constant-temperature wall, and the nozzle interior is an adiabatic wall. The engine inlet temperature $T_{in}$ is 3539.6 K, and the inlet pressure $P_{in}$ is 17.2 MPa. The grid division and boundary conditions are shown in Figure 3.
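Stepping back to the Planck-mean absorption coefficient of Eq. (10), the averaging can be made concrete with a small numerical sketch. The spectral data, wave-number grid, and temperature below are placeholders, not the paper's values:

```python
import numpy as np

# Standard physical constants.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_intensity(eta, T):
    """Blackbody spectral intensity per unit wave number eta (1/m) at T (K)."""
    return 2.0 * H * C**2 * eta**3 / (np.exp(H * C * eta / (KB * T)) - 1.0)

def planck_mean_absorption(eta, kappa_eta, T):
    """Planck-mean kappa: blackbody-intensity-weighted average of kappa_eta."""
    ib = planck_intensity(eta, T)
    return np.trapz(kappa_eta * ib, eta) / np.trapz(ib, eta)

# Placeholder spectral absorption data for a hot H2O/CO2 mixture.
eta = np.linspace(5.0e4, 1.0e6, 2000)          # wave numbers, 1/m
kappa = 0.5 + 0.4 * np.sin(eta / 1.0e5) ** 2   # made-up spectral kappa, 1/m

print(planck_mean_absorption(eta, kappa, T=3000.0))  # effective grey coefficient
```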
Model validation

To verify the accuracy and validity of the numerical methods, a numerical simulation of the four-nozzle rocket of [14] and [26] is carried out under the subscale test conditions of [27]. The subscale four-nozzle model is used to simulate the effect of the gas jets on the heat shield at the base of the rocket under realistic incoming flow conditions in the wind tunnel. The distribution of the heat flux density at the base is measured by sensors pre-set at fixed radial positions on the rocket base. In the wind tunnel test, the free-stream Mach number $M_\infty$ is 2, the nozzle expansion ratio is 6.9, the expansion angle is 17.5°, and the throat diameter is 0.021 m. In the numerical validation, the nozzle pressure ratio $p_c/p_\infty$ is 1190. The nozzle arrangement of the simulation model is shown in Figure 4, with 4 nozzles in total. The sampling line is the intersection line between the symmetry plane of the nozzles and the base plate. The sensor measurements under wind tunnel test conditions and the numerical calculations of this paper are compared after smoothing and nondimensionalization; the nondimensionalization method is shown in Eq. (12). The dimensionless results show that the simulated wall temperature distribution of the four-nozzle rocket agrees well with the wind tunnel measurements, indicating that the numerical method of this paper has good accuracy in calculating the distribution laws of multi-nozzle plume fields.

Analysis and discussion

During the ascent phase of a large multi-nozzle rocket, there is a large altitude span, over which the external incoming flow environment changes significantly; the main parameters include the ambient pressure, the temperature, and the incoming flow Mach number. The incoming flow Mach numbers and environmental conditions at different altitudes are listed in Table 2. According to the nine working conditions corresponding to the different altitudes in Table 2, the development of the external plume of the liquid rocket and the heat flow distribution on the heat shield at the base of the rocket at different altitudes are analyzed. In the radial distribution plots, the larger $r/r_b$ is, the closer the point is to the edge of the base plate. Because of the large changes in environmental parameters, the development of the multi-nozzle plume is affected by the external pressure and the incoming flow Mach number, and the plume expansion and the collisions between plumes differ during the ascent phase of the liquid rocket. Two cut planes, A and B, through the flow field are shown in Figure 5: the grey line is the base plate of the rocket, the outer orange line indicates the side wall of the rocket, and the green circles indicate the nozzles. The base-plate sampling lines in the following text are the intersection lines of cut plane B with the base plate.

Mach number distribution of the plume field

Figure 6 shows the distribution of the plume of the multi-nozzle rocket at different altitudes, with cut plane A.
As the flight altitude rising, the expansion angle of the gas jet increases, and the collision between the jets forms a collision zone.The upper boundary of the collision zone is constantly close to the base plate of the rocket, and the jet collision gradually forms the gas flow to the base plate, the gas reflux.With the height increasing, the incoming pressure decreases sharply, the degree of compression of the jet is weakened, and the degree of expansion of the jet is further aggravated.bottom of the arrow is lower, the air supply flow is closer to the bottom of the arrow, as shown in Figure 7(d~e).As the height increasing further, the pressure gradient from the collision zone of the jet to the bottom plate increases, resulting in part of the jet in the collision zone impacting the bottom plate in the reverse direction, as shown in Fig. 7(d to e) shows.The ambient pressure is reduced, the effect of gas reflux is strengthened, resulting in the pressure between the bottom plate is greater than the ambient pressure, the circulating vortex near the bottom plate disappears, and the collision zone away from the bottom plate is formed by the sidewall incoming and outgoing jets as shown in Fig. 7(d to e). Figure •7(f i).Meanwhile, at high altitude(H>35 km), obvious flow separation phenomenon can be observed in the sidewall and part of the reflux splash zone in the bottom plate. Dynamic pressure distribution at the base of the rocket Figure 7 shows the dynamic pressure distribution at the bottom of the rocket with the flight altitude, where different colors represent different magnitudes of the dynamic pressure, and the section mode is B. At low altitude, due to the gas jet ejection effect [28], the surrounding air flows into the bottom region of the rocket, forming a gas feed flow.As the altitude increasing, the velocity of the incoming flow increases, and the pressure at the bottom of the arrow is lower, the air supply flow is closer to the bottom of the arrow, as shown in Figure 7(a ~ c).As the height increasing further, the pressure gradient from the collision zone of the jet to the bottom plate increases, resulting in part of the jet in the collision zone impacting the bottom plate in the reverse direction, as shown in Figure 7 (d~e).with the ambient pressure is reducing, the effect of gas reflux is strengthened, resulting in the pressure between the bottom plate is greater than the ambient pressure, the circulating vortex near the bottom plate disappears, and the collision zone away from the bottom plate is formed by the sidewall incoming and outgoing jets as shown in Figure 7(f ~ i).Meanwhile, at high altitude(H>35 km), obvious flow separation phenomenon can be observed in the sidewall and part of the reflux splash zone in the bottom plate. 
Conclusion This paper establishes an analytical model of nine-nozzle rocket for the plume flow field and thermal environment at the bottom of the ascent section of multi nozzle rocket.The plume flow field and thermal environment is discussed at the bottom of the plume under a total of nine altitudes.After discussing, the following conclusions: The collision between different jets occurs in the flow field of the nine-nozzle rocket, and the flow structure of the bottom plate differs greatly.When the altitude is less than 25km, jet reflux and circulating vortex appear in the bottom area of the rocket.With the increase of altitude, the jet collision impacts on the bottom plate.After the altitude reaches 45km, part of the back splash occurs in the bottom area of the rocket.There is an obvious flow diversion phenomenon in the side wall.The higher the altitude is, the larger the jet expansion angle is. The results of this paper can provide a certain reference for the thermal protection of new generation of large multi nozzle liquid rockets. Figure 1 . Figure 1.Geometric model of the launch rocket.Figure 2. Disposition of nozzle installation. (a) Symmetric mesh (b) Refinement of local mesh Figure 4 . Figure 4.The Nozzle layout at the bottom of rocket. Figure 7 shows the dynamic pressure distribution at the bottom of the rocket with the flight altitude, where different colors represent different magnitudes of the dynamic pressure, and the section mode is B. At low altitude, the surrounding air flows into the bottom region of the rocket, forming a gas feed flow.As the altitude increasing, the velocity of the incoming flow increases, and the pressure at the ( Figure 7 . Figure 7. Contours of dynamic pressure distribution in the bottom of rocket. Table 1 . Molar percentage of gases.
4,944.8
2024-05-01T00:00:00.000
[ "Engineering", "Physics" ]
SIMPLE MODEL OF PELLET COMBUSTION IN RETORT BURNERSIMPLE MODEL OF PELLET COMBUSTION IN RETORT BURNER the pellet supply to the burner is discontinuous, because the feeder is in operation for tens of seconds and then certain time in rest during the next pellet batch is fired. The primary air is usually blown into the feeder, but also directly to the fired pellets through slits in the burner mouth. The fuel is gradually fed to the bed, where it is heated and gradually releases volatile gases [2]. Depending on the specific fuel, volatile gases start to be released already from 150 – 200 °C. Gases then pass through the hot upper layer of bed, which leads to their ignition and subsequent burning in the combustion chamber [3]. The upper layer of bed consists mainly of fixed carbon and non-combustible inorganic material, with inter-particle space filled by gases. Combustion of the solid fuel in the burner is an important issue when discussing the CFD simulation of combustion in automatic boiler. In the present work is employed a simplified method for modeling the fuel bed, which is based on mass and heat balances in order to simplify the simulation of combustion in pellet boiler. The model for solid fuel combustion in a burner is created for the purpose of automatic boiler simulations. Such approach does not require a detailed bed model of fired solid fuel. A simple model of the bed can be very useful for designers and engineers of automatic boilers. The described approach to modeling the combustion process in a burner helps to shorten the calculation time and simplify the model of pellet combustion in various types of automatic boilers for households. Introduction CFD simulations may help to increase understanding and provide detailed prediction of combustion taking place in a boiler. However, modeling of combustion is more complex in biomass boilers than in gas or oil-fired boilers, due to the complexity of the heterogeneous reactions in the bed, the turbulent reactive flow in the freeboard and the very strong coupling between those two regions. However, it is usually not required to describe in depth and in every detail all phenomena that occur in a combustion system. Instead, CFD calculations should give an approximate view of system behavior, help in troubleshooting and provide insights necessary to fine-tune the system's operation, as well as give assistance when dealing with new designs [1]. From an engineer point of view a detailed model of combustion process in a burner is too complicated to be useful, as it requires great amount of time and effort to set it up, run, and analyze its results. Therefore it can be useful to introduce some assumptions in order to simplify a boiler simulation. From an overall point of view on a boiler as a unit, only integral factors like heat and mass transfer in the bed and the boiler are relevant. A simplified method of bed modeling based on thermal and mass balances was employed in this work to describe pellet combustion process in the simulation of an experimental 20 kW boiler. The predictions from CFD simulation were compared with the analytic results from thermal and mass analysis of the combustion process. The model employed in this work may be readily adapted for the modeling of other solid fuel burners and boiler designs. Several combustion parameters need to be defined in order to a practicable model would be achieved. Combustion process in a retort burner The analyzed boiler ( Fig. 
1) is equipped with a retort burner, which works on underfeeding principle. Fuel (e.g. pellets) is fed from a fuel tank through a horizontal pipe by a feeding screw. The burner elbow changes the direction of movement and pellets are slowly pushed from the bottom into the mouth of the burner, where the combustion process starts. From long-time viewpoint, the pellet boiler operation is a steady process. In fact, the pellet supply to the burner is discontinuous, because the feeder is in operation for tens of seconds and then certain time in rest during the next pellet batch is fired. The primary air is usually blown into the feeder, but also directly to the fired pellets through slits in the burner mouth. The fuel is gradually fed to the bed, where it is heated and gradually releases volatile gases [2]. Depending on the specific fuel, volatile gases start to be released already from 150 -200 °C. Gases then pass through the hot upper layer of bed, which leads to their ignition and subsequent burning in the combustion chamber [3]. The upper layer of bed consists mainly of fixed carbon and non-combustible inorganic material, with inter-particle space filled by gases. CFD model description The present CFD model was set up within commercial code Ansys Fluent, but it would be the same in other software as well. The modeling of biomass boiler includes two main areas of interest: 1) combustion process of biomass in the bed and 2) homogeneous reactions and heat transfer in the combustion chamber (freeboard). These two processes are strongly coupled, as freeboard reactions depend on the gases leaving the bed, and as the radiative heat flux emitted by flames above the bed drives the processes inside the bed [1]. Combustible gases in reality begin to be released at the so-called devolatilization temperature, which according to the Ion`s work [8] is about 330 °C (about 600 K). With further increase of temperature, the composition of the released gases changes. Thus, there is apparent that devolatilization process is strongly coupled to the temperature in combustion chamber, which in turn depends on the amount and composition of the released volatile gases. Therefore, in the present simplified model it is necessary to select a reference temperature, which is key to determining the composition of released gases [7]. There was chosen a temperature level of 600 K (about 330 °C) to estimate the composition of introduced volatile gases, adopting the individual mass fractions according to Ion [8] or Thunman [9]. For simplification, the model assumes devolatilization like an instantaneous phenomenon of thermal transformation of fuel. Volatile gases are represented by CO, CO 2 , H 2 , H 2 O, plus CH 4 representing light hydrocarbons, and finally C 6 H 6 representing heavy hydrocarbons (tar), optionally also NH 3 . The amount of nitrogen in a majority of biomass fuels is below 1 %, (although some phytomass fuels may contain up to 4 %). It reacts by endothermic reaction (the energy gain is negative). The reaction can be neglected, because the relative energy gain is very small. Some authors have defined the biomass by a substitutive substance with the chemical formula C 6 H a O b . It replaces biomass in dry, ash-free state, where a and b are coefficients [5 and 10]. For this substance are defined physical parameters similar to the real for biomass. The devolatilization model of biomass is then described by the following reaction (1). 
Small part of the combustible gases burns in the bed, but the main combustion process takes place in the combustion chamber above the burner (freeboard). Burned pellets, or their parts are pushed away from the bed surface and then to burner edge, where they gradually burn out. The edge of the retort burner is usually made of cast iron, which resists well the hot environment and accumulates heat, so it creates favorable conditions for fuel gasification. Oxygen in the primary air, which is not used for the char combustion, is preheated so that there is no problem with incomplete oxidation of combustible gases in the combustion chamber [4]. Residues (ash and unburnt fuel) are either (in the case of fine particles) blown away by the combustion air or (heavier particles) are gradually pushed out of the burner to the ashtray. Secondary air is introduced into the combustion chamber at a certain height above the burner through nozzles in the intermediate wall, which leads to the burnout of remaining combustible gases. Before secondary air blows into the combustion chamber, it is heated in the distribution channel, pipes and intermediate wall. Biomass fuel Ultimate and proximate analysis of the fuel that is used in the model is shown in Table 1. More physical and chemical properties of various biomass fuels could be found in databases, which are quoted in [5] or e.g. [6] and in similar work [7]. Parameters of fuel like moisture, ash content, ratio of char and volatile combustible compounds, and calorific value are obtained in proximate analysis, which is performed experimentally. From the point of view of energy content it is more practical and useful as the ultimate analysis of fuel and it defines fuel parameters necessary as input for CFD simulations. Proximate analysis is in this work directly used to define composition of gases entering the combustion chamber. As noted above, primary air is supplied together with fuel to the bed. Gaseous products of fuel drying and devolatilization are thus mixed with primary air. This defines the composition of gas entering the combustion chamber. Approximate composition and properties of the employed fuel Table 1 Ultimate analysis Proximate analysis Bed and particle parameters description of the processes occurring in the burner is not required in this work, the burner may be considered as the source (inlet) of flammable gases, thermal energy and primary combustion air, the oxygen in which is already partially consumed. Gas leaves the bed at a certain temperature, which is higher than the devolatilization temperature. If the gas species were inserted into the computational domain through mass sources, the FLUENT software would provide no option to specify their inlet temperature. This problem can be eliminated by assumption that the volatile gases enter the domain through the inlet boundary. The inlet however may not be located at the interface of the bed and the freeboard, as in that case the bed would behave as a reflective surface (diffuse or specular). Thus it is better to place the inlet below the porous fuel bed. A substitution for the heterogeneous char oxidation in the bed (2) is one of major simplifications, which can be adopted only for automatic boilers. This reaction was replaced by volatile compounds (CO and CO 2 ) and a heat source in the bed. The number of moles of gaseous species and the released energy during reaction were calculated externally, according to equations (2) and (3) [1 and 7]. 
Volatile compounds produced during the heterogeneous char reaction were incorporated into the mass flow inlet of species and the energy generated by char oxidation reaction was considered as the heat source in the bed. The simplified model then considered only homogeneous gas reactions. One role of the bed was to distribute fluid flow on the interface with freeboard evenly across the burner. Furthermore, the bed provided space and time to heat up the gas. These two effects of the bed are closely coupled. The pellets fill up the bed volume and they had the shape of cylinders with known properties. With this assumption, it was possible to replace the bed volume by a porous zone. There was necessary to specify parameters of the porous zone to generate a correct pressure loss in the gas passing through the pellets. Different flow regimes were expected in the boiler due to its complex geometry. The gases were practically still in some regions and, on the contrary, high gas velocities and fully turbulent flow were expected in areas such as the flame or secondary air injections. Beneath the bed was primary air inlet and various thermo-chemical processes take place in the bed, thus there was also expected turbulent flow. The realizable k-ε model was employed to account for the effect of turbulence due to its proven effectiveness in industrial applications [12 and 13]. In this work the modeling of a packed bed was performed without considering channeling effect. In turbulent flows, packed beds were modeled using both permeability a (a viscous resistance coefficient is / 1 a) and an inertial loss coefficient C 2 . One technique for deriving the appropriate values of the porous properties involves the use of the Ergun equation [14]. It is a semiempirical correlation applicable over a wide range of Reynolds numbers and for many types of packing. where x i is the number of moles of a given species involved in the process. After gasification, only a solid substance remains in the bed. This charcoal can be considered for wood pellets as pure carbon, because the ash content is usually below 1 %. Sulphur content is at trace level, so it is neglected as well. Char combustion is a complex process that is affected by the fuel composition, particle shape and boiler conditions. A simplified model is used by Porteiro [11], which considers a heterogeneous reaction of char to form CO and CO 2 (2), where the ratio of the CO/CO 2 formation rate depends on the temperature (3). A representative char combustion temperature of 1373 K (about 1100 °C) is employed in this work to estimate the composition of the char combustion products [7]. The present work simulates combustion at nominal power of the boiler, as the most representative operating condition. In the long-term point of view, the combustion can be considered as continuous steady process. Adopting this assumption implies that the various zones in the boiler, where certain processes dominate (such as heating, drying, gasification, combustion of volatile gases and char combustion…) are fixed in space. Then the overall combustion process can be considered as steady. Main boundary conditions of the model then include mass flow inlet defined at the fuel and air inlet (burner), secondary air inlet and flue gas outlet to the chimney (pressure outlet). Model of fuel conversion -bed model As noted above, it is evident that the most complicated processes occur in the burner. The top layer of burner volume that contains burning pellets is called the bed. 
Within the bed take place several phenomena, from the initial heating of fuel, through its drying, devolatilization, gas combustion and fixed carbon burnout. Reactants, which include the primary combustion air and the solid fuel, are fed to this bed layer. In the computational model devised in this work, the bed is a part of the computational domain and there is no separate bed model to define the boundary conditions, similarly as in [1]. If boiler is considered as a device for transformation of chemically stored energy into heat carried by hot utility water, then burner is a device for fuel transformation to combustible gases and consequently to flue gases, while the thermal energy is released. The subsequent combustion of devolatilized fuel takes place in the freeboard (combustion chamber). As detailed , m C d Permeability (5) and inertial loss coefficients (6) were estimated by the Ergun equation using the mean diameter D p of the fuel particles [15]. The sphericity W and the spherical equivalent diameter d eq were calculated from fuel parameters shown in Table 1 using formulas (7) and (8): The fuel in the burner was modeled by 4 layers with different porosity and total height 20 mm, which were discretized by several layers of hexahedra. Mesh of bed should be sufficiently fine in order to cover fluid flow changes (Fig. 2). The computational model used the symmetry of the boiler, and therefore it modelled only 1/4 of the boiler. The boundaries of the model in the radial direction were defined as symmetry planes, The effect of the bed porosity on the gas flow was introduced by the addition of a source term S i , calculated by the formula (4), into the momentum equation. Three parameters were needed in the CFD code to evaluate the source term: permeability a, inertial losses coefficient C 2 and porosity f. The source term was composed of two parts: a viscous loss term (Darcy`s, the first term in equation (4)), and an inertial loss term (the second term in the same equation (4)) [15]. To cover the present case of simple homogeneous and isotropic porous media, it was sufficient to use the same porous properties in the whole bed volume. In equations (5) and (6) The limitations of the bed model include the assumption that the reactants are well mixed with each other and that the heterogeneous combustion process is also uniform in space. Another important limitation of the model is that it considers constant temperatures of devolatilization and of char combustion. Perhaps the main simplifying assumption is that the production of volatiles in the bed is independent of the conditions in the combustion chamber. An advantage is that the model bed is set up directly in the CFD model and does not require programming of external libraries. The present model does not substitute more advanced models and tools that can be used to design biomass combustion systems, such as three dimensional bed, transient modeling, solid to gas conversion or bed particles feeding. These all are however quite complex tasks, which require the implementation of external libraries. The work introduces the model that may provide the useful tool in the design of small automatic boilers, where fuel consists of well-defined pellets, the fixed bed is small relative to the combustion chamber, and where high development costs preclude the application of more sophisticated tools. Although the described model was developed for the retort burner, the model can be applied also on wide range of various pellet burners and boiler. 
Acknowledgement This article has been prepared within the framework of the project R&D -APVV-0458-11 "Solving issues with low-melting ash during the biomass combustion". The author JH gratefully acknowledges financial support of the Ministry of Education, Youth and Sports within the programme "National Sustainability Programme I", project NETME CENTRE PLUS (LO1202). as there was no tangential flow. The wall of the burner, which was in contact with the bed, was considered adiabatic. A significant effect on the temperature of combustion chamber had the radiative heat transfer in the fuel bed. Radiation increased the temperature inside the bed, as shown by the temperature fields on the symmetry planes in Figs. 3 and 4. In the model set-up, there was important to set the absorption coefficient of the burner walls equal to unity (black body). Otherwise, all radiative flux would be reflected back. During the model development it was also found as very important to carefully design the porous fuel volume and the primary air inlet, because the area of the input boundary leads to radiation losses. Therefore the inlet area should be small and shielded from the direct radiative flux of the combustion chamber. It also had to ensure uniform velocity and mass flux distribution on the bedfreeboard interface. In simulations, several design alternatives for the supply of reactants were tested. The most appropriate method in this particular burner was from placing the inlets on the burner perimeter. The inlet had the shape of a narrow slit in under the porous bed. The space under the bed was open to horizontal flow, which helped to distribute the fluid flow. This solution ensured almost uniform flow in the layer above it. Conclusion The described model is substitution of combustion process and it greatly simplifies the modeling approach for pellet combustion and also it is simple and easy to apply for a user of CFD software. This model can be used for simulation in a relatively simple way and it is able to predict the general behavior of solid fuel-fired boilers. In developing the model, it is necessary to consider the impact of the assumptions which may vary depending on the design of the burner and boiler.
4,576.6
2015-12-31T00:00:00.000
[ "Engineering", "Physics" ]
A photonic frequency discriminator based on a two wavelength delayed self-heterodyne interferometer for low phase noise tunable micro/mm wave synthesis Low phase noise frequency synthesizers are of paramount interest in many areas of micro-mm wave technology, encompassing for example advanced wireless communication, radar, radio-astronomy, and precision instrumentation. Although this broad research field is not bereft of methods for the generation of either low phase noise micro- or mm waves, no universal system applicable to low phase noise generation for micro and mm waves has yet been demonstrated. Here we propose a new photonic frequency discriminator based on a two wavelength delayed self-heterodyne interferometer which is compatible with such an objective. The photonic frequency discriminator can be a reference both for micro and mm waves to lower their phase noise. As a proof-of-concept, we demonstrate a low phase noise tunable OEO (6–18 GHz) and locking of a heterodyne beat between two cw lasers (10–400 GHz) with low relative phase noise. The required components for the photonic frequency discriminator are off-the-shelf and can be readily assembled. We believe this new type of photonic frequency discriminator will enable a new generation of universal precision tunable sources for the X, K, V, W and mm-bands and beyond. OEOs comprise a loop with optic -electro and electro -optic conversion and oscillate at frequencies corresponding to integer multiples of the free-spectral range (FSR) of the loop. To select a particular oscillation mode, an RF bandpass filter is installed in the loop. RF bandpass filters can be tunable by using photonic RF filters [19][20][21] or tunable RF filters 22,23 , enabling tunable OEOs [19][20][21][22][23] . Because both photonic RF filters and tunable RF filters are widely tunable, OEOs can be designed to oscillate in a large frequency range. For example, ref. 20 reported oscillation frequencies from DC to 60 GHz by using a photonic RF filter based on phase shifted fiber Bragg gratings. However, to date, the phase noise of tunable OEOs is not truly competitive with what is achievable with frequency synthesizers based on conventional microwave technology (summarized in the discussion section). Although tunable OEOs exhibit excellent tunability without phase noise degradation, inducing oscillation around the W band (75-110 GHz) or beyond is very difficult because of the requirement for high bandwidth RF components. At a certain carrier frequency, the method based on heterodyning two cw lasers becomes more powerful than OEOs. Heterodyning at a photo detector generates micro/mm waves with a carrier frequency equal to the frequency separation between the two cw lasers. By changing the optical frequency of one of the two cw lasers, the generated carrier frequency can be widely tuned. However, the phase noise of the micro/mm waves is governed by relative phase noise of the two cw lasers; which can be very high if the two lasers are independent. It is therefore desirable to provide a strong level of correlation between the two cw lasers to reduce their relative phase noise. Appropriate laser correlation can be realized through optical frequency combs, including both mode-locked optical frequency combs 14 and electro-optic combs (EO combs) 15,16 , or having the lasers share a common optical cavity 17,18 . 
In this letter, we introduce a novel photonic frequency discriminator (PFD), which is used to reduce the phase noise both for tunable OEOs and heterodyning of two cw lasers. In a proof-of-concept demonstration, a low phase noise tunable OEO (6-18 GHz) and a low phase noise tunable heterodyne beat from two cw lasers are shown via locking to the PFD. Working Principle The PFD is based on a two wavelength delayed self-heterodyne interferometer (TWDI) [24][25][26] as shown in Fig. 1a. The TWDI has two different wavelength optical inputs and a DC output. The DC output of the PFD contains the relative phase noise between the two optical inputs with a delayed transfer function 27 (H(jf), please refer to the supplementary material) of an imbalanced Mach-Zehnder interferometer (iMZI). The two optical inputs with a phase noise PSD of L in1 (f) and L in2 (f) are coupled into the iMZI through a 2 × 2 optical coupler. One arm in the iMZI has a long fiber delay (~200 m for a tunable OEO and 50 m for heterodyning two cw lasers) and the other arm has an optical frequency shifter (f AOM ~ 160 MHz) in the form of an acousto-optic modulator (AOM). After combing the light from the two iMZI arms through an optical coupler, the two outputs from the optical coupler are optically bandpass filtered to separate the two optical inputs. At the photo detectors, signals at the AOM frequency for each optical input are generated. Mathematically, the photocurrents (i 1(2) (t)) after the PDs in the time domain can be expressed as, Here, ν in1(2) , ϕ AOM (t), ϕ in1(2) (t), and τ are the optical frequency of input 1(2), phase noise of the synthesizer for the AOM, phase noise of input 1(2), and delay time in the iMZI. By mixing these two signals, the output from the mixer (V mix (t)) is Note that common noise is cancelled out through the mixing process, resulting in no detrimental effect from the AOM and down-conversion of fiber delay noise (i.e. fluctuations of τ) in the iMZI from ν in1 (or ν in2 ) to ν in1 − ν in2 . The phase noise PSD of the signal after the the mixer (L mix (f)) can be evaluated at quadrature as L mix (f) is used as an error signal, and a feedback loop is configured to make L mix (f) as small as possible. In the case of a tunable OEO, the two iMZI inputs are generated via phase modulation of a single longitudinal mode cw laser via with the output from the OEO (f OEO ) as shown in Fig. 1b, generating an EO comb. The two EO comb modes at of +/−N th order are used as the two optical inputs for the TWDI. In this case, Here, ϕ cw (t)and ϕ OEO (t) are phase noise of the cw laser and OEO, respectively. Experimentally, all EO comb modes go through the iMZI, and the two EO comb modes (+ and − N th orders) are taken out by the two optical bandpass filters, respectively. From these relations, L mix (f) is Here, L OEO (f) is phase noise PSD of the OEO, respectively. Note that phase noise of the cw laser is cancelled out in the mixing process 26 . By feeding back to a modulator in the OEO, using L mix (f) as an error signal, phase noise of the OEO is reduced. Note that although phase noise reduction of an OEO by locking to a conventional PFD based on delayed self-homodyne interferometer has been demonstrated 28 , our novel PFD based on TWDI has two significant advantages. One is higher sensitivity because of sensitivity magnification with a factor of (2N) 2 , enabled by the use of the EO comb. More importantly, the second advantage is that our PFD requires much less high bandwidth RF components. 
While conventional PFD detect f OEO and process the signal at that frequency, our signal frequency is f AOM (~160 MHz), which is much easier to process and has higher performance. In the case of locking of two cw lasers for mm wave generation, the two optical inputs are simply the two independent cw lasers (Fig. 1c). In this case, ν in1 = ν cw1 , ν in2 = ν cw2 , ϕ in1 (t) = ϕ cw2 (t), and ϕ in2 (t) = ϕ cw2 (t) are satisfied. Here, ν cw1(2) and ϕ cw1(2) are optical frequency and phase noise of cw laser 1(2), respectively. From these relations, Here, L cw1(2) (f) is the phase noise PSD of cw laser 1(2). By feeding back to one of the two cw lasers, using L mix (f) as an error signal, relative phase noise between the two cw lasers is reduced. Note there are reports, in which relative phase noise between two cw lasers is reduced by locking to a fiber delay 29 . There is one significant advantage for our PFD. Previous demonstrations have detected the heterodyne signal between two cw lasers with and without fiber delay and processed the signals at that frequency, which prohibits an extension of the method to the W-band or beyond. On the other hand, in the present system, just as for the case of the OEO, the detected and processed signal frequency is the AOM frequency; therefore the frequency separation between the two cw lasers can be easily extended to the W-band or beyond. In summary, as explained above, our novel PFD is suitable as a reference both for tunable OEOs and heterodyning of two cw lasers by just selection of appropriate optical inputs, thus presenting a unique opportunity for universal low noise frequency synthesis in the spectral range from micro -mm waves, even THz. Tunable OEO A schematic of the tunable OEO is shown in Fig. 2a. Tunability relies on the use of a tunable RF filter (YIG filter, 3 dB bandwidth of 40 MHz and tunability from 4 to 26.5 GHz). Although tunable OEOs have been demonstrated with YIG filters 22 , tunable OEOs with YIG filters are not a good choice for stable, low noise OEOs, because the passband of YIG filters is jittering due to the noise of the drive current, resulting in an unstable OEO oscillation frequency. However, the disadvantage can be converted to an advantage by using a YIG filter as a modulator 23,28 . In our system, the YIG filter is used not only for frequency tuning, but also as a modulator when the OEO is actively locked to the PFD. Except for the expanded use of the YIG filter, the basic OEO as presented here is constructed similarly to standard OEOs with a 200 m fiber. Please refer to the method section for more detail. Upstream of the intensity modulator, part of the RF signal is coupled out by an RF coupler, and injected to the PFD, followed by an electric amplifier. Low phase noise output can be partially coupled out after the electric amplifier. When the OEO is locked to the PNA, the bias voltage of the intensity modulator is used as a fast modulator, because the modulation bandwidth of the YIG filter is lower than that of the intensity modulator. The bias voltage of the modulator regulates the optical/RF power in the OEO loop, inducing OEO oscillation frequency modulation through OEO oscillation dynamics 28,30 . Any intensity modulation mechanism can in principle be used such as an acousto-optic modulator (AOM) or an additional intensity modulator in front of the OEO loop. Once the OEO loop gain exceeds the OEO loop loss, the OEO starts oscillating. 
The oscillation frequency is roughly set by the passband of the YIG filter, and determined exactly by the integer multiple of the inverse of the "effective" OEO loop delay. Note that the "effective" delay can be changed, depending on the wavelength of the cw laser 31 or optical/RF power 28,30 . This is why the bias voltage of the intensity modulator can be used as a frequency modulator. As shown in Fig. 2b,c, the OEO oscillation frequency can be selected from 6 to 18 GHz via changing the drive current of the YIG filter. Examples of RF spectra are shown in Fig. 2c. However, without locking, the OEO is jittering as shown in Fig. 2d (measurement time of about 10 ms). Please also refer to a supplementary movie. Figure 2e shows the free-running phase noise of the OEO. Note that to suppress the frequency jitter, the OEO is locked to the PFD with 1 kHz feedback bandwidth. The high oscillation mode frequency is pushed out to around 1 MHz because of the short OEO fiber, which is significantly shorter than for other reported tunable OEOs (summarized in the discussion section). Note that a spike at 1 MHz in Fig. 2e is an artifact from calibration, and the actual phase noise at 1 MHz is −80 dBc. By using a multi-loop configuration 32,33 , the spike can be suppressed, sacrificing ease of frequency tuning. When the tunable OEO is locked to the PFD, a fraction of the output from the OEO, appropriately amplified with an RF amplifier drives the phase modulator, generating an EO comb (Fig. 3a). The EO comb is then used as input to the TWDI. With locking of the OEO to the PFD, fine/continuous frequency tuning of the OEO is obtained by tuning a delay control stage in the PFD (Fig. 3b). Please also refer to a supplementary movie. The mode-hop free continuous tuning range is about 300 kHz without control of the delay stage in the tunable OEO. However, by adjusting the delay in the tunable OEO by the same amount as in the PFD, tuning by more than 1 FSR is obtained, allowing for synthesis of microwave frequencies in the whole OEO tuning range without any frequency gaps. The in-loop and out-of-loop phase noise PSD for an OEO with 10 GHz carrier are shown in Fig. 3c. Please refer to the methods for more detail. When the +/−10 th sidemode orders of the EO comb are used, the phase noise of the OEO is suppressed by more than 50 dB, compared with that of the free-running OEO. The out-of-loop phase noise is larger than the in-loop phase noise below 50 kHz Fourier frequency offset, indicating, in this frequency range, the obtained phase noise is limited by the sensitivity of the PFD. Actually, the estimated sensitivity of the PFD well overlaps with the out-of-loop phase noise. The sensitivity limit of the PFD comes from white phase noise from either electric amplifiers after PDs in the PFD or shot noise of the PDs above 500 Hz, which is converted to 1/f 2 phase noise for the PFD through the delayed transfer function. Below 500 Hz, 1/f phase noise is observed likely caused by electric amplifiers and PDs in the PFD, which is converted to 1/f 3 for the PFD through the delayed transfer function. Above 50 kHz Fourier frequency offset, out-of-loop follows in-loop phase noise. In this frequency range, not the PFD, but feedback gain limits the achievable phase noise. A servo bump is clearly observed around 130 kHz. To verify that indeed the sensitivity of the PFD limits achievable phase noise, out-of-loop phase noise, with use of the +/−3 rd sidemode orders of the EO comb, was also measured. As shown in Fig. 
3d, about 10 dB of excess phase noise is observed, which is due to the decrease in magnification factor, i.e. −10 dB ~ 20 × log(6/20). According to this result, use of higher order sidemodes of the EO comb lowers phase noise of the tunable OEO. Thus even larger reduction in phase noise can be achieved by using a phase modulator with ultra-low half wave voltage. Alternatively, cascading of phase modulators allows for the generation of higher order sidemodes, but at the expense of a more complicated system [34][35][36] . Again note that the OEO does not have any high oscillation modes up to 1 MHz. Nevertheless, the OEO exhibits very low phase noise for tunable OEOs as summarized in the discussion section. Regarding long-term stability, the present system is limited by fiber delay fluctuations in the PFD, leading to an OEO oscillation frequency drift of about 1 kHz/5 min, likely caused by temperature fluctuation. However, if long term stability is required, the OEO oscillation frequency can be phase locked to an external reference such as a reference derived from GPS by feeding back to the fiber delay control stage in the PFD. In the experiment, the OEO oscillation frequency is phase locked to a commercial external synthesizer by feeding back to a fiber length control module based on a PZT in the PFD, allowing stable, high resolution frequency tuning as shown in Fig. 3e. The locking bandwidth should be small enough (<100 Hz) so as not to degrade the phase noise of the OEO. Heterodying two cw lasers Here a set-up as explained with respect to Fig. 1a,c is implemented, where two independent cw lasers are injected into the PFD. The frequency separation is selected between 10 and 400 GHz by changing the optical frequency of one of the two cw lasers. An optical coupler is inserted in one arm of the iMZI. The output from the coupler includes the two cw lasers. In a first experiment, the output is photodetected, generating a beat at the separation frequency. We were not able to detect beat frequencies higher than 30 GHz because of the limited bandwidth of the PD available in our lab. Figure 4a shows the frequency drift of the beat at 10 GHz carrier frequency without locking to the PFD. The beat drifts about 10 MHz in 5 minutes. With locking to the PFD by feeding back to one of the two cw lasers, the drift is suppressed by a factor of 1000 (Fig. 4b). Although the drift may not be critical for applications such as radar and wireless communication, the drift needs to be eliminated when the system is used as a synthesizer. In a demonstration, we phase-locked the beat to an external reference by feeding back to a fiber length control module based on a PZT in the PFD, similar to what was used in the experiment with the tunable OEO, thereby generating an in-loop frequency drift at the sub-Hz level as shown in Fig. 4c. The frequency can be adjusted by changing the reference frequency as shown in Fig. 4d with 2 Hz frequency step, which is the minimal frequency step of our external reference. Note that the feedback bandwidth of the phase locked loop should be small enough so as not to degrade the phase noise of the beat. Phase noise without locking to the PFD is equal to the relative phase noise between two cw lasers (Fig. 5a). The relative phase noise with locking to the PFD is suppressed by more than 60 dB at low frequency offsets (Fig. 5a). A servo bump is observed at about 650 kHz. The feedback bandwidth is limited by both loop length (~50 m) and the modulation bandwidth of the cw laser. 
Although only the phase noise for a 30 GHz heterodyne signal is shown, note that no phase noise magnification was observed in the range from 10 to 30 GHz, indicating that relative phase noise between the two cw lasers does not depend on generated carrier frequency. Therefore we believe that heterodyning of two cw lasers can be scaled to the mm/THz range without phase noise degradation. Indeed, in the following we demonstrate that phase noise is independent of carrier frequency, specifically for frequency intervals at 100, 200 and 400 GHz. Because such high bandwidth PDs are not available in our lab, indirect phase noise measurements were carried out, in which relative phase noise between the two cw lasers is measured through an EO comb with 10 GHz comb spacing. For more detail refer to methods. The result is shown in Fig. 5b. Since the phase noise of our 10 GHz synthesizer (Hewlett-Packard, 8341A) is not low, the phase noise of the EO comb cannot be ignored, which hampers phase noise measurements below 100 kHz frequency offset. However, as shown in Fig. 5b, phase noise above 100 kHz frequency offset for 100 and 400 GHz heterodyne beats is the same as for 30 GHz (200 GHz is also the same, although not shown in Fig. 5b for simplicity). Although the measurement is indirect, we believe mm waves from high bandwidth PDs have the same phase noise, because amplitude to phase noise conversion is typically around −30 dB for UTC 37 and PIN 38 photodiodes. Also, the saturation current of recent UTC photodiodes with bandwidths of several hundred GHz can be more than 1 mA (e.g. UTC-PD Photomixer Module from NTT Electronic), resulting in an expected shot-noise below −150 dBc/Hz. As shown in Fig. 5a, suppression of phase noise above 100 kHz frequency offset is limited because of the limited feedback bandwidth. To overcome this, a Brillouin cavity 39,40 can be optionally installed. The experimental setup is shown in Fig. 5c. The output from the coupler in the one arm of the iMZI, which includes the two correlated cw lasers, is coupled into the Brillouin cavity to excite stimulated Brillouin scattering. More detail is shown in the method section. Because of the common mode cavity, the narrow Brillouin gain bandwidth, and cavity dynamics, phase noise especially at high frequency offsets can be effectively suppressed as shown in Fig. 5d. We also confirmed phase noise is the same for 10, 20, and 30 GHz. Overall, by employing both the PFD and a Brillouin cavity, phase noise over a broad frequency offset range can be effectively suppressed. Further phase noise reduction is feasible by using intrinsically low phase noise cw lasers (e.g. Sub-Hz Linewidth Semiconductor Laser from OEwaves), because the obtained phase noise is likely limited by the phase noise reduction factor of the Brillouin cavity 40 . Discussion We compared the obtained phase noise of our precision tunable OEO with other frequency tunable synthesizers, including other types of tunable OEOs [19][20][21]23 , systems based on optical frequency combs 7 , and commercial frequency synthesizers based on RF technology (e.g. N5183B with low phase noise option from Keysight Technologies) (Table 1). Although exceptional performance is demonstrated by using optical frequency combs especially at low frequency offset, the performance relies on ultra-stable optical reference cavities, which are hard to operate, complex, and bulky. Such systems are very useful for specialized metrology labs, but are not adequate for the real world. 
This is because more practical methods, i.e. tunable OEOs have been developed, which sacrifice phase noise performance in favor of simplicity and cost. When comparing various tunable OEOs, not only phase noise, but also the higher oscillation mode frequencies should be discussed. Simply incorporating longer fiber delays in OEOs produces lower phase noise, but the frequency of the first higher oscillation mode becomes correspondingly smaller. From this point of view, our tunable OEO shows the best performance, i.e. low phase noise while pushing out the first higher oscillation mode to 1 MHz. Finally, we compared our tunable OEO with commercial frequency synthesizers. Although at first glance our tunable OEO performs only marginally better when looking at Table 1, our tunable OEO shows 20 dB better performance above 100 kHz frequency offset. More importantly, our tunable OEO has significant benefits, namely no principle phase noise degradation with scaling of carrier frequency. Though our demonstration here was limited to the range from 6 GHz to 18 GHz, the system can be extended to higher carrier frequencies. The PFD for the tunable OEO can be easily upgraded by using a larger bandwidth phase modulator. The tunable OEO will require a high bandwidth intensity modulator, photo detector, RF amplifier, and YIG filter. Fortunately, all these are commercially available at least up to 50 GHz. Regarding the phase noise performance achieved with heterodyning of two cw lasers, we compare our results with other methods, which can be extended to mm waves ( Table 2). These methods are based on EO combs 16,41 , optical phase locked loop (OPLL) via mode-locked combs 14 , and Brillouin cavities 42 . Note that the method based on EO combs cannot be easily extended beyond 300 GHz because of the requirement for many EO comb modes. In addition, the performance is limited by phase noise from the RF synthesizer required for generation of the EO comb. OPLL do not exhibit phase noise degradation with carrier frequency, but the achievable phase noise is limited by shot noise (−90 dBc/Hz). Since the reported phase noise data from OPLL are 10 years old, the data were updated. Please see the supplementary material. State-of-the art phase noise for mm wave generation is reported in ref. 42 . The method is based on a Brillouin cavity similar to the present system. By using two phase locked loops to suppress multimode excitation of Brillouin scattering, the use of a Brillouin cavity with a long fiber (~110 m) was enabled, resulting in low phase noise. However, that method requires two low phase noise RF synthesizers around 10 GHz and two narrow optical bandpass filters. In our system, by pre-stabilizing two cw lasers via the PFD, the relative phase noise of the output from our Brillouin cavity shows as low a phase noise as ref. 42 despite use of a shorter fiber (20 m) without requirement for low phase noise synthesizers. In addition, the PFD for heterodyning two cw lasers can be simplified. In the demonstration, we used two optical BPFs to separate the two cw lasers. However, a bi-directional configuration can be implemented instead of two optical BPFs. Please see the supplementary material. In such a configuration, the two cw lasers are input to an iMZI from opposite directions. Phase noise of the two cw lasers is then photo detected on opposite sides of the iMZI. To prevent the injection of one cw laser to the other, isolators need to be included. 
Once phase noise of the two cw lasers is detected independently at the PDs, the signals are mixed to generate an error signal to lock the two cw lasers in the same way as demonstrated here. The configuration is easier to implement for optical frequency tuning of one of the two cw lasers because no tuning of the bandpass filters is required. Finally, we like to comment on mm-wave generation based on RF technology. Recent progress enables carrier frequency multiplication even up to THz, starting from a 10-20 GHz source (e.g. Millimeter-Wave Accessories from Keysight Technologies). However, phase noise degradation is unavoidable. In addition, because the bandwidth of frequency multipliers is limited, having a broad tunable frequency range with one frequency multiplier is difficult. Conclusion We propose and demonstrate a novel ultra-high sensitivity photonic frequency discriminator, which can serve as a universal tool for low phase noise micro and mm-THz wave synthesis. Based on frequency locking to the PFD, low phase noise microwave signals are generated from a tunable OEO, whereas low phase noise mm wave signals can be obtained from heterodyning two cw lasers. In a proof-of-concept, low phase noise frequency synthesis from an OEO continuously tunable from 6 GHz to 18 GHz with a frequency resolution of 2 Hz is demonstrated. Unlike for conventional low phase noise OEOs, short fiber loop lengths are permissible without excessive phase noise degradation because of the superior sensitivity of the novel PFD, which in turn pushes the high oscillation mode frequency out to 1 MHz. For heterodyning of two cw lasers, the presented phase noise measurements imply no carrier frequency dependent phase noise degradation up to carrier frequencies of at least 400 GHz. Appropriate utilization of the PFD as presented here facilitates a significant improvement over the phase noise performance limits of conventional RF technology within a common and simple photonic architecture, applicable not only to microwaves, but also mm and -THz waves. The present system even rivals or improves on the performance of existing photonic methods (as listed in Tables 1 and 2, developed for limited frequency ranges). It should also be pointed out that further improvements in performance compared to what is presented here are possible, e.g. by using a phase modulator with ultra-low half wave voltage or cascading phase modulators for the tunable OEO and using intrinsically low phase noise cw lasers for heterodyning as discussed before. The demonstrated PFD offers a new paradigm for future low phase noise precision frequency synthesizers up to the THz frequency range, with superior performance and enhanced utility compared to any other present day technology. Methods Photonic frequency discriminator. Two optical inputs are coupled to an iMZI through a 2 × 2, 50:50 optical coupler. Although, in principle, one AOM is enough to observe beat signals at the PDs, two AOMs (about 80 MHz and −80 MHz frequency shift) are installed to reduce detrimental effects from optical interference between the 0 th and 1 st order diffraction modes. Since the long fiber delay is non polarization-maintaining fiber, a polarization controller and polarizer are installed after the long fiber. Other than the fiber delay, all components Table 1. Phase noise of various synthesizers for 10 GHz carrier*. *25 GHz for PM + OBPF. **Unit of phase noise is dBc/Hz. ***We use "None" for the offset frequencies higher than f high_osc . 
****f high_osc stands for frequency of first high oscillation mode. *****N5183B with low phase noise option from Keysight Technologies. ScIentIfIc REPORTs | (2018) 8:13719 | DOI:10.1038/s41598-018-31712-y are polarization-maintaining. One arm has a 90:10 optical coupler. The 10% output from the optical coupler is used for the experiment on locking of two cw lasers as described in the main manuscript. The output from the two arms is interfered via a second 2 × 2, 50:50 optical coupler. The iMZI is inserted into an aluminum enclosure with 4 mm thick walls. The two optical bandpass filters are tunable over a large frequency range and also have a tunable bandwidth down to a minimum bandwidth of 10 GHz. The two PDs have 2 GHz bandwidth. The signals after the PDs are amplified and mixed in an RF mixer, generating a DC signal. Tunable OEO. A single-longitudinal-mode cw laser is intensity-modulated by an intensity modulator and propagates through a 200 m fiber delay. The required opto-electronic conversion is obtained in a photo detector (3 dB bandwidth of 17 GHz), generating an RF signal that is amplified (frequency range of 5-20 GHz, and 35 dB gain in total), bandpass-filtered and applied to the intensity modulator. The RF bandpass filter is based on a YIG filter (3 dB bandwidth of 40 MHz and tunability from 4 to 26.5 GHz). Upstream of the intensity modulator, 10% of the RF signal is coupled out by an RF coupler, and injected to the PFD after amplification up to about 33 dBm. Low phase noise RF output is partially taken out after the amplification. Note that although we didn't observe any excess phase noise from the amplification, even if there is excess phase noise, the excess phase noise is also suppressed, because the feedback loop tries to minimize the phase noise after the amplification. The EO comb after the phase modulator shown in Fig. 1b is split in two. One is used for phase noise suppression as shown in the main manuscript, and the other (called output 2) is used for out-of-loop phase noise characterization. Output 2 is inserted into another PFD (called PFD 2), which comprises again an iMZI, followed by optical bandpass filters, PDs, and an RF mixer as described in the main manuscript. However, for phase noise analysis the fiber lengths in the iMZI are changed. Fiber lengths of about 1 km and 100 m are chosen for phase noise measurements in the 100 Hz -100 kHz and 100 kHz and 1 MHz frequency offset ranges, respectively. Relative phase noise measurement between two cw lasers with frequency separation of >100 GHz. An output from one arm of the iMZI goes through a phase modulator. The phase modulator is driven by a 10 GHz synthesizer with an RF amplifier, generating sidemodes from the two cw lasers. When the frequency separation is about 100 GHz, a beat with less than 1 GHz carrier frequency between the + and −5 th sidemode orders from the two cw lasers is observed at a photo detector after optically bandpass filtering only the two sidemodes. In the same way, +/ Control circuit for feedback. When OEO or heterodyning two cw lasers is locked to the PFD, an output from the PFD is used as an error signal. The error signal is put into a home-made PI2D loop filter (similar to D2-125, Vescent PHOTONICS). An output from the PI2D loop filter is split to two. One goes to a fast modulator, i.e. an EOM for OEO and laser current for heterodyning two cw lasers. The other is put in a home-made integrator. The output is used for slow, but large tuning range modulator, i.e. 
YIG filter for OEO and PZT in the cw laser for heterodyning two cw lasers. For OEO, a home-made current buffer with an adder (ADA4870, Analog devices) is inserted between the integrator and YIG filter, because YIG filter requires high current. For OEO, a first high oscillation mode at 1 MHz needs to be suppressed in the error signal to avoid oscillation of the feedback loop at the frequency, which limits feedback gain of the feedback loop. For this, OEO loop length is set to roughly equal to the fiber length of the PFD, making the first high oscillation frequency equal to the null frequency of the PFD (Fig. S1 in the supplementary material). The first high oscillation is significantly reduced by the null frequency at the error signal. PFD with a Brillouin cavity. The Brillouin cavity consists of a 2 × 2 optical coupler with 90:10 coupling ratio and about 20 m fiber. The Brillouin cavity is enclosed by a 4 mm thickness aluminum box. By coupling sufficient optical power to the Brillouin cavity (~15 mW for each cw laser), back scattered Brillouin tones with frequencies of ν cw1 − f Bri and ν cw2 − f Bri are generated. Here, f Bri is the Brillouin frequency shift. To resonantly couple two cw lasers into the Brillouin cavity, two Pound-Drever-Hall locks (PDHs) are implemented for each cw laser. To observe error signals for the PDHs, the two cw lasers are phase modulated, where transmitted light from the Brillouin cavity is photo-detected, followed by demodulation, resulting in error signals. The error signals are fedback to one of the cw lasers and a fiber length control module based on a PZT in the Brillouin cavity for the two PDHs. Please see the supplementary material for the experimental setup.
7,931
2018-09-12T00:00:00.000
[ "Physics" ]
Multidimensional tie strength and economic development The strength of social relations has been shown to affect an individual's access to opportunities. To date, however, the correspondence between tie strength and a population's economic prospects has not been quantified, largely because of the inability to operationalize strength based on Granovetter's classic theory. Our work abandoned the premise that tie strength is a unidimensional construct (typically operationalized with frequency or volume of contact), and used instead a validated model of ten fundamental dimensions of social relationships grounded in the literature of social psychology. We built state-of-the-art NLP tools to infer the presence of these dimensions from textual communication, and analyzed a large conversation network of 630K geo-referenced Reddit users across the entire US, connected by 12.8M social ties created over the span of 7 years. We found that unidimensional tie strength is only weakly correlated with economic opportunities ($R^2=0.30$), while multidimensional constructs are highly correlated ($R^2=0.62$). In particular, economic opportunities are associated with the combination of: (i) knowledge ties, which bridge geographically distant groups, facilitating knowledge dissemination across communities; and (ii) social support ties, which knit geographically close communities together and represent dependable sources of social and emotional support. These results point to the importance of developing high-quality measures of tie strength in network theory. The strength of social relations has been shown to affect an individual's access to innovation 1, access to economic opportunities 2, life expectancy 3, and happiness 4. According to Granovetter's classic theory of tie strength 5, information flows through social ties of two strengths. First, through weak ties. These ties, despite being used infrequently, bridge distant groups that tend to possess diverse information, facilitating knowledge dissemination across communities. Second, information also flows through strong ties. These ties, by being used frequently, knit close communities together and represent dependable sources of social and emotional support. To date, however, the correspondence between tie strength and a population's economic prospects has not been quantified, largely because of the inability to operationalize tie strength based on Granovetter's conception. Typically, network studies operationalize strength with indicators like frequency or volume of contact 6. Eagle et al. did so by studying the relationship between the structure of a national communication network and access to socio-economic opportunity 7. They found that network diversity was associated with opportunities, whereas communication volume and number of contacts were not.
The prospect that tie strength is not a unidimensional construct ranging from weak to strong, but might instead be multidimensional, is broadly consistent with theoretical and experimental work by Marsden and Campbell 6 and Wellman and Wortley 8. It is also consistent with Granovetter's original operationalization of strength as "a (probably linear) combination of the amount of time, the emotional intensity, the intimacy (mutual confiding), and the reciprocal services which characterize the tie." 5 These indicators have been repeatedly found to be only weakly related to frequency of contact 6,7. Therefore, network studies using frequency of contact to model strength capture only one aspect of the linkages among individuals. Our work abandoned the premise that tie strength is a unidimensional construct: building upon work in social psychology that starts from Granovetter's conception of tie strength, we identified and validated ten fundamental dimensions of social relationships 9,10. In previous work, we showed that these ten dimensions correspond to how people perceive and categorize most of their own social relationships 9, and we built state-of-the-art NLP tools to infer the presence of these dimensions from textual communication 10. In this work, we used these tools to analyze a large conversation network of geo-referenced Reddit users across the entire US (∼13M ties). Then, returning to Eagle et al.'s work and borrowing their methodological framework 7, we tested whether the structure of a national communication network (in particular, its tie diversity) was related to access to socio-economic opportunities, and whether switching from a unidimensional notion of tie strength to a multidimensional one would improve explanatory power. We found that tie diversity measured on the networks of knowledge exchange and social support correlates much more strongly with economic development ($R^2 = 0.62$) than diversity measured on a network simply weighted by frequency of interaction ($R^2 = 0.30$). In line with Granovetter's conception of tie strength, we found that knowledge ties and social support ties: are hardly distinguishable solely on the basis of frequency of interaction; have opposite geographic distributions (knowledge ties are global, spanning longer geographical distances, while social support ties are local, typically staying within the same state); and both contribute to economic opportunities (states with higher GDP per capita are characterized by both global access to knowledge and local access to support). These results point to the importance of developing multidimensional measures of tie strength in network theory, to better reflect the nature of the human relationships that social links are meant to model. Results. From a set of 65M comments posted on Reddit by 1.3M users between the years 2006 and 2017, we extracted the social interactions of all Reddit users that we could geo-reference at the level of the 51 US states, using high-accuracy heuristics validated in previous work (see "Methods"). In Reddit, conversations develop over discussion threads. If user i commented on either a submission or a comment of another user j, we considered that i sent a message to j, as is common practice when studying Reddit conversation networks 11. We created a directed communication graph G(U, E) to model this exchange of messages. The set of nodes U contains all the geo-referenced Reddit users in our dataset.
Two users i and j are connected by a directed edge (i, j, w(i, j)) ∈ E if user i sent at least one message to user j. The edge weight w(i, j) represents the frequency of contact and is equal to the total number of messages sent. In total, the graph contains 630K nodes and 12.8M edges. The distributions of node degree and link strength are shown in Fig. SI1. By applying our social dimensions classifier to the corpus of messages, we identified the subset of messages that express a social dimension d (see "Methods" for details). In particular, we focused on the dimensions of knowledge exchange and social support (knowledge and support for short; other dimensions are discussed in the Supplementary Information). The classifier ranked the messages according to their likelihood of containing expressions of a given social dimension; we marked with dimension d only the top 1% of messages from the likelihood ranking of d (we discuss results with looser thresholds in the Supplementary Information, Fig. SI2). Out of these smaller sets of messages, we constructed dimension-specific communication graphs G_d using the same procedure we adopted for building the overall communication graph G. Such dimension-specific graphs each capture only one type of social interaction; for example, the knowledge graph G_knowledge contains only edges formed by knowledge-exchange messages, and its edge weights encode the number of knowledge-exchange messages flowing between the two endpoints. The dimension-specific graphs contain roughly 1% of the edges of the full communication graph and between 16% and 23% of its nodes, depending on the dimension (see Table 2). The networks of knowledge and support include 20% and 21% of all nodes, respectively. The edges of G_knowledge and G_support overlap only slightly: around 2% of the edges of each graph are also present in the other. By having a sample of edges annotated with both social dimensions and weight, we were able to look into the relationship between frequency of contact, knowledge, and support. The typical weight of edges connecting users who exchange knowledge is not dissimilar from the typical weight of those providing support. Figure 1A compares the weight distribution of edges connecting users who exchanged knowledge with the weight distribution of edges connecting those who exchanged support.

[Figure 1 caption: Percent change Δp(d|l) of the probability that a dimension d is expressed by a social tie spanning a geographical distance l, compared to random chance. The change is estimated by comparing the real data with distance measurements on 50 instances of a null model that reshuffles user locations at random; the average values are reported along with their 95% confidence intervals. Distances are discretized into five bins, each containing the same number of social ties. Bins are labeled with the median distance of the ties they contain. The 'zero distance' bin contains almost exclusively pairs of users who live in the same state. Two types of measurements are presented: (i) at the level of social relationships, where each social tie is counted once regardless of its weight, and (ii) per individual message, thus effectively giving more weight to pairs of users who communicated frequently.]
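A minimal sketch of this construction (a simplification with our own naming; `messages` is assumed to be an iterable of (sender, receiver, score) triples, with `score` the classifier likelihood for one dimension d and `theta_d` its top-1% threshold):

```python
import networkx as nx

def build_graphs(messages, theta_d):
    G = nx.DiGraph()      # full communication graph; weights count messages
    G_d = nx.DiGraph()    # dimension-specific graph (e.g., knowledge)
    for i, j, score in messages:
        for graph, keep in ((G, True), (G_d, score >= theta_d)):
            if not keep:
                continue
            if graph.has_edge(i, j):
                graph[i][j]["weight"] += 1
            else:
                graph.add_edge(i, j, weight=1)
    return G, G_d
```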
A two-sample Kolmogorov-Smirnov test (a statistic measuring the distance between two distributions) indicated that the two distributions, albeit statistically different, are very similar: KS = 0.03 (p ≈ 0) on a scale from 0 (identical distributions) to 1 (maximum difference). This comparison exposes the inherent limitation of quantifying tie strength by mere frequency of interaction: it fails to adequately characterize the nature of the social relationship. In Reddit conversations, the main difference between knowledge and support ties lies not in their strength but in their geographic span. The probability of creating knowledge ties increases with the geographical distance between the two endpoints, while the probability of creating support ties drops with distance (Fig. 1B,C). This is consistent with theoretical expectations. Knowledge production on the Web follows Pareto's law: a restricted number of experts create and spread information to a vast audience 12; consequently, knowledge ends up being locally scarce 13 and needs to travel longer distances to reach multiple communities. In past studies, a similar pattern was detected for communications within large corporations, where geographically distant ties were estimated to be more effective conduits for knowledge flow 14,15. The opposite trend holds for support. Geographical distance significantly impacts people's ability to provide both material and emotional support 16. Although computer-mediated communication has expanded the opportunities for providing remote support 17, people have an innate inclination toward local attachments and an economic advantage in fostering them 18, which might be why support appears more rarely in long-distance relationships 8. Last, we tested whether dimension-specific graphs are more indicative of economic development than the full communication graph. We did so by borrowing the experimental setup of Eagle et al. 7, who studied the network of phone calls among residents of England and measured the spatial and social diversity (D_spatial) for each of nearly 2,000 regional exchanges in the country. D_spatial captures the diversity of the areas that the residents of a given area communicate with, and they found it to be correlated with the Index of Multiple Deprivation, a composite score of social and economic development based on UK census data. They also tested the robustness of their results with an alternative measure of diversity, D_social, which captures the diversity of the people connected to the residents of a given area. We reproduced Eagle et al.'s experimental setup and ran an Ordinary Least Squares (OLS) linear regression to predict the per-capita Gross Domestic Product (GDP) of US states in the year 2017 19 from the state-level spatial diversity computed on (i) the full communication graph (D_spatial) and (ii) the two dimension-specific communication graphs (D_spatial^knowledge, D_spatial^support). Results for D_social are highly aligned with those for D_spatial, and we discuss them in the Supplementary Information. We focused on the 44 states for which Reddit penetration is sufficient and aligned with the population distribution (see "Methods"); however, we found qualitatively similar results when considering all states (see Supplementary Information, Table SI3). Regression models with different combinations of social and spatial diversity are presented in Tables SI1 and SI2.
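The reported comparison corresponds to a standard two-sample KS test; a minimal sketch, assuming the dimension-specific graphs from the sketch above:

```python
from scipy.stats import ks_2samp

w_knowledge = [d["weight"] for _, _, d in G_knowledge.edges(data=True)]
w_support = [d["weight"] for _, _, d in G_support.edges(data=True)]
stat, p_value = ks_2samp(w_knowledge, w_support)   # the paper reports KS = 0.03
print(f"KS = {stat:.3f}, p = {p_value:.3g}")
```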
In Table 1 we compare three linear regression models: one based on population density only (a validated predictor of economic growth 20), one using spatial diversity on the full graph with links weighted by frequency of interaction, and one using the two spatial diversity scores calculated on the graphs of knowledge and support. The model based on the selected social dimensions is 138% more accurate than the density-only baseline, while the model based on the full communication graph is only 15% more accurate. To check whether the difference in performance is due to the selection of knowledge and support ties or merely to the smaller sample considered, we ran a regression using a random sample of ties as small as the number of knowledge ties, and obtained the worst fit ($R^2_{adj} \approx 0.1$; see Supplementary Information). In the regression model with the social dimensions, the coefficient for knowledge diversity is positive and the one for support diversity is negative. People living in areas characterized by superior economic outcomes access novel information that is not available locally by establishing a diverse set of global interactions, in agreement with the weak-tie pillar of Granovetter's theory. Residents of the states with the highest per-capita GDP draw their social support mostly from local connections, in agreement with the strong-tie pillar of the theory. The effect size of knowledge is almost twice as large as the effect size of support, which indicates that the process of knowledge exchange is the primary correlate of economic development, and the network of support compounds over it.

[Table 1 caption: Linear regressions to predict GDP per capita of US states from: (left) population density only; (center) spatial diversity computed on the full communication graph; (right) spatial diversity computed on dimension-specific communication graphs. Population density is added as a control variable in the latter two models. Adjusted $R^2$ and the Durbin-Watson statistic for autocorrelation (values close to 2 indicate no autocorrelation) are reported. The contribution of individual features to the models is described by their beta-coefficients, standard errors (SE), and p-values.]

A linear regression including other social dimensions is discussed in Table SI4, but the interplay between knowledge and support is more predictive than any other combination of dimensions. Discussion. In agreement with Granovetter's theory, we found that economic development at the level of US states is associated with the abundance of global ties that carry factual knowledge and with the abundance of local ties that provide social support. This finding is compatible with the established notion of innovation being fueled primarily by novel information flowing from diverse regions of the social network, and secondarily by an adequate support network that favors the re-elaboration of those ideas locally. This perspective enriches the corpus of experimental evidence on the existence of a trade-off between seeking novel information and building tight networks of support 13,21,22. We showed that geographical regions generally experience that trade-off, but the regions that achieve high economic success are those that have both global outreach of knowledge exchange and local networks of support.
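The regression setup can be sketched with statsmodels (column names are ours; `df` is an assumed per-state pandas DataFrame holding the diversity scores, density, and GDP):

```python
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

X = sm.add_constant(df[["density", "D_spatial_knowledge", "D_spatial_support"]])
model = sm.OLS(df["gdp_per_capita"], X).fit()
print(model.rsquared_adj)          # adjusted R^2, cf. Table 1
print(durbin_watson(model.resid))  # values near 2 indicate no autocorrelation
print(model.summary())             # beta-coefficients, SEs, and p-values
```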
In contrast with a variety of network science studies, we provided evidence that frequency of contact might not be a good proxy for tie strength: network diversity calculated on a frequency-weighted social network is only weakly associated with economic development at the state level. Moreover, our results challenge the equivalence between weak ties and knowledge flow, at least in the case of Reddit. Interestingly, we found that knowledge and support ties differ in their geographical span, with knowledge ties being far-reaching and support ties being local. The ability to measure directly these two aspects of social interaction, which Granovetter's theory postulates to be drivers of innovation, enhances the predictive and descriptive power of network models. Strikingly, narrowing down the analysis to a small subset of messages that express either knowledge or support yields a predictive performance as much as double that of models used in previous research that considered only frequency of contact 7. The ability to decompose relationship data into interpretable social constituents opens up ample avenues of exploration in social network analysis. Studying how different social dimensions are instantiated by different structural patterns of social networks, such as their community structure or the centrality of their actors, is a promising research direction. Also, this work showed the association of knowledge and support with GDP, but other social dimensions may well explain other socio-economic outcomes such as health or quality of life. Both our data and our methods suffer from limitations that future work may address. Unlike the work by Eagle et al., upon which our experimental setup was based 7, our study relies on social network data that covers only a small sample of the population; this was a necessary sacrifice in order to gain the crucial ability to analyze the content of social interactions. Among all the social platforms from which we could have collected conversational text, we selected Reddit because of its richness of information and variety of social interaction types. Other popular platforms (e.g., Facebook, Twitter) either authorize data collection exclusively from volunteer users 23 or expose data APIs that may be limited by volume, temporal scope, and known sampling biases 24. By contrast, Reddit allows for the collection of the full conversation history between any pair of users, and includes metadata useful for their characterization, such as geo-localization 25. Also, Reddit's etiquette, credit system, and topic-oriented subreddits encourage social participation for purposes that are akin to real-life social networks 26, such as socialization, entertainment, and information exchange 27, while naturally disincentivizing practices that disproportionately favor status-seeking, which are prominent in platforms such as Twitter and Facebook 28,29. As a result, Reddit's comment threads enjoy properties that are typical of human conversations, such as the high topical coherence of successive messages in a thread 11,30. Because of these desirable properties, Reddit has been the platform of choice for hundreds of quantitative and qualitative studies of social behavior over the last ten years 31. Furthermore, the anatomy and dynamics of the Reddit conversation network exhibit properties that are in line with those of most social networks [32][33][34], which speaks to the potential of our findings to generalize to other contexts.
These properties include broad distributions of node degree and of the frequency of most user activities [35][36][37] (see also Fig. SI1), marked community structure 38, assortativity 36, and burstiness of interactions 39. Nevertheless, Reddit's user base is biased towards males (64%) and young adults (36% in the age range 18-29, 22% in the range 30-49), and our study focuses entirely on US residents 40; replicating our analysis on multiple conversation networks is therefore needed to corroborate the robustness of our results. Within Reddit, our perspective on the ecosystem of social interactions is restricted by our focus on physical space. In particular, the communication graphs include only a sample of all existing edges, namely those that connect users whose geo-locations could be estimated. This entails three main biases. First, the majority of interactions are left out of the picture, potentially reducing the predictive and descriptive power of our models. Second, the social links we considered were not randomly sampled, as they connect users who self-selected into geo-salient subreddits. Last, the limited resolution of the users' spatial location (state level) limited our ability to perform a finer-grained geographic analysis (e.g., at the city level). To address these biases, future work ought to consider social systems in which a larger portion of users can be geo-referenced at a finer geographic resolution. Although our social dimensions classifiers were trained on Reddit data and were shown to achieve high accuracy (see "Methods"), their output is not error-free. To improve both precision and recall, a systematic error analysis and a fine-tuning of the model with additional training data would be in order. The ten social dimensions, albeit more comprehensive than any existing model, do not exhaustively map all the possible elements that define social interactions. The concepts that these social dimensions encode are rather broad and encompass a rich spectrum of nuances. The main goal of this work was to go beyond simple frequency of contact as a proxy for tie strength, offering well-founded interaction archetypes that can be explored and refined in the future. Reddit data collection. Reddit is a public discussion website particularly popular in the United States, where half of its user traffic originates. Reddit is structured in an ever-growing set of independent subreddits (1.2M at the time of writing) dedicated to a broad range of topics 25. Users can post new submissions to any subreddit, and other users can add comments to submissions or to existing comments, thus creating nested conversation threads. The vast majority of Reddit submissions and comments since 2007 are publicly available through the pushshift.io API 41. For the purpose of this study, we gathered the content created in two temporal windows: from 2007 until the end of 2012, and the whole year of 2017. The findings presented in the "Results" section were obtained using the data from these two windows jointly, but having at hand two collections from distinct time periods allowed us to study how data recency affects the ability to predict the desired outcome (see Supplementary Information, Fig. SI3). In total, we collected 65M comments from 1.3M users. We restricted our study to users whom we could geo-reference at the level of US states.
Although Reddit does not provide explicit information about user location, we used a location-estimation heuristic proven to be effective in previous work 42. We first identified 2,844 geo-salient subreddits related to cities or states in the United States (https://www.reddit.com/r/LocationReddits/wiki/faq/northamerica). We assigned a user to a state if (i) they posted at least n submissions or comments in subreddits related to that state, and (ii) 95% or more of their comments and submissions posted to geo-salient subreddits were posted in subreddits related to that state (a minimal code sketch is given below). The findings presented earlier were obtained with n = 3; in the Supplementary Information (Fig. SI4) we discuss results obtained by varying this threshold. Overall, we found 632k users who are likely to be located in one of the 51 US states. The number of users per state ranges from less than 1k (Wyoming) to 61k (California). In total, these users posted 16.2M comments (9.8M in 2007-2012, and 6.4M in 2017). Filtering states by Reddit penetration. States in which the number of Reddit users is not proportional to the number of residents might distort the representation of the social communication patterns that actually take place in those states. To identify such cases, we proceeded as follows. We first plotted the 2017 census population against the number of Reddit users, across states (Fig. 2, left). We then obtained the best linear fit of the data and calculated the residuals between the number of Reddit users and the value predicted by the linear fit. Last, we calculated the distribution of residuals and removed the states whose residuals were more than 1 standard deviation away from the average of the distribution. Those included two states whose Reddit user base was higher than one would expect based on their population (DC and AK) and two for which it was lower (MS and WV). In addition, we removed three outlier states with the lowest Reddit penetration (fewer than 1,000 users), which left us with a total of 44 states (Fig. 2, right). Social dimensions from textual conversations. Social science research has proposed several categorizations of the constitutive sociological dimensions that describe human relationships 8,51,52. By surveying this extensive literature, Deri et al. 9 compiled one of the most comprehensive categorizations to date, which identifies ten main dimensions of social relationships (Table 2). This theoretical model is rather exhaustive, in that most relationships are accurately defined by appropriate combinations of the ten dimensions. Deri et al. showed this by asking hundreds of volunteers to write down keywords that described their relationships, and found that all of them fitted into the ten dimensions.

[Figure 2 caption: Relationship between population and number of Reddit users across US states. The best linear fit is shown, together with its slope β and the $R^2$ coefficient measuring the goodness of fit. On the left, all states are included. On the right, the states whose Reddit penetration was too low or was not proportional to the resident population have been removed.]

The ten social dimensions are frequently expressed through conversational language and, most importantly, these verbal expressions can be captured with computational tools. We inferred the social dimensions from Reddit messages using the NLP model proposed by Choi et al. 10, which comes with a publicly available Python implementation (http://www.github.com/lajello/tendimensions).
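Returning to the location-estimation heuristic above, a minimal sketch (our own naming; `state_posts` is assumed to list, for one user, the state associated with each of their posts in geo-salient subreddits):

```python
from collections import Counter

def assign_state(state_posts, n=3, purity=0.95):
    counts = Counter(state_posts)
    if not counts:
        return None
    state, top = counts.most_common(1)[0]
    if top >= n and top / sum(counts.values()) >= purity:
        return state
    return None   # user cannot be geo-referenced reliably

assign_state(["CA", "CA", "CA", "NY"])   # -> None (only 75% in one state)
assign_state(["CA"] * 19 + ["NY"])       # -> "CA" (95% of 20 posts)
```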
Given a textual message m and a social dimension d, the model estimates the likelihood that m conveys d by outputting a score from 0 (least likely) to 1 (most likely). Rather than using a multiclass classifier, the model includes ten independently trained binary classifiers C_d, one per dimension. This choice was driven by the theoretical interpretation of the social dimensions 9, as any sentence may potentially convey several dimensions at once (e.g., a message expressing both trust and emotional support). Each classifier is implemented using a Long Short-Term Memory neural network (LSTM) 53, a type of Recurrent Neural Network (RNN) that is particularly effective in modeling both long- and short-range semantic dependencies between words in a text, and is therefore widely used in a variety of NLP tasks 54. Like most RNNs, the LSTM accepts fixed-size inputs. This particular model takes as input a 300-dimensional embedding vector of a word, one word at a time for all the words in the input text. Embedding vectors are dense numerical representations of the position of a word in a multidimensional semantic space. Such representations are learned from large text corpora; this model uses GloVe embeddings 55 learned from Common Crawl, a text corpus containing 840B tokens. The dimension classifiers C_d were trained on about 9k sentences that were manually labeled by trained crowdsourcing workers. Most of these sentences were taken from Reddit, which makes it the ideal platform on which to apply the model. In their experiments, Choi et al. reported very high classification performance, averaging an Area Under the Curve (AUC) of 0.84 across dimensions, and specifically 0.82 for knowledge and 0.83 for support. The AUC is a standard performance metric that assesses the ability of a classifier to rank positive and negative instances by their likelihood score, independently of any fixed decision threshold. The AUC of a random classifier is expected to be 0.5, whereas the maximum value is 1. Given an input message m, the classifier outputs a score s_d(m) that expresses the likelihood that message m contains dimension d. In practice, the classifier estimates a score for each sentence in m and returns the maximum score, namely: $s_d(m) = \max_{\text{sentence} \in m} s_d(\text{sentence})$. By using the maximum score, we considered a message as likely to express dimension d as its most likely sentence, thus avoiding the dilution effect of averaging. This reflects the theoretical interpretation of the use of social dimensions in language 9: a dimension is conveyed effectively through language even when expressed only briefly. To conduct our analysis, we binarized the classifier scores s_d(m) using an indicator function that assigns dimension d to m if s_d(m) is above a threshold θ_d. We used dimension-specific thresholds because the empirical distribution of the classifier scores s_d varies noticeably across dimensions (see Fig. 3, left), which makes the use of a fixed common threshold impractical. We made a very conservative choice of θ_d as the value of the 99th percentile of the distribution of the classifier score s_d, thus favoring high precision over recall. This effectively reduces the number of messages to 1% of the total and the number of edges to slightly more than 1% of the total. In the Supplementary Information (Fig. SI2, right), we experimented with different percentiles, starting from the 75th. As a result of this procedure, a comment could end up being labeled with multiple dimensions.
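A minimal sketch of this labeling procedure (our own naming; `sentence_scores` is assumed to hold, for each message, the array of per-sentence classifier scores for one dimension):

```python
import numpy as np

def label_messages(sentence_scores, percentile=99):
    # message score = max over its sentences, avoiding the dilution of averaging
    msg_scores = np.array([scores.max() for scores in sentence_scores])
    theta_d = np.percentile(msg_scores, percentile)   # dimension-specific threshold
    return msg_scores >= theta_d, theta_d             # boolean labels, threshold
```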
To measure the extent to which pairs of dimensions are related, we computed the Spearman rank cross-correlation matrix of the classifier scores of all dimension pairs across all messages (Fig. 3, right). Some pairs of dimensions, such as status, trust, and support, occur together more frequently, but overall the ten-dimension model exhibits a fairly high degree of orthogonality. To make sure that the ten-dimension classifier is not simply capturing the sentiment of the text, we correlated the dimension scores with the scores from VADER, a simple yet widely used sentiment analyzer 56. The correlations were all very low, except for a negative correlation with the conflict dimension.

[Table 2 caption: The social dimensions of relationships surveyed by Deri et al. 9. The last column reports the fraction of nodes of the full communication graph G that are included in each dimension-specific graph G_d. The fractions in the last column are not exclusive, because nodes can be found in multiple dimension-specific graphs. Our work focused mainly on the dimensions of knowledge and support.]

[Figure 4 caption: Example of how a dimension-specific conversation multigraph G_d is built. First, the text classifier for dimension d is applied to all messages and outputs scores proportional to the likelihood of a message containing dimension d. Then, for each dimension individually, a score threshold is determined based on a selected percentile α of the overall score distribution. In the illustrated example, the value corresponding to the α percentile is 0.75. Last, only the edges with messages that pass the threshold are kept; the messages are counted to compute the edge weight.]

Computing diversity of interactions. Eagle et al. 7 define two measures of diversity: social, D_social, and spatial, D_spatial. In practice, the two metrics are highly correlated, hence in the main Results we report findings for D_spatial. Given a user i, we first calculated the proportion of the total number of messages that i sent to contact j, namely:

$$p_{ij} = \frac{w(i,j)}{\sum_{j'} w(i,j')},$$

where the sum runs over the k social contacts of i on the communication graph G. (In the telephone network of Eagle et al., the strength of a tie was measured as the total call duration, whereas we measure it as the total number of messages.) We then calculated the normalized Shannon entropy of those proportions:

$$D_{social}(i) = \frac{-\sum_{j=1}^{k} p_{ij} \log p_{ij}}{\log k}.$$

The dimension-specific social diversity was computed with an analogous formula, but taking into account only the edges in the dimension-specific graph G_d:

$$D^{d}_{social}(i) = \frac{-\sum_{j=1}^{k_d} p^{d}_{ij} \log p^{d}_{ij}}{\log k_d},$$

where k_d is the total number of i's social contacts on the dimension-specific graph G_d. To compute the spatial diversity D_spatial, we first calculated the proportion of the total volume of messages exchanged by user i with users living in area a:

$$p_{ia} = \frac{\sum_{j \in U_a} w(i,j)}{\sum_{j} w(i,j)},$$

where A is the total number of areas and U_a ⊂ U is the subset of users living in area a. We then computed the spatial diversity as the normalized entropy of the p_ia proportions:

$$D_{spatial}(i) = \frac{-\sum_{a=1}^{A} p_{ia} \log p_{ia}}{\log A}.$$

The same formulation is applied to the dimension-specific graphs to obtain $D^{d}_{spatial}(i)$. Last, we computed the diversity values at the area level by averaging the diversity scores of the users living in the same area:

$$D_{spatial}(a) = \frac{1}{|U_a|} \sum_{i \in U_a} D_{spatial}(i).$$
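A minimal sketch of the normalized-entropy diversity defined above (the same function serves for social and spatial diversity, with contacts or areas as the categories):

```python
import numpy as np

def diversity(weights):
    """Normalized Shannon entropy of message proportions, in [0, 1].

    Pass the full category vector (contacts or areas, including zeros) so the
    normalization uses the total number of categories, as in the definitions above.
    """
    w = np.asarray(weights, dtype=float)
    if len(w) < 2 or w.sum() == 0:
        return 0.0
    p = w / w.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(w)))

print(diversity([5, 5, 5, 5]))   # 1.0: messages spread evenly
print(diversity([97, 1, 1, 1]))  # ~0.12: concentrated on a single contact
```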
Linear regression. Linear regression is an approach for modeling a linear relationship between a dependent variable (GDP, in our experiments) and a set of independent variables (diversity measures); it does so by associating a so-called β-coefficient with each independent variable, such that the sum of all independent variables multiplied by their respective β-coefficients approximates the value of the dependent variable with minimal error. Specifically, we used an Ordinary Least Squares (OLS) regression model, which estimates the coefficients by minimizing the sum of the squared residuals between the estimates and the actual values. The diversity metrics given as input to the regression were approximately normally distributed and bounded in the interval [0,1] (see Fig. SI5). Modeling geographical span. To study the dependency between geographical space and social dimensions, we estimated the conditional probability p(d|l) of a dimension d occurring in conversations characterized by a given geographic span (or length) l. Specifically, we considered the set E@l of all edges in the conversation graph G that connect users at geographic distance l, and the subset E_d@l of those edges that belong to the dimension-specific graph G_d. We then computed the conditional probability as the number of dimension-specific edges over the total number of edges at distance l, namely $p(d|l) = |E_d@l| / |E@l|$. Because activity and connectivity are not uniformly distributed across states, the probability p(d|l) alone could yield a biased view of the interplay between interactions and space. To understand why, consider a scenario in which most users are concentrated in one single state. In such a scenario, all users would be constrained to interact mostly with people from that state, and the resulting spatial patterns would merely reflect the underlying activity and spatial distributions, rather than being indicative of explicit user choices. To account for this, we discounted p(d|l) by a probability p_null(d|l) computed on randomized data. In particular, we generated a null model by randomly reshuffling the locations across users. By doing so, we preserved both the connectivity properties of the conversation network and the population distribution across states, while destroying the original relationship between social links and spatial locations. Finally, we computed a normalized score $\Delta p(d|l) = p(d|l)/p_{null}(d|l) - 1$, which measures the percent change of the probability of interaction compared to what is expected by chance. To obtain the conditional probability associated with individual messages rather than social links, we also computed an alternative version of Δp(d|l) that considers each message as an individual edge in the graph, thus effectively giving more weight to pairs of individuals who communicated often. Since we could geo-reference users only at the state level, we approximated the span of a social link between two users by the length of the straight line connecting the geographic centroids of their states. Given the relatively limited spatial resolution of this definition, we were bound to a coarse partitioning of distances. Effectively, we divided the set of edges into quintiles based on their geographic span distribution, thus obtaining five equally sized distance bins, the first of which contains almost exclusively interactions among people in the same state (l = 0).
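A minimal sketch of this discounting procedure (a simplification: we permute the dimension labels across edges rather than reshuffling user locations, which likewise breaks the link between dimension and distance):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_d_given_l(bin_idx, is_d, n_bins):
    # is_d: boolean array, True if the edge belongs to the dimension-specific graph
    return np.array([is_d[bin_idx == b].mean() for b in range(n_bins)])

def delta_p(distances, is_d, n_bins=5, n_shuffles=50):
    # equally populated distance bins (quintiles), as in the text
    edges = np.quantile(distances, np.linspace(0, 1, n_bins + 1)[1:-1])
    idx = np.searchsorted(edges, distances)
    p_real = p_d_given_l(idx, is_d, n_bins)
    p_null = np.mean([p_d_given_l(idx, rng.permutation(is_d), n_bins)
                      for _ in range(n_shuffles)], axis=0)
    return p_real / p_null - 1.0   # percent change relative to chance
```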
Data availability. We made all the data used in this study publicly available. The data consists of: (1) the individual messages scored with the ten-dimension classifier, together with the identifiers of the sender and receiver; (2) the estimated locations of the users in the communication graph; (3) aggregated state-level data reporting the diversity metrics. The DOI of the publicly accessible data is https://doi.org/10.6084/m9.figshare.19918231. The pre-trained social dimensions classifier is available at http://www.github.com/lajello/tendimensions.
8,299.2
2022-12-21T00:00:00.000
[ "Economics", "Psychology", "Sociology" ]
Analog vacuum decay from vacuum initial conditions Ultracold atomic gases can undergo phase transitions that mimic relativistic vacuum decay, allowing us to empirically test early-Universe physics in tabletop experiments. We investigate the physics of these analog systems, going beyond previous analyses of the classical equations of motion to study quantum fluctuations in the cold-atom false vacuum. We show that the fluctuation spectrum of this vacuum state agrees with the usual relativistic result in the regime where the classical analogy holds, providing further evidence for the suitability of these systems for studying vacuum decay. Using a suite of semiclassical lattice simulations, we simulate bubble nucleation from this analog vacuum state in a 1D homonuclear potassium-41 mixture, finding qualitative agreement with instanton predictions. We identify realistic parameters for this system that will allow us to study vacuum decay with current experimental capabilities, including a prescription for efficiently scanning over decay rates, and show that this setup will probe the quantum (rather than thermal) decay regime at temperatures $T\lesssim10\,\mathrm{nK}$. Our results help lay the groundwork for using upcoming cold-atom experiments as a new probe of nonperturbative early-Universe physics. Since the pioneering work of Coleman and collaborators [1][2][3], false vacuum decay (FVD) has primarily been studied using instanton methods, in which one obtains a semiclassical approximation of the decay rate by solving the equations of motion in imaginary time. These methods are made tractable by imposing O(d+1) symmetry on the resulting Euclidean 'bounce' solutions that describe the bubble nucleation event (with d the number of spatial dimensions). However, this symmetry assumption is broken on the dynamical and/or inhomogeneous spacetimes that are relevant to cosmology, and it precludes us from studying interesting and observationally important issues such as correlations between multiple bubbles [27,28]. Furthermore, additional assumptions are required to interpret the instanton in real time; specifically, it is assumed that a critical bubble 'appears' at some instant in time. This prevents any study of the precursors of such an event in terms of the real-time dynamics of the field.
Recently, a promising new method for addressing these questions has emerged: the use of ultracold atomic Bose gases as quantum simulators of relativistic bubble nucleation [29][30][31][32][33][34][35][36][37][38][39][40]. These systems exhibit coherent quantum behavior on scales that can be directly imaged in the laboratory, and can be manipulated into mimicking the dynamics of a Klein-Gordon field in a potential with true and false vacua. Cold-atom experiments have already been successfully used to study discontinuous phase transitions in quantum fields [41][42][43][44][45][46], including nonrelativistic thermal vacuum decay [47]. Atomic simulators of relativistic FVD are now under active development by several groups, offering the prospect of studying vacuum decay in real time and in a controlled and reproducible manner, with the promise of new insights that complement those from long-established Euclidean techniques. These insights could have a transformative impact on our understanding of the early Universe, potentially helping to answer some of the most fundamental questions in cosmology, such as why there is more matter than antimatter [11][12][13], and whether our observable Universe is embedded in a larger 'multiverse' [6][7][8][9][10]. Previous analyses of these analogs have focused on their classical equations of motion, showing that these are equivalent to the Klein-Gordon equation for a relativistic field in the appropriate limit. Here we go further by calculating the spectrum of quantum vacuum fluctuations in the analog false vacuum state. This fluctuation spectrum is a crucial input for lattice simulations of the cold-atom system, in which the fluctuations are represented as classical stochastic variables in order to obtain a semiclassical approximation of the decay process. These simulations are our main theoretical tool for guiding the development of the analog experiments, and ultimately for helping us interpret the experimental data. After describing our proposed analog system in Sec. II, we show in Sec. III that the false-vacuum fluctuation spectrum matches that of a Klein-Gordon field on scales where the classical analogy holds. This result was not guaranteed by the existing classical analogy, and thus provides further evidence for the suitability of this system as a relativistic analog. After an exhaustive search of the cold-atom literature, we identify a homonuclear potassium-41 mixture as the most promising experimental setup, and in Sec. IV we present a realistic set of parameters for a 1D realization of this system. This includes a protocol for scanning over parameters that allows us to vary the decay rate while keeping all other scales in the effective relativistic theory fixed. In Sec. V we then carry out a suite of semiclassical lattice simulations of this system, using our results for the fluctuation spectrum to generate realistic vacuum initial conditions. We verify that the field undergoes exponential decay as expected, and that the decay rate scales exponentially with the amplitude of the initial fluctuations, in qualitative agreement with the instanton prediction. Finally, in Sec. VI we explore the impact of finite temperatures on the decay rate, and argue that current experimental technologies can probe the regime of quantum rather than thermal decays. We summarize our results in Sec. VII, and discuss avenues for further development of this work. II.
THE ANALOG FALSE VACUUM In this section we review the essential details of the analog FVD system we are interested in, as first proposed by Fialko et al. [30] and subsequently studied in Refs. [31][32][33][34][35][36][37]. This system consists of a two-component Bose-Einstein condensate (BEC), with each atomic species described by a complex bosonic field operator ψ_i(x), i = 1, 2. The operators ψ_i†(x) and ψ_i(x) create and annihilate atoms of species i in the position eigenstate |x⟩, respectively. Their amplitudes therefore determine the local number density of each species, n_i(x) = ψ_i†ψ_i, while their phases φ_i(x) encode coherent wavelike behavior and interference effects. The dynamics of these fields are described by the Hamiltonian

$$\hat{H} = \int \mathrm{d}V \sum_{i=1,2} \left( \frac{\hbar^2}{2m}\, \nabla\hat\psi_i^\dagger \cdot \nabla\hat\psi_i + \frac{g}{2}\, \hat\psi_i^\dagger \hat\psi_i^\dagger \hat\psi_i \hat\psi_i \right), \qquad (2)$$

which consists of a nonrelativistic kinetic term for each species, as well as a quartic self-interaction of strength g > 0 due to repulsive s-wave contact interactions between atoms. This interaction sets the characteristic energy scale of the BEC, E = gn, where n = ⟨n⟩ is the mean number density. The integral in Eq. (2) is over a finite spatial volume V that is either one- or two-dimensional, with the BEC confined tightly along the remaining dimensions, rendering them nondynamical. We have specialized here to the case where both species have equal masses (m1 = m2 = m), equal intraspecies scattering (g11 = g22 = g), and zero interspecies scattering (g12 = g21 = 0). These conditions can be realized in practice by letting our two species be two different hyperfine states of the same atomic isotope, and applying an external magnetic field at the zero-crossing of a Feshbach resonance in the interspecies channel g12 [31,48]. Another possibility is to trap a single atomic species in a double-well potential; the atoms in each of the two wells then act as the two species, and only scatter with other atoms in the same well [49,50]. The Hamiltonian (2) excludes the usual external potential term that describes the trapping of the atoms along the extended direction(s). Our proposed experiment uses a 'box trap', which effectively approximates an infinite-well potential [51,52], so that the given Hamiltonian is accurate inside the trap. This is desirable for simulating relativistic physics, as it maintains translation invariance in the interior region, with a near-homogeneous density profile. The density rapidly tapers to zero at the walls of the trap on a characteristic scale called the healing length,

$$\xi = \frac{\hbar}{\sqrt{mgn}}. \qquad (3)$$

For the experimental parameters we consider here, this scale is smaller than the size of the BEC by a factor of 500 (see Table I). We therefore treat the field as homogeneous with periodic boundaries throughout this paper, as in most previous studies of this system [30][31][32][34][35][36][37][38]. (This setup is also a reasonable approximation to a 1D ring trap, as used in e.g. Ref. [53].) Extending our results below to include the box trap and corresponding boundary conditions requires a calculation of the full spectrum of inhomogeneous eigenmodes, which has yet to be carried out for this system. We will present this calculation and its impact on bubble nucleation in an upcoming companion paper. The two condensed species are coupled via a linear interaction term in the Hamiltonian [see Eq. (4)], which allows atoms of species 1 to convert into species 2 (and vice versa) at a rate ν that undergoes rapid modulation at some angular frequency ω; here ϵ ≪ 1 and λ = O(1) are dimensionless constants setting, respectively, the mean conversion rate (in units of gn) and the modulation amplitude.
In the setup with two hyperfine states, this coupling is introduced by applying a modulated radio-frequency (rf) field; in the double-well case, ν instead represents the tunneling rate between the two wells. We integrate out the fast oscillation to obtain an effective Hamiltonian H_eff that is valid on timescales much longer than ω⁻¹ [54], finding, at linear order in ϵ, the time-averaged Hamiltonian of Eq. (6).

[Figure 2 caption: Potential for the analog relativistic field φ, as given by Eq. (10). There are stable 'true vacuum' (TV) states at every even integer value of φ/(πφ₀). For λ > 1 there are also metastable 'false vacuum' (FV) states at every odd integer value.]

This time-averaged picture fails to capture the presence of Floquet instabilities induced in modes whose natural frequencies are close to the driving frequency ω [32,34]. One expects that setting ω sufficiently large (i.e., making the wavelengths of the unstable modes sufficiently short) will cause these instabilities to be quenched by damping effects on small scales; however, the exact nature of this process is still an open question. The relevance of the effective Hamiltonian (6) for quantum simulation comes from considering the field φ of Eq. (7), which is proportional to the relative phase between the two species.³ On scales much larger than the healing length, the classical equation of motion for this degree of freedom is identical to that of a relativistic scalar field, Eq. (8), where we identify the 'speed of light' as

$$c = \sqrt{\frac{gn}{m}}. \qquad (9)$$

Note that in reality this is the sound speed of phonons in the BEC, which is roughly eleven orders of magnitude smaller than the speed of light in vacuum. However, as we see below, it plays exactly the same role as the speed of light in the effective relativistic theory that emerges on large scales. The potential appearing in Eq. (8) is of the form

$$V(\varphi) = V_0 \left[ -\cos\left(\frac{\varphi}{\varphi_0}\right) + \frac{\lambda^2}{2} \sin^2\left(\frac{\varphi}{\varphi_0}\right) \right], \qquad V_0 = O(\epsilon g n^2). \qquad (10)$$

As shown in Fig. 2, this contains a series of true vacua at φ_tv/φ₀ = 2jπ, j ∈ ℤ, and, for λ > 1, a series of false vacua at φ_fv/φ₀ = (2j + 1)π. These correspond to the two atomic species being in phase and in antiphase, respectively; the linear coupling means that there is an additional energy density of order ϵgn² associated with being in antiphase, while the modulation generates an effective potential barrier that makes this state metastable. Increasing the amplitude of the modulation via λ creates a deeper potential barrier, and increases the mass of fluctuations in the false vacuum,

$$m_{\rm fv} c^2 = 2\sqrt{\epsilon(\lambda^2 - 1)}\, gn. \qquad (11)$$

[Figure 3 caption: Left panel: dispersion relation (20) for the relative modes of the analog system. Right panel: fluctuation power spectrum for the effective relativistic field φ. Both quantities interpolate between Klein-Gordon-like behavior (13) in the IR (ξk ≪ 1) and nonrelativistic behavior (21) in the UV (ξk ≫ 1). The vertical dashed line in each panel indicates the crossover between these two regimes. Both use our fiducial parameters, given in Table I.]
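Using the barrier shape of Eq. (10) (the standard form for this model, reconstructed above up to normalization), the vacuum structure can be verified directly; a minimal sketch:

```python
import numpy as np

# V(x) ∝ -cos(x) + (lam**2 / 2) * sin(x)**2, with x = phi/phi0; the overall
# normalization (of order eps*g*n**2) does not affect the vacuum structure.
def V2(x, lam):
    """Second derivative of the dimensionless potential."""
    return np.cos(x) + lam**2 * np.cos(2.0 * x)

for lam in (0.8, np.sqrt(2.0)):
    tv_stable = V2(0.0, lam) > 0     # x = 0: true vacuum, always stable
    fv_exists = V2(np.pi, lam) > 0   # x = pi: metastable only for lam > 1
    print(f"lambda = {lam:.2f}: TV stable {tv_stable}, FV exists {fv_exists}")
```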
III. QUANTUM FLUCTUATIONS IN THE FALSE VACUUM We have reviewed the known result that, on scales much larger than the healing length, an atomic Bose-Bose mixture can reproduce the classical equation of motion of a Klein-Gordon field (8) with a false vacuum potential (10). However, vacuum decay is inherently quantum-mechanical, so it is important to test whether these systems are also analogous at the quantum level. Here we perform this test by calculating the power spectrum (12) of fluctuations in the false vacuum state |Ω_fv⟩, defined in terms of the Fourier modes⁴ φ_k of the effective relativistic field (7). Below we find that, on scales much larger than the healing length (ξk ≪ 1), this spectrum asymptotically matches that of the corresponding Klein-Gordon field, with corrections suppressed by powers of (ξk)² and ϵ. To derive this result, we adopt the standard mean-field approximation [55], in which each atomic field consists of small quantum fluctuations around a highly occupied classical condensate wavefunction; the factor of (−1) multiplying one species in this decomposition reflects the fact that the two species are in antiphase in the false-vacuum state. We expand around a homogeneous mean-field wavefunction whose phase evolves at a rate set by the chemical potential, µ = (1 + ϵ)gn. To study the dynamics of the fluctuations, it is convenient to remove this time evolution with a canonical transformation ψ_i → e^{iµt/ℏ}ψ_i, which modifies the Hamiltonian accordingly. Expanding this new Hamiltonian to quadratic order in the fluctuations, we find that it can be written as a sum of a constant energy offset K₀ associated with the mean-field solution and separate terms K± governing the total and relative fluctuation modes, with the normalization chosen such that the modes obey canonical bosonic commutation relations. The field we are interested in is defined, at linear order in the fluctuations, solely in terms of the relative modes. We can therefore ignore the dynamics of the total modes for now, given that they are decoupled in the linear regime. (We return to them in Sec. VI, as they play a significant role in the presence of thermal noise.) To calculate the power spectrum (12), we must determine the eigenstates of the relative Hamiltonian K− and identify |Ω_fv⟩ as the lowest-lying of these states.⁶ We can do this by writing the Hamiltonian in diagonalized form, so that each normal mode, described by the ladder operators a_k, a_k†, acts as an independent harmonic oscillator. The false vacuum |Ω_fv⟩ is then identified as the state annihilated by a_k for all wavenumbers k. In Appendix A we identify the appropriate Bogoliubov transformation relating the normal modes to the relative atomic field modes ψ−_k, ψ−†_k. The energy associated with excitations of the normal modes is given by Eq. (20), which, on scales much larger than the healing length (ξk ≪ 1), reduces to the dispersion relation (13) of a Klein-Gordon field with the same false-vacuum mass (11) that we found in our classical analysis of the equations of motion. We can directly evaluate the power spectrum (12) by writing the Fourier modes φ_k in terms of the normal modes a_k and using standard ladder-operator identities. In the same IR limit as before, we recover Eq. (13), which is exactly what we expect for the corresponding Klein-Gordon field.
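The interpolation between the two regimes follows from the standard Bogoliubov dispersion; a minimal numerical sketch (massless case, i.e., neglecting m_fv, in the code units of Sec. V):

```python
import numpy as np

hbar = m = gn = 1.0                  # code units (so xi = c = 1 as well)
c = np.sqrt(gn / m)                  # sound speed = analog speed of light
k = np.logspace(-2, 2, 5)            # wavenumbers spanning the crossover
eps_k = hbar**2 * k**2 / (2.0 * m)   # free-particle kinetic energy
E_k = np.sqrt(eps_k * (eps_k + 2.0 * gn))   # Bogoliubov dispersion
print(E_k / (hbar * c * k))          # -> 1 in the IR (phonon / Klein-Gordon limit)
print(E_k / eps_k)                   # -> 1 in the UV (nonrelativistic limit)
```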
We already know from our classical understanding of the system that the relativistic analogy breaks down on scales much smaller than the healing length (ξk ≫ 1). In this limit, we recover a white-noise fluctuation spectrum and the usual nonrelativistic dispersion relation, Eq. (21). The former represents an excess of power at small scales compared to the Klein-Gordon spectrum (13), due to nonrelativistic, high-momentum excitations of individual atoms. The interpolation between this regime and the Klein-Gordon-like results on large scales is shown in Fig. 3. IV. EXPERIMENTAL PARAMETERS Our results for the false vacuum power spectrum are a general feature of the modulated Bose-Bose mixture system described in Sec. II, regardless of any particular experimental realization. In this section, we describe a concrete set of experimental parameters (summarized in Table I) that is achievable with current cold-atom experiments, and which will allow us to probe the physics of relativistic vacuum decay. As highlighted in Sec. II, among the key requirements for our system are that both atomic species have equal masses (m1 = m2), equal intraspecies scattering lengths (a11 = a22), and negligible interspecies scattering (a12 = 0).⁷ It is easy to select equal masses by using two hyperfine states of the same atomic isotope (i.e., a homonuclear mixture). However, the conditions on the scattering lengths are more difficult to arrange. It is possible to set a12 to zero by applying an external magnetic field at an interspecies Feshbach resonance, provided that the condition a11 = a22 is satisfied at the zero-crossing of a12. For the homonuclear ⁴¹K mixture identified above, both conditions can be met simultaneously, making it the optimal candidate system for simulating relativistic vacuum decay.

[Footnote 7: These 3D scattering lengths a_ij determine the corresponding 1D interaction strengths, g_ij = 2ℏω⊥a_ij, where ω⊥ is the frequency of the transverse harmonic trapping potential.]

The main technical challenge with this setup is that the resonance has a width of only 155.8 mG [59], necessitating a very high level of magnetic field stability in order to stay at the zero-crossing of a12, as illustrated in Fig. 4. Nonetheless, this level of stability is achievable with current experimental technologies. In particular, Borkowski et al. [67] have recently demonstrated magnetic field stability at the level of ∼2 ppm in a cold-atom experiment. For our proposed system this corresponds to |a12| ≤ 0.53 a₀ (where a₀ = 5.292 × 10⁻¹¹ m is the Bohr radius). This is less than 1% of the mean intrastate scattering length a = 60.24 a₀, which should be sufficient precision for our purposes. Given the 3D scattering properties of the two atomic species, the behavior of the effective 1D system is set by the number of condensed atoms, the size of the trap along the elongated and transverse directions, and the strength and modulation of the applied radio-frequency field. We have explored this parameter space with the goal of maximizing the natural condensate energy scale gn relative to the thermal energies k_B T that can be achieved in current experiments, as this will allow us to investigate the regime of quantum (rather than thermal) decays. At the same time, we have ensured that this energy scale is not so high that transverse modes of energy ℏω⊥ are excited, where ω⊥ is the frequency of the harmonic trapping potential in the transverse directions. (We plan to test this explicitly in future work with 3D simulations that resolve the transverse directions.)
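As a rough illustration of how the footnote's relation sets the energy scale (the trap frequency, atom number, and trap length below are assumed illustrative values, not those of Table I; only the scattering length is quoted above):

```python
import numpy as np

hbar = 1.054571817e-34        # J*s
kB = 1.380649e-23             # J/K
a0 = 5.292e-11                # Bohr radius, m
a = 60.24 * a0                # mean intrastate scattering length quoted above
omega_perp = 2 * np.pi * 1e3  # assumed transverse trap frequency, rad/s
N, L = 1e5, 100e-6            # assumed atom number and 1D trap length, m

g = 2 * hbar * omega_perp * a            # 1D coupling from the footnote relation
n = N / L                                # 1D number density
print(f"gn / kB = {g * n / kB * 1e9:.0f} nK")   # ~300 nK, well above T ~ 10 nK
```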
In order to facilitate comparisons with instanton predictions (which are challenging to calibrate at any single point in parameter space), it is useful to vary the system parameters to scan over a broad range of bubble nucleation rates. The instanton decay rate per unit volume in this model is exponentially suppressed, with an exponent proportional to the dimensionless condensate number density (22),

n̄ ≡ ξ^d n,

i.e., the number of atoms in a region of size equal to the healing length. In d = 1 dimensions the dependence of the exponent on ε vanishes, and the decay rate is thus primarily controlled by n̄, with log(Γ/V) linear in n̄. This parameter also sets the size of fluctuations in the field relative to the characteristic value φ₀,

δφ/φ₀ ∼ n̄^(−1/2).

We find that it is possible to vary n̄ while keeping the energy scale gn (and therefore all other dimensionless parameters of the system) fixed, by simultaneously increasing the number of atoms of each species N and decreasing the transverse trapping frequency ω⊥. This allows us to perform a controlled test of how the bubble nucleation rate scales with the amplitude of the initial fluctuations.

Our proposed parameters are summarized in Table I. We vary n̄ by a factor of 5, which is sufficient to see a significant variation in the decay rate. As we show in Sec. VI below, the energy scale gn here is large enough that the quantum-decay regime is readily accessible to current or near-future experiments.
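A sketch of the scan just described: with g ∝ ω⊥ (footnote 7) and n ∝ N at fixed box size, the product gn stays fixed whenever N·ω⊥ is held constant. The baseline numbers below are placeholders, not Table-I values.

```python
import numpy as np

# Vary n_bar at fixed gn by trading atom number N against transverse
# frequency omega_perp.  Baseline values are assumed for illustration.
N0, w0, nbar0 = 1.0e5, 2 * np.pi * 1e3, 10.0   # assumed baseline

for scale in [1, 2, 3, 4, 5]:                  # n_bar scanned by a factor of 5
    N = N0 * scale                             # more atoms ...
    w_perp = w0 / scale                        # ... weaker transverse trap
    nbar = nbar0 * scale                       # n_bar ~ xi * n grows with N
    # g*n ~ omega_perp * N is unchanged, so xi, c, m_fv etc. are all fixed,
    # while the relative fluctuation amplitude shrinks as n_bar**-0.5:
    print(f"n_bar={nbar:5.1f}  N={N:9.0f}  w_perp/2pi={w_perp/2/np.pi:7.1f} Hz"
          f"  (dphi/phi0 ~ {nbar**-0.5:.3f})")
```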
V. LATTICE SIMULATIONS

Part of the value of our results on the vacuum power spectrum in Sec. III is that they can be used as an input for semiclassical lattice simulations of the cold-atom system. These simulations are a powerful tool for exploring the real-time dynamics of bubble nucleation, and are a crucial ingredient for developing and interpreting analog FVD experiments. The key idea is to encode the nonclassical nature of the problem in the initial conditions of the simulation, by drawing an ensemble of random field realizations that sample vacuum fluctuations around the homogeneous false vacuum state [68]. These realizations are then evolved forward by numerically integrating the classical equations of motion. This approach is widely used in the context of atomic physics and quantum optics (where it is referred to as the 'truncated Wigner approximation' [69-71]), and also underpins cosmological lattice simulations of inflation and preheating [72-83] as well as vacuum decay [68, 84, 85].

It is common for lattice simulations of cold-atom systems to initialize the fluctuations using a white-noise power spectrum (21) [30, 31, 33, 34], particularly in situations where the processes of interest are insensitive to the precise form of this spectrum. Bubble nucleation, however, is extremely sensitive to the statistics of the initial fluctuations, as different initial states can decay at exponentially different rates. (For example, we see from Eq. (22) that the rate is exponentially sensitive to n̄.) The vacuum fluctuation spectrum derived above is therefore a crucial ingredient for realistic simulations of analog vacuum decay.

In this section we use a suite of lattice simulations to study bubble nucleation from vacuum initial conditions in the 1D cold-atom system described in Sec. IV. We extract decay rates for different values of the fluctuation-amplitude parameter n̄, and verify that the rates depend exponentially on this parameter, in agreement with the scaling found in the instanton approach. We perform the same test with white-noise initial conditions, and find decay rates that are globally larger than in the vacuum case. This confirms that vacuum decay in semiclassical lattice simulations is indeed sensitive to the statistics of the initial fluctuations, and that for the cold-atom system these must be correctly specified using Bogoliubov theory, as we have done here. We additionally investigate the conservation of the Noether charges of the effective Klein-Gordon theory in our simulations of the cold-atom system, as these are a useful diagnostic for the faithfulness of the relativistic analogy.

A. Code setup

We use a Fourier pseudospectral code with an eighth-order symplectic time-stepping algorithm [86] (see Appendix B for details), and work in units where the atomic mass m, healing length ξ, and sound speed c are set to unity (which is equivalent to also setting ℏ = gn = 1). Our simulations work at the level of the time-dependent Hamiltonian (4), resolving the modulation of the interspecies coupling so that we can test for the emergence of the effective time-averaged dynamics.

We simulate a system with the experimental parameters specified in Table I. In code units, this setup is realized by evolving a periodic region of volume V/ξ^d = L/ξ = 500, and setting ε = 2.5 × 10⁻³ and λ = √2 so that the false vacuum mass is m_fv/m = 0.1. We additionally set the dimensionless modulation frequency to ωξ/c = 680, which is sufficiently large that the Floquet instability bands are above the Nyquist frequency for all of our simulations. This allows us to model the expected experimental situation where these instabilities are damped by the small-scale dynamics of the BEC, and do not affect the evolution of the IR modes; the actual experimental value of ω is unimportant so long as the Floquet instabilities are quenched. Our simulations use 2048 lattice sites and a timestep that is 1/16 times the modulation period 2π/ω, giving spatial and temporal resolution of Δx/ξ ≈ 0.244 and cΔt/ξ ≈ 5.77 × 10⁻⁴, respectively. In Appendix B we show that our results are numerically converged at this resolution, and that the Noether charges of the cold-atom Hamiltonian (4) are conserved to within a few parts per billion.
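The resolution figures above follow directly from the stated grid and modulation parameters; the short check below reproduces them in code units.

```python
import numpy as np

# Check of the code-unit resolution figures quoted above (units: xi = c = 1).
L = 500.0                 # box size L/xi
Nx = 2048                 # lattice sites
omega = 680.0             # modulation frequency, omega*xi/c

dx = L / Nx               # spatial resolution
dt = (2 * np.pi / omega) / 16   # timestep = modulation period / 16
k_nyq = np.pi / dx        # lattice Nyquist wavenumber

print(f"dx/xi    = {dx:.3f}")       # ~0.244
print(f"c*dt/xi  = {dt:.3e}")       # ~5.77e-4
print(f"xi*k_Nyq = {k_nyq:.1f}")    # ~12.9, cf. the UV cutoff in Sec. V B
```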
B. Bubble nucleation rates

We extract decay rates for the analog system using ensembles of 1024 simulations, with each simulation corresponding approximately to a different possible classical history drawn from the path integral describing the full evolution of the many-body quantum state. We initialize each simulation as the homogeneous false vacuum φ = πφ₀ plus independent random draws of the vacuum fluctuations δφ. We treat the latter as a zero-mean Gaussian random field with a power spectrum that (as shown in Fig. 3) interpolates between a relativistic spectrum in the IR and a white-noise spectrum in the UV. We have checked that this power spectrum remains statistically stationary over time by averaging over the ensemble of nondecayed trajectories, effectively testing that our initial state is indeed an eigenstate of the Hamiltonian near the false vacuum.

As well as the relative phase, we also initialize the relative density and the total phase and density using random draws from their corresponding vacuum spectra. It is crucial to initialize all four fields in this way to correctly capture the vacuum state. For example, neglecting the relative density fluctuations corresponds to initializing the effective Klein-Gordon field with zero momentum everywhere, when in fact this momentum field should also contain vacuum fluctuations. In practice, we initialize the total and relative atomic field modes in our code, which is equivalent at the linear level to working in terms of the density and phase fields.

We find that it is crucial that the positive- and negative-momentum Fourier modes ψ_k and ψ_{−k} are not treated as statistically independent random variables. Instead, one must draw the positive- and negative-momentum normal modes a_k, a_{−k} independently, and then obtain the Fourier modes of the atomic fields via a reverse Bogoliubov transformation. This induces a nontrivial correlation between ψ_k and ψ_{−k} that appropriately captures the quantum statistics of the false vacuum state. Failing to include these correlations in the initial conditions puts the system into an excited state that nucleates bubbles much more rapidly than the false vacuum state, and much more rapidly even than the white-noise state.
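A sketch of this initialization step: the u_k, v_k used here are the textbook massless Bogoliubov coefficients rather than the paper's Appendix-A expressions, and serve only to illustrate the +k/−k correlation structure described above.

```python
import numpy as np

# Draw the normal modes a_k, a_{-k} independently, then build correlated
# atomic-field modes via an assumed standard Bogoliubov transformation.
rng = np.random.default_rng(0)
gn = 1.0
k = 0.5                                   # example mode, units of 1/xi
eps_k = k**2 / 2                          # code units: hbar = m = 1
E_k = np.sqrt(eps_k * (eps_k + 2 * gn))   # Bogoliubov energy
u = np.sqrt((eps_k + gn + E_k) / (2 * E_k))   # u^2 - v^2 = 1
v = np.sqrt(u**2 - 1.0)

n_draws = 200_000
# Wigner-sampled vacuum: each normal mode is a complex Gaussian with
# <|a_k|^2> = 1/2, drawn independently for +k and -k.
a_p = (rng.normal(size=n_draws) + 1j * rng.normal(size=n_draws)) / 2
a_m = (rng.normal(size=n_draws) + 1j * rng.normal(size=n_draws)) / 2

# Reverse Bogoliubov transform: psi_k mixes a_k with the conjugate of a_{-k}.
psi_p = u * a_p - v * np.conj(a_m)
psi_m = u * a_m - v * np.conj(a_p)

# The field modes are correlated even though a_k, a_{-k} are not; dropping
# this correlation is exactly the mistake described in the text.
print("<psi_k psi_-k> =", np.mean(psi_p * psi_m))        # ~ -u*v, nonzero
print("<|psi_k|^2>    =", np.mean(np.abs(psi_p) ** 2))   # ~ (u^2 + v^2)/2
```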
We truncate all of the fluctuation spectra at a maximum wavenumber of ξk_UV ≈ 3.22, which is a factor of 4 smaller than the Nyquist frequency of our simulations, ξk_Nyq = πξ/Δx ≈ 12.9. Evidence from pure Klein-Gordon lattice simulations [87] suggests that changing this cutoff modifies the decay rate in a way that can be absorbed into a renormalization of the bare model parameters. We leave a detailed investigation of this effect in the analog system for future work, and here use a fixed UV cutoff for all of our simulations. The amplitude of the fluctuations relative to the homogeneous value of the field is set by the dimensionless number density n̄, which we scan over in the experimentally accessible range 10 ≤ n̄ ≤ 50.

We measure a decay rate from each ensemble of simulations by counting the number of nondecayed trajectories as a function of time, dividing by the total number of simulations to obtain an estimate of the time-dependent survival probability. In doing so, it is necessary to choose a definition for when an individual realization has decayed. We do this by setting a threshold on the volume average of the cosine of the relative phase, ⟨cos(φ/φ₀)⟩_V. This quantity fluctuates near −1 in the false vacuum, and grows rapidly after a bubble nucleates before saturating near +1 once the transition has percolated, as illustrated in Fig. 5. We compute the decay threshold separately for each ensemble as the lowest possible value of ⟨cos(φ/φ₀)⟩_V for which no more than 1% of the simulations cross back below the threshold in any given timestep.⁸

⁸ A more obvious choice would be to allow zero downward crossings through the threshold, as this would capture the notion that vacuum decay is an irreversible process. However, we find that enforcing zero downward crossings makes the algorithm easily confused by small fluctuations in ⟨cos(φ/φ₀)⟩_V, and results in a choice for the threshold that is far too conservative. Manual inspection of the results with a 1% allowance for downward crossings confirms that this accurately captures the common-sense notion of when the field has decayed (e.g., see Fig. 5). We have checked that varying this allowed fraction between 0.5% and 2% does not significantly impact our measured decay rates.

Our resulting estimates of the survival probability are shown in the left panel of Fig. 6. As expected, the ensembles with smaller n̄, and therefore larger initial fluctuations, decay on much shorter timescales. After an initial transient, each ensemble reaches a regime of exponential decay, Pr(survive) ∝ e^(−Γt). We fit a decay rate Γ to each curve, restricting the fit to survival probabilities between 50% and 1% in order to exclude the nonexponential regime at early times and noisy small-number statistics at late times, respectively. The resulting decay rates (in dimensionless units, and measured per unit volume) are shown in blue in the right panel of Fig. 6, and are well-described by an exponential scaling with respect to n̄, in qualitative agreement with the instanton prediction (22).

It is important to note, however, that the proportionality constant linking log(Γ/V) and n̄ does not agree with the instanton prediction; our simulations decay significantly faster than predicted in the instanton approach. This same behavior has been observed in pure Klein-Gordon lattice simulations [68], and is an expected consequence of performing instanton calculations using the bare lattice parameters, rather than the renormalized theory [87]. It is also worth pointing out that our instanton calculations are based on the effective Klein-Gordon theory, rather than the full analog system, and therefore neglect effects such as the excess small-scale power identified in Sec. III. We plan to explore these issues in the context of the analog system in future work.
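A sketch of the rate-extraction procedure just described, with synthetic decay times standing in for the simulation ensemble.

```python
import numpy as np

# Estimate the survival probability from an ensemble of decay times, then
# fit log P(t) in the window 0.01 <= P <= 0.5 (the gray band of Fig. 6).
rng = np.random.default_rng(1)
gamma_true = 0.02
t_decay = rng.exponential(1 / gamma_true, size=1024)   # one time per trajectory

t_grid = np.linspace(0, t_decay.max(), 400)
surv = np.array([(t_decay > t).mean() for t in t_grid]) # survival probability

# Fit only where 0.01 <= P <= 0.5, excluding the early transient and the
# noisy small-number tail.
mask = (surv >= 0.01) & (surv <= 0.5)
slope, _ = np.polyfit(t_grid[mask], np.log(surv[mask]), 1)
print(f"fitted Gamma = {-slope:.4f}  (true {gamma_true})")
```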
As well as our simulations using vacuum initial conditions, we carry out a suite of simulations using white-noise initial conditions. This corresponds to the nonrelativistic UV limit (21) of the full power spectrum derived from Bogoliubov theory, and matches the prescription used by several previous studies of vacuum decay in cold-atom analog systems [30, 31, 33, 34]. The resulting decay rates are shown in red in the right panel of Fig. 6. These are fit only to survival probabilities between 20% and 1%, as we find that it takes longer for these initial states to settle into a period of steady exponential decay. We see that, while the resulting decay rates also follow the expected exponential scaling with n̄, they are globally larger for white-noise initial conditions than for the vacuum case, despite the fact that the actual amplitudes of the fluctuations are smaller in the IR in the white-noise case (compare the blue and purple curves in Fig. 3). We interpret this as evidence that white-noise fluctuations correspond to an excited state of the analog system, and thus lead to faster decays, on average, than the vacuum initial conditions we have derived here.

Note that this does not imply that the white-noise spectrum is somehow unphysical. In fact, such a spectrum is the vacuum state for an alternative system with zero atomic scattering, g = 0. The enhanced decay rates shown in red in Fig. 6 can thus be interpreted as being due to a mismatch between the Hamiltonian describing the initial conditions and the Hamiltonian describing the time evolution.

C. Verifying Klein-Gordon behavior

While our results for the decay rates are in broad agreement with our expectations for relativistic vacuum decay, we can also directly test whether the relative phase field φ is indeed analogous to a relativistic Klein-Gordon field by computing the Noether charges for the corresponding Klein-Gordon theory (25). Since the Noether charges for the underlying nonrelativistic Hamiltonian are conserved with extremely high precision in our simulations (see Appendix B), any nonconservation of the Klein-Gordon charges (25) should be interpreted as being due to limitations of the relativistic analogy, rather than numerical errors. In Fig. 7 we show the violation of these charges for a series of simulations with a broad range of dimensionless number densities n̄. We find that violations in the Klein-Gordon energy and momentum are roughly stationary over time, and reach a regime where they scale like |ΔH|/|H| ∼ n̄⁻¹ and |ΔP|/|P| ∼ n̄^(−1/2), respectively, so that in the limit of small fluctuations the analogy holds with high accuracy.
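A sketch of this diagnostic on the lattice. The discretized charges below are the standard 1D Klein-Gordon energy and momentum, and stand in as an assumption for the paper's Eq. (25); the quadratic mass term replaces the full false-vacuum potential purely for illustration.

```python
import numpy as np

def kg_charges(phi, pi, dx, m_fv=0.1, c=1.0):
    """Return (H, P) for a 1D Klein-Gordon field with mass m_fv."""
    dphi = np.gradient(phi, dx)                    # spatial derivative
    # For the analog system V(phi) would be the false-vacuum potential;
    # a quadratic mass term is used here purely for illustration.
    V = 0.5 * (m_fv * c) ** 2 * phi**2
    H = np.sum(0.5 * pi**2 + 0.5 * c**2 * dphi**2 + V) * dx
    P = -np.sum(pi * dphi) * dx
    return H, P

# Toy usage with a random field configuration; in practice phi and pi would
# come from the simulated relative phase and its conjugate momentum.
rng = np.random.default_rng(2)
x = np.linspace(0, 500, 2048, endpoint=False)
phi = 1e-2 * rng.normal(size=x.size)
pi = 1e-2 * rng.normal(size=x.size)
H, P = kg_charges(phi, pi, dx=x[1] - x[0])
print(f"H = {H:.4e}, P = {P:.4e}")
```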
However, in the experimentally accessible regime n̄ ∈ [10, 50] that we are interested in here, the violation is on the order of at least a few percent in the energy. In the momentum, the relative errors reach order unity, although this reflects the fact that the total momentum of the field averaged over the entire volume V is intrinsically close to zero. While we do not believe these errors invalidate the mapping onto the Klein-Gordon theory, further improvements in the accuracy of the analogy may be possible. Specifically, so far we have ignored the backreaction of the fluctuations onto the mean-field dynamics, which would modify this mapping in a way that could plausibly be absorbed into a renormalization of the parameters of the effective Klein-Gordon theory. (Similar effects have recently been investigated in the case of pure Klein-Gordon theory [87].) This would be consistent with our finding that the level of charge violation scales with the fluctuation amplitudes. We conjecture that accounting for these corrections and identifying the appropriate Klein-Gordon parameters could substantially improve the level of charge violation over that shown in Fig. 7, and also bring our decay rates into closer quantitative agreement with the instanton prediction. We plan to explore this in detail in future work.

VI. FINITE-TEMPERATURE EFFECTS

Thus far we have considered only zero-temperature states of the analog system. However, any realistic experiment will inevitably be at some finite temperature, and will therefore contain thermal as well as quantum fluctuations. These are potentially a nuisance factor in studying quantum vacuum decay, giving an excess contribution to the decay rate and altering the phenomenology of the nucleated bubbles [4]. It is therefore valuable to estimate the temperature threshold at which these deviations from the zero-temperature case become significant, as this can then guide the development and interpretation of the analog experiments.

In the framework of the truncated Wigner approximation, we can model the thermal bath by including additional fluctuation power in our initial conditions.⁹ This amounts to replacing vacuum expectation values with traces over a thermal density matrix, resulting in a scale-dependent enhancement of the power spectra by a factor of coth[ℏω_k/(2k_B T)] for the relative phase, as well as for the relative density and the total phase and density. (Here coth x = (1 + e^(−2x))/(1 − e^(−2x)) is the hyperbolic cotangent function.) It is convenient to work in terms of the dimensionless temperature T̃ ≡ k_B T/(gn) (27), where the numerical value of gn corresponds to our particular choice of experimental parameters (cf. Table I).

⁹ Other prescriptions and theoretical frameworks exist, including modeling the effects of the thermal bath by adding a stochastic driving term to the Gross-Pitaevskii equations [35, 37, 39, 88]. However, our treatment here allows us to model quantum and thermal fluctuations in a simple and conceptually unified way. A detailed comparison against alternative simulation methods would be interesting, but is beyond our present scope.
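The coth factor makes the contrast between the massive relative modes and the massless total modes easy to see numerically; the dispersion relations below are the assumed forms from the earlier sketch, not exact Appendix-A results.

```python
import numpy as np

# Thermal enhancement of each mode's occupation in the truncated Wigner
# framework: the vacuum 1/2 becomes (1/2)*coth(E_k / (2*kB*T)).  Code units:
# hbar = gn = 1, and T below stands for kB*T/(gn).
def thermal_enhancement(E_k, T):
    """coth(E_k / 2T): -> 1 as T -> 0, and ~ 2T/E_k for E_k << T."""
    if T == 0:
        return np.ones_like(E_k)
    return 1.0 / np.tanh(E_k / (2.0 * T))

k = np.logspace(-3, 1, 5)
E_rel = np.sqrt((k**2 / 2) * (k**2 / 2 + 2) + 0.1**2)  # massive relative modes
E_tot = np.sqrt((k**2 / 2) * (k**2 / 2 + 2))           # massless total modes

T = 0.06  # the empirical threshold temperature found in Sec. VI
for kk, er, et in zip(k, E_rel, E_tot):
    print(f"xi*k={kk:8.3g}  relative x{thermal_enhancement(er, T):7.2f}"
          f"  total x{thermal_enhancement(et, T):9.2f}")
# The massless total modes are enhanced enormously in the IR, consistent
# with the breakdown mechanism described in the text.
```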
Figure 8 shows the survival probability in ensembles of simulations at various temperatures, with n̄ = 40. For dimensionless temperatures T̃ ≲ 0.06 we see that, notwithstanding some differences in the initial nonexponential transient phase, the exponential decay rates are all consistent with the zero-temperature result. At higher temperatures, rather than finding an enhanced rate of relativistic decays, we instead find that the exponential decay model becomes an increasingly poor fit to the empirical survival probabilities. We interpret this finding as indicating the breakdown of the relativistic analogy at high temperatures, and conjecture that this breakdown is due to the impact of thermal noise on the total phonon modes. In contrast to the relative modes, which have an effective mass m_fv due to the potential barrier around the false vacuum, the total modes have a massless dispersion relation ω_k ≃ ck in the IR, allowing them to become excited to very large amplitudes by the thermal bath, as illustrated in Fig. 9. The coupling between the total and relative modes then becomes significant, and spoils the effective relativistic dynamics of the relative modes. As evidence for this interpretation, we note that the T̃ ≲ 0.06 threshold determined empirically from our simulations is just below the theoretically-predicted threshold at which the total modes of this system should lose phase coherence, T̃_φ = n̄/(L/ξ) = 0.08 [35].

Our results show that dimensionless temperatures of T̃ ≲ 0.06 should give us access to a setting closely resembling the zero-temperature dynamics of the analog vacuum decay process. This translates into physical temperatures of T ≲ 10.9 nK for our proposed parameters. Note that our interpretation in terms of the phase coherence temperature T̃_φ = n̄/(L/ξ) implies that this threshold should scale proportionally with the fluctuation-amplitude parameter n̄, so that the T ≲ 10.9 nK benchmark should be viewed as a minimal requirement, with lower temperatures giving us access to vacuum decay rates over a broader range of parameter space. This benchmark is readily accessible with current experimental setups, which routinely reach temperatures on the order of a few nK, and have even recorded temperatures as low as tens of pK [89].
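The arithmetic behind these benchmarks is simple enough to check directly; the gn scale below is inferred from the quoted numbers rather than taken from Table I.

```python
# Temperature benchmarks quoted above (values from the text).
kB_T_threshold_dimless = 0.06          # empirical threshold, T~ = kB*T/(gn)
T_threshold_nK = 10.9                  # corresponding physical temperature

gn_over_kB_nK = T_threshold_nK / kB_T_threshold_dimless
print(f"inferred gn/kB ~ {gn_over_kB_nK:.0f} nK")     # ~182 nK

# Phase-coherence estimate for the total modes: T~_phi = n_bar / (L/xi).
n_bar, L_over_xi = 40, 500
print(f"T~_phi = {n_bar / L_over_xi:.2f}")            # 0.08, just above 0.06
```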
VII. SUMMARY AND OUTLOOK

Quantum analog experiments present a powerful new tool for understanding relativistic vacuum decay. Here we have carried out a detailed study of one such proposed experimental setup, which uses a rapidly modulated coupling between two atomic Bose-Einstein condensates to engineer a metastable false vacuum state for the relative phase. We have derived the spectrum of quantum fluctuations around this state, and have shown that this spectrum asymptotically matches that of the effective Klein-Gordon field in the IR.

As well as providing further evidence for the suitability of the cold-atom analog for studying relativistic physics, this vacuum fluctuation spectrum is also a crucial input for semiclassical lattice simulations of this system. By carrying out a suite of such simulations, we have confirmed the key theoretical expectations for the analog false vacuum: that it undergoes exponential decay, at a rate that is exponentially sensitive to the amplitude of the vacuum fluctuations. We have also shown that using an alternative fluctuation spectrum (in this case white noise, which has been used in several previous studies of this system) leads to an enhanced decay rate compared to the pseudorelativistic vacuum fluctuations, as this corresponds to putting the system in an excited initial state.

In carrying out these simulations, we have identified a realistic set of parameters that will allow us to study vacuum decay with current experimental capabilities. This includes a protocol for scanning over fluctuation amplitudes, and thus decay rates, while keeping all other natural scales of the system fixed, enabling detailed and controlled experimental studies of the decay rate.

As well as the zero-temperature fluctuation spectrum, we have derived the enhancement of the fluctuation power due to thermal noise at finite temperature. We find that, so long as the system is below a given temperature threshold (which we argue is set by the coupling between the total and relative phase degrees of freedom), the decay rate extracted from our simulations is consistent with that at zero temperature. For our proposed parameters, this threshold lies well within reach of current experiments, meaning that we should be able to empirically test the physics of quantum bubble nucleation in the near future.

Our results here rely on several simplifying assumptions, which we plan to relax in future work. In particular, we have treated the BEC system as periodic, neglecting boundary effects due to the external trapping potential. In a forthcoming companion paper, we will generalize our Bogoliubov analysis to derive the inhomogeneous vacuum fluctuations in a box trap, and investigate the impact of these boundary effects on the bubble nucleation rate. We have also neglected in our calculations the backreaction of the fluctuations onto the mean-field dynamics of the BEC, and the corresponding renormalization of the bare parameters of the effective relativistic theory. Incorporating these effects should allow for a more precise understanding of the validity of the relativistic analogy, improve the initialization and interpretation of our lattice simulations, and enable more detailed comparisons with instanton predictions. These developments will enable the first experimental tests of relativistic vacuum decay.
APPENDIX B: NUMERICAL METHODS

The evolution is generated by the Hamiltonian operator O = O_lin + O_nlin, which we have split into a linear and a nonlinear piece. Each piece has its own chemical potential, which can be chosen for convenience (e.g., to minimize sinusoidal oscillations in the homogeneous mode of the total phase), as these have no effect on the relative phase φ. The full evolution is approximated by chaining together exponentials of the two pieces, where the dimensionless coefficients a_i, b_i (i = 1, . . . , k) are chosen such that the integrator is exact to order n in the small timestep δt. Integrators of this form are symplectic, in the sense that they exactly conserve phase-space volume. We implement an efficient realization of this integrator from Yoshida [86], which uses k = 16 steps and is accurate to order n = 8.

In Fig. 10 we show convergence tests of our code for increasing spatial and temporal resolution, measuring numerical errors in terms of pointwise differences in the cosine of the relative phase field, cos(φ/φ₀). For the level of resolution used in our simulations in Secs. V and VI, we see that the maximum error is on the order of ∼10⁻⁷ prior to bubble nucleation, and at most ∼10⁻⁵ even long after bubble nucleation. This indicates that our simulations are numerically converged, even in the highly dynamical nonlinear regime.

We also test our code by checking for violations in the conservation of the Noether charges associated with the cold-atom Hamiltonian (4), which correspond to the total number of atoms and the total momentum of the system, respectively [34]. (Note that the total energy is not exactly conserved, due to the explicit time-dependence of the rf modulation term in the Hamiltonian.) As shown in Fig. 11, both charges are conserved to the level of a few parts per billion in simulations at our fiducial resolution.

FIGURE CAPTIONS

Figure 1. Lattice simulation of vacuum decay in the 1D analog system. Nonlinear interactions between fluctuations around the false vacuum (blue) lead to the nucleation of a true vacuum bubble (red), which then expands relativistically. The simulation shown here corresponds to the blue curves in Fig. 7, and conserves the Hamiltonian of the effective relativistic theory to within ∼10% (see discussion in Sec. V C).

Figure 4. The three scattering lengths a₁₁, a₂₂, a₁₂ of our proposed homonuclear ⁴¹K mixture as a function of magnetic field strength. The quoted values and gray shaded region correspond to ±2 ppm ≈ ±1.4 mG either side of the zero-crossing, as given by Ref. [59].
Figure 5. Individual random realizations of vacuum decay in our n̄ = 35 ensemble, showing how the volume-averaged cosine of the relative phase field evolves over time. Each curve corresponds to an independent simulation, which oscillates near ⟨cos(φ/φ₀)⟩_V = −1 until a true vacuum bubble nucleates, at which point the trajectory grows until it saturates near ⟨cos(φ/φ₀)⟩_V = +1. The colored curves are three randomly-selected trajectories, highlighted to illustrate the typical behavior. The black dotted line shows our empirically-determined decay threshold for this ensemble, as found using the procedure described in the main text.

Figure 6. Left panel: Survival probability for the false vacuum state as a function of time, as estimated using ensembles of 1024 simulations for each curve. We scan over the dimensionless number density n̄ = ξn to probe a broad range of decay rates. The gray shaded region (0.01 ≤ Pr(survive) ≤ 0.5) is used to fit an exponential decay rate Γ for each curve (shown as dashed lines). Right panel: Dimensionless decay rate per unit volume as a function of n̄, computed for both vacuum and white-noise initial conditions. Both curves are well-described by a linear fit, as expected from instanton calculations. The white-noise case consistently gives faster decays despite the smaller initial phase fluctuations, due to this being an excited state of the system.

Figure 7. Fractional violation of the Klein-Gordon charges (25) as a function of the dimensionless BEC number density n̄. The initial fluctuations in each simulation are identical except for an overall ∼ n̄^(−1/2) scaling. The level of violation is roughly stationary throughout each simulation, and approaches zero for large n̄, despite the nonrelativistic behavior of the system on small scales.

Figure 8. Survival probability as a function of dimensionless temperature T̃ = k_B T/(gn), for n̄ = 40. For our fiducial parameters this can be translated into a physical temperature using Eq. (27). The decay rates (extracted by fitting in the gray shaded region, shown here as dashed lines) are consistent with being temperature-independent up to roughly T̃ ≈ 0.06; beyond this point, large fluctuations in the total modes couple to the relative modes and ruin the effective relativistic picture.

Figure 9. Enhancement in the fluctuation power spectra of the total and relative phonons as a function of temperature. The vertical axis shows the ratio between the finite-temperature and zero-temperature power spectra evaluated at the minimum wavenumber k_IR = π/L ≈ 6.28 × 10⁻³ ξ⁻¹, for which the enhancement is maximized.

Figure 10. Pointwise convergence of our numerical solutions for increasing spatial and temporal resolution (left and right panels, respectively) in simulations with n̄ = 30. Each curve shows the maximum absolute pointwise difference in cos(φ/φ₀) between one solution with the stated resolution and another with double the spatial or temporal resolution, starting from identical initial conditions. The vertical dotted lines show the time of bubble nucleation in the converged simulations. Note that the resolution used in our simulations discussed in Secs. V and VI corresponds to the red curves here.

Figure 11. Relative variations in the Noether charges (B6) for the same simulation shown in the red curves of Fig. 10.
We have performed an exhaustive search of other known Feshbach resonances in homonuclear mixtures of stable bosonic isotopes of the alkali metals (⁷Li [31, 56], ²³Na [57, 58], ³⁹K [59, 60], ⁸⁵Rb [61, 62], ⁸⁷Rb [63, 64], and ¹³³Cs [65, 66]), and have not found any other interstate resonances where the condition a₁₁ ≃ a₂₂ is satisfied at the zero-crossing of a₁₂ [48].

Table I. List of fundamental and derived parameters for our proposed 1D cold-atom experiment, including the false vacuum mass m_fv = 2[ε(λ² − 1)]^(1/2) m = 0.1 m. Here u = 1.661 × 10⁻²⁷ kg is the unified atomic mass unit and a₀ = 5.292 × 10⁻¹¹ m is the Bohr radius. The scattering length a quoted here is the mean of the two intrastate scattering lengths; the difference is ∼1% (cf. Fig. 4). The number density n and scattering strength g are scanned over by varying the number of atoms of each species N and the harmonic trap frequency ω⊥ respectively, while holding the energy scale gn constant.

Evolution under each of these operators individually can be solved exactly: the nonlinear piece conserves the amplitude of each field and simply performs a local phase rotation, while the linear piece is diagonal in Fourier space, where F_{x→k} represents a Fourier transform and F⁻¹_{k→x} its inverse. (These are implemented numerically as fast Fourier transforms, so that in practice Eq. (B4) is only exact under the assumption that the fields are band-limited with maximum wavenumber less than or equal to the Nyquist frequency on the lattice.) While there is no exact solution for the evolution under O = O_lin + O_nlin from generic initial data, we can approximate this full evolution by chaining together a series of short steps with each of the individual operators,

ψ(x, t₀ + δt) = e^(−i a₁ O_lin δt/ℏ) e^(−i b₁ O_nlin δt/ℏ) × ⋯ × e^(−i a_k O_lin δt/ℏ) e^(−i b_k O_nlin δt/ℏ) ψ(x, t₀) + O(δt^(n+1)).

In particular, the interspecies coupling in O_lin rotates the two components in Fourier space,

ψ(x, t) = F⁻¹_{k→x} { [ cos R(t, t₀)   i sin R(t, t₀) ; i sin R(t, t₀)   cos R(t, t₀) ] F_{x→k}{ψ(x, t₀)} },
with R(t, t₀) = εgn (t − t₀)/ℏ + λ (ε/2)^(1/2) [sin(ωt) − sin(ωt₀)], (B4)

up to single-particle kinetic phases that are also diagonal in Fourier space.
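The scheme above can be illustrated with a minimal split-step sketch. This uses a second-order Strang splitting (a₁ = a₂ = 1/2, b₁ = 1) rather than the k = 16, order-8 Yoshida coefficients, and a single-species cubic nonlinearity as a stand-in for the full two-species Hamiltonian (4); it is a sketch under those assumptions, not the paper's solver.

```python
import numpy as np

# Minimal Strang split-step integrator: exact kinetic half-steps in Fourier
# space around an exact nonlinear (local phase-rotation) step.  Code units:
# hbar = m = g = 1.
L, Nx, dt = 500.0, 2048, 5.77e-4
x = np.linspace(0, L, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)

psi = np.ones(Nx, dtype=complex) + 1e-3 * np.random.default_rng(3).normal(size=Nx)

def step(psi, dt):
    # half kinetic step: exact in Fourier space (linear piece)
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt / 2) * np.fft.fft(psi))
    # full nonlinear step: conserves |psi| and rotates the local phase
    psi = np.exp(-1j * np.abs(psi) ** 2 * dt) * psi
    # second half kinetic step
    return np.fft.ifft(np.exp(-0.5j * k**2 * dt / 2) * np.fft.fft(psi))

n0 = np.sum(np.abs(psi) ** 2)
for _ in range(100):
    psi = step(psi, dt)
# Each substep is unitary, so atom number is conserved to machine precision,
# mirroring the parts-per-billion charge conservation reported in Appendix B:
print("relative atom-number drift:", abs(np.sum(np.abs(psi)**2) - n0) / n0)
```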
Iterative Refinement of Cellular Identity from Single-Cell Data Using Online Learning Recent experimental advances have enabled high-throughput single-cell measurement of gene expression, chromatin accessibility and DNA methylation. We previously used integrative non-negative matrix factorization (iNMF) to jointly learn interpretable low-dimensional representations from multiple single-cell datasets using dataset-specific and shared metagene factors. These factors provide a principled, quantitative definition of cellular identity and how it varies across biological contexts. However, datasets exceeding 1 million cells are now widely available, creating computational barriers to scientific discovery. For instance, it is no longer feasible to analyze large datasets using standard pipelines on a personal computer with limited memory capacity. Moreover, there is a need for an algorithm capable of iteratively refining the definition of cellular identity as efforts to create a comprehensive human cell atlas continually sequence new cells. To address these challenges, we developed an online learning algorithm for integrating large and continually arriving single-cell datasets. We extended previous online learning approaches for NMF to minimize the expected cost of a surrogate function for the iNMF objective. We also derived a novel hierarchical alternating least squares algorithm for iNMF and incorporated it into an efficient online algorithm. Our online approach accesses the training data as mini-batches, decoupling memory usage from dataset size and allowing on-the-fly incorporation of new datasets as they are generated. The online implementation of iNMF converges much more quickly using a fraction of the memory required for the batch implementation, without sacrificing solution quality. Our new approach processes 1.3 million single cells from the entire mouse embryo on a laptop in 25 minutes using less than 500 MB of RAM. We also analyze large datasets without downloading them to disk by streaming them over the internet on demand. Furthermore, we construct a single-cell multi-omic cell atlas of the mouse motor cortex by iteratively incorporating eight single-cell RNA-seq, single-nucleus RNA-seq, single-nucleus ATAC-seq, and single-nucleus DNA methylation datasets generated by the BRAIN Initiative Cell Census Network. Our approach obviates the need to recompute results each time additional cells are sequenced, dramatically increases convergence speed, and allows processing of datasets too large to fit in memory or on disk. Most importantly, it facilitates continual refinement of cell identity as new single-cell datasets from different biological contexts and data modalities are generated. Quantitative Definition of Cell Identity from Single-Cell Data Defining cellular identity is foundational to a genomic approach to medicine, because discovering what goes wrong in disease requires a reference map of the molecular states of healthy cells. Cells have long been qualitatively characterized by a combination of features such as morphology, presence or absence of cell surface proteins, and broad function [1]. Recently, high-throughput single-cell sequencing technologies have enabled researchers to profile multiple molecular modalities, including gene expression, chromatin accessibility and DNA methylation [2]. Integrating multiple single-cell modalities offers tremendous opportunities for unbiased, comprehensive, quantitative definition of discrete cell types and continuous cell states. 
The resulting catalog of normal cell types promises to revolutionize fields like neuroscience, developmental biology, and physiology [3]. Furthermore, knowing the molecular profiles of normal cell types points to biochemical mechanisms by which genetic and environmental factors cause disease. Multiple features contribute to cell identity, including gene expression, epigenomic modifications, and spatial location within a tissue, but it is not currently possible to simultaneously measure all of these quantities within the same single cells. Experimental methods for assaying the transcriptome and epigenome from the same single cells have been demonstrated, but have not been widely adopted due to significant limitations in data quality and/or scalability. Large-scale gene expression, chromatin accessibility, DNA methylation, chromatin conformation, and spatial transcriptomic measurements of different individual cells are now widely available, but these features have generally been used separately to identify cell clusters representing putative cell types, and it is critical to investigate how these different molecular features of cell identity are related [4].

The Need for Scalable Integration of Single-Cell Data

Single-cell data integration thus represents a crucial step toward enabling quantitative definition of cell identity, but existing computational approaches do not address this need. Three unique aspects make single-cell integration challenging: (1) unlike bulk multi-omic data, only one modality is generally available from each single cell; (2) the cell type proportions present in each sample may differ significantly; and (3) the number of samples (n) per dataset is large and rapidly growing. Currently, scRNA-seq datasets are growing more rapidly than other single-cell data modalities, but we also anticipate rapid growth in the scale of these other data modalities in the near future. Indeed, a recent study assayed more than 100,000 cells with single-cell ATAC-seq [5], and a recent spatial transcriptomic study assayed 1 million cells [6]. A key insight motivating our approach is that techniques for so-called "online learning" [7], in which calculations are performed on-the-fly as new datasets continuously become available (as in many internet applications), provide a path to scalable single-cell data integration. Several recent single-cell data integration approaches have been developed, including Seurat v3, Harmony, and Scanorama [2, 8, 9], but these approaches are not designed to integrate multiple data types and/or have difficulty scaling to massive datasets. Furthermore, none of these existing methods can incorporate new data, but instead must recalculate results each time new datasets arrive. We address these limitations by developing online iNMF, an algorithm that allows scalable and iterative single-cell multi-omic integration.

Integrative Nonnegative Matrix Factorization

In this paper, we build upon the nonnegative matrix factorization approach at the heart of our recently published LIGER algorithm [10] to develop an online learning algorithm. The intuition behind LIGER is to jointly infer a set of latent factors ("metagenes") that represent the same biological signals in each dataset, while also retaining the ways in which these signals differ across datasets. These shared and dataset-specific factors can then be used to jointly identify cell types and states, while also identifying and retaining cell-type-specific differences in the metagene features that define cell identities.
LIGER starts with two or more single-cell datasets, which may be scRNA-seq experiments across different individuals, time points, or species. The inputs to LIGER may even be measurements from different molecular modalities, such as single-cell epigenome data or spatial transcriptomic data that assay a common set of genes. LIGER relies upon integrative nonnegative matrix factorization (iNMF) [4], which solves the following optimization problem:

arg min_{W, V_i, H_i ≥ 0} Σ_{i=1}^N ‖X_i − (W + V_i) H_i‖_F² + λ Σ_{i=1}^N ‖V_i H_i‖_F², (1)

to jointly factorize N datasets (each consisting of a genes × cells matrix X_i), inferring both shared (W) and dataset-specific (V_i) metagene matrices, along with the corresponding cell factor loadings H_i (Fig. 1a). Each factor, or metagene, represents a distinct pattern of gene co-regulation, often corresponding to biologically interpretable signals, such as the genes that define a particular cell type. The dataset-specific metagenes (V_i) allow robust representation of highly divergent datasets; the factorization can even accommodate missing cell types. Thus, these shared and dataset-specific metagenes can be used to quantitatively define cell identity across biological contexts in terms of inferred co-expressed gene sets or biological pathways.

Online Matrix Factorization

Since its proposal by Lee and Seung, nonnegative matrix factorization (NMF) has been widely used to learn interpretable representations of high-dimensional data [11]. NMF is a non-convex optimization problem, so the strongest possible convergence guarantee for an NMF algorithm is that it converges to a local minimum of the objective function. However, the widely used multiplicative update algorithm has no such theoretical convergence guarantee and shows slow convergence in practice. More efficient NMF algorithms based on block coordinate descent, including alternating nonnegative least squares (ANLS) and hierarchical alternating least squares (HALS), have been developed [12]; these both converge rapidly in practice and are theoretically guaranteed to converge to a local minimum. Nevertheless, even these approaches are not able to efficiently handle large and streaming inputs such as images and videos (which often arise in web applications, hence the name "online learning") [13]. As opposed to a batch learning algorithm, an online learning algorithm accesses the data only as single data points or mini-batches and continually updates the basis elements (metagenes in our context). An online NMF algorithm was previously developed with guaranteed convergence to a local minimum; this approach showed strong empirical performance and extremely fast convergence compared to batch NMF [7].

Online iNMF

In this study, we extend the online NMF approach of Mairal et al. [7] to make it suitable for iNMF. Online iNMF provides two significant advantages: (1) integration of large multi-modal datasets by cycling through the data multiple times in small mini-batches, and (2) integration of continually arriving datasets, where the entire dataset is not available at any point during training (Fig. 1). We envision using online iNMF to integrate single-cell datasets in three different scenarios (Fig. 1). In scenario 1, where the datasets are large and fully observed, the algorithm accesses mini-batches from all datasets at the same time and repeatedly updates the metagenes and cell factor loadings. Each cell can be revisited throughout multiple epochs of training (Fig. 1b). A key advantage of scenario 1 (compared to batch iNMF) is that only a single mini-batch needs to be in memory at a time.
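To make the reconstructed objective (1) concrete, here is a small Python sketch that evaluates it directly (the published implementation is in R/Rcpp); the matrix shapes follow the conventions above, and the toy dimensions are illustrative assumptions.

```python
import numpy as np

# Evaluate the iNMF objective: each dataset X_i (genes x cells) is
# approximated by (W + V_i) H_i, with a penalty lambda * ||V_i H_i||_F^2
# discouraging unnecessary use of the dataset-specific metagenes.
def inmf_objective(Xs, W, Vs, Hs, lam=5.0):
    total = 0.0
    for X, V, H in zip(Xs, Vs, Hs):
        recon = (W + V) @ H                     # genes x cells reconstruction
        total += np.linalg.norm(X - recon, "fro") ** 2
        total += lam * np.linalg.norm(V @ H, "fro") ** 2
    return total

# Toy usage: two datasets, m genes, K factors.
rng = np.random.default_rng(0)
m, K, n1, n2 = 50, 5, 30, 40
Xs = [rng.random((m, n1)), rng.random((m, n2))]
W = rng.random((m, K))
Vs = [0.1 * rng.random((m, K)) for _ in Xs]
Hs = [0.1 * rng.random((K, n)) for n in (n1, n2)]
print(f"objective = {inmf_objective(Xs, W, Vs, Hs):.2f}")
```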
Scenario 1 also allows processing of large datasets without even downloading them to disk, by streaming them over the internet. In scenario 2, the input datasets arrive sequentially, and the online algorithm uses each cell exactly once to update the metagenes, without revisiting data already seen (Fig. 1c). The key advantage of scenario 2 is that the factorization is efficiently refined as new data arrive, without requiring expensive recalculation each time. A third scenario allows us to project new data into a latent space already learned, without using the new data to update the metagenes. In scenario 3, we first use online iNMF to learn metagenes as in scenario 1 or scenario 2. Then, we use the shared metagenes (W) to calculate cell factor loadings for a new dataset, without using the new data to update the metagenes. This third scenario is highly efficient at incorporating data, allows users to query their data against a curated reference, and provides increased robustness to dataset differences in newly arriving data (Fig. 1d).

Figure 1. Overview of the online iNMF algorithm. a, Schematic of integrative nonnegative matrix factorization (iNMF): the input single-cell datasets are jointly decomposed into shared (W) and dataset-specific (V_i) metagenes and corresponding "metagene expression levels" or cell factor loadings (H_i). These metagenes and their corresponding cell factor loadings provide a quantitative definition of cell identity and how it varies across biological settings. b-d, Three different scenarios in which online learning can be used for single-cell data integration. (b) Scenario 1: the single-cell datasets are large but fully observed. Online iNMF processes the data in random mini-batches, enabling memory usage and/or disk storage independent of dataset size. Each cell may be used multiple times in different "epochs" of training to update the metagenes. (c) Scenario 2: the datasets arrive sequentially, and online iNMF processes the datasets as they arrive, using each cell to update the metagenes exactly once. (d) Scenario 3: online iNMF is performed as in scenario 1 or scenario 2 to learn W and V. Then cell factor loadings for the newly arriving dataset are calculated using the shared metagenes (W) learned from previously processed datasets. The new dataset is not used to update the metagenes.

Derivation of Online Algorithm for iNMF

In our previous implementation of iNMF [10], we derived an ANLS algorithm to solve for H_i, W, and V_i. Briefly, ANLS optimizes the iNMF objective by iteratively solving a nonnegative least squares problem to update each of the matrices (H_i, W, V_i) holding the others fixed. For example, the update for H_i (i ∈ {1, . . . , N}) is:

H_i = arg min_{H ≥ 0} ‖X_i − (W + V_i) H‖_F² + λ ‖V_i H‖_F². (2)

This is a convex nonnegativity-constrained least squares problem that can be solved efficiently using the block principal pivoting algorithm [14]. This ANLS algorithm is guaranteed to converge to a local minimum, and we previously showed that it outperforms the multiplicative updates in practice [10]. We use this strategy for computing the cell factor loadings (H) in the online iNMF algorithm. Another type of NMF algorithm, hierarchical alternating least squares (HALS), also provides guaranteed convergence to a local minimum, but often shows more efficient convergence in practice [12]. Thus, we sought to derive a novel HALS algorithm for optimizing the iNMF objective.
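The ANLS subproblem (2) can be illustrated with a small sketch. The paper solves it with block principal pivoting; here scipy's slower active-set NNLS solver stands in, with the λ penalty handled by stacking √λ·V_i beneath W + V_i.

```python
import numpy as np
from scipy.optimize import nnls

# min_{h >= 0} ||x - (W+V)h||^2 + lam*||Vh||^2, solved per cell by stacking:
# the augmented system ||[x; 0] - [W+V; sqrt(lam)*V] h||^2 is equivalent.
def update_H(X, W, V, lam):
    m, k = W.shape
    A = np.vstack([W + V, np.sqrt(lam) * V])       # (2m x k) stacked design
    H = np.zeros((k, X.shape[1]))
    zeros = np.zeros(m)
    for j in range(X.shape[1]):                    # one NNLS per cell
        b = np.concatenate([X[:, j], zeros])
        H[:, j], _ = nnls(A, b)
    return H

# Toy usage: recover loadings for data generated from known factors.
rng = np.random.default_rng(1)
m, k, n = 60, 4, 25
W, V = rng.random((m, k)), 0.1 * rng.random((m, k))
H_true = rng.random((k, n))
X = (W + V) @ H_true
H = update_H(X, W, V, lam=5.0)
print("reconstruction error:", np.linalg.norm(X - (W + V) @ H))
```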
A HALS derivation proceeds by rewriting the objective function as a sum of rank-one approximations (one for each of the K inner dimensions of the factorization), then deriving a closed-form solution for each of the K basis vectors holding the others fixed. In the case of iNMF, this can be considered a block coordinate descent strategy with (2N + 1)K vector blocks. For example, for V_{i,j}, the jth column of the dataset-specific metagene matrix V_i, we solve the corresponding single-column optimization problem by taking the derivative with respect to each element of V_{i,j}, setting it equal to 0, and solving subject to the nonnegativity constraints. Applying the same process to all elements of V_{i,j} yields the following update for the jth column of V_i:

V_{i,j} ← [ V_{i,j} + (X_i H_iᵀ − (W + (1 + λ)V_i) H_i H_iᵀ)_j / ((1 + λ)(H_i H_iᵀ)_{jj}) ]_+, (5)

where (·)_j denotes the jth column and [x]_+ = max{10⁻¹⁶, x}. A similar derivation for W gives:

W_j ← [ W_j + (Σ_{i=1}^N (X_i H_iᵀ − (W + V_i) H_i H_iᵀ))_j / (Σ_{i=1}^N (H_i H_iᵀ)_{jj}) ]_+. (6)

In the online iNMF algorithm, both the shared and dataset-specific metagenes are refined by applying the HALS updates for W and V_i to a different mini-batch during each iteration.

Optimizing a Surrogate Function for iNMF

We developed an online learning algorithm for integrative nonnegative matrix factorization by adapting a previously published strategy for online NMF [7]. The key innovation that makes it possible to perform online learning is to optimize a "surrogate function" that asymptotically converges to the same solution as the original iNMF objective. We can formulate NMF using the following objective function:

f_t(W) = (1/t) Σ_{s=1}^t min_{h ≥ 0} ‖x_s − W h‖₂²,

where W and H are constrained to be nonnegative. The original online NMF paper proved that the following surrogate function, in which each h_s is fixed at the value computed when x_s was first processed, converges almost surely to a local minimum as t → ∞:

f̂_t(H, W) = (1/t) Σ_{s=1}^t ‖x_s − W h_s‖₂²,

where H and W are constrained to be nonnegative. We can then perform NMF in an online fashion by iteratively minimizing the expected cost f̂_t(H, W) as new data points x_t (or points randomly sampled from a large fixed training set) arrive. Intuitively, this strategy allows online learning because it expresses a formula for incorporating a new observation x_t given the factorization result (W^(t−1), H^(t−1)) for previously seen data points. Thus, we can iterate over the data points one-by-one or in "mini-batches", and also rapidly update the factorization when new data points arrive.

For iNMF, where we have N data matrices X_1, ..., X_N and data points x_i, the iNMF objective function is given by (1). The corresponding surrogate function is:

f̂_t(H, W, V_1, …, V_N) = (1/t) Σ_{i=1}^t [ ‖x_i − (W + V_{d_i}) h_i‖₂² + λ ‖V_{d_i} h_i‖₂² ],

where d_i indicates which dataset the ith data point belongs to. For a new data point (or mini-batch of new data points) x_t, we first compute the corresponding cell factor values h_{d_t}. In the original online NMF paper [7], the authors used a least-angle regression algorithm (LARS). We chose to use the ANLS update (2) instead because it is highly efficient, is designed specifically for NMF (rather than dictionary learning in general), and solves the subproblem exactly in a single iteration. To update the shared (W) and dataset-specific (V_i) factors, we use the HALS updates (5) and (6), which are analogous to the updates used by Mairal et al. [7], but derived specifically for iNMF. Because the updates for W and V_i depend on all of the previously seen data points X_i and their cell factor loadings H_i, a naive implementation would require storing all of the data and cell factor loadings in memory. However, the HALS updates (5) and (6) depend on X_i and H_i only through the matrix products A_i = H_i H_iᵀ and B_i = X_i H_iᵀ.
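A sketch of the HALS updates (5)-(6) in terms of the sufficient statistics A_i and B_i just introduced; the helper names are illustrative, and the clipping constant plays the role of [x]_+.

```python
import numpy as np

# HALS updates for W and V_i using A_i = H_i H_i^T (K x K) and
# B_i = X_i H_i^T (m x K), as in the reconstructed Eqs. (5)-(6).
EPS = 1e-16

def hals_update(W, Vs, As, Bs, lam):
    K = W.shape[1]
    for j in range(K):                                 # one column at a time
        # shared metagene column j (Eq. 6):
        num = sum(B[:, j] - (W + V) @ A[:, j] for V, A, B in zip(Vs, As, Bs))
        den = sum(A[j, j] for A in As)
        W[:, j] = np.maximum(W[:, j] + num / den, EPS)
        # dataset-specific columns j (Eq. 5):
        for V, A, B in zip(Vs, As, Bs):
            num_v = B[:, j] - (W + (1 + lam) * V) @ A[:, j]
            V[:, j] = np.maximum(V[:, j] + num_v / ((1 + lam) * A[j, j]), EPS)
    return W, Vs

# Toy usage with random sufficient statistics:
rng = np.random.default_rng(2)
m, K = 30, 4
W = rng.random((m, K))
Vs = [0.1 * rng.random((m, K)) for _ in range(2)]
Hs = [rng.random((K, 20)) for _ in range(2)]
Xs = [(W + V) @ H for V, H in zip(Vs, Hs)]
As = [H @ H.T for H in Hs]
Bs = [X @ H.T for X, H in zip(Xs, Hs)]
W, Vs = hals_update(W, Vs, As, Bs, lam=5.0)
print("updated W column norms:", np.round(np.linalg.norm(W, axis=0), 3))
```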
These matrix products have only K² and mK elements, respectively, allowing efficient storage, and can be computed incrementally with the incorporation of each new data point or mini-batch x_t:

A_i ← A_i + h_t h_tᵀ,  B_i ← B_i + x_t h_tᵀ.

Implementation

We implemented online iNMF according to Algorithm 1 below. We used our previous Rcpp implementation of the block principal pivoting algorithm to calculate the ANLS updates for h_i. We implemented the HALS updates for W and V_i using native R, since the updates require only matrix operations, which are highly optimized in R. Because the online algorithm does not require all of the data on each iteration (only a single data point or fixed-size mini-batch), we used the rhdf5 package [15] to load each mini-batch from disk on the fly. By creating HDF5 files with chunk size no larger than the mini-batch size, we were able to create a memory-efficient implementation that never loads more than a single mini-batch of the data from disk at once. In fact, we can even go a step further and analyze datasets that are not stored on the same physical hard drive as the machine performing iNMF. We show below that it is possible to analyze data by streaming it over the internet without downloading the entire dataset onto the disk.

For scenario 1, in which the mini-batch size specifies the total number of cells to be processed per iteration across all datasets, we sample p_i cells from each dataset i, proportional to its dataset size. Thus, each mini-batch in scenario 1 is composed of a representative sample of cells from all datasets. For scenario 2, in which only one dataset is available at a time, we sample the entire mini-batch from the current dataset. For a mini-batch size of 5,000 cells, reading each mini-batch from disk added minimal overhead (less than 0.35 seconds per iteration) (Fig. S1). We also employed two heuristics that were used in the original online NMF paper: (1) we initialized the dataset-specific metagenes using K cells randomly sampled from the corresponding input data, and (2) we removed information older than two epochs from the matrices A and B.

Algorithm 1: Online learning for integrative nonnegative matrix factorization
1: Input: datasets X_i, i ∈ 1, . . . , N
2: Initialize elements of W using unif(0, 2); initialize V_i using randomly sampled cells from X_i, i ∈ 1, . . . , N
3: for t = 1 to T do
4:   Sample a mini-batch x_i of size p_i from X_i, i ∈ 1, . . . , N
5:   Compute h_i using ANLS, i ∈ 1, . . . , N: h_i = arg min_{h ≥ 0} ‖x_i − (W + V_i)h‖² + λ‖V_i h‖²
6:   Update A_i ← A_i + h_i h_iᵀ; discard information older than 2 epochs
7:   Update B_i ← B_i + x_i h_iᵀ; discard information older than 2 epochs
8:   Update W and V_i using the HALS updates (5) and (6)
9: end for
10: Solve for H_i using ANLS with the final W and V_i, i ∈ 1, . . . , N
11: return W, V_i, H_i, i ∈ 1, . . . , N
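A condensed sketch of the main loop of Algorithm 1, reusing the update_H and hals_update helpers from the sketches above. In-memory arrays stand in for the HDF5-backed mini-batch reads of the real implementation (rhdf5 in R; h5py would be the Python analog), and the two-epoch forgetting heuristic is omitted for brevity.

```python
import numpy as np

def online_inmf(Xs, K=4, lam=5.0, T=50, batch=8, seed=0):
    rng = np.random.default_rng(seed)
    m = Xs[0].shape[0]
    W = rng.uniform(0, 2, (m, K))                    # unif(0,2) initialization
    # V_i initialized from K randomly sampled cells of each dataset:
    Vs = [X[:, rng.choice(X.shape[1], K, replace=False)].copy() for X in Xs]
    As = [np.zeros((K, K)) for _ in Xs]              # sufficient statistics
    Bs = [np.zeros((m, K)) for _ in Xs]
    for _ in range(T):
        for i, X in enumerate(Xs):
            # scenario 1: sample p_i cells from each dataset per iteration
            cols = rng.choice(X.shape[1], batch, replace=False)
            x = X[:, cols]                           # <- would be an HDF5 read
            h = update_H(x, W, Vs[i], lam)           # ANLS for the mini-batch
            As[i] += h @ h.T                         # incremental updates:
            Bs[i] += x @ h.T                         # no raw data is retained
        W, Vs = hals_update(W, Vs, As, Bs, lam)      # refine the metagenes
    return W, Vs
```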
Online iNMF Converges Efficiently Without Loss of Accuracy Compared to Batch iNMF

We first assessed the performance of our online iNMF algorithm using two scRNA-seq datasets from the frontal cortex (n = 156,167 cells) and posterior cortex (n = 99,186 cells) of the adult mouse, with 1,111 highly variable genes. These datasets are part of the adult mouse brain atlas recently published by Saunders et al. [16]. Because the online algorithm optimizes the expected cost, we tracked the value of the iNMF objective on both the training data (80% of the entire dataset) and a held-out testing set (20% of the entire dataset) not seen during training. The original online NMF paper also used this evaluation strategy [7]. For this experiment, we set the number of factors (K) and the tuning parameter λ to 40 and 5, respectively. We compared online iNMF against the previously published batch methods for iNMF, including multiplicative updates (Mult) [4] and alternating nonnegative least squares (ANLS) [10]. We calculated H_frontal and H_posterior for the testing set using the W, V_frontal, and V_posterior learned on the training set, then used these values to compute the objective on either the training or testing set.

The online iNMF algorithm (mini-batch size = 5,000) converges much faster than ANLS and multiplicative updates on both the training and held-out sets (Fig. 2a-b). Within approximately 500 seconds of runtime, the online approach achieves a significantly lower training iNMF objective (Fig. 2c). Online iNMF also shows superior performance on several other datasets from different biological contexts (Fig. S2). Furthermore, the convergence behavior of the online algorithm on both training and test sets is relatively insensitive to the mini-batch size (Fig. 2d-e). For mini-batch sizes from 1,000 to 10,000, the convergence behavior is nearly identical. As the mini-batch size approaches the training dataset size (150,000 or 200,000), the first few iterations take considerably longer, slowing the convergence time, but the final objective remains unchanged. Moreover, for a fixed test set, the runtime needed to reach convergence remains relatively constant once the total number of cells exceeds some minimum threshold (around 50,000, in this case) (Fig. 2f). This behavior likely occurs because, for a cell population of fixed complexity (for example, a tissue containing 12 cell types), only some fixed number of observations is required to effectively learn the metagenes. Thus, using the entire dataset to update the shared and dataset-specific metagenes at each iteration becomes increasingly inefficient as the dataset size exceeds the minimum threshold size needed to learn the metagenes. Conversely, the relative efficiency of online iNMF compared to batch methods increases with dataset size. Since ANLS is the batch algorithm that we previously developed for solving the iNMF optimization problem, we refer to it as batch iNMF in subsequent analyses.

We next investigated whether online iNMF yields similar dataset alignment and cluster preservation to batch iNMF. We applied both online iNMF and batch iNMF to three scRNA-seq data collections, followed by quantile normalization of the cell factor loadings. Besides the mouse cortex datasets mentioned above (n = 255,353 cells in total), the other two collections are human peripheral blood mononuclear cells (PBMCs, n = 13,999 cells) and human pancreatic islets (pancreas, n = 14,890 cells). The PBMC collection consists of 7,451 interferon-β (IFNB)-stimulated PBMCs and 6,548 control PBMCs. The human pancreas collection includes eight separate datasets across five sequencing technologies (SMART-Seq2, Fluidigm C1, CEL-Seq, CEL-Seq2, and inDrops). For both the PBMC and pancreas collections, we selected 2,000 variable genes for analysis. We then created UMAP visualizations of the resulting factor loadings, colored by dataset and published cell type labels (Fig. 3). These plots allow visual and qualitative assessment of dataset alignment and data structure preservation. As the figure shows, the online iNMF algorithm yields visualizations that are very similar to batch iNMF, suggesting nearly identical dataset alignment and accurate preservation of the original cluster structure for all three data collections.
Fig. 2. Convergence behavior for online iNMF and two previously published batch iNMF algorithms on scRNA-seq data from the adult mouse cortex. a, b, The online iNMF algorithm converges much more rapidly to a similar or better objective function value than the previously published batch methods, alternating nonnegative least squares (ANLS) and multiplicative updates (Mult), on both training and testing sets. c, The online iNMF algorithm reaches a significantly lower objective function value within the same amount of training time compared to the batch methods. d, e, The convergence behavior of online iNMF is nearly identical for mini-batch sizes from 1,000 to 10,000. f, The online iNMF algorithm becomes increasingly efficient (in terms of decrease in objective function value per unit time) as dataset size increases. The time required for the algorithm to converge does not significantly increase with growing dataset size once the dataset size exceeds 50,000 cells.

Online iNMF Yields State-of-the-Art Single-Cell Data Integration Results Using Significantly Less Time and Memory

We next benchmarked online iNMF (scenario 1) against batch iNMF [10] and two state-of-the-art single-cell data integration methods, Seurat v3 [17] and Harmony [8]. We selected these methods for comparison because a recent paper benchmarked 14 single-cell data integration methods and found that Harmony, Seurat v3 (hereafter referred to as Seurat), and LIGER consistently achieved the best dataset alignment and cluster preservation on a range of datasets [18]. The Harmony algorithm starts from an initial, uncorrected PCA embedding, then integrates scRNA-seq datasets through an iterative process of soft clustering and cluster-specific correction that optimizes for both dataset alignment and cluster separation. The core of the latest Seurat algorithm is canonical correlation analysis (CCA) followed by identification of mutual nearest neighbors ("anchors"); Seurat uses these anchors to calculate batch correction vectors that align the corresponding cells across datasets. Harmony, Seurat, and batch iNMF all require the entire dataset to be stored in memory, so we expected online iNMF to offer substantial improvements in both time and memory usage.

To benchmark time and memory usage, we generated five datasets of increasing size (ranging from 10,000 to 255,353 cells in total) sampled from the same adult mouse frontal and posterior cortex data. We then used them to compare the runtime and peak memory usage of online iNMF (mini-batch size = 5,000) and the other methods (Fig. 4a). As expected, the runtime required for online iNMF does not increase significantly as the dataset size grows, and its memory usage is constant. Online iNMF is also the fastest method overall, with Harmony the second fastest. Notably, the gap between Harmony and online iNMF widens as the dataset size increases; we also ran both methods on a dataset of 1.3 million cells from the mouse embryo and found that online iNMF finished in 25 minutes using 500 MB of RAM (see below for details), whereas Harmony required 92 minutes and 111 GB of RAM. Seurat and batch iNMF are significantly slower than online iNMF and Harmony on the mouse cortex data, and the runtime of Seurat increases the most rapidly of any method. Furthermore, the online iNMF algorithm uses far less memory than any other approach, with memory usage completely independent of total dataset size.
By design, online iNMF processes a fixed number of cells during each iteration, which can be chosen based on the available computing resources. The peak memory usage of online iNMF with a mini-batch size of 5,000 and K = 40 factors is approximately 360 MB, no matter how many cells are processed in total. In contrast, batch iNMF, Harmony, and Seurat use increasing amounts of memory as the total number of cells increases. Batch iNMF and Harmony display a similar linear relationship between memory usage and dataset size and use much less memory than Seurat. The memory required by Seurat increases exceedingly rapidly with the number of input cells (38 GB of memory is needed to analyze 100,000 cells). In short, all three methods besides online iNMF require all of the cells to be loaded in memory, and thus their memory usage grows with dataset size.

Next, we quantified the data integration and cluster preservation performance of online iNMF and the other methods (Fig. 4b-c). Following the benchmarking strategy used by Tran et al., we assessed both alignment performance and cluster preservation performance, each measured using two metrics. For all experiments, we ran both online iNMF and batch iNMF five times, ensuring that the comparisons account for variation due to different initializations. We used the same number of dimensions for all four approaches in each comparison (20 dimensions for PBMCs and 40 for pancreas). To quantify dataset alignment, we used the k-nearest-neighbor batch-effect test (kBET) metric [19] and the alignment score of Butler et al. [20]. kBET uses a chi-square statistic to test the null hypothesis that the batch-label composition of each cell's nearest-neighbor set matches the global batch composition; thus, a higher average p-value indicates better data integration. The alignment score examines the level of mixing in the local neighborhood of each cell; proper batch correction produces an alignment score near 1, while uncorrected datasets have an alignment score near 0. To assess clustering performance, we applied Louvain community detection with the same parameters to the aligned latent spaces obtained by all methods. We then compared the resulting clusters with the published cluster assignments using cluster purity and the adjusted Rand index (ARI).

Our results show that online iNMF performs as well as or better than the state-of-the-art methods. The online and batch iNMF algorithms align the PBMC and pancreas collections equally well, outperforming Harmony and Seurat. Furthermore, the online algorithm achieves scores close to batch iNMF on both data collections, confirming that the gain in computational efficiency does not come at the cost of accuracy in the embedding. The difference between iNMF and the other methods is especially pronounced for the kBET metric. We suspect that this difference occurs because our approach includes quantile normalization, which is a stronger alignment step than the strategies used by Harmony or Seurat. Consistent with our results, the benchmark of Tran et al., which also included the pancreas collection, found that LIGER (batch iNMF) gave substantially higher kBET values than competing methods [18]. The online and batch iNMF algorithms produce clustering results comparable to the other approaches, although Harmony gives slightly higher cluster purity and adjusted Rand index. Overall, this analysis indicates that the time and memory efficiency of LIGER does not sacrifice result quality.
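For reference, the two cluster-preservation metrics are simple to compute once cluster labels are in hand. Below is a small R sketch, assuming `joint` holds the Louvain clusters from the aligned latent space and `published` the published cell type labels (both hypothetical names); the ARI comes from the mclust package.

```r
library(mclust)   # provides adjustedRandIndex()

# Purity: each joint cluster is credited with its dominant published
# label; the score is the fraction of cells so matched.
cluster_purity <- function(joint, published) {
  tab <- table(joint, published)
  sum(apply(tab, 1, max)) / length(joint)
}

# Toy usage with made-up labels:
joint     <- c(1, 1, 1, 2, 2, 3, 3, 3)
published <- c("A", "A", "B", "B", "B", "C", "C", "A")
cluster_purity(joint, published)        # 0.75
adjustedRandIndex(joint, published)     # ARI in [-1, 1]; 1 = identical
```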
We also compared the performance of online iNMF, Seurat, and Harmony when integrating two datasets of different modalities (Fig. S3). We used single-nucleus RNA-seq (n = 101,647 cells) and single-nucleus ATAC-seq (n = 54,844 cells) data generated from the mouse primary motor cortex (see the next section for more details). Harmony showed the worst alignment performance, possibly because this approach, unlike LIGER and Seurat, was not originally designed for multi-modal integration. In contrast, both LIGER and Seurat produced UMAP visualizations indicating successful alignment of RNA and ATAC data. However, the kBET and alignment metrics indicate that LIGER removes the residual dataset differences much more thoroughly than either Seurat or Harmony.

Online iNMF Rapidly Factorizes Large Datasets Using Fixed Memory

To demonstrate the scalability of online iNMF, we applied the algorithm following scenario 1 to the scRNA-seq data of Saunders et al., which contains 691,962 cells sampled from nine regions (stored in nine individual datasets) spanning the entire mouse brain. We identified 2,384 genes that are highly variable in at least one of the regions. Using these genes, we performed 3 epochs of iNMF with a mini-batch size of 5,000, K = 40, and λ = 5. We found that quantile normalization was not necessary for this collection; iNMF alone was sufficient to integrate the datasets. Using online iNMF, we factorized the entire dataset in 24 minutes on a MacBook Pro (Intel i7 processor) using about 350 MB of RAM. We note that the published analysis by Saunders et al. did not analyze all nine tissues simultaneously due to computational limitations. Furthermore, we estimate (based on the data in Fig. 4a) that performing this analysis using our previous batch iNMF (ANLS) algorithm would have taken 3 hours and required over 20 GB of RAM.

We then visualized the embedded cells, colored by published cell type labels, in the first and second UMAP coordinates, separately for all nine mouse brain regions (Fig. 5a). The online iNMF algorithm clearly retains the data structure, as the cells within each class group well together. For cells broadly labeled as neurons, the distribution varies across regions, indicating neuronal subtypes specialized to different parts of the brain. For example, neurogenic cell populations are identified predominantly in the hippocampus and striatum, consistent with reports of hippocampal and striatal neurogenesis in adult mammals [16,21,22]. Additionally, we used the factorization to group the cells into 40 clusters by assigning each cell to the factor on which it has the largest loading. We then examined differences in the regional proportions of each cell cluster. Neurons and oligodendrocytes show the most regional variation in composition, consistent with previous analyses [23]. The total proportion of oligodendrocytes varies by region, but individual subtypes of oligodendrocytes are not region-specific, as expected. In contrast, individual subtypes of neurons are highly region-specific, reflecting diverse regional specializations in neuronal function (Fig. 5b). We also investigated the biological properties of our cell factor loadings. Reassuringly, our cluster assignments largely represent subtypes within the broad cell classes and do not span class boundaries. As expected, neurons show by far the most diversity, with eight subclusters. In contrast, ependymal cells, macrophages, microglia, and mitotic cells each correspond to only a single cluster (Fig. 5c).
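The fixed memory footprint in these analyses comes from never materializing more than one mini-batch. As a minimal illustration of the on-disk access pattern, the sketch below reads a random mini-batch of cells from an HDF5 file with rhdf5; the file name, dataset path, and genes x cells layout are assumptions for this example.

```r
library(rhdf5)

# Read one mini-batch of cells from disk without loading the matrix.
# Chunking the HDF5 file by cells keeps each read cheap.
read_minibatch <- function(file, n_cells, batch_size = 5000,
                           dataset = "scaled_counts") {
  idx <- sort(sample.int(n_cells, batch_size))
  # index = list(row indices, column indices); NULL keeps all genes
  h5read(file, dataset, index = list(NULL, idx))
}

# Hypothetical usage on the frontal cortex file:
# x_t <- read_minibatch("frontal_cortex.h5", n_cells = 156167)
```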
To further demonstrate the scalability of online iNMF, we analyzed the mouse organogenesis cell atlas (MOCA) recently published by Cao et al. [24]. After filtering, MOCA consists of transcriptomic profiles for 1,363,063 cells from embryos between 9.5 and 13.5 days of gestation. Using 2,557 highly variable genes, we performed online iNMF in 25 minutes using less than 500 MB of RAM (mini-batch size = 5,000, K = 50, λ = 5, epoch = 1). Note that the memory usage is higher for MOCA than for the mouse brain dataset because of the higher value of K, not because of the number of cells. We then performed a 3D UMAP embedding using the same parameters chosen by Cao et al. [24]. The resulting visualization shows that the cells from all five gestational ages are well aligned (Fig. S3a), and that the structure of the 10 developmental trajectories defined by Cao et al. is also accurately preserved (Fig. S3b).

We reasoned that, because online iNMF processes only one mini-batch at a time, our approach allows datasets to be processed by streaming them over the internet, removing the need to store multiple copies of large datasets. To demonstrate this capability, we created an HDF5 file containing the mouse cortex datasets (n = 255,353 cells) and saved the file on a remote server maintained by the HDF Group. Using the remote HDF5 capabilities of the h5pyd package, we read mini-batches over the internet rather than from disk. Processing the cortex dataset in this fashion took about 18 minutes, compared to around 6 minutes using local disk reads. This capability provides the unique advantage that users can re-analyze large cell atlases without each user having to download and store the entire data collection.

Online iNMF Allows Iterative Refinement of a Cell Atlas from the Mouse Motor Cortex

One of the most appealing properties of our online learning algorithm is the ability to incorporate new data points as they arrive. This capability is especially useful for large, distributed collaborative efforts to construct comprehensive cell atlases [25,26,3]. Such cell atlas projects involve multiple research groups asynchronously generating experimental data with constantly evolving protocols, making the ultimate cell type definitions a moving target. To demonstrate the utility of online iNMF for iteratively refining cell type definitions, we used data generated by the BRAIN Initiative Cell Census Network (BICCN), an NIH-funded consortium that aims to identify all of the cell types in the mouse and human brains. During a pilot phase starting in 2018, the BICCN generated single-cell datasets from a single region of the mouse brain (primary motor cortex, MOp) spanning four modalities (single-cell RNA-seq, single-nucleus RNA-seq, single-nucleus ATAC-seq, and single-nucleus methylcytosine-seq) and totaling 786,605 cells. These datasets have been publicly released on the BICCN data portal (https://nemoarchive.org/). Over the past two years, the four consortium centers have sequentially generated datasets, re-running the experiments as additional replicates and new protocols became available. Thus, this data collection provides an ideal case study to demonstrate how online iNMF can refine a cell atlas as additional cells are sequenced. Following scenario 2 (Fig. 1c), we used online iNMF to incorporate the MOp datasets (n = 408,885 neurons from eight datasets after filtering, 3,717 selected genes) in chronological order, refining the factorization with each additional dataset (Fig. 6).
These datasets represent a sort of historical record reflecting the rapid development of single-cell experimental techniques, with the first dataset generated using SMARTseq, the dominant protocol before the advent of droplet-based protocols. Subsequent datasets reflect newer technologies, including two versions of the 10X Genomics scRNA-seq protocol (v2 and v3); droplet-based single-nucleus RNA-seq; droplet-based single-nucleus ATAC-seq; and single-nucleus methylcytosine-seq. We used a fixed mini-batch size of 5,000 cells, K = 40, λ = 5, and performed a single epoch of training (each cell participates in exactly one mini-batch). When adding a new dataset i (i > 1), we incorporated a new dataset-specific metagene matrix V_i, randomly initialized. We did not use the previously seen data to refine the factors after the initial single epoch per dataset. We then re-computed the cell factor loadings for all datasets (H_1, ..., H_i) using the latest metagenes. Lastly, we quantile normalized these cell factor loadings.

Fig. 6. Iterative refinement of cell identity using multiple single-cell modalities from mouse primary motor cortex. We integrated four scRNA-seq datasets, two snRNA-seq datasets, one snATAC-seq dataset and one snmC-seq dataset (n = 408,885 neurons). a, Sequential integration of the six scRNA-seq and snRNA-seq datasets (scenario 2). b, Integration of snATAC-seq data in addition to the scRNA-seq and snRNA-seq data using the shared metagenes (W) learned in (a) (scenario 3). c, Integration of DNA methylation data (snmC-seq) by shared metagene projection (scenario 3). d, Joint clusters obtained using the cell factor loadings of all eight aligned datasets. The clusters were annotated based on the visual cortex cell types from Tasic et al.

Our approach successfully incorporates each new single-cell or single-nucleus RNA-seq dataset without revisiting previously processed cells (Fig. 6a). Although no ground truth labels or published clustering assignments are available for this data collection, UMAP visualizations indicate that the structure of the datasets is iteratively refined with each successive dataset added. However, the single-nucleus ATAC-seq dataset, the seventh dataset to arrive, does not align as well as the RNA datasets when processed according to scenario 2 (Fig. S5). We reasoned that this may be because the ATAC-seq data is a completely different modality, and thus scenario 2 may not be the best strategy for incorporating it. We therefore also integrated the snATAC-seq data according to scenario 3 (Fig. 1): we first performed online iNMF on the MOp scRNA-seq and snRNA-seq data, then used the shared metagenes (W) to project the snATAC-seq data into the same latent space (Fig. 6b). We then integrated the snmC-seq data in the same way (Fig. 6c). This strategy produced excellent integration results, and we were able to jointly identify 17 cell types from the scRNA-seq, snRNA-seq, snATAC-seq, and snmC-seq data (Fig. 6d). Using marker gene expression, we labeled these cell types according to the taxonomy for mouse visual cortex neurons recently published by Tasic et al. [27].

We also confirmed that scenario 2 is robust to the order of dataset arrival. To do this, we inspected the effect of random initializations and random orderings of the input datasets on the iterative refinement of cell identity (scenario 2). We ran the online algorithm on the six RNA datasets with five different initializations and five different dataset orders.
Different orders result in variation in the final cluster assignments comparable to the variation from random initialization (average pairwise adjusted Rand index = 0.706 ± 0.049 across random input orders vs. 0.693 ± 0.071 across random initializations). Additionally, UMAP visualizations colored by our final cell type annotations are qualitatively very similar (Fig. S6).

Discussion

The online iNMF algorithm processes single-cell datasets, possibly from different modalities, each assaying a common set of genes. By reading mini-batches from disk, online iNMF not only converges faster than batch approaches, but also decouples memory usage from dataset size. The online algorithm can even process large datasets stored on a remote server without the need to keep the datasets on a local disk. We anticipate that the efficiency gains of online iNMF will become even greater as the scale of single-cell datasets increases. Furthermore, we do not sacrifice performance for efficiency: our online algorithm performs as well as or better than state-of-the-art methods, including batch iNMF, Harmony and Seurat.

We envision online iNMF enabling single-cell data integration in three different scenarios. In scenario 1, when all single-cell datasets are available up front, the online iNMF algorithm rapidly factorizes the single-cell data into metagenes and cell factor loadings using multiple epochs of training, as we demonstrated on the adult mouse brain (nine regions) collection. In scenario 2, the online algorithm iteratively refines cell identity as single-cell datasets arrive sequentially. We demonstrated this capability using single-cell gene expression data from the mouse primary motor cortex, and we anticipate that scenario 2 will prove useful as researchers continually incorporate newly sequenced cells to build comprehensive cell atlases. Scenario 3 proved especially useful for integrating completely different data modalities, such as the single-cell RNA-seq, single-nucleus ATAC-seq, and single-nucleus DNA methylation data from the mouse primary motor cortex. As more single-cell datasets of rapidly increasing size become available, online iNMF holds great promise for integrating single-cell multi-omic datasets and cataloging cell identity using limited computational resources.

Fig. S2. Convergence behavior for online iNMF and two batch iNMF algorithms on scRNA-seq data from the adult mouse brain, human PBMCs and human pancreas. The online iNMF algorithm exhibits faster convergence and better objective minimization within a fixed amount of training time; the advantage of the online algorithm in convergence speed is more apparent for larger datasets. a-c, Adult mouse brain (n = 691,962 cells, 9 individual datasets). d-f, Human PBMCs (n = 13,999 cells, 2 individual datasets). g-i, Human pancreas (n = 14,890 cells, 8 individual datasets).

Fig. S3. Benchmarking integration using datasets of different modalities. Online iNMF efficiently and accurately integrates scRNA-seq (n = 101,647 cells) and scATAC-seq (n = 54,844 cells) datasets from the mouse primary motor cortex (kBET = 0.923, alignment score = 0.737). Harmony does not align the two modalities as well as either Seurat or online iNMF (kBET = 0, alignment score = 0.305). Seurat aligns the datasets but shows some residual dataset differences (kBET = 0, alignment score = 0.645) (n = 25,000 cells from each dataset due to memory constraints).

Fig. S6. Effect of random algorithm initialization and random input ordering on iterative refinement of cell identity.
Online iNMF is applied five times to the six MOp scRNA-seq and snRNA-seq datasets for each random setting (scenario 2). a, Random algorithm initialization. b, Random input ordering. Clusters from random orderings had an average pairwise ARI of 0.706 (± 0.049), compared to an average pairwise ARI of 0.693 (± 0.071) for random initializations.
Who Cares About Stock Market Booms and Busts? Evidence from Data on Mental Health

This paper investigates the relationship between share prices and mental health, exploiting the availability of interview dates in the British Household Panel Survey to match the level of, and changes in, the FTSE All Share price index to respondents over the period 1991-2008. We present evidence that the level of the share price index, and its 6-month and yearly changes, are associated with better mental health, while greater uncertainty, as measured by index volatility, is associated with poorer mental well-being. Finally, using several proxies of investor status, we find little evidence that this relationship is confined to holders of equity-based assets, suggesting that the observed relationship does not arise via wealth effects. Instead, it appears that share prices matter to mental health because they perform the role of an economic barometer.

Introduction

Data on well-being or mental health are increasingly used to complement traditional research methods in economics, and to inform public policy, particularly in the UK, where the government has recently launched a program to measure national well-being. The aim of this paper is to complement existing research on the welfare effects of economic booms and busts by examining the relationship between stock market performance and mental health. Existing studies typically focus on the effect of share price fluctuations on consumption and leisure patterns (see inter alia Banks et al., 2012; Disney et al., 2010, for UK evidence on the aged). While changing consumption and leisure patterns may underpin any association between economic cycles and well-being, focusing on mental well-being may reveal new insights if it transpires that economic conditions affect levels of distress independently of changes in personal economic circumstances. For example, Di Tella et al. (2001, 2003) find that macroeconomic conditions, as measured by unemployment rates, matter to happiness even after taking into account the effects of high unemployment on personal income and employment status. To explain this result, they suggest that unemployment rates are informative of economic prospects. Researchers have recently begun to explore whether asset prices perform a similar role as an economic barometer (see for example Deaton (2012) for evidence on share prices and Ratcliffe (2012) for evidence on house prices). Asset prices may provide signals of economic prospects distinct from unemployment rates because asset prices are more forward looking, in that they reflect the net present value of future revenue streams. Asset markets may therefore aggregate the beliefs of many forward-looking individuals and firms with respect to longer-term economic prospects. A priori, however, one might expect any correlation between asset prices and mental health to reflect the effect of unexpected asset price fluctuations on personal wealth. The little evidence that exists linking stock markets to various measures of subjective well-being does not support a wealth mechanism. However, much of this evidence is visual in nature, with regression analysis confined to aggregate relationships between the stock market and well-being. The current study makes several contributions to the literature.
We are among the first to examine the relationship between share prices and mental well-being using individual-level data, which is made possible by the availability of interview dates in the British Household Panel Survey. Hence, we can explore the existence of wealth effects versus an economic barometer mechanism by examining the relationship between share prices and mental health across various groups in the population, while taking into account detailed socio-economic and demographic information. Moreover, our analysis is not confined to the period of the recent crisis: our data start in 1991 and end in 2008, and therefore cover the late-1990s/early-2000s boom and bust as well as the onset of the financial crisis. Finally, this paper contributes to the wider literature on the effect of macroeconomic conditions, and in particular of asset prices, on mental health.

To preview our results, we find evidence of a positive correlation between changes in share prices and mental health. Conversely, our results suggest that greater uncertainty, as measured by increased volatility in the share price index, is associated with lower mental well-being. Finally, using several proxies of asset ownership, we find that both asset owners and non-owners are sensitive to fluctuations in share prices, suggesting that the observed relationship does not arise via wealth effects. Instead, it appears that the share price index acts as a barometer of economic performance.

Literature

There is growing evidence that macroeconomic conditions affect mental health via an 'economic stress' mechanism (Catalano and Dooley, 1983). This posits that actual or anticipated job loss and the associated financial insecurity are risk factors for illness. As the prospect of unemployment is greater when unemployment rates rise, much of this literature focuses on the effect of unemployment rates on well-being. Di Tella et al. (2001, 2003) present evidence of a negative relationship between national unemployment rates and happiness using cross-country data, while Charles and DeCicca (2008) show that local labour market conditions have a similarly adverse effect on mental health. Since unemployment rates influence happiness or mental health even after taking into account the effect of high unemployment on personal income and labour market status, these findings are consistent with a psychological phenomenon. In particular, Di Tella et al. (2001, 2003) suggest that high unemployment rates induce a 'fear of unemployment'. More recently, researchers have focussed attention on whether asset prices perform a similar role as an economic barometer. However, since rising asset prices make asset owners wealthier, a positive relationship might exist between asset prices and the well-being of asset owners owing to wealth effects. To distinguish between wealth effects and the role of economic barometer, it is necessary to consider the relationship between asset prices and well-being among non-asset owners. For example, the wealth of non-owners is unchanged (and lifetime wealth may even decline among aspiring asset owners) when asset prices unexpectedly rise, suggesting a negative relationship, if any, between asset prices and the well-being of non-asset owners. In contrast, if asset prices are viewed as an economic barometer, asset price movements are likely to matter to both asset owners and non-owners.
Using the Gallup daily random sample of 1,000 Americans, Deaton (2012) presents time-series plots documenting a positive relationship between the daily share price index and daily averages of well-being, as measured by Cantril's Self-Anchoring Scale (Cantril's Ladder). Yet time-series plots of the proportion reporting satisfaction with their standard of living, a measure closely correlated with the ladder, indicate that low-income households, who are less likely to own shares, are most sensitive to the evolving crisis. This indicates that rather than reflecting changes in financial resources, the share price index matters via its role as an economic barometer, or at least that the stock market and well-being are responding to the same stream of information. Regressions using daily and monthly averages of the share price index and Cantril's Ladder confirm a positive and statistically significant relationship that is robust to controlling for official measures of income and unemployment (albeit with 36 data points in the latter analysis). Murgea and Reisz (2012) also use the Gallup survey to empirically investigate the relationship between monthly measures of the share price index, the Chicago Board Options Exchange Volatility Index (a measure of expected stock market volatility over the next 30 days), and the Gallup-Healthways well-being index (a composite measure of life evaluation, emotional and physical health, healthy behaviour, work and local environment) between January 2008 and March 2011. In separate regressions, they find evidence of a positive relationship between the index and well-being, and a negative relationship between volatility and well-being. However, neither effect is statistically different from zero when both terms are considered simultaneously.

To date, only one previous study investigates the relationship between the stock market and well-being using individual-level data. Falk and Jager (2011) match stock market returns over 1, 2 and 3 weeks to individuals in the German Socio-Economic Panel via the interview date. However, given that the primary focus of this analysis is to better understand investor utility, the sample is restricted to households containing only one adult (investment in stock markets is collected at the household level) and, in addition, to households completing interviews with the assistance of an interviewer. They do not find much evidence that average returns over short time periods are related to life satisfaction. 1 Finally, in a related study investigating the relationship between asset prices and mental health, Ratcliffe (2012) presents evidence that local house prices are positively correlated with the mental health of homeowners and non-homeowners using the British Household Panel Survey. This correlation, which is inconsistent with wealth effects, is robust to controlling for proxies of local area amenities and local unemployment and earnings, and suggests that house prices are a barometer of economic prospects.

This study focuses on the relationship between share price fluctuations and mental health in Great Britain. Our main contribution to the literature is a detailed analysis of this relationship using individual-level data, but we are also the first to look at this issue with British data. Few Britons are invested in shares, either directly or indirectly through pension schemes, with asset portfolios dominated by housing wealth (Banks et al., 2004).
As a result, share price fluctuations may register to a lesser extent with the British public. On the other hand, fluctuations in the FTSE 100 are reported on a daily basis in the media, such that movements in share prices are quickly transmitted to the public. If frequency of information is an important characteristic of any indicator assuming the role of economic barometer, then, as the most frequently published indicator, the share price index may nevertheless shape mental health outcomes.

Empirical Model

We estimate the following regression specification:

H_it = β FTSE_it + γ'z_it + θ¹_t + θ²_t + θ³_t + v_it,

where H_it is a measure of the mental health of individual i at time t and FTSE_it measures the FTSE All Share price index on the date that individual i is interviewed. Initially we explore the influence of index levels, and of high-frequency (1 day, 1 week and 1 month) and low-frequency (6 months, 1 year) changes in the index, on mental health. The vector z contains demographic characteristics such as age, household composition, education level, labour market status, monthly household income and region of residence. We also include dummy variables to capture the day of the week (θ¹_t), the survey week (θ²_t) and the survey year (θ³_t). Finally, v_it is a random error term, clustered at the individual level.

Data

Data are taken from the British Household Panel Survey 2 (BHPS) between 1991 and 2008. The BHPS is a nationally representative survey of 5,500 households 3 (over 10,000 individuals) that collects wide-ranging socio-economic and demographic information on household members. BHPS interviews begin on 1 September each year, with around 85% of interviews completed by early November; crucially for this study, interview dates are publicly available. The BHPS contains a standard measure of mental well-being, the General Health Questionnaire (GHQ), which is frequently used to assess psychological health (see inter alia Clark, 2003; Gardner and Oswald, 2007; Roberts et al., 2011) and appears as part of the self-completed questionnaire administered to all household adults. The version of the GHQ in the BHPS has twelve questions, which focus on positive and negative emotions, and answers to these questions are aggregated to produce a 0-36 point Likert index of mental well-being that is recoded so that higher scores reflect better psychological health. 4

Levels and growth rates of the FTSE All Share price index are matched to respondents via the interview date, 5 thus providing variation in this aggregate index across respondents within each survey wave. These data are taken from Thomson Reuters Datastream and have been adjusted for inflation using the retail price index. We concentrate on the FTSE All Share price index as opposed to the FTSE 100 because the latter is an index of the 100 largest companies listed on the London Stock Exchange, whereas the former combines the FTSE 100, the FTSE 250 (the next 250 largest companies after the FTSE 100) and the FTSE SmallCap (smaller companies). Compared to the FTSE 100, the FTSE All Share price index therefore provides a broader reflection of economic activity. In practice, however, both series produce similar results, which we discuss further in the robustness analysis. Figure 1 plots the evolution of the level of the index and its annual percent change over the period analysed, which covers two boom and bust phases in the stock market (late 1990s/early 2000s and mid/late 2000s).
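For concreteness, the specification above can be estimated in a few lines of R. The sketch below uses the fixest package, with illustrative variable names (ghq, ftse_chg_1yr, hh_income, and so on) standing in for the matched person-wave data, and clusters standard errors at the individual level as in our analysis; it is a sketch under these assumptions, not a reproduction of our exact control set.

```r
library(fixest)

# Pooled OLS with day-of-week, survey-week and survey-year dummies,
# clustering at the individual level (variable names are illustrative).
m_ols <- feols(
  ghq ~ ftse_chg_1yr + log(hh_income) + age + I(age^2) |
    dow + survey_week + survey_year,
  data = bhps, cluster = ~pid
)

# Individual fixed effects variant, absorbing time-invariant
# unobserved heterogeneity (as in column 7 of Table 3).
m_fe <- feols(
  ghq ~ ftse_chg_1yr + log(hh_income) |
    pid + dow + survey_week + survey_year,
  data = bhps, cluster = ~pid
)
```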
By using interview dates to create variation in the share price index across respondents within each survey year, we require that interview dates are random, such that variation in share prices is exogenous to observed and unobserved characteristics that influence mental health. However, when we look at the distribution of characteristics of people interviewed across different weeks of the BHPS survey period, there is some evidence that people interviewed in the first two weeks of September differ from others. Table 1 reports normalised differences in the characteristics of people interviewed in each of the first 5 weeks of the BHPS survey period compared to the characteristics of people interviewed afterwards. 6 The normalised difference is calculated as (x̄₁ − x̄₀)/√(s₀² + s₁²), where x̄₀ is the mean characteristic of people interviewed in week t, x̄₁ is the mean characteristic of people interviewed in weeks t+1 to T (where T is the final week in which interviews occur), and s₀² and s₁² are the variances of the relevant samples. It is evident that early interviewees are more likely to be older and retired, and hence to work fewer hours and have lower income, compared to others. This is perhaps unsurprising given that the retired have fewer demands on their time and as such are more likely to be available for interview. In terms of the empirical analysis, this feature may be problematic for two reasons. Firstly, share prices are fairly persistent, suggesting that people interviewed later in the year may be subject to higher/lower levels or larger positive/negative changes in share prices, which increases the likelihood that share prices are correlated with observed and unobserved characteristics. Even though it is possible to control for observed characteristics via regression methods, Imbens and Wooldridge (2009) suggest, as a rule of thumb, that normalised differences exceeding 0.25 make regression estimates of the effect of interest sensitive to the specification when the linearity approximation is not accurate globally.

4 Negatively worded questions include '... depressed?', (j) 'losing confidence in yourself?', (k) 'been thinking of yourself as a worthless person?', with answers 'Not at all...1', 'No more than usual...2', 'Rather more than usual...3' and 'Much more than usual...4'; and questions (c) 'felt that you were playing a useful part in things?', (d) 'felt capable of making decisions about things?', (g) 'been able to enjoy your day-to-day activities?', (h) 'been able to face up to your problems?', (l) 'been feeling reasonably happy, all things considered?', with answers 'More than usual...1', 'Same as usual...2', 'Less so than usual...3', 'Much less than usual...4'. The Likert scale (36-point) aggregation incorporates the severity of symptoms experienced by subtracting one from each response score (i.e. 1=0, 2=1, 3=2, 4=3) and summing. The Likert scale is reversed so that higher scores reflect better mental well-being.
5 For individuals interviewed at the weekend (just over 10% of the sample), we match the level and change of the index as measured on the Friday preceding the weekend. This does mean that share prices are measured with a lag for some respondents, but we obtain similar results if we exclude respondents interviewed at the weekend from our analysis.
6 We focus on the first 5 weeks because differences in the composition of the sample occur in the first couple of weeks.
Moreover, we cannot control for unobserved time-varying characteristics, although we can take into account unobserved time-invariant characteristics via individual fixed effects. Secondly, if there are heterogeneous effects across different groups in the population and these groups experience levels and changes in share prices of different magnitudes as a result of when they are interviewed, we would not be able to identify the effect of interest. However, in the robustness analysis we show that, in practice, this feature of the sample has little influence on our estimates. Summary statistics for the sample used in the analysis are presented in Table 2. For GHQ levels, FTSE levels, and high- and low-frequency changes, we consider whether each process contains a unit root or is stationary. This is important in order to avoid potentially spurious correlations between share prices and mental health. Throughout, we find that each data series is a stationary process (see the Appendix for further details).

The association between share prices and mental health

Table 3 presents various estimation results on the effect of share prices on mental health. For brevity we report only the estimated coefficients on the share price terms, but a selection of extended results is available in Table 10 in the Appendix. For all estimates reported, we multiply coefficients and standard errors by 100. Column 1 reports the estimated effect of the daily share price index level on mental health. This result suggests that a 100-point increase in the share price index increases mental well-being by 0.04 units, equivalent to a 0.16% change relative to the mean GHQ score. However, high-frequency changes in the share price index have no discernible effect on mental health, despite widespread reporting of daily changes in the FTSE 100 in the media. On the other hand, low-frequency changes do matter. Columns 5 and 6 indicate that a one percentage point increase in half-yearly and yearly growth rates increases mental well-being by 0.0081-0.0089 units. Given that the average annual change in the share price index is 3.87 percent, share price fluctuations would typically generate a 0.13% change relative to the mean GHQ score.

In all specifications we take into account household income, indicators for the amount of dividends/payments received in the past year, and labour market status. Hence, it appears that the share price index matters to mental health after taking into account the effect of a booming stock market on current economic outcomes. However, it remains possible that the observed relationship arises because we are unable to effectively capture financial resources and hence consumption patterns. For example, it may be the case that people simply adjust their consumption in response to new information concerning economic prospects, so that unmeasured changes in consumption, as opposed to mental distress over future outcomes, drive the observed relationship. We cannot include further measures of financial resources or consumption, but we have tried including self-assessments of current financial situation and of the change in financial situation over the past year. 7 These measures may capture unobserved fluctuations in financial resources, although there is likely some reverse causality between financial self-assessments and mental health, which is why we do not use these variables in our main analysis.
While there is a robust correlation between financial self-assessments and mental health, we still find evidence of a very similar relationship between share prices and mental health (for example, the estimated coefficient on the annual change in share prices is 0.0094 with standard error 0.0032). We would argue this finding further supports the argument that fluctuations in share prices do not simply reflect unmeasured financial or economic circumstances.

Finally, in column 7 we present results estimating the model in column 6 including individual fixed effects, since it is possible that systematic differences exist across respondents interviewed at different time points, and hence facing different values of share prices. While we control for several observed characteristics of each respondent, it may still be the case that unmeasured characteristics drive our results. The results presented in column 7 control for time-invariant unmeasured characteristics through individual fixed effects. The estimated coefficient is reasonably similar and remains statistically significant at conventional levels. For the daily share price index level, the estimated coefficient is reduced by around 40% but remains statistically significant at conventional levels (the estimated coefficient is 0.0025 with standard error 0.0015).

In the remaining analysis, we focus on changes in share prices. One reason why high-frequency changes in share prices have so little influence on mental health outcomes is that they are generally too small to have any significant impact on economic outcomes or on perceptions of future economic outcomes. Furthermore, high-frequency changes are volatile, and changes in stock prices over short periods are readily reversed. If this is the case, we might expect to observe a correlation between high-frequency stock market movements and mental health once we measure the degree to which changes in share prices are perceived as temporary. We use the standard deviation of share prices to measure the extent to which share prices are fluctuating, and therefore the degree to which movements in share prices may be perceived as temporary. Of course, there are other reasons to expect the volatility of share prices to matter. For example, it is well known from portfolio theory that investors are concerned not only with mean returns but also with the risk associated with investments, i.e. the spread of returns around the mean (see Elton et al., 2007). Share price volatility increases the uncertainty of investor returns, but it is also a reflection of greater uncertainty about the future. This would imply a negative relationship between volatility and mental health.

7 For a measure of financial situation, respondents are asked 'How well would you say you yourself are managing financially these days? Would you say you are' with responses 'Living comfortably', 'Doing alright', 'Just about getting by', 'Finding it quite difficult' and 'Finding it very difficult'. For a measure of financial change, respondents are asked 'Would you say that you yourself are better off or worse off financially than you were a year ago?' with responses 'better', 'about the same' and 'worse off'. For a measure of financial expectations, respondents are asked 'Looking ahead, how do you think you yourself will be financially a year from now, will you be' with responses 'Better than now', 'Worse than now', 'About the same'. We introduce these measures as continuous variables.
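Constructing the change and volatility regressors is straightforward given the daily index. The sketch below, with hypothetical data frames `ftse` (columns date and price, the deflated index) and `bhps` (with an int_date column), computes the one-year change and the trailing one-year standard deviation using roughly 261 trading days per year, then matches on interview date; in the paper, weekend interviews are mapped to the preceding Friday before any merge of this kind.

```r
library(zoo)

ftse <- ftse[order(ftse$date), ]

# One-year percent change: compare with the price ~261 trading days ago.
lag1yr <- c(rep(NA_real_, 261), head(ftse$price, -261))
ftse$chg_1yr <- 100 * (ftse$price / lag1yr - 1)

# Trailing one-year volatility: standard deviation over the same window.
ftse$sd_1yr <- rollapply(ftse$price, width = 261, FUN = sd,
                         fill = NA, align = "right")

# Match to respondents by interview date (weekends already mapped to
# the preceding Friday in int_date).
bhps <- merge(bhps, ftse, by.x = "int_date", by.y = "date")
```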
Table 4 presents results where we add the standard deviation of share prices over the period in which the change in share prices is calculated. Clearly, this is only possible where the change in share prices is calculated over the past week or longer. For the most part, adding the standard deviation of share prices to the analysis makes very little difference to the previously reported results. The standard deviation is generally negative, and in column 4 it is statistically different from zero. The 1-year standard deviation is also of similar magnitude and statistically different from zero when included alongside changes in share prices measured over shorter horizons (not reported for reasons of space). In column 5, we add individual fixed effects to the model estimated in column 4. Adding the standard deviation slightly reduces the estimated magnitude of the effect of changes in share prices on mental health compared with Table 3, but the effect remains statistically significant. Interestingly, the estimated standard deviation coefficient is barely changed in column 5 when we add individual fixed effects.

Evidence of wealth effects?

Thus far we have documented a positive association between changes in share prices and mental health, and conversely a negative association between stock market volatility and mental health. However, there are two competing explanations as to why these associations emerge. The first explanation suggests that these relationships are driven by people with investments in stock markets, who experience unexpected wealth shocks in booming or tumbling stock markets, and who would likely care most about stock market volatility given the difficulty that uncertainty presents in identifying the best investment strategies. The second explanation suggests that, by aggregating the beliefs of many forward-looking individuals and firms, the stock market may be a barometer of economic prospects. The key difference between these explanations is that the latter suggests people without stock market investments would also care about share price fluctuations.

Since 1992, the BHPS has asked respondents whether they have contributed to a personal pension scheme, and the year they began making contributions. We use this information to identify people with defined contribution pension arrangements, who are indirectly invested in the stock market via their pension scheme. 8 In 1995, 2000 and 2005, detailed information is available on financial assets. We use ownership of investment trusts, personal equity plans, shares and company stocks to measure who is directly invested in stock markets, matching this information to other years using an imputation procedure described in the Appendix. By combining information on DC pensions and equity investments, we are able to create a proxy of investor status. 9

Results of separate regressions by investor status are presented in the first two columns of Table 5. There is little evidence that investors are more sensitive to share price movements than others. Estimated effects are similar across both groups, even if insignificantly different from zero owing to smaller sample sizes. 10 Our measure of investor status is far from ideal, as this information is solicited in some, but not all, waves. As an alternative proxy of investor status, we split the sample by education level (where high education refers to degree-level or similar qualifications).
Individuals with higher education are more likely to be invested in stock markets and to have more valuable assets conditional on investment (see Guiso et al., 2008). However, we again observe similar effects across high- and low-educated individuals. One issue with this proxy of investor status is the large expansion in higher qualifications over the period observed, although we also find similar results when we restrict our higher education measure to degree-level qualifications, which expanded less dramatically. As a third proxy of investor status, we split the sample by age (<35, 35-49 and 50+). Using information on investment patterns in 1995, 2000 and 2005, 13% of those aged <35, 27% of those aged 35-49 and 32% of those aged 50+ are invested in stock markets via the financial assets listed above, with the value of these investments also increasing monotonically with age. A slightly different picture emerges for indirect investments via pension schemes, where we measure 22% of those aged <35, 41% of those aged 35-49 and 35% of those aged 50+ to have DC pension arrangements. Overall, we would argue that younger persons would be less affected by wealth considerations given their lower propensity to be invested in stock markets. However, the evidence presented in the final three columns of Table 5 provides no indication that younger persons are any less affected by share price movements than others.

Robustness analysis

In this paper we provide evidence that changes in and the volatility of share prices affect mental health outcomes, and moreover, that the relationship observed is inconsistent with wealth effects. The alternative explanation, that share prices are informative of economic prospects, is better supported by the evidence. In this section we present various sensitivity analyses, focusing first on the estimated magnitude of the share price effect, followed by an investigation of alternative methods of estimating the standard errors.

In this analysis, we have used the date of interview to create variation in share prices across respondents interviewed in the same survey year. However, as noted earlier, there is some evidence of systematic differences between respondents interviewed in the first two weeks of September and those interviewed later. We pursue a number of strategies in order to investigate whether our results are sensitive to this feature of our sample. Firstly, we re-estimate our model excluding individuals interviewed in these first two weeks (7% of the sample). The estimated coefficients remain similar; for example, the coefficient on the annual change in share prices is estimated to be 0.0065 with standard error 0.0034, and the coefficient on the standard deviation over the previous year is -0.0021 with standard error 0.001. Secondly, we drop the retired from our sample because the differences in age, employment and income variables are largely driven by the retired being interviewed earlier than others. Again we find similar effects of annual changes in share prices (coefficient 0.0075 with standard error 0.0036) and of the standard deviation term (coefficient -0.0022 with standard error 0.001). Thirdly, we split the sample according to labour market status, since differences in characteristics across those interviewed earlier and later in September can, for the most part, be attributed to differences in labour market activity.
Table 6 presents normalised differences in the characteristics of people interviewed in weeks 1 and 2 compared with later weeks, for the employed, self-employed, unemployed, family carers, students, the long-term sick and the retired. There are no discernible differences in the characteristics of the employed, although there is some evidence that the self-employed interviewed in earlier weeks are less wealthy than those interviewed later, and that the unemployed and the long-term sick interviewed in earlier weeks are less likely to be the household head. 11 There are other reasons to split the sample by labour market status. For example, if share prices are informative of economic prospects, we might expect that employees care about personal economic outcomes and the outcomes of close family members, whereas those staying at home to look after family might only care about the economic outcomes of significant others. Results are presented in Table 7. Among employees the estimated effect of share prices is similar to the previous estimates presented in Table 3 and Table 4, but the magnitude and precision of the share price effect varies considerably among the other groups, highlighting the difficulty of estimating the relevant effects without very large samples. Interestingly, we consistently find that the effect of increased volatility is larger for employees, the young, and samples that exclude the retired, although it is not possible to say that these effects are larger from a statistical viewpoint. However, across each of the sub-samples split by labour market status we are unable to reject the null hypothesis that the parameter estimates associated with both the change in the FTSE and the volatility of share prices are equal to those estimated for the full sample (reported in column 4 of Table 4).

Fluctuations in share prices are clearly correlated with macroeconomic activity, and it may be the case that changes in share prices simply reflect the effect of general economic conditions on mental health. Since it is well documented that unemployment rates affect mental health, we augment our specification to include seasonally adjusted International Labour Organisation (ILO) male regional unemployment rates (there are 11 regions in the BHPS sample). These data are taken from the Labour Force Survey (LFS) and are available on a quarterly basis from 1992 through the Office for National Statistics (ONS). 12 The results presented in the first column of Table 8 suggest that share prices have an independent influence on mental health outcomes. Following Di Tella et al. (2003), we also control for other macroeconomic indicators, specifically quarterly GDP per capita, monthly industrial production and the monthly inflation rate as measured by the rate of change in consumer prices (data are from the OECD national database). The results, shown in columns two through four of Table 8, reveal that the influence of share prices remains over and above these macroeconomic indicators. Indeed, the results show that macroeconomic indicators such as GDP per capita have an insignificant effect on mental health, consistent with Di Tella et al. (2003); only regional unemployment rates matter in addition to share prices.

In this paper we document the relationship between share prices and mental health, but it is possible that the general mood of the population affects share prices rather than the converse.
Since lagged stock market outcomes are correlated with current stock market values, and we can be more confident that current mental health does not influence past changes in share prices, we replace contemporaneous values with lagged values from the previous week. The results reported in column 1 of Table 9 confirm that a relationship exists when using lagged changes of the share price index. We also replace the FTSE All Share price index with the FTSE 100 price index. As discussed earlier, the former is a broader measure of economic activity, whereas the latter is more widely reported in the media. In practice, both series exhibit a correlation of 98%, so it is perhaps not surprising that the FTSE 100 is also correlated with mental health (see column 2 of Table 9). Moreover, both series have almost identical effects when the variables are standardised. However, there are some instances, particularly when using a fixed effects estimator, where the statistical precision associated with the FTSE 100 is lower.

In terms of employing alternative estimators for the standard errors, we consider explicitly modelling an AR(1) process in the error term in a fixed effects model following Baltagi and Wu (1999), and two-way clustering of standard errors following Cameron et al. (2011). The former approach may be relevant if unobserved shocks during the current period influence future outcomes. The latter approach may be relevant because we match daily price movements to the date on which the individual is interviewed, and we therefore may need to take into account possible clustering at the level of aggregation of our explanatory variable, i.e. date of interview, in addition to individual-level clustering. However, in both cases we generally find that the results are largely unaffected, both in terms of economic magnitude and statistical significance (see columns 3 and 4 of Table 9).

Conclusion

In this paper we have investigated the relationship between psychological health and share prices, as measured by the FTSE All Share price index, in the UK over a relatively long time period which encapsulates both economic boom and bust. As far as we are aware, this is the first paper for the UK to match daily share price fluctuations to dates of interview in a panel data set. Our empirical findings are robust to a number of alternative estimation strategies and reveal that the daily level of the FTSE index and low-frequency changes, specifically six-monthly and annual changes, are positively correlated with mental health, while annual volatility in share prices reduces mental health. We investigate whether this relationship arises via a wealth effect by splitting the data into a variety of sub-samples where, a priori, it might be expected that wealth effects would be apparent, e.g. by investor status (which we proxy by age, education and also whether individuals report that they are invested in the stock market). Interestingly, throughout, no strong evidence is found in support of a wealth mechanism. Consequently, we would argue that the association between share prices and mental health is due to the possibility that the stock market reveals additional information about the prevailing economic climate, where this 'economic barometer' effect exists after controlling for day, week and year fixed effects (in order to control for unobserved macro shocks) as well as conditioning upon unemployment rates and other macroeconomic indicators.
The normalised difference is calculated as $(\bar{x}_1 - \bar{x}_0)/\sqrt{s_0^2 + s_1^2}$, where $\bar{x}_0$ is the mean characteristic of people interviewed in week t, $\bar{x}_1$ is the mean characteristic of people interviewed in weeks t+1 to T (where T is the final week in which interviews occur), and $s^2$ is the variance in the relevant sample.

Notes as for Table 3. Employee refers to working for a firm, Self-emp to working for oneself, Unemployed to not being in employment but looking for work, Family to staying at home to provide care for family members, Student to full-time education, LT sick to the long-term sick, and Retired to the retired.

Notes as for Table 3. Columns 1 and 4 include regional unemployment rates (available from 1992), column 2 includes quarterly GDP per capita, column 3 includes monthly industrial production, and column 4 uses the monthly consumer price index. Estimates in columns 1 to 5 are by OLS.

Notes as for Table 3. Column 1 uses share prices lagged 1 week, column 2 uses the FTSE 100 price index, column 3 imposes an AR(1) error structure and column 4 uses two-way clustering of standard errors.

[Coefficient estimates for household composition and labour market status (4+ adults, 1 child, unemployed, student, long-term sick, etc.) are omitted here: the extracted table was garbled beyond reconstruction.]

Notes as for Table 3. Column 1 replicates column 1 of Table 3, column 2 replicates column 6 of Table 3, column 3 replicates column 4 of Table 4 and column 4 replicates column 5 of Table 4.

Unit Root Tests

Given that there is a relatively long time series dimension to the BHPS, one possibility is that any significant correlation found between well-being and share prices is spurious. Hence we investigate whether the GHQ and share prices are stationary processes. If both variables are non-stationary, i.e. not I(0), and integrated of the same order, e.g. I(1) so stationary after first differencing, then unless there is a cointegrating vector any correlation will be spurious. Conversely, if the two variables are integrated of different orders, e.g. I(0) and I(1), then regression analysis is meaningless, as one variable has a constant mean whilst the other drifts over time. Since we have panel data, the most flexible approach to testing for a unit root in a variable y across individuals i and time t is the following, based upon Im et al. (2003) (IPS), in which the autoregressive parameter is not held constant across cross-sectional units:

$$\Delta y_{it} = \rho_i \, y_{i,t-1} + d_{it}'\beta_i + u_{it},$$

where $\Delta$ denotes a first difference (by year), d is a vector of deterministic components, e.g. constant and time trend, and u is a white noise error term. The null hypothesis is that the series is non-stationary, i.e. $H_0: \rho_i = 0 \;\forall i$. For some of the tests that we implement the autoregressive parameter is assumed to be constant over cross-sectional units, i.e. $\rho_i = \rho$. As is common in panel unit root testing, we allow for cross-sectional dependence, i.e.
the error terms are not independent across cross-sections, by including the lagged cross-sectional average, $\bar{y}$, and its first difference, $\Delta \bar{y}$, following Pesaran (2007). The data are unbalanced: the minimum time period an individual is in the data is 1 year, and the maximum is 18 years. Consequently, in order to ensure white noise in the error terms u after including extra lagged terms of $\Delta y$, we conduct the unit root tests on two sub-samples: (i) individuals present for at least 6 periods, NT = 102,938 (T = 6 years is the minimum needed to include lags, where the optimal lag length is chosen by the AIC); and (ii) a subset of individuals present for all periods, i.e. a balanced data set, NT = 28,764. For the unbalanced sub-sample we use the IPS approach for unit roots, and for the balanced sub-sample the IPS, Fisher ADF, Fisher Phillips-Perron and Harris-Tzavalis tests. See Baltagi (2008) for further details. For each test we also restrict the deterministic component, d, to include a constant only, i.e. a drift term, and alternatively a constant and time trend. For the FTSE we examine the level and also low frequency changes in the variable for stationarity (since we find no evidence that high frequency changes in the FTSE affect mental health). Each test is implemented across both sub-samples, both including and excluding a time trend. The null hypothesis is always rejected at either the 1 or 5 per cent level, which implies that the data are stationary for the GHQ, the FTSE level and FTSE low frequency changes.
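A minimal sketch of the logic behind the IPS statistic follows: estimate the ADF-type regression individual by individual and average the resulting t-statistics on $\rho_i$. The panel here is synthetic, the Pesaran (2007) cross-sectional augmentation is omitted, and comparison with the IPS critical values is left out.

```python
# Sketch of the IPS "t-bar" idea: run an ADF-type regression for each
# individual and average the t-statistics on rho_i. Synthetic data; the
# cross-sectionally augmented (CADF) terms of Pesaran (2007) are omitted.
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 18
y = np.cumsum(rng.normal(size=(N, T)), axis=1)  # unit-root series by construction

def adf_t(series):
    """t-statistic on rho in: dy_t = rho * y_{t-1} + const + u_t."""
    dy = np.diff(series)
    X = np.column_stack([series[:-1], np.ones(len(dy))])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

t_bar = np.mean([adf_t(y[i]) for i in range(N)])
print(f"IPS t-bar: {t_bar:.3f}")  # to be compared with the IPS critical values
```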
Application of Raman Spectroscopy for Characterizing Synthetic Non-Metallic Inclusions Consisting of Calcium Sulphide and Oxides

The presence of non-metallic inclusions (NMI) such as sulphides and oxides may be detrimental to the control of the steel casting process and to product quality. The need for their identification and characterization is therefore urgent. This study uses time-gated Raman spectroscopy for the characterization of synthetic duplex oxide-sulphide phases that contain CaS and the oxide phases Al2O3, CA, C12A7, C3A, and MgO·Al2O3 (MA). Binary phase samples of CaS-MA, C3A-CaS, C12A7-CaS, Al2O3-CaS, and MA-CaS were prepared with varying phase contents. The relative intensities of the Raman peaks were used to estimate the samples' phase content. For a quantitative estimation, linear regression calibration models were used to evaluate the change in phase content in the samples. The most suitable Raman peak ratios had mean absolute error (MAE) values ranging from 3 to 7 wt. % for the external validation error, and coefficients of determination (R2) between 0.94 and 0.98. This study demonstrated the use of Raman spectroscopy for the characterization of the calcium sulphide, magnesium aluminate spinel, Al2O3, and calcium aluminate phases CA, C3A, and C12A7 in a duplex oxide-sulphide system, and it offers potential for inclusion characterization in steel.

Introduction

The calcium treatment for aluminum-killed steels is commonly used to modify non-metallic inclusions (NMI) such as Al2O3 into less detrimental inclusions [1,2]. The main aim of the modification of inclusions is the control and transformation of solid inclusions into fully or partially liquid inclusions by changing their chemical or phase composition. Calcium treatment of Al-killed steels can be used to obtain calcium aluminate inclusions with a lower liquidus temperature than the steel melt. The use of calcium treatment to achieve more liquid inclusions has the potential to control castability challenges such as submerged entry nozzle (SEN) clogging [3,4] at casting temperatures. The preferred inclusions are low-melting calcium aluminate inclusions such as C12A7, or C3A and CA, which may be present in partially liquid inclusions [5,6]. However, the occurrence of the solid CA2 and CA6 phases should be avoided [5]. Although calcium treatment plays a role in modifying high melting point inclusions, when the Al2O3 activity in the inclusions is low, elemental sulphur can react with calcium to form CaS inclusions. These are solid at casting temperatures and can be very detrimental to steel quality [7,8].

The required proportions of the phases (C12A7, CA, C3A, Al2O3, CaS, and MgO·Al2O3) were used to prepare the synthetic binary phase samples. Thorough mixing was carried out to ensure that the binary phase samples were homogeneous. Sintering at a higher temperature is unsuitable for preparing the synthetic phase samples because CaS is very sensitive to heat; careful mixing was therefore carried out several times to achieve homogeneous samples. The homogeneity of the prepared binary samples was verified using both X-ray diffraction (XRD) and X-ray fluorescence (XRF) analysis by measuring replicates of the same sample.

Analytical Techniques

XRD and XRF were both used to verify the prepared samples: XRD to estimate the phase weight percentages and XRF for elemental composition analysis.
X-ray Diffraction (XRD)

The XRD instrument used for the sample phase identification in this study was a Rigaku SmartLab 9 kW model. The instrument setup included a Cu source operated at 45 kV and 200 mA (9 kW rotating anode) with Bragg-Brentano para-focusing geometry (300 mm goniometer), and an acquisition speed of 3 degrees per minute with 0.02 degrees per step. 5-degree Soller slits were used on both the source and analyser sides, with a 10-mm limiting slit located at the source side; the samples were mounted in standard glass holders.

X-ray Fluorescence (XRF)

The University of Oulu's Centre for Material Analysis provided XRF to conduct an elemental analysis on the prepared samples. The XRF instrument used for this study was a Panalytical Axios Max model with a maximum power of 4 kW, with a setup consisting of an X-ray generator with an Rh tube. SuperQ was the software used for the elemental analysis.

Raman Spectroscopy

Raman spectroscopy is a vibrational spectroscopic technique based on the inelastic scattering of monochromatic light, which creates a change in energy that is used to study the vibrational and rotational modes of the excited molecules of a sample [26,27]. This change in energy during a Raman measurement is characteristic of the vibrational modes of the molecules in the material, so the Raman spectrum acquired from a sample can be regarded as a fingerprint of the individual components present. The spectrum has features such as peak intensity, peak (band) position, and full width at half maximum (FWHM) which provide information about the components of the measured material: the Raman shift (band) position reflects the material's phase or stoichiometric content, while the peak intensity (I) or area reflects the concentration of the phase component in the sample. Raman spectroscopy as a characterization technique has gained application in various fields, such as medicine, steelmaking for slag studies [16], and other research sectors.

A TimeGated® Raman spectrometer (TG532 M1) supplied by TimeGated Instruments Ltd., Oulu, Finland was used in this study. The device features a spectrometer equipped with a fibre-coupled pulsed 532 nm laser with a pulse width of 150 picoseconds (ps) and a frequency range of 40 to 100 kHz. The setup also included a probe head with a 200 µm collection fibre and a spot size of 1 mm, a laser spectral width of less than 0.1 nm, a complementary metal oxide semiconductor (CMOS) single-photon avalanche diode (SPAD) array detector, a Photonics RPB532 probe with a 105 µm excitation fibre, a fibre-coupled spectrograph, and delay electronics. The Raman measurements were made at ambient conditions over a wavenumber range of 100-1100 cm−1, with a spectral acquisition time of 1-3 min and a resolution of 10 cm−1. Each sample was measured five times, with the sample remixed (for homogeneity) after each measurement, and a rotating sample holder stage was used to obtain an average spectrum for each batch. These steps were taken to ensure good repeatability and to reduce the effect of sample inhomogeneity. The Raman data used for this study were obtained from the averaged signal of the five measurements per sample.
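The replicate-averaging step just described is simple to emulate; the sketch below averages five spectra on the 100-1100 cm−1 grid and applies a crude linear baseline as a stand-in for the vendor's background subtraction (the array shapes and the baseline method are illustrative assumptions, not the instrument's algorithm).

```python
# Sketch of the replicate-averaging step: five Raman spectra per sample,
# measured on a common 100-1100 cm^-1 grid, are averaged point by point.
# The toy baseline removal is an assumption; the instrument software
# performed the actual background subtraction.
import numpy as np

wavenumbers = np.arange(100, 1101, 10.0)          # ~10 cm^-1 resolution grid
replicates = np.random.rand(5, wavenumbers.size)  # stand-in for 5 measured spectra

mean_spectrum = replicates.mean(axis=0)

# Simple linear baseline between the spectrum endpoints, as a stand-in
# for the vendor's fluorescence/background correction.
baseline = np.linspace(mean_spectrum[0], mean_spectrum[-1], mean_spectrum.size)
corrected = mean_spectrum - baseline
```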
The Raman spectra obtained from each sample were relatively stable; the variability of the Raman signal within each measured sample was therefore found to be insignificant. The pre-processing software package provided by TimeGated Instruments performed the background subtraction on the signal. The time-gated Raman spectroscopy (TimeGated®) used for this study was designed for effective fluorescence suppression compared to conventional Raman spectroscopy.

Calibration Model

The study used a calibration model to establish a relationship between different Raman peak ratios and the samples' phase content. The calibration model was used to examine the variance in the phase content of a sample by using the total Raman spectrum to describe the relative intensities of the peaks present. A calibration feature candidate for this study is expressed as

$$x_c = \frac{I_k}{I_n},$$

where $I_k$ is the intensity for the Raman shift k, $x_c$ is the calibration feature candidate, and $I_n$ represents the intensity corresponding to the Raman shift n. The use of relative intensities treats the signal similarly to unit normalization, and thus no additional normalization was carried out. The average mean absolute error (MAE) was calculated using 4N repetitions for the cross-validation; a detailed description of how the calibration model identification and selection were conducted can be found in [18].

XRD and XRF were considered in this study as complementary methods: XRD provided information on the individual phase weight percentages in the sample, and XRF was used for estimating the elemental composition of the prepared samples [28]. Tables 1 and 2 illustrate two examples of binary samples (MA-CaS and CaS-Al2O3) used to establish a relationship between the initial sample composition and the compositions obtained by XRD and XRF analysis. The phases estimated from the XRF elemental analysis were compared with the phase weight percentages from the samples' XRD composition. Table 1 shows the values for the initial sample composition and the XRD and XRF analyses for MA-CaS, and Table 2 presents the corresponding comparison for the binary Al2O3-CaS samples.

Analysis of Phase Content Based on Raman Spectra

The main Raman band (shift) identified from the Raman spectroscopy measurements for each phase used to prepare the samples was compared with previous studies [11-15]. Table 3 shows the Raman shift or band (cm−1) for the samples measured in this work and for the published reference materials. Figure 3 illustrates the Raman spectra for the initial phases used to prepare the binary samples.
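To make the calibration feature concrete, the sketch below computes the relative intensity $x_c = I_k/I_n$ for a pair of peaks and fits the straight-line calibration, scoring it with a leave-one-out MAE. The peak positions follow the CaS (157 cm−1) and MA (412 cm−1) assignments in the text; the sample ratios and phase contents are placeholders, and leave-one-out is a simplification of the 4N-repetition cross-validation used in the study.

```python
# Sketch of the calibration feature and linear calibration model: the feature
# is a ratio of two peak intensities, x_c = I_k / I_n, regressed linearly on
# the known phase content. Sample data are placeholders, not measurements.
import numpy as np

def ratio_feature(spectrum, wavenumbers, k, n):
    """Relative intensity I_k / I_n at the two Raman shifts k and n (cm^-1)."""
    i_k = spectrum[np.argmin(np.abs(wavenumbers - k))]
    i_n = spectrum[np.argmin(np.abs(wavenumbers - n))]
    return i_k / i_n

def loo_mae(x, y):
    """Leave-one-out MAE of a straight-line calibration y = a*x + b."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        a, b = np.polyfit(x[mask], y[mask], 1)
        errs.append(abs(y[i] - (a * x[i] + b)))
    return np.mean(errs)

# x: measured 157/412 intensity ratios; y: CaS wt. % from XRD/XRF (placeholders)
x = np.array([0.2, 0.5, 0.9, 1.4, 2.1, 3.0])
y = np.array([10, 25, 40, 55, 70, 85], dtype=float)
print(f"LOO MAE: {loo_mae(x, y):.2f} wt. %")
```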
The observations made for the samples measured with Raman spectroscopy show that a change in phase content in the binary sample systems has a corresponding effect on the relative intensity of the peaks; these observations are presented in Figures 4-8. Section 2.3.3 briefly describes the information that can be obtained from the intensity of a Raman peak. Figures 4-8 are therefore used to qualitatively explain how the relative intensity of the peaks associated with a specific phase can be used to estimate that phase's content in the samples.

CA-CaS and MgO·Al2O3 (MA)-CaS Samples

The Raman spectra shown in Figure 4 are for samples containing CaS and magnesium aluminate (MA) spinel, and are used to explain how a change in a sample's phase content can affect the relative Raman intensity. Figure 4 shows that an increment in the CaS phase fraction in the MA-CaS samples produced a corresponding rise in the Raman band within the region of 157-162 cm−1. This increase in intensity relates to the increasing CaS phase content, because the most intense peak for this phase was at around 157 cm−1. Similarly, an increase in the MA phase content produced a corresponding increase in the Raman shift region of 410-420 cm−1, which can be attributed to this phase, because the most intense Raman shift, located at 412 cm−1, is a Raman peak feature associated with the MA phase.

Figure 5 presents the Raman spectra obtained from the binary samples consisting of CA and CaS, where an increase in the CA phase fraction produced spectra with increasing peak intensity in the region of 520-524 cm−1. CA's most intense Raman band was observed at approximately 524 cm−1, and the peak within the 520-524 cm−1 range increased only with a corresponding increase in the CA phase fraction in the CA-CaS samples. Furthermore, an increase in the CA phase content produced a characteristic feature: the appearance of a peak shoulder in the 545-549 cm−1 region. Varying the phase content in the CA-CaS samples gave a similar observation to that in Figure 4, where an increment in the CaS phase fraction produced a rising peak intensity in the 157-162 cm−1 region.

C12A7-CaS and C3A-CaS Samples

Figures 6 and 7 show the Raman spectra for binary samples of the calcium aluminate phases C12A7 and C3A with calcium sulphide (CaS). Figure 6 presents the samples containing CaS and C12A7, showing that when the phase fraction of C12A7 in the sample increased, a corresponding increase in the Raman shift region of 517-520 cm−1 was observed. This rise in intensity may be associated with the increasing phase fraction of C12A7, because the most intense Raman peak for this phase was located around 517 cm−1.

For the samples comprising C3A and CaS, an increase in the peak intensity within the Raman shift region of 756-766 cm−1 could also be observed, which may be related to the increase in the phase content of C3A; the peak in this region increased when the phase fraction of C3A in the sample increased. These phenomena can be observed in Figure 7 for the C3A-CaS samples. Additionally, the most intense peak for C3A in this study was located at approximately 766 cm−1.
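The qualitative peak tracking described above can be automated by reading the maximum intensity inside a fixed Raman-shift window; the sketch below does this for a synthetic CaS-like band (the window bounds and the toy spectrum are assumptions for illustration).

```python
# Sketch of locating a characteristic peak: restrict to a window around the
# expected shift (e.g. 157-162 cm^-1 for CaS) and take the most intense point.
# The synthetic Gaussian band stands in for a measured spectrum.
import numpy as np

def peak_intensity(wavenumbers, spectrum, lo, hi):
    """Maximum intensity within a Raman-shift window [lo, hi] cm^-1."""
    window = (wavenumbers >= lo) & (wavenumbers <= hi)
    idx = np.argmax(spectrum[window])
    return wavenumbers[window][idx], spectrum[window][idx]

wn = np.arange(100, 1101, 10.0)
spec = np.exp(-0.5 * ((wn - 157) / 8) ** 2)   # toy CaS-like band at 157 cm^-1
pos, height = peak_intensity(wn, spec, 150, 165)
print(f"CaS band at {pos:.0f} cm^-1, intensity {height:.2f}")
```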
The comparison between the change in CaS phase fraction and the relative Raman intensity for the spectra presented in Figure 6 for C12A7-CaS and Figure 7 for C3A-CaS showed in both cases that an increase in CaS content corresponded to an increase in the Raman shift region of 157-162 cm−1. For the CaS-Al2O3 samples, the Raman spectra are shown in Figure 8.

Calibration Model for Quantitative Estimation

The individual phases present in the binary sample systems were estimated using linear calibration models to establish the relationship between the relative intensities of the Raman bands and the phase fractions. The sample phase content analyzed using XRD for phase identification and XRF for elemental evaluation was used as the dependent variable for the calibration model. Furthermore, the average values of the coefficient of determination, the mean absolute error (MAE) values and the relative stabilities of the calibration variable candidates were estimated, as presented in Tables 4-8. The relationship between the relative stabilities of the calibration variables and the MAE values is shown in Figures 9, 11, 13, 15 and 17. Section 2.3 provides a detailed description of how these parameters were evaluated.

Table 4. Evaluation of the coefficient of determination (R2) and mean absolute error (MAE) for the prediction and validation between the relative intensity of the Raman peaks and the phase content for CaS-CA.
Table 6. Evaluation of the coefficient of determination (R2) and mean absolute error (MAE) for the prediction and validation between the relative intensity of the Raman peaks and the phase content for C12A7-CaS.
Table 8. Evaluation of the coefficient of determination (R2) and mean absolute error (MAE) for the prediction and validation between the relative intensity of the Raman peaks and the phase content for CaS-Al2O3.

Table 4 shows the results for the CA-CaS samples, where the relative intensity ratio of CaS at 157 cm−1 to CA at 524 cm−1 gave the highest linear regression coefficient of determination and the lowest mean absolute error of all the peak ratios. The 157/524 peak ratio also gave generally better relative stability values, as illustrated in Figure 9. A calibration curve constructed between the relative intensity ratios and the measured CaS phase fractions in the CA-CaS samples is presented in Figure 10. Therefore, based on the evaluations for CA-CaS shown in Figures 9 and 10 and Table 4, the relative intensity ratio between 157 cm−1 for CaS and 524 cm−1 for CA provided the most suitable Raman peaks for a phase content analysis of the binary CA-CaS samples. These peaks, according to Figure 5, were also the phases' most intense peaks.

Table 5 shows the estimated values of the coefficient of determination, mean absolute error (MAE) and relative stability for the MA-CaS samples. The ratio of the intensities of the Raman shifts at 412 cm−1 for MA and 157 cm−1 for CaS showed the best coefficient of determination (R2) and the lowest MAE value. Figure 11 also demonstrates from the relative stability analysis that the relative intensity ratio between 157 cm−1 and 412 cm−1 performed better than the other peaks used in this study. A linear regression constructed between the 412/157 cm−1 relative intensity and the MA phase fraction in the MA-CaS system is presented in Figure 12. In this study, the most intense Raman peaks, at 412 cm−1 for MA and 157 cm−1 for CaS, were the most suitable peaks for quantitative Raman analysis of samples containing only MA-CaS.
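Once a calibration line such as the one in Figure 12 is fitted, it is applied in the forward direction to unknown samples; the slope, intercept and measured ratio below are invented numbers purely to show the arithmetic.

```python
# Sketch of applying a fitted calibration line: given a slope/intercept from
# e.g. Figure 12 (values here are made up), estimate the MA phase fraction of
# an unknown MA-CaS sample from its measured 412/157 intensity ratio.
a, b = 28.0, 3.5          # hypothetical slope (wt. % per ratio unit) and intercept
ratio_412_157 = 1.8       # measured relative intensity for the unknown sample
ma_wt_percent = a * ratio_412_157 + b
print(f"Estimated MA content: {ma_wt_percent:.1f} wt. %")
```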
C12A7-CaS Samples

According to the analysis presented in Table 6, for the C12A7-CaS binary samples the intensity ratio of the Raman peak at 517 cm−1 for C12A7 to that at 157 cm−1 for CaS was the most suitable selection in this study, compared to the other identified peak ratios: the coefficient of determination and MAE values for the 517/157 peak ratio were the best. Additionally, the peaks in this ratio (517/157) are the most intense Raman peaks for C12A7 and CaS, as illustrated in Figure 6, which shows the Raman spectra for the C12A7-CaS samples. Furthermore, Figure 13 shows how the stability of the 517/157 Raman shift ratio performed compared to the other peak ratios. Therefore, the relative intensity ratio 517/157 provides a potential indicator for analyzing phase fraction changes in C12A7-CaS samples, based on the C12A7/CaS ratio. Figure 14 shows a linear regression plot of the C12A7 phase content as a function of the 517/157 relative intensities for the C12A7-CaS samples.

Tables 7 and 8 provide the estimated values for the binary samples of C3A-CaS and Al2O3-CaS, respectively.
In Table 7, for the C3A-CaS samples, the ratio of the intensities of the Raman shifts at 756 cm−1 for C3A and 157 cm−1 for CaS had the best coefficient of determination and the better MAE values. Similarly, Table 8 shows that the ratio of the intensities of the Raman shifts at 157 cm−1 for CaS and 420 cm−1 for Al2O3 had the best coefficient of determination, with the lowest MAE compared to the other peak ratios for these phases. Figures 15 and 16 also indicate that, of all the Raman peaks identified for CaS, C3A, and Al2O3, the relative stabilities of the ratios of the most intense peaks performed better. Therefore, based on this study, the ratios of the most intense Raman peaks for C3A-CaS and CaS-Al2O3, with the Raman peak at 756 cm−1 for C3A, at 416 cm−1 for Al2O3, and at 157 cm−1 for CaS, were shown to be the most suitable quantitative parameters for these samples. Figures 17 and 18 illustrate linear regressions constructed between the relative intensity and the phase fraction for C3A in C3A-CaS and for CaS in CaS-Al2O3, respectively.

Potential Limitations of the Study

The use of Raman spectroscopy as a characterization technique potentially has some disadvantages. For example, delay in the response of the detection system and variation in the incident laser power may induce measurement noise and decrease the repeatability of the measurements. In addition, fluorescence could affect the quality of the measured Raman spectra for sensitive samples; however, the samples used for this study were not fluorescence sensitive. Sample inhomogeneity could also contribute to the total error.
Power Management in Ultra-low Power Systems

The evolving vision of the Internet-of-Things (IoT) will revolutionize various applications such as remote health monitoring, home automation and remote surveillance. It has been projected that by 2025, there will be 1 trillion IoT devices influencing our daily lives. This will result in the generation of an enormous amount of data, which will have to be stored, processed and transmitted efficiently and reliably. Although advancements in Integrated Circuit (IC) design and the availability of various Ultra-low Power (ULP) circuit components have helped us to visualize an ecosystem of numerous internet-connected devices, the overall system integration will become a major challenge. A System-on-Chip (SoC) catering to IoT applications is expected to contain many different circuit components such as sensors and Analog-Front-Ends (AFEs) for real-time signal acquisition, analog-to-digital converters, digital signal processors, memories, wireless transceivers etc. All these components have different supply voltage requirements and power profiles. Hence, power delivery to such components in an SoC will play an important role in the overall system architecture. Although battery-powered systems have traditionally worked well in portable electronics, in an IoT ecosystem the cost of battery replacement in a trillion-sensor-node network will be enormous. In many applications, such as remote surveillance, systems require a long operational lifetime. Moreover, system deployment should be unobtrusive, and hence such systems should have small form factors. The above requirements are hard to meet using conventional battery-powered systems. Hence, in most IoT SoCs, there is a strong motivation for an integrated Power Management Unit (PMU) with energy harvesting capability for near-perpetual battery-less operation, which can provide a range of supply voltage rails to satisfy the electrical specifications of different functional units. This dissertation will address the design challenges related to energy autonomy and power delivery in a wireless sensor node. We propose a fully integrated energy harvesting platform with the capability to harvest from multiple sources of energy, such as indoor solar and thermoelectric generators (TEGs). Additionally, we propose a power-efficient supply regulation scheme to meet the electrical specifications of the various components of a self-powered, battery-less SoC. Finally, we demonstrate several ULP digital and mixed-signal circuit components which can be leveraged in an energy-autonomous system. The proposed solutions to power delivery will enhance the operational lifetime, reduce the overall form factor and contribute towards attaining energy autonomy to facilitate a wide range of applications related to the IoT.

Introduction

1.1 Motivation for energy harvesting and power management

Technology scaling and shrinking device dimensions have helped circuit designers to implement battery-operated computing systems such as laptops, smartphones etc. with a high level of system integration. However, applications such as surveillance, remote health monitoring etc.
require non-invasive, unobtrusive systems with extremely small form factors and a long shelf life. Hence, if such systems are designed to be battery-operated, they will be limited by the size of the battery. Moreover, battery replacement will be expensive if needed in large numbers or in applications such as remote surveillance. In such scenarios, energy harvesting from ambient sources, such as solar, thermal and vibration energy, provides a viable solution. The harvested energy can be stored in an energy reservoir, such as a supercapacitor, and can be used by the system when required. Thus, harvesting energy from ambient sources can theoretically provide a near-perpetual system lifetime and enable further shrinking of system form factors. However, an energy-harvesting, self-powered system comes with the following design challenges, which will be addressed in this work:

§ System sustainability

Energy harvesting systems which can harvest from only one ambient source must address a major limitation: how the system will operate when the source is unavailable. In such a scenario, a system can limit the range of applications and supported features in order to prevent the complete discharge of the energy buffer or reservoir. Another approach to resolving this limitation is to harvest from multiple sources of ambient energy. If a system can determine the dominant source of energy and utilize that source for harvesting and storing energy in an energy reservoir, then that system can operate reliably under varying environmental conditions. In this work, we will investigate multi-modal harvesting and explore circuits and methods to determine which source provides peak power and to harvest energy from that source.

§ System start-up

In a self-powered system, a self-starting mechanism is necessary to generate a power-on-reset (POR) signal and kick-start system operation. The control logic of the energy harvester, as well as the other circuit components, which are usually implemented in CMOS technology, can only operate above a certain minimum voltage. Assuming the worst case, the start-up mechanism needs to be designed under the assumption that the storage reservoir is completely empty. If the energy harvester can cold-start at a low input voltage, the overall system can be more autonomous. In this work, we will investigate ultra-low-voltage start-up circuits to enable energy harvesting at ultra-low voltage levels.

§ Power efficiency

Achieving high end-to-end power efficiency is a key requirement, especially in a scenario where the ambient source of energy, such as thermal, can provide only tens of µW of power. It is essential to minimize the power loss in the energy harvester and utilize nearly all of the available power for storing energy in the energy reservoir or for powering system operation. Hence, the powertrain architecture and the control circuits of the energy harvester need to be designed to minimize power loss. Another method to maximize power efficiency is to track the maximum power point of the harvester for a given environmental condition using a Maximum-Power-Point-Tracking (MPPT) scheme. In this work, apart from designing the powertrain architecture and control circuits to be power-efficient, we will explore adaptive MPPT schemes which can work for multiple sources of ambient energy.
Motivation for integrated supply voltage regulation

Supply regulation plays a major role in delivering power to various hardware components in an SoC, such as microprocessor cores, memories, I/O interfaces, wireless transceivers and other analog and mixed-signal circuits. Since a modern SoC involves a high degree of system integration, integrated voltage regulators have become common, and with technology scaling the efficiency and performance of such integrated regulators have improved. However, designing an integrated voltage regulator, especially for an ultra-low-power IoT SoC, involves several key challenges.

Goals

The overall goal of this work is to evaluate, design and demonstrate a complete power management system encompassing energy harvesting from multiple sources and supply voltage regulation, along with Ultra-low Power (ULP) circuit components such as comparators and ULP processors, implemented using novel circuit topologies or in new process technologies. The individual goals of this work include:

§ A model and framework to understand various loss mechanisms in different power converter topologies
§ A hybrid Maximum-Power-Point-Tracking (MPPT) circuit which can autonomously adapt to changing environmental conditions as well as support different ambient sources such as solar and thermal energy
§ A mechanism for low-voltage start-up
§ A low-power voltage reference circuit
§ A complete energy harvesting and supply regulation platform
§ A low-power supply voltage droop measurement scheme using all-digital circuits
§ An evaluation of the sensitivity of latch- and register-based circuits to power supply variation
§ A design flow for dynamic IR-drop analysis and decoupling capacitor insertion
§ An ultra-low-power comparator design with low input-referred offset and threshold control
§ An evaluation of new process technologies for energy-efficient implementations of an MSP430 digital processor and an FIR filter

Energy Harvesting from Multiple Ambient Sources

2.1 Motivation

Advancements in integrated circuit design have led to the development of Ultra-low Power (ULP) electronics such as wireless sensor nodes for surveillance, health monitoring and home automation applications. This new generation of smart electronic sensors and devices needs to have small form factors, especially in biomedical applications, so that they are non-invasive. Today, batteries represent the dominant source of energy in electronic systems, but they largely dictate the overall size, making systems bulky and not scalable. In the case of surveillance applications, such systems need to be deployed in large numbers and in remote locations, so the cost of battery replacement is high. Thus, such systems need a compact, low-cost, lightweight and near-perpetual source of energy for a long operational lifetime. Energy harvesting from ambient sources, such as solar and thermoelectric energy, vibration/motion and RF, provides a viable alternative to battery-powered systems. The reliability and operational lifetime can be further improved if the system has the ability and the necessary electronics to harvest from multiple harvesting modalities.

Background and Prior Art

The primary goal of any energy scavenging system is to harvest energy from ambient sources, such as light, motion and thermoelectric energy, and store it in a storage device or energy buffer, such as a supercapacitor.
Another approach is to use the harvested energy to charge a rechargeable battery. However, most state-of-the-art high-energy-density rechargeable batteries have limited charge-discharge cycles, making battery replacement unavoidable and thus restricting the system lifetime. A good supercapacitor can support more than 10,000 charge-discharge cycles [1] and thus can be leveraged in energy-autonomous systems, provided that the supercapacitor has low leakage and a small form factor to meet the size restrictions of a wireless sensing node. Energy storage in a wireless sensor node is necessary because the peak currents needed during wireless transmission cannot be supported directly by an energy harvester. Hence, in such scenarios, a storage device such as a supercapacitor or a rechargeable battery acts as a buffer to support the peak current requirements of the system. Based on the overall powertrain architecture, integrated energy harvesters can be broadly classified into two categories:
1. Inductor-based boost or boost-buck converters.
2. Voltage multipliers or charge pumps based on switched-capacitor topologies.
Inductor-based topologies have been found to be more power-efficient in systems where a wide range of input voltages is available from Thermoelectric Generators (TEGs), solar cells etc. Inductor-based topologies also provide better efficiency than a charge pump over a wide range of load currents. However, inductor-based switching converters need off-chip passives such as high-Q inductors and extra package pins, which increases cost. Charge-pump circuits can be fully integrated and hence can be incorporated in systems requiring smaller form factors.

Sources of energy harvesting in micropower systems

Most self-powered wireless sensing systems are designed to harvest energy broadly from three different ambient sources: thermal energy, indoor/outdoor light energy, and energy from vibration/motion/RF. In this work, we focus on energy harvesting from solar and thermoelectric energy, and hence we only discuss the physics and operating principles of thermoelectric generators and photovoltaic cells.

Thermoelectric energy/Thermoelectric generators (TEG)

Thermal energy harvesters are based on the Seebeck effect: when two junctions made of two different conductors are kept at different temperatures, an open-circuit voltage develops between them. Fig. 2.1(a) shows a diagram of a thermocouple, which is the most basic voltage generator based on the Seebeck effect. The two pillars, or legs, are made of two different materials and connected by a metallic interconnect. When a temperature differential ΔT is established between the bottom and the top of the pillars, a voltage V develops between the points A and B. This voltage is given by

$$V = S \, \Delta T,$$

where S is the overall Seebeck coefficient. The primary component inside a TEG is a thermopile (shown in Fig. 2.1(b)), which is constructed by connecting a large number of thermocouples electrically in series such that the contribution of each thermocouple to the voltage adds up. Other components of a TEG may include a radiator or heat sink for efficient heat dissipation into the ambient, or structures such as thermal shunts to direct the absorbed heat into the legs of the thermocouples for higher efficiency. Fig.
2.2 shows the equivalent electrical model of a TEG. The electrical resistance R_EL of the thermopile is proportional to the resistivity ρ of the thermoelectric material and to the number of thermocouples:

$$R_{EL} = \frac{2 n \rho h}{a^2},$$

where n is the number of thermocouples connected in series, h is the height of the legs and a is the lateral dimension of the pillars. The maximum available output power into a matched load (Z_LOAD = R_EL) is thus given by

$$P_{MAX} = \frac{(S \, \Delta T)^2}{4 R_{EL}}.$$

Light or solar power panels provide an inexhaustible source of energy, especially in outdoor conditions. Photovoltaic energy harvesters are based on the photoelectric effect, which is the ability of photovoltaic materials, such as crystalline and amorphous silicon, to emit electrons after absorbing light. The number of photons depends on the light intensity, and if a sufficient number of photons are incident on a photovoltaic material, electricity can be obtained. Hence, the power which can be harvested from a solar cell depends on the light intensity. However, the main disadvantage of using a photovoltaic source is the reduced output power in indoor light conditions or in conditions where the light intensity is not consistent. Table 2.1 compares photovoltaic, thermoelectric and wind/motion power sources in outdoor/industrial and indoor conditions.

Table 2.1: Comparison of solar, TEG and other power harvesting sources in indoor and outdoor conditions [2]

Source | Indoor condition | Outdoor condition
Solar panel | 100 µW/cm² @ 10 W/cm² | 10 mW/cm² @ STC
Wind turbine generator | 35 µW/cm² @ <1 m/s | 3.5 mW/cm² @ 8.4 m/s
Thermoelectric generator | 100 µW/cm² @ 5 °C gradient | 3.5 mW/cm² @ 30 °C gradient
Electromagnetic generator | 4 µW/cm³ @ human motion (Hz) | 800 µW/cm³ @ machine (kHz)

The power efficiency of photovoltaic cells reduces drastically in indoor conditions. Connecting multiple solar cells electrically in series can increase the voltage generated from a solar panel but also increases the output impedance and limits the total available power. Fig. 2.3 shows the equivalent electrical model of a solar cell [2]. The current source I_L models the generated photoelectric current, which depends on the light intensity. I_D denotes the current due to recombination of carriers. The shunt (R_SH) and series (R_S) resistances account for the solar cell non-idealities and second-order effects, such as leakage currents around the edge of the cell, contact resistance and the resistance of the material. I_PV is the equivalent photovoltaic current and V_PV is the equivalent output voltage. Hence, the available power from a solar cell is given by

$$P_{PV} = I_{PV} \, V_{PV}.$$

In the LS (low-side) phase of the boost converter, the inductor current ramps up, thereby storing energy in the inductor. As a first-order approximation, neglecting the parasitic DC resistance of the inductor and assuming that M_LS has a negligible voltage drop, we have

$$I_{PEAK} = \frac{V_{IN}}{L_{BOOST}} T_L,$$

where L = L_BOOST is the boost inductor, I_PEAK is the peak current in the inductor, and T_L is the ON-time of the LS power transistor, which is governed by the pulse width of the LS pulse.
In the HS phase, the peak inductor current ramps down to zero and the stored energy in the inductor is delivered to the load through M_HS by synchronous rectification. It is important that M_HS turns off when the inductor current reaches zero. If M_HS turns off after the inductor current changes direction, then V_STORE is discharged due to reverse conduction. If M_HS turns off early, then the node V_X goes high, turning on the p-n junction diode of M_HS, and the extra energy is dumped across the diode. In either case there is a loss in efficiency, as some amount of energy is lost either during reverse conduction or across the diode. Ignoring the parasitic DC resistance of the inductor and assuming a negligible voltage drop across M_HS, we have

$$I_{PEAK} = \frac{V_{STORE} - V_{IN}}{L_{BOOST}} T_H.$$

Thus, in the ideal case, the boost conversion factor is given by

$$\frac{V_{STORE}}{V_{IN}} = 1 + \frac{T_L}{T_H}, \qquad (1)$$

where T_L is the ON-time of M_LS, governed by the pulse width of the LS pulse, and T_H is the ON-time of M_HS, governed by the pulse width of the HS pulse. Hence, by modulating T_L and T_H, the required voltage gain can be achieved. However, (1) does not account for the conduction losses in the inductor and power transistors or for the switching losses. The total conduction loss during the LS cycle, P_COND,L, is given by [8]

$$P_{COND,L} = \frac{I_{PEAK}^2}{3} \, R_L \, \frac{T_L}{T_{CYC}},$$

where R_L is the total resistance, including the parasitic resistance of the inductor and the ON resistance of M_LS, and T_CYC is the switching period. Similarly, for the HS cycle, the total conduction loss P_COND,H is given by

$$P_{COND,H} = \frac{I_{PEAK}^2}{3} \, R_H \, \frac{T_H}{T_{CYC}},$$

where R_H is the total resistance, including the parasitic resistance of the inductor and the ON resistance of M_HS. The switching loss P_SW and the leakage P_LEAK are constant for a given control scheme and depend on the dimensions of M_LS and M_HS [8]. Hence, the total loss is given by

$$P_{LOSS} = P_{COND,L} + P_{COND,H} + P_{SW} + P_{LEAK}.$$

In DCM mode, the sources of energy loss are conduction loss in the inductor and power FETs, and switching loss due to the charging and discharging of the gate capacitance and gate-drive circuits of the power FETs. Subthreshold leakage also contributes significantly to the loss, especially when the load currents are extremely small. For ultra-light-load systems, such as [5], the leakage and switching losses are more dominant than the conduction loss. Hence, in [5], a charge-pump-based voltage doubler circuit is used in the control scheme to super cut-off the power FETs, resulting in 53% efficiency at a 1.2 nW load with 544 pW of quiescent power consumed by the converter. In [6], the boost converter, operating in DCM, can harvest energy from a TEG with an open-circuit voltage as low as 20 mV. To achieve zero-crossing detection, a comparator monitors the V_X node and a counter keeps track of the ON-time of the HS power FET. In [7], a multi-modal energy harvesting scheme is proposed which can harvest from TEG, solar or piezoelectric harvesting modalities by using a shared-inductor scheme. The inductor is multiplexed among the harvesting modalities, and a dual-path approach is implemented in the powertrain architecture to support a wide range of load currents. In [8], a peak-inductor-current control scheme is implemented to optimize conduction and switching losses. A fast zero-crossing detector with offset compensation is implemented for synchronous rectification. The boost converter in [8] can harvest from a TEG with an open-circuit voltage as low as 10 mV and achieves a peak efficiency of 83%.
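The DCM relations above can be checked numerically; the sketch below computes the peak inductor current, the HS ramp-down time and the conduction-loss terms for an assumed set of component values (all numbers are illustrative, not taken from [8]).

```python
# Sketch of the DCM boost-converter relations derived above: the LS on-time
# sets the peak inductor current, volt-second balance sets the HS time, and
# the triangular current waveform gives the I^2*R/3 conduction-loss terms.
# All component values are illustrative assumptions.
V_IN, V_STORE = 0.05, 1.2      # TEG input and storage voltages (V)
L = 4.7e-6                     # boost inductor (H)
T_L = 2e-6                     # LS on-time (s)
T_CYC = 20e-6                  # switching period (s)
R_L, R_H = 0.5, 0.6            # total LS / HS path resistances (ohm)

I_PEAK = V_IN * T_L / L                           # current ramp during LS phase
T_H = L * I_PEAK / (V_STORE - V_IN)               # HS time for the ramp-down to zero
P_COND = (I_PEAK**2 / 3) * (R_L * T_L + R_H * T_H) / T_CYC
P_OUT = 0.5 * L * I_PEAK**2 / T_CYC               # ideal average output power
print(f"I_peak = {I_PEAK*1e3:.1f} mA, T_H = {T_H*1e6:.2f} us, "
      f"P_cond = {P_COND*1e6:.2f} uW, P_out = {P_OUT*1e6:.2f} uW")
```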
Another important requirement in self-powered energy harvesting systems is that the system needs to be self-starting. Since the output voltage provided by a TEG is below 100 mV under ambient conditions, a start-up scheme is necessary to power the control circuits and enable energy harvesting. The start-up scheme does not need to have a very high efficiency, as it is only needed to start the control circuits. Several start-up techniques discussed in the literature leverage technology, process, external kick-start mechanisms and ambient RF energy to enable start-up. In [8], an on-chip cold-start circuit and an external RF kick-start mechanism are leveraged to power the control circuits during start-up. The cold-start circuit consists of a ring oscillator and a voltage doubler to generate the control signals for the boost converter to start energy harvesting. The RF kick-start circuit consists of an RF switch and a broadband rectifier implemented using the Dickson topology, operating in the subthreshold regime. A similar RF kick-start mechanism is described in [5]. In [14], a mechanically assisted switch is used in an auxiliary boost converter topology to begin energy harvesting and charge a storage capacitor. The auxiliary boost converter with the mechanically assisted switch is disabled when the voltage on the storage capacitor is high enough to power the control circuits of the primary boost converter. In [15], an external transformer and a low-V_T NMOS transistor are connected to form positive feedback, such that device noise is able to start oscillations, which are used to transfer and build up energy on a storage capacitor. In [13], an LC tank oscillator is used for low-voltage DC-to-AC conversion, followed by a voltage multiplier to boost and rectify the AC signal to a higher DC voltage for start-up.

Charge Pumps and Switched-Capacitor Topologies

Charge pumps and switched-capacitor-based architectures provide a fully integrated solution. An arrangement of CMOS switches controlled by clock signals (which are mostly out-of-phase but can be poly-phase), along with charge storage and transfer capacitors, forms a network known as a switched-capacitor network (SCN). One of the key goals is to optimize the overall output impedance of a switched-capacitor-based converter. Fig. 2.6 shows a simple first-order model of a switched-capacitor converter with a DC voltage gain of N.
The voltage drop across the output impedance R_O models all the conversion losses. The resistive output impedance accounts for the switching and conduction losses. Additional losses due to gate drive in the control scheme, short-circuit currents due to overlapping control signals, and bottom-plate parasitic capacitances can be incorporated into this model. There are two asymptotic limits to the output impedance, based on the switching frequency of the control signals. The slow switching limit (SSL) impedance is calculated under the assumption that the switch and interconnect resistances are negligible, and accounts for the loss due to charge transfer through the capacitors. The fast switching limit (FSL) impedance accounts for the conduction loss through the switches and other resistive components. The switched-capacitor network (SCN) topology plays a major role in both the SSL and FSL impedance estimations. The conduction losses due to the SSL and FSL impedances [10] are given by

$$P_{SSL} = \frac{M_{CAP} \, I_{LOAD}^2}{C_{TOT} \, F_{SW}}, \qquad P_{FSL} = \frac{M_{SW} \, I_{LOAD}^2 \, R_{ON}}{W_{SW}},$$

where I_LOAD is the load current; F_SW denotes the switching frequency of the control signals; M_CAP and M_SW are constants determined by the topology; C_TOT is the total flying capacitance; R_ON is the ON-resistance density, measured in Ω·m; and W_SW denotes the total width (in m) of all switches. Apart from the conduction loss in the switches and transfer capacitors, there are shunt losses due to the switching of the bottom-plate parasitic capacitance associated with the flying capacitors. Generally, metal-insulator-metal (MiM) capacitors have lower bottom-plate parasitics than the gate capacitance of devices. The loss due to the bottom-plate capacitance (P_BOTT) is given by [10]

$$P_{BOTT} = M_{BOTT} \, C_{BOTT} \, V_O^2 \, F_{SW},$$

where M_BOTT is determined by the topology, V_O is the voltage swing across the bottom-plate parasitic capacitance and C_BOTT is the total bottom-plate parasitic capacitance. There are also switching losses (P_GATE) associated with the gate capacitance of the transistors in the clocked control circuits [10], which generate the out-of-phase non-overlapping clocks for charge transfer:

$$P_{GATE} = C_{GATE} \, W_{SW} \, V_{SW}^2 \, F_{SW},$$

where V_SW denotes the voltage swing and C_GATE is the gate capacitance density (F/m). Thus, the total loss P_LOSS in any switched-capacitor-based converter that needs to be minimized is given by

$$P_{LOSS} = P_{SSL} + P_{FSL} + P_{BOTT} + P_{GATE}.$$

Thus, for a given input voltage (V_IN), load current (I_LOAD), output ripple and desired conversion ratio, it is important to select an appropriate topology, switching frequency and number of clock phases for maximum efficiency. The area allocation for the switches and capacitors, along with parameters such as the bottom-plate parasitic capacitance and the switch resistance per unit width, plays an important role in realizing the peak efficiency of a switched-capacitor power converter. In [11], an integrated charge pump with a variable number of stages and a constant switching frequency per stage is used to obtain a peak efficiency of 70% and support a wide range of input power levels from 10-1000 µW. In [12], the authors propose a fully integrated self-oscillating switched-capacitor-based energy harvester with 9X-23X configurable voltage conversion ratios, in which voltage doublers are cascaded. Clock generation and level shifting within each doubler are implemented using a self-oscillating architecture, eliminating the need for power-hungry ring oscillators and clock generation circuits. A leakage-based delay element allows frequency control for a wide load range of 5 nW-5 µW with 40% efficiency and less than 3 nW of static power consumption.
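The loss terms above trade off against the switching frequency: P_SSL falls with F_SW while P_BOTT and P_GATE rise, so the total loss has a minimum. The sketch below sweeps F_SW for an assumed set of topology constants and device sizes (all values are illustrative, not from [10]).

```python
# Sketch of the switched-capacitor loss trade-off: SSL loss falls with the
# switching frequency while bottom-plate and gate losses rise, so the total
# loss has an optimum F_SW. Topology constants and sizes are assumptions.
import numpy as np

I_LOAD = 10e-6       # load current (A)
C_TOT = 1e-9         # total flying capacitance (F)
M_CAP, M_SW = 1.0, 4.0
R_ON_W = 1e-3        # on-resistance density (ohm*m)
W_SW = 1e-4          # total switch width (m)
M_BOTT, C_BOTT, V_O = 0.1, 5e-12, 0.5
C_GATE, V_SW = 1e-9, 1.0   # gate capacitance density (F/m), gate swing (V)

F = np.logspace(3, 7, 400)                      # candidate switching frequencies
P_SSL = M_CAP * I_LOAD**2 / (C_TOT * F)
P_FSL = M_SW * I_LOAD**2 * R_ON_W / W_SW        # frequency-independent
P_BOTT = M_BOTT * C_BOTT * V_O**2 * F
P_GATE = C_GATE * W_SW * V_SW**2 * F
P_TOT = P_SSL + P_FSL + P_BOTT + P_GATE
print(f"optimal F_SW ~ {F[np.argmin(P_TOT)]:.3g} Hz")
```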
Maximum-Power-Point-Tracking: High end-to-end power efficiency in self-powered systems across a wide range of environmental conditions is a necessity. Since the maximum power available from TEGs and solar cells varies significantly with environmental conditions, a built-in method which keeps track of the Maximum Power Point (MPP) with changing conditions is extremely useful. By keeping track of the MPP, which is roughly around 50% of the open-circuit voltage of a TEG [3] or around 73-80% of the open-circuit voltage of an indoor solar cell [3], the system can extract the maximum power available in any condition. A Maximum Power Point Tracking (MPPT) scheme is even more useful if the system needs the flexibility to choose and harvest energy from multiple modalities such as TEG, solar, piezo etc. The basic idea behind MPPT is that a boost converter or a charge pump needs to present an optimal input impedance such that the source operates at its MPP under different environmental conditions. In case the system needs the flexibility to choose and harvest from multiple modalities at the same time, the range of input impedance required for MPP varies significantly. For instance, in [2] the output impedance of a solar cell at different MPPs, subject to varying degrees of illumination, varies between 27-68kΩ, whereas the output impedance of a TEG at MPP is roughly fixed at 82kΩ. Thus, if the system needs the capability to harvest maximum power from diverse harvesting modalities, the MPPT circuit needs to tune the input impedance of the boost converter or the charge pump across a wide range. Several techniques to implement MPPT have been discussed in the literature. We will discuss the theory behind some of the more common methods, which are implemented in ULP systems.

Hill Climbing/Perturb and Observe: Hill Climbing (HC) involves perturbing the duty cycle of a power converter (for instance, T_H and T_L for the boost converter in Section 2.2.2), while Perturb and Observe (P&O) involves perturbing the voltage provided by a TEG or a solar cell. By allowing the output voltage from a TEG or a solar cell to increase or decrease, the output power is monitored using a voltage and/or a current sensor. If the power increases with an increase in voltage, the perturbation is continued in the same direction (the voltage is increased by a finite step); but if the power decreases with an increase in voltage, the direction of perturbation is reversed (the voltage is decreased by a finite step). This process is repeated iteratively, such that the final operating point oscillates around the MPP. The degree of oscillation can be reduced using a smaller step size, but this usually results in a longer response time to achieve MPP operation. However, under sudden changes in environmental conditions, especially when conditions change rapidly before the MPPT circuit responds, the HC/P&O methods do not provide an optimal solution.

Incremental Conductance: The theory behind the incremental conductance method is that the slope of the P-V curve of a TEG or a solar cell is zero at the MPP. The slope is positive to the left of the MPP and negative to the right of it. At the MPP,

ΔP/ΔV = Δ(V·I)/ΔV = I + V·(ΔI/ΔV) = 0, i.e., ΔI/ΔV = -I/V

where V and I are the instantaneous output voltage and current and P is the instantaneous power from a TEG or solar cell. ΔP and ΔI represent the change in instantaneous power and current subject to an instantaneous change in output voltage, ΔV. Hence, by keeping track of the instantaneous conductance, I/V,
and the incremental conductance, ΔI/ΔV, MPP operation can be achieved.

Fractional Open-Circuit Voltage: From the P-V curves in Fig. 2.3 (Section 2.2.1), it is evident that the output voltage at MPP (V_MPP) is a fraction of the open-circuit voltage (V_OC) of a solar cell. This fraction varies roughly between 0.71 and 0.78 [3] with varying solar irradiance conditions, since V_OC and the output power change with light intensity. For a TEG, with varying degrees of temperature differential (ΔT), V_MPP is roughly 50% of V_OC [2]. Hence, for MPP operation,

V_MPP = k · V_OC

where k varies from 0.71-0.78 in the case of solar cells, while it is approximately 0.5 for TEGs. Thus, k needs to be determined empirically by characterizing a TEG or a solar cell under varying environmental conditions. Once k is known, V_MPP can be computed, and the output voltage of a TEG/solar cell can be compared with V_MPP using an on-chip comparator to determine whether the system operates at the MPP. Although this method provides a low-cost, low-power solution, it is not accurate with changing environmental conditions. For instance, in solar-energy harvesting, k varies significantly with environmental conditions, such that the system operates near the MPP but not at the actual MPP. Additionally, if the system needs the capability to harvest from multiple harvesting modalities, different values of k are necessary, which need to be adjusted dynamically, resulting in a more complicated implementation which might consume higher power.

Fractional Short-Circuit Current: This method is similar to the fractional open-circuit voltage method, but this scheme leverages the short-circuit current (I_SC) instead of the open-circuit voltage (V_OC) to estimate the MPP. Just like the fractional open-circuit voltage method, the current supplied at MPP (I_MPP) is a fraction of I_SC, and this fraction needs to be empirically evaluated. However, measuring I_SC during operation can be difficult, because a separate control scheme is needed to periodically short-circuit the harvester and a current sensor is needed to measure I_SC, which increases the number of components and the cost. Most of the above techniques have been implemented in ULP energy harvesting systems. The MPPT circuit in [8] uses a fractional open-circuit voltage method and assumes that the MPP of a TEG is 50% of the open-circuit voltage [2], while the MPP of a solar cell is 73-80% of the open-circuit voltage [3]. The MPPT circuit in [8] uses an external resistive divider to sample the MPP voltage (V_MPP). When the boost converter is functional, the energy source is loaded and its output voltage, V_IN, goes down. An on-chip comparator monitors V_IN and compares it with V_MPP. As soon as V_IN is less than V_MPP, the comparator issues a signal to disable the boost converter, such that the energy source is again unloaded and V_IN rises. When V_IN is again greater than V_MPP, the comparator issues a pulse to engage the boost converter, and the cycle is repeated. A similar method for MPPT is proposed in [18]. In [14], the switching frequency is tuned using digital circuits to modulate the input impedance of the boost converter. The disadvantage of this method is that the frequency range is limited; hence, the range over which the input impedance of the converter can be tuned is also limited.
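As a behavioral illustration of the P&O loop described above, the following sketch shows the perturbation logic in a few lines. get_power() is an assumed stand-in for the voltage/current sensing hardware, and the toy P-V curve is not from any measured device.

# Perturb-and-observe MPPT loop (behavioral sketch; get_power() stands in
# for the voltage/current sensing described above and is an assumed interface).

def perturb_and_observe(get_power, v_start=0.2, v_step=0.01, n_iter=100):
    """Iteratively adjust the harvester operating voltage toward the MPP."""
    v, direction = v_start, +1
    p_prev = get_power(v)
    for _ in range(n_iter):
        v += direction * v_step          # perturb the operating voltage
        p = get_power(v)                 # observe the resulting power
        if p < p_prev:                   # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v                             # final point oscillates around the MPP

# Toy P-V curve of a source with V_OC = 0.5 V (power peaks near 0.5*V_OC,
# matching the TEG rule of thumb quoted above).
if __name__ == "__main__":
    pv = lambda v: max(v * (0.5 - v) / 0.1, 0.0)   # crude TEG-like model
    print("converged near V_MPP = %.3f V" % perturb_and_observe(pv))

A smaller v_step shrinks the steady-state oscillation around the MPP at the cost of a slower response, which is exactly the trade-off noted in the text.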
Research Questions: § Assuming the system needs the capability to harvest from both solar as well as thermal energy, what kind of baseline energy harvesting architecture, delivers the peak power efficiency?What factors will affect this decision and to what degree?§ On what factors will the multiplexing scheme to choose between solar and thermal energy harvesting depend on and how will they affect the implementation?§ What kind of Maximum-Power-Point-Tracking scheme will work in a hybrid energy harvesting system?Will one global scheme for MPPT work for both solar and thermal energy harvesting or separate independent schemes be needed?§ What kind of start-up schemes would work best in a hybrid energy harvesting system?What will be the architecture for start-up and what factors will contribute to this decision? Approach: In this work, we will attempt to answer the research questions by focusing on three major components of an energy-harvesting system: Powertrain Architecture, Maximum-Power-Point-Tracking (MPPT) and startup techniques.§ Energy delivery or powertrain architecture § Power Delivery Modeling: To achieve peak efficiency in a power delivery system, it is important to investigate what are the sources of power loss in that system.Since the various loss mechanisms such as conduction, switching loss, leakage etc. are heavily dependent on many variables such as load currents, output voltages, input voltages, biasing in the control circuits etc., it is important to evaluate the performance trends with respect to these variables.To achieve this goal, we will develop first-order models to describe the loss mechanisms of various powertrain topologies described in Section 2.2 using mathematical equations.Hence, based on design specifications (such as load current, input voltage), the model will help the designer to know what kind of power-loss is dominant.The model will also compare the performance of multiple converter topologies (such as inductor-based boost converter or a switched-capacitor based charge pump) to a firstorder.The goal for the model is not to achieve SPICE-level accuracy for a circuit topology but to aid the designer in design-space exploration.§ Control Scheme: Once we have a fair estimate of what kind of topology we should implement, we will focus on designing the control system architecture for the selected powertrain topology.The control system will include the following circuits and components: • Power-efficient, ULP comparators for making decisions • Ring-Oscillator, which can be current-starved or ULP relaxation oscillators for providing control either to the clocked-comparators or to the switches present in the powertrain.• Level Converters for signals crossing voltage domains or for providing sufficient gate drive to the switches.• Digital control logic.For instance, assuming a boost converter topology, if a single inductor needs to be shared across different harvesters, resource multiplexing might be necessary.• We will also design and evaluate efficient power-on-reset (POR) schemes for startup.§ Maximum Power Point Tracking (MPPT) Algorithms: In a system, which needs to harvest energy from two or more sources, the MPPT circuit needs to be flexible and adaptive.To investigate this, we will incorporate the following methods: § Characterizing state-of-the-art energy harvesters To design an energy harvesting system, a fundamental understanding of the output characteristics (such as open-circuit voltage, short-circuit current, output impedance etc.) 
of an energy source is important. Moreover, it is important to study how these characteristics change with environmental conditions. This will enable us to estimate the design specifications of the energy harvester. We will evaluate several commercially available TEGs and solar cells and study their output power vs. output voltage characteristics subject to different environmental conditions, such as temperature, light intensity etc. We will also study the output power vs. output impedance characteristics for TEGs and solar cells.
§ Design Exploration for MPPT algorithms: Based on the output characteristics of TEGs and solar cells, we will evaluate some of the MPPT algorithms described in Section 2.2.4. Depending on the range of output impedances at the maximum power point for both TEGs and solar cells, adopting a hybrid approach to MPPT might be a possibility. Hill climbing and incremental conductance methods provide better accuracy at the cost of higher design complexity. To investigate lower-power implementations of such algorithms in hardware, we will explore the following components and techniques:
• Current sensors: Though the analog circuit techniques traditionally used for sensing currents consume significant power, we will investigate low-power current sensing and understand the trade-offs between performance and accuracy.
• Digital circuit techniques: We will also investigate digital circuit techniques to quantify the input power as a digital equivalent. For instance, by estimating how fast a capacitor is charged by a TEG or solar cell, an estimate of the input power can be made. A comparator, a relaxation or ring oscillator, and simple digital circuits such as counters can be leveraged to determine the charging time of the capacitor (see the sketch after this list).
§ Start-Up Techniques: Power efficiency is not a critical factor during start-up, as proper functionality is crucial. We will investigate the following techniques for system start-up:
• Oscillator and Voltage Multiplier: We will further investigate the work in [13] and [8]. The oscillator can be an LC tank oscillator followed by a Dickson multiplier, or a conventional current-starved ring oscillator with a charge-pump based voltage doubler.
• RF Kickstart: We can use RF as a one-time source and use a rectifier to charge an input capacitor and power ring oscillators, which can generate control signals for an auxiliary boost converter to begin energy harvesting.
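The capacitor-charging power proxy mentioned in the digital-circuit-techniques bullet above can be sketched as follows. The sense capacitor value, comparator thresholds and counter clock are hypothetical values chosen only for illustration.

# Digital input-power proxy: time how long the source takes to charge a known
# capacitor between two comparator thresholds, then convert time to power.
# C_SENSE, the thresholds and the counter clock are illustrative assumptions.

C_SENSE = 100e-9       # sense capacitor (F)
V_LO, V_HI = 0.1, 0.3  # comparator thresholds (V)
F_CNT = 32.768e3       # counter clock (Hz)

def power_from_count(count):
    """Estimate average input power from the counter value.

    Energy stored while charging from V_LO to V_HI is C*(V_HI^2 - V_LO^2)/2;
    dividing by the measured charge time gives average power. This ignores
    the source's load dependence, so it is only a proxy for ranking conditions.
    """
    t_charge = count / F_CNT
    return 0.5 * C_SENSE * (V_HI**2 - V_LO**2) / t_charge

if __name__ == "__main__":
    for count in (10, 100, 1000):   # fewer counts => faster charge => more power
        print(count, "->", power_from_count(count), "W")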
Evaluation Metrics: A self-powered system which harvests from multiple sources of energy is an active area of research. There are very few works in the literature which demonstrate a self-sustaining system that can intelligently choose between different sources of energy for harvesting. We will evaluate the proposed multi-modal energy-harvesting system based on the following metrics:
§ Efficiency: We will measure the overall power efficiency across a range of input voltages and load currents. We will then test the functionality of the system with a TEG and an indoor solar cell and demonstrate energy harvesting with changing environmental conditions as a proof of concept for multi-modal energy harvesting.
§ Minimum input voltage: We will evaluate the minimum input voltage required from a TEG or a solar cell to enable energy harvesting and compare it with the state-of-the-art. If the system can harvest energy from a low-voltage source, the overall lifetime of the system can increase. As discussed earlier, a lower cold-start voltage will ensure that the system is functional across a wider range of environmental conditions or in the event of a total loss of energy in the storage capacitor.
§ Maximum output voltage: We will monitor the maximum output voltage while harvesting simultaneously from thermal and solar energy under different environmental conditions.
§ Area: A fully integrated energy harvester without any external passives will provide a significant advantage to the overall system size. We will assess the feasibility of a fully integrated on-chip energy harvester.

Anticipated Results: Using the techniques and evaluation metrics discussed, we hope to extend the state-of-the-art ULP energy-harvesting systems such as [5][7][8]. The proposed scheme should demonstrate energy harvesting from both TEGs and solar cells. With an adaptive MPPT approach, the system should intelligently decide which harvesting modality to use for scavenging energy. Biasing the comparators in the subthreshold region, along with circuit techniques to reduce standby leakage, should reduce power loss in the control circuits. A multi-modal, fully integrated energy-harvesting system with a low-voltage start-up scheme has not been demonstrated before in the literature. We hope to reduce the minimum voltage for cold-start as compared to [8][13].
Contributions: The contributions from this chapter will be the following:
§ A first-order model which will help in design-space exploration for various inductor-based and switched-capacitor based powertrain topologies.
§ A hybrid MPPT control scheme which will assist in achieving peak power efficiency for both solar and thermal energy harvesting.
§ A start-up circuit/architecture for enabling low-voltage system start-up.
§ A fully integrated energy harvesting system with cold-start and MPPT, for thermal and solar energy harvesting.

Supply regulation plays an important role in delivering power to general-purpose microprocessors and chipsets deployed in smartphones, tablets and laptops, as well as to self-powered systems such as wireless and body sensor nodes. Each system has an application-specific power profile. For instance, high-performance systems, such as personal computers, consume hundreds of mW, depending on the type of application being executed by the operating system. Battery-powered systems, such as smartphones, need to conserve energy to ensure the longevity of the battery and thus operate at much lower power levels, in the order of 100s of µW. Self-powered systems, which operate from energy harvested from ambient sources, have a much more stringent power budget. Various components of a system might have entirely different voltage-level specifications. For instance, most analog and mixed-signal components need sufficient voltage headroom for stable operation, whereas digital circuits leverage Dynamic Voltage and Frequency Scaling (DVFS) for energy-efficient operation. Hence, an integrated solution for supply regulation and power management is essential for delivering power to various analog and digital components in high-performance as well as battery-operated or self-powered systems.

Background and Prior Art: Technology scaling has allowed the integration of power delivery circuits, resulting in fully integrated voltage regulators with higher power efficiencies as compared to off-chip regulators. Power delivery circuits can be broadly classified into two major categories: energy harvesting circuits and voltage regulators. While voltage regulation is needed in almost all systems to provide a stable power supply and to support variations in load currents, integrated energy harvesting circuits are application-specific. Voltage regulation is typically achieved by regulating the battery voltage in the case of battery-powered systems, or the voltage on a storage capacitor in the case of energy-harvesting systems. Typically, supply regulation involves down-conversion of the battery voltage using buck converters. In some cases, a buck-boost topology is required if the voltage on the storage capacitor or the battery is lower than the desired regulated voltage levels of the system. Buck regulators can be implemented using linear regulators, such as low-drop-out (LDO) regulators, or switched-capacitor or inductor-based switching regulators. Buck-boost topologies can be implemented using switching regulators (inductor/switched-cap topologies).

Low Drop-Out (LDO) Regulators
Fig 3.1: LDO topology
An LDO is a type of linear regulator which can provide a regulated DC supply with input voltages higher than or nearly equal to the required regulated output. The main advantages of using an LDO over a switching regulator are that it does not inject switching noise on the supply line and does not require off-chip passives for regulation.
Hence, an LDO can be fully integrated on-chip and consumes a smaller area as compared to some of the switching regulators, which require external passives and greater silicon real estate. Fig. 3.1 shows the topology of an LDO. It consists of an error amplifier (EA), a voltage reference circuit whose output is shown as V_REF, a pass transistor (M_LDO) and a feedback network shown by resistors R1 and R2. Conventionally, an LDO is mostly used as the output stage of a switching regulator, to reduce the ripple and switching noise injected by the switching regulator on the supply line. A low-power bandgap reference circuit, such as [20], or a voltage reference circuit based on the ΔV_T of two CMOS transistors [21], can be leveraged to generate V_REF. Ideally, V_REF should have low sensitivity to supply voltage (V_STORE) and temperature variations. A fraction of the regulated output voltage, V_OUT, is fed back to the error amplifier EA by the resistive feedback network consisting of resistors R1 and R2. The error amplifier modulates the ON resistance of the pass transistor, M_LDO, to maintain a regulated V_OUT, subject to changes in the load current, I_LOAD. The response time depends on the bandwidth of the error amplifier, which can be improved by employing compensation techniques, such as dominant-pole or lead-lag compensation schemes. The efficiency (η) of an LDO is given by:

η = (V_OUT · I_LOAD) / (V_STORE · (I_LOAD + I_CONTROL))

where I_CONTROL represents the total current consumed by the control circuits, such as the voltage reference and error amplifier, as well as leakage and current loss in the feedback network.

Inductor-based Voltage Regulators
Fig 3.2: Inductor-based switching regulator topology
The advantage of a switching regulator over an LDO is that a switching regulator can provide a wider range of voltage conversion ratios across a wider range of load currents, with higher power efficiencies. An inductor-based buck converter is a switching regulator which uses an inductor as an intermediate storage element to transfer power to the load. The disadvantage of using an inductor-based buck converter is that it needs an off-chip inductor with a high quality factor (Q) to achieve low conduction loss in the inductor. Although the DC-DC converter proposed in [22] implements an integrated on-chip inductor, it is difficult to achieve high power efficiency for high voltage conversion ratios. Moreover, on-chip inductors are not area-efficient. Hence, most switching regulators in the literature which use an inductor-based approach for voltage regulation use off-chip high-Q inductors [23][24][25][26]. Fig. 3.2 shows the powertrain topology of an inductor-based buck converter. It consists of two power transistors, M_HS and M_LS, which are used to transfer power to the load through the inductor, L_BUCK, and regulate the output voltage, V_OUT, at the desired conversion ratio. Depending on the architecture of the control scheme, either V_OUT [26] or the inductor current [24] is sensed through a feedback network (not shown in Fig.
3.2) to generate pulse-width-modulated (PWM) or pulse-frequency-modulated (PFM) non-overlapping gate control signals for M_HS and M_LS. During the High-Side (HS) phase, the gate control signals ensure that M_HS is ON while M_LS is OFF. The inductor is charged up from V_STORE through M_HS. In the Low-Side (LS) phase, the energy stored in the inductor is transferred to the load through M_LS, and the inductor current ramps down to zero. Assuming Discontinuous Conduction Mode (DCM) operation, the inductor current remains at zero until the next switching cycle. It is important that M_LS turns OFF when the inductor current crosses zero. Thus, in the ideal case, the voltage conversion ratio is given by:

V_OUT / V_STORE = T_H / (T_H + T_L)     (1)

where T_L is the ON-time of M_LS, which is governed by the pulse width of the LS pulse, and T_H is the ON-time of M_HS, which is governed by the pulse width of the HS pulse. Hence, by modulating T_L and T_H, the desired conversion ratio can be achieved. The HS and LS pulses need to be non-overlapping so that there is no short-circuit current through M_HS and M_LS. A dead-time controller such as [23][24] can be implemented in the control scheme to ensure that the HS and LS pulses are non-overlapping and that there is no short-circuit current through M_HS and M_LS. In (1), the conduction losses in the inductor and power transistors, as well as the switching loss, are not considered. For a triangular inductor current with peak value I_PEAK, the total conduction loss during the HS cycle, P_COND,HS, is given by:

P_COND,HS = (I_PEAK² / 3) · R_HS · T_H · F_SW

Similarly, for the LS cycle, the total conduction loss, P_COND,LS, is given by:

P_COND,LS = (I_PEAK² / 3) · R_LS · T_L · F_SW

where R_HS is the total resistance including the parasitic resistance of the inductor and the ON resistance of M_HS, R_LS is the total resistance including the parasitic resistance of the inductor and the ON resistance of M_LS, and F_SW is the switching frequency. The switching loss, P_SW, and the leakage loss, P_LEAK, are constant for a given control scheme and depend on the dimensions of M_LS and M_HS. Hence, the total loss, P_LOSS, is given by:

P_LOSS = P_COND,HS + P_COND,LS + P_SW + P_LEAK

Thus, for a given conversion ratio, in order to minimize P_LOSS it is necessary to tune the peak inductor current, I_PEAK, or modulate the ON-resistance of M_LS and M_HS through an appropriate gate-drive control scheme.
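A sketch tying together the conversion ratio of (1) and the conduction-loss terms above; the component values are placeholders for illustration, not a sized design.

# First-order DCM buck-converter model using the relations above; component
# values are placeholders for illustration, not a sized design.

def buck_dcm(v_store, t_h, t_l, f_sw, l_buck, r_hs, r_ls, p_sw=50e-9, p_leak=10e-9):
    """Return (V_OUT, P_LOSS) for an ideal DCM buck with triangular current."""
    v_out = v_store * t_h / (t_h + t_l)            # ideal DCM conversion ratio (1)
    i_peak = (v_store - v_out) * t_h / l_buck      # peak inductor current
    p_hs = (i_peak**2 / 3.0) * r_hs * t_h * f_sw   # HS-phase conduction loss
    p_ls = (i_peak**2 / 3.0) * r_ls * t_l * f_sw   # LS-phase conduction loss
    return v_out, p_hs + p_ls + p_sw + p_leak

if __name__ == "__main__":
    print(buck_dcm(v_store=1.2, t_h=1e-6, t_l=2e-6, f_sw=100e3,
                   l_buck=10e-6, r_hs=1.0, r_ls=1.0))

The I_PEAK²/3 factor is the mean-square value of a triangular current over its conduction interval, which is why tuning I_PEAK is the primary lever on conduction loss.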
Switched-Capacitor Voltage Regulators
Switched-capacitor DC-DC converters are a class of switching regulators which offer a fully integrated solution to voltage regulation. The arrangement of CMOS switches and transfer capacitors can be reconfigured on-chip to achieve the desired conversion ratios. Fig. 3.3 shows the topology of a simple 2:1 switched-capacitor based buck converter.
Fig 3.3: Switched capacitor 2:1 buck regulator topology
The main disadvantage of implementing a switched-capacitor architecture is that the regulator can be targeted at only a limited range of conversion ratios and load currents as compared to inductor-based DC-DC converters. Moreover, precise control signals are required for the switches to prevent undesirable short-circuit or contention currents, which can lower the power efficiency. The sources of power loss are the conduction loss in the switches and transfer capacitors, the switching loss in the control circuits and the parasitic bottom-plate capacitance of the transfer capacitors. Depending on the load current and the output voltage specifications, such as ripple and switching frequency, some sources of power loss may be dominant. At lower load currents, switching loss and bottom-plate parasitic loss are more dominant than conduction loss in the switches. In Chapter 2, Section 2.2.3, the sources of power loss in switched-capacitor power converters are described in more detail. Existing work in the literature, such as [27], implements a reconfigurable switched-capacitor topology and combines interleaved clocking and level shifting in the gate-drive circuits. In [28], a hybrid architecture, consisting of switched-capacitor regulators and LDOs, is implemented. In [29], a capacitance modulation scheme is implemented using digital circuits, which controls the amount of transfer capacitance engaged as the load current varies.

Research Questions:
§ Which kind of regulation scheme would provide the peak end-to-end efficiency at sub-µW load power?
§ Will a hybrid architecture (for instance, switching regulator + LDO) provide a higher power efficiency, or is a single switching regulator sufficient?
§ In a hybrid topology, what will be the multiplexing scheme for selecting different powertrain architectures? What factors will govern this multiplexing scheme?
§ If a switching regulator is implemented, how much dead-time is sufficient for peak power efficiency? What are the trade-offs between dead-time, switching loss and line regulation?
§ How much ripple or power supply variation can be tolerated at the output? In ultra-low power systems, is it necessary to achieve strict line and load regulation?
§ What are the trade-offs between achieving strong line regulation and power efficiency?
§ For a voltage reference circuit, how much voltage/temperature sensitivity is acceptable? Does it need to have a high degree of tolerance to power supply variation and temperature?
Approach: In order to design a power-efficient voltage regulator, it is important to understand the power profile of the load circuits. We propose the following methods in order to answer some of the research questions.
§ Power analysis of different functional units: It is important to understand the power and voltage specifications of each constituent block before the design of the power delivery and supply regulation framework. Based on a pre-defined power budget, which is expected to be 1µW or less, we will analyze the operating conditions of each block that lead to minimum power consumption and assess its feasibility. Apart from power consumption, tolerance to supply voltage variation is important for assessing how much line regulation is required. Generally, analog and mixed-signal circuits have less tolerance to ripple and supply variations as compared to digital blocks. Hence, the PSRR of each mixed-signal functional unit will be evaluated. We will assess the worst-case load transient for each block, which needs to be supported by the voltage regulator.
§ Modeling output voltage variation and its impact on regulator power efficiency: We hypothesize that a lightly regulated supply rail at lower load currents will provide higher power efficiency. Aggressive line regulation would theoretically require a greater number of comparisons between the output voltage and the reference, resulting in more switching and higher quiescent current loss. A more relaxed line regulation will reduce the extent of switching, and a lower load current will reduce conduction loss, improving overall power efficiency. To better understand and validate this hypothesis, we will model the ripple in different converter topologies as a function of switching frequency and load current, and assess its impact on overall power efficiency.
§ Voltage reference: A stable voltage reference with a high PSRR and temperature stability is needed for all voltage regulators. A low-power voltage reference is needed for achieving high power efficiency at ultra-low load currents. A bandgap reference (BGR) architecture is used in most voltage references but consumes higher power and operates at a higher supply voltage. Moreover, it needs a start-up circuit. Ultra-low-power BGR circuits [20] and voltage references based on the threshold voltage difference (ΔV_T) of two CMOS transistors [21] have been proposed, which are suitable for supply regulation in self-powered systems. Ultra-low-frequency timers and clock sources based on the gate leakage of transistors have also been proposed, which provide a stable clock reference for ultra-low-power applications [32]. We will evaluate a gate-leakage-based voltage reference and assess its performance subject to temperature and supply voltage variations. We will explore compensation techniques to reduce temperature and voltage sensitivity, and evaluate the trade-offs with power consumption.
§ Architecture implementation: Based on the power analysis and load current requirements of each block, we will implement the powertrain architecture of the ultra-low-power system. While the feasibility analysis is currently in process, it seems likely that we will have separate voltage domains for digital and analog components, because analog circuits typically need more voltage headroom and are more susceptible to power supply noise. Based on the load current and output voltage specifications, a hybrid approach, such as a single switched-capacitor converter with multiple outputs and an LDO, might be incorporated for different voltage
domains.

Evaluation Metrics: We will assess our approach and methods using the following figures of merit.
§ Power Efficiency: The proposed converter topology should improve or, at least, equal the power efficiency of state-of-the-art power converters at load currents in the order of 10µA or less. We will also assess the power efficiency at different unregulated input voltage levels and load currents.
§ Line/Load Regulation and Settling Time: We will assess the line and load regulation metrics and their impact on overall power efficiency. The transient response of the converter will be evaluated with changes in load currents and the supply voltage. Although a regulator with a faster response would generally use compensation techniques in the converter to improve the overall bandwidth, such schemes also consume power and area.
§ Operating Range: We will evaluate the operating range of the proposed converter. The input voltage and maximum load current range will be explored.

Anticipated Results: An ultra-low-power SoC, which provides features such as signal acquisition, filtering, analog-to-digital conversion, digital processing, storage and wireless communication, consists of multiple analog and digital macros which have different power and voltage specifications. Depending on the circuit architecture, different blocks might need a better transient response, immunity to supply noise etc. We envision a hybrid power architecture with dedicated supply rails for analog components and digital macros. The extent to which the powertrain architecture can be shared depends on the specifications of the different load circuits. At ultra-low load currents, the problems of cross-regulation and conduction loss in the powertrain should not pose a major concern, although we plan to evaluate the different loss mechanisms in a hybrid power architecture. The hypothesis that regulating the outputs only when required can yield benefits in overall power efficiency will be further assessed by system modeling, simulations and measurements.
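As a starting point for that modeling, the following is a minimal sketch of the ripple-versus-efficiency hypothesis, assuming a hysteretic (comparator-gated) regulator; the per-event energy E_EVT and quiescent power P_Q are assumed placeholder values, not measurements.

# Sketch of the ripple-vs-efficiency hypothesis: with a comparator-gated
# regulator, a larger allowed ripple means fewer comparisons/switching events.
# E_EVT (energy per regulation event) and P_Q are assumed placeholder values.

E_EVT = 5e-12     # energy per comparison + switch event (J), assumed
P_Q   = 2e-9      # quiescent control power (W), assumed
C_OUT = 1e-9      # output capacitor (F)

def regulator_efficiency(i_load, v_out, dv_ripple):
    """First-order efficiency of a hysteretic (bang-bang) regulator."""
    f_reg  = i_load / (C_OUT * dv_ripple)     # required event rate (Hz)
    p_ctrl = E_EVT * f_reg + P_Q              # control overhead
    p_out  = v_out * i_load
    return p_out / (p_out + p_ctrl)

if __name__ == "__main__":
    for dv in (1e-3, 10e-3, 50e-3):           # tighter vs. relaxed regulation
        print("ripple %5.0f mV -> eff %.3f" % (dv * 1e3,
              regulator_efficiency(100e-9, 0.5, dv)))

In this toy model, relaxing the allowed ripple from 1mV to 50mV at a 100nA load raises the modeled efficiency substantially, which is the qualitative behavior the hypothesis predicts; whether it holds quantitatively is what the proposed modeling and measurements will establish.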
Contributions: § An architecture for supply-regulation, targeted at achieving high power efficiency at 1-10µW.Variation in on-chip power supply continues to be a major challenge in modern CMOS processes due to technology scaling resulting in increasing device densities and operating currents.Since the length of global wires such as power and ground lines does not scale at the same rate as device dimensions, IR-drop continues to increase in deep-sub-micron processes.Since most modern microprocessors operate at clock frequencies in the GHz regime [33], such systems are most susceptible to Ldi/dt events, resulting in power supply overshoots and undershoots.While supply overshoots can cause reliability issues such as gate-oxide breakdown and hot-carrier injection (HCI), supply undershoots can result in timing violations such as setup-time and hold-time failures.Thus, power supply droops can limit the maximum operating frequency (F MAX ) of a modern microprocessor.In self-powered ultra-lowpower (ULP) systems, the magnitude of load current transients is negligible except when the system is in a mode where it needs to acquire physical data or send data over a radio link.Hence, it is hypothesized that line and load regulation requirements for powering digital circuits in ULP systems can be relaxed to some degree.Variation in the power supply can result in timing errors in low-voltage circuits as well [38].Additionally, analog and mixed signal components such as the radio or the analog front-end need a tight line and load regulation even in ULP systems.Hence, there is a need for a low-cost, low power method to monitor voltage variation even in ULP systems to account for the trade-off between relaxed voltage regulation and the susceptibility of digital circuits to timing failures. Background and Prior Art: An Ldi/dt event occurs if there is a sudden change in the current consumption, especially when the microprocessor switches from one operating mode to another, resulting in high-frequency overshoot or undershoot noise.Resonant supply noise in the mid-frequency range is another source of power supply noise, which results mainly from the resonance of the package inductance and the decoupling capacitors [37].During dynamic voltage scaling (DVS), the slow transient response time of voltage regulators can result in low-frequency droops.Fig. 
4.1 describes the two major sources of power supply fluctuations. High-frequency noise is generally induced on the supply by Ldi/dt events and influences timing in local circuit paths. Noise due to package resonance and low-frequency droops takes time to recover; it is therefore present for multiple clock cycles and impacts performance globally across the chip. Existing work in the literature, such as [34], has proposed on-die dynamic voltage monitoring and adaptive clock distribution schemes to enable tolerance to power supply variations across a wide operating range. In [35], techniques for timing error detection and correction are proposed to reduce metastability occurring due to dynamic power supply and temperature variations. Analog techniques have been employed in [36], where on-die sensors are distributed to monitor peak overshoots and undershoots. Adding decoupling capacitors can reduce dynamic IR-drop. Active decoupling capacitors can compensate for noise in the low-to-mid frequency range [37]. However, adding decoupling capacitors increases gate leakage. Analog droop monitors [36] and metastability detectors [35] consume higher quiescent currents. Hence, such techniques cannot be applied directly in subthreshold processors such as [4][39], which are used in energy-constrained systems such as wireless sensor nodes and other applications related to the IoT.

Research Questions:
§ What are the limits on droop resolution, and how do they change with the power-supply noise frequency? What resolution is acceptable for a subthreshold processor?
§ What will be the calibration scheme for measuring power supply noise?

Approach: We propose the following methods to explore and address the impact of power supply variation in ultra-low power systems:
§ Latch-based implementation for digital circuits: We hypothesize that latch-based circuits can provide better immunity to power supply variation. A latch-based pipeline stage typically allows the designer to achieve higher performance than a register-based implementation, owing to time-borrowing and a greater setup-time margin as compared to a flip-flop. Assuming no clock skew, the setup-time constraint for a latch is:

T_CLK_PERIOD_LATCH ≥ T_LOGIC + T_SETUP - T_TRANS

where T_TRANS is the transparency window of the latch, T_LOGIC is the worst-case combinational delay of the stage and T_SETUP is the setup time. Similarly, for a flip-flop based design,

T_CLK_PERIOD_FF ≥ T_CLK-Q + T_LOGIC + T_SETUP

where T_CLK_PERIOD_FF is the clock period of a flip-flop based stage and T_CLK-Q is the clock-to-flip-flop-output delay. Hence,

T_CLK_PERIOD_LATCH < T_CLK_PERIOD_FF
which means that a latch-based pipeline stage can operate at a higher clock frequency than a flip-flop-based stage. Moreover, since a latch-based pipeline provides an additional transparency window, the incoming data has an additional setup-time margin equivalent to T_TRANS, which aids in resolving metastability issues arising due to power supply variations and low-frequency supply noise. Short paths in a latch-based design can be avoided as long as

T_LOGIC,MIN ≥ T_HOLD_LATCH + T_TRANS

where T_HOLD_LATCH is the hold-time constraint and T_LOGIC,MIN is the minimum (contamination) delay through the stage. Although a flip-flop based timing path has a greater hold-time margin as compared to a latch-based path, employing out-of-phase non-overlapping clock signals can offset this limitation. To demonstrate the robustness of a latch-based implementation to power supply variation, we analyzed the impact of low-frequency power supply droops on both register-based and latch-based implementations of a 32-tap Finite Impulse Response (FIR) filter across a wide range of supply voltages. FIR filters play an important role in most low-power as well as high-performance DSP applications [41]. We investigate the circuit robustness to power supply variation for both the latch-based and register-based versions of the FIR filter by measuring the energy-delay (ED) trends. We use ED curves as a metric to evaluate the resiliency of a synthesized digital circuit (in this case an FIR filter) to power supply variations. We also implement a low-power technique using digital circuits to measure the low-frequency droop present in the power supply. Fig. 4.2 describes the block diagram of the system designed for analyzing and comparing the impact of power supply variation on the latch-based and register-based versions of the FIR filter. We implement a 16-bit, 32-tap FIR filter using both flip-flops and latches. For the latch-based implementation, we incorporate a dual-phase non-overlapping clock architecture to reduce the probability of hold-time failures. Both the latch-based and register-based FIR filters have dedicated ENABLE signals and supply rails, while they share a common reset and ground rail. A global block-select signal helps in selecting the 32-bit output from each FIR filter. Fig.
4.2 also describes the proposed droop measurement scheme. The core of the droop measurement circuit is an on-chip 13-stage current-starved ring oscillator (RO) operating from the supply rail VDD_DROOP, which contains voltage droops. The ring oscillator is biased in subthreshold by an external bias signal, VBIAS, which can be generated by an ultra-low-power bandgap reference such as [20]. An 8-bit digital counter and comparator are powered by a clean, well-regulated supply without ripple, VDD_CLEAN. This 8-bit counter and comparator logic compares the number of clock cycles with a programmable 8-bit user-defined threshold, THR, and generates an enable/disable signal to count the number of RO clock cycles, denoted by DROOP. The number of RO clock cycles will vary depending on the magnitude of droop present. The difference between DROOP and THR provides an 8-bit digital proxy measurement for the amount of supply droop present (a behavioral sketch of this counting scheme appears at the end of this Approach section). At a system level, VDD_CLEAN can be obtained from a voltage regulator, such as the buck-boost regulator proposed in [4]. An on-chip voltage regulator needs to provide high conversion efficiency for a target load current range. For a fixed conversion efficiency of a regulator, a lower-power droop monitoring circuit would reduce the overhead on the limited power budget of an energy-constrained system.
§ Decoupling capacitors: Adding decoupling capacitors is a part of the standard physical design flow to resolve issues related to dynamic IR-drop. However, as discussed in Section 4.2, adding a large number of decoupling capacitors increases gate leakage. Thus, the designer needs to be more prudent with adding decoupling capacitors in ULP systems. We propose the following approaches to the design flow:
§ Methodology for adding decoupling capacitors during physical design: Designers mostly use prior experience to justify the amount of decoupling capacitance in an SoC. This is dependent on the technology, package parasitics, load current profile in different operating modes and the sensitivity of custom macros to power supply variation. We propose to establish a vector-based dynamic-IR analysis methodology to optimize the amount of decoupling capacitance required. The flow will enable the designer to address power supply variation in a ULP system without compromising on leakage. The flow will also allow the designer to incorporate different circuit topologies of decoupling capacitors.
§ Active decoupling capacitor design: The basic concept behind active decoupling capacitors is to switch a pair of parallel decoupling capacitors into a series combination to give a local voltage boost in the presence of power supply droops. The control schemes for these switches have been implemented using power-hungry comparators [42] and op-amps [37], which will not meet the power constraints of ULP systems. Using circuit techniques and biasing the comparators/op-amps in subthreshold, we can achieve lower quiescent currents.
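The following is the behavioral sketch of the RO-based droop proxy referenced above. It assumes an exponential subthreshold frequency-voltage dependence; the frequency constant, slope and window length are illustrative, not taken from the fabricated design.

# Behavioral model of the ring-oscillator droop monitor: RO frequency falls
# with supply droop, so the cycle count in a fixed window is a digital proxy.
# The exponential frequency model and its constants are assumptions.

import math

F0, VDD_NOM, S = 1e6, 0.5, 60e-3   # nominal RO freq (Hz), nominal VDD (V), slope (V)

def ro_freq(vdd):
    """Subthreshold RO frequency: roughly exponential in supply voltage."""
    return F0 * math.exp((vdd - VDD_NOM) / S)

def droop_code(droop_mv, t_window=1e-3, thr=1000):
    """Proxy output: RO cycles counted in t_window minus the threshold THR."""
    count = int(ro_freq(VDD_NOM - droop_mv * 1e-3) * t_window)
    return count - thr                # more negative => larger droop

if __name__ == "__main__":
    for d in (0, 20, 50, 100):        # droop magnitudes in mV
        print("droop %3d mV -> code %d" % (d, droop_code(d)))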
Evaluation Metrics: We will evaluate the proposed methods using the following metrics.
§ Power consumption: For any droop measurement or compensation circuit which needs to be implemented in a ULP system, quiescent power consumption is an important factor. The power consumption of the proposed ring-oscillator-based droop measurement scheme discussed in Section 4.4 was reported to be 0.9µW [43]. The proposed op-amp or comparator-based control scheme of the active decoupling capacitor should be sub-µW to fit into the overall power budget of a ULP system.
§ Average Droop/IR drop: The magnitude of the average power supply droop should be lower after compensating with active decoupling capacitors or by employing the proposed design methodology to counter dynamic IR drop.
§ Resolution and Sampling Rate: The various droop measurement schemes in the literature sample the noisy supply rails to record a digital equivalent of the power supply droop. For a low-power system dominated by power supply noise in the low-to-mid band frequencies, a trade-off of power consumption against the sampling rate or resolution is necessary. However, the sampling rate and resolution should be high enough to capture the noise amplitude and behavior correctly.
§ Area: Although most of the proposed techniques tend to trade off power consumption with area, we will evaluate the overall area footprint of the proposed circuits for comparison with the state-of-the-art.

Results: We have explored a latch-based digital circuit implementation to analyze circuit robustness to power supply noise. For the latch insertion in the FIR filter, we mapped a 16-bit, 32-tap FIR filter to logic gates using commercial synthesis tools. To control time-borrowing and allow latch insertion only when necessary, custom scripts were used to replace each register with a pair of master and slave latches clocked by out-of-phase non-overlapping clocks [40]. After inserting latches, timing optimization and logic restructuring were performed to balance all pipeline stages and achieve timing closure at 200kHz and 0.5V. Fabricated in a 130nm CMOS process, the test chip was packaged in a 64-pin PGA package for testing convenience. A Link Instruments IO3200 pattern-generator/logic-analyzer module was used to provide input patterns and off-chip clock signals to both the latch-based and register-based FIR filters. Current measurements were performed using a Keithley 2401 sourcemeter. External droop was added to the supply with a function generator: a 1 kHz saw-tooth waveform of varying peak-to-peak amplitude was coupled to the power supply. External noise and supply droop can be injected off-chip by coupling a fast-rising ramp signal to the power supply using a large coupling capacitor in the order of 47µF or higher. The droop measurement circuit consumes less than 1.5µW across a supply voltage range of 0.5-0.8V and can be leveraged in ULP systems such as wireless sensor nodes. Fig. 4.4 shows the energy-delay trends of both the latch-based and register-based FIR filters, with and without externally injected power supply noise. Fig. 4.4 shows that the latch-based implementation provides 25-37% improvements in energy-efficiency below 0.6V in the presence of a 1kHz power supply droop ranging from 44-120mV. At higher voltages and operating frequencies, the register-based implementation provides better energy-efficiency. This is because active energy dominates at higher voltages, and the latch-based implementation has a higher switching capacitance owing to its dual-phase clocking scheme.
Anticipated Results: The proposed approach and the techniques discussed in Section 4.4 will allow us to understand the impact of power supply droop on circuit robustness in ULP systems. We hope to design a low-cost solution to measure the magnitude of power supply droop without a significant power overhead. The proposed flow, using dynamic IR-drop analysis based on vectors from actual design test cases, will allow the designer to achieve better immunity to supply variation with minimum leakage power overhead. The low-power active decoupling capacitor implementation will provide supply noise compensation in the low-to-mid band frequency range.

The design of ULP systems for IoT applications, such as health monitoring, surveillance and home automation, involves a high degree of system integration, consisting of a variety of circuit components such as ULP processors, subthreshold DSP accelerators, wake-up radios etc. While power delivery to such components plays a major role in defining the overall system-level power budget and electrical specifications, it is important to use circuit or architectural techniques to design and optimize such components for lower power. Before an energy harvesting or voltage regulation scheme can be designed, it is imperative to understand the power and energy characteristics of such macros and to analyze the circuit performance under power supply variations. Technology also plays a major role, not only in the design of high-efficiency DC-DC converters but also in lowering the energy or power consumption of circuit components. In this chapter, we will present an energy-efficient MSP430 processor designed in an FD-SOI process optimized for subthreshold operation. We will evaluate the energy-delay and leakage power characteristics of a 32-tap FIR filter in a 55nm Deeply-Depleted-Channel (DDC) technology. Then we will discuss the need for a ULP comparator with a low input-referred offset in a 10nW wake-up radio for ULP applications.

Background and Prior Art: Wearable sensors, portable biomedical electronics such as ECG monitors, and self-sustaining surveillance systems need to achieve energy efficiency and ultra-low standby power. In this section, we will discuss the circuit architecture and implementation of two major components which play an integral role in such systems.
Subthreshold processors and accelerators: The restrictions in size and the need for a longer operational lifetime render self-powered systems severely energy-constrained. Within the limited energy budget, such systems need to run application-specific programs and sub-routines, such as ECG monitoring [4][46]. Hence, energy-efficient processing at the circuit and at the system level is essential to minimize the energy per operation in such systems. Existing work in the literature has reported systems and processor implementations consuming nW to µW power levels by operating the system near the threshold voltage (V_th) of a transistor [44][45][46][47]. Operating a digital circuit in the subthreshold regime causes transistor leakage to be a dominant source of energy consumption, because of exponentially large delays. Prior work in the literature, such as [45], has proposed digital logic styles to suppress the subthreshold leakage of conventional bulk devices. Hence, optimizing the leakage characteristics of a device can result in significant benefits at the overall system level. However, low-voltage transistor operation presents four key challenges:
§ Minimize the subthreshold swing and achieve maximum ON current below V_th
§ Minimize static leakage current
§ Minimize V_th variation
§ Minimize device capacitances
Thus, if the process technology provides CMOS transistors optimized for lower subthreshold leakage, with reduced V_th variation and minimal degradation in performance, then energy-efficient and reliable digital processors and circuits can be implemented for ULP applications.

Wake-up radios: To conserve energy, self-powered systems such as wireless sensor nodes spend most of the time in standby mode and perform active operation only when required. To synchronize with the base station and bring the system out of standby mode, a wake-up radio (WRX) can provide a viable solution. Since a WRX is always active and listens for an incoming RF signal or pattern, the active power of a WRX needs to be lower than the overall standby power of the system, which tends to be in the nW range for digital components. Reducing the power consumption of a WRX comes at the cost of reduced sensitivity to the incoming RF signal. Existing work in the literature, such as [48][49], implements a WRX architecture similar to Fig.
5.1, where the incoming RF signal is rectified and the output DC voltage from the rectifier is sampled using a low-power comparator. A ULP baseband correlator processes the sampled output from the comparator, compares the sample with an expected code word and issues a wake-up signal. The size of the correlator and the sampling frequency are determined by the overall receiver sensitivity and the power budget, which is typically in the nW range. Since the input RF signal power is typically limited, the rectified output voltage is restricted to less than 10s of millivolts. As a result, the comparator needs to have a very low input-referred offset. Moreover, the threshold of the comparator should be controllable, to avoid false system wake-ups in the presence of noise or interference. Thus, the comparator needs a mechanism for offset control. Since a WRX is severely power-constrained, the comparator should consume a very low quiescent current (typically less than 10nA). The clocked comparator used in [48] uses a current DAC for setting the bias currents in both the pre-amplifier and the regenerative feedback circuit, with the input common mode referenced to ground. The dynamic comparator in [50] consumes very low static current and uses a combination of high-V_T and standard-V_T devices to reduce leakage with a reduced performance penalty. A dual-rail clocked-comparator architecture is proposed in [51] to provide greater resilience to kickback noise.

Research Questions:
§ How can we quantify the benefit due to process technology in the power and performance of digital circuits, such as processors and digital filters?
§ What other circuit topologies and logic families can be used to implement ULP digital circuits, apart from static CMOS? What are the trade-offs between performance and reliability?
§ What is the optimal resolution for the comparator threshold in a ULP wake-up receiver? How does adding more resolution bits influence the overall power consumption and receiver sensitivity?
§ What are the preferred architectures for realizing a ULP comparator with low input-referred offset? Can a ULP comparator in the nW range be realized using a continuous-time comparator or some other topology?
§ How does the input-referred offset vary across different comparator topologies, supply voltage and power?

Approach: We adopted the following methods to answer some of the research questions.

Use of technology to optimize performance and energy consumption in low-voltage digital circuits: In order to evaluate the advantages of process technologies and devices optimized for subthreshold operation, we implemented an MSP430 processor in a 90nm FD-SOI process and also demonstrated a 16-bit, 32-tap FIR filter in a 55nm DDC technology. Due to the better I_on/I_off ratios and lower V_T variation of the devices supported by these technologies, we implemented a 1.3µW MSP430 processor operating at 0.4V and 250kHz [52]. It consumes 67% less energy as compared to [53], which demonstrates a similar processor implementation using conventional bulk devices. The FIR filter, implemented in a low-leakage 55nm process, consumes 5x lower energy as compared to a similarly sized FIR filter implemented in a conventional bulk technology [54]. Substrate biasing in the DDC technology offers a further 39.4% savings in energy due to reduced leakage.
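The energy-optimal operating point referred to throughout this chapter follows from the classic minimum-energy-point trade-off: dynamic energy falls quadratically with VDD while leakage energy grows because delay grows exponentially. A sketch of that model follows; the constants are illustrative and not fitted to the 90nm FDSOI or 55nm DDC processes above.

# Minimum-energy-point sketch for subthreshold logic. Dynamic energy falls
# quadratically with VDD; leakage energy grows as delay blows up exponentially.
# All constants below are illustrative assumptions.

import math

C_EFF  = 20e-12    # switched capacitance per cycle (F), assumed
I_LEAK = 500e-9    # leakage current near V_NOM (A), assumed
T_0, V_NOM, S = 1e-6, 0.5, 80e-3   # delay scale and subthreshold slope, assumed

def energy_per_cycle(vdd):
    t_cycle = T_0 * math.exp((V_NOM - vdd) / S)   # delay grows at low VDD
    e_dyn   = C_EFF * vdd**2                      # active (switching) energy
    e_leak  = I_LEAK * vdd * t_cycle              # leakage integrated over a cycle
    return e_dyn + e_leak

if __name__ == "__main__":
    vmin = min((energy_per_cycle(v / 100.0), v / 100.0)
               for v in range(25, 81, 5))
    print("minimum energy %.3e J/cycle at VDD = %.2f V" % vmin)

This captures why voltage scaling below the minimum-energy point is counterproductive: leakage energy per cycle eventually dominates, which is also why the low-leakage devices discussed above shift the optimum downward.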
Ultra-low-power comparator topologies: We will explore different continuous-time and clocked comparator architectures implemented in the literature [49][50][51] and evaluate the advantages and disadvantages of each topology in a power-constrained system such as a wake-up radio. We will explore different techniques for offset compensation and threshold control. For instance, in a clocked comparator, one phase of the clock can be used for offset compensation and the other phase for comparator operation. We will explore the impact of the input common-mode level on the comparator offset. We will explore the impact of device noise, such as thermal or flicker noise, on each comparator topology. We will also evaluate kickback noise, which limits the functionality and performance of clocked-comparator topologies.

Ultra-low-power circuit families and logic styles: Static CMOS logic families have been conventionally used in digital circuits, as they offer a higher static noise margin (SNM), more reliability and lower power. However, in power-constrained systems such as wake-up radios, new logic styles are needed to further reduce the power consumption. A buck regulator can provide a lower operating voltage but can have a very low power efficiency at ultra-low loads (in the order of 10nW or less). Hence, we will explore new logic styles which consume very low power at a higher voltage level, such as [45], which utilizes the leakage current of a transistor for circuit operation at ultra-low frequencies. Stacked circuit topologies can provide another solution but are limited by signal swing and level shifting between stages.

Evaluation Metrics: We will evaluate the proposed methods and approach using the following metrics.

Power: We will measure the power consumption of the ULP comparators and of the overall low-power wake-up receiver system. The power consumption of always-active components, such as wake-up receivers and clock sources, is important in a wireless sensor node for estimating the total operational lifetime. We will also measure the standby and active power of the MSP430 processor and FIR filter to evaluate the benefits provided by the low-power, low-leakage 90nm FDSOI and 55nm DDC technologies. The power profile of the components discussed in this chapter will help the designer to estimate the specifications of the voltage regulator and power delivery circuits.

Energy: Energy per cycle or energy per instruction is another metric which will help the designer to evaluate the energy efficiency of the system. Voltage scaling to subthreshold levels reduces the active energy but also increases circuit delays exponentially. Thus, it is important for the system to operate at the energy-optimal point. We will compare the energy per cycle vs. delay for the proposed MSP430 processor and FIR filter implementations.

Operating voltage: The optimal operating voltage range for minimum power or energy needs to be evaluated for the proposed components. Along with power consumption, the voltage range will provide a design specification for the voltage regulation and power delivery circuits.

Operating frequency: We will evaluate the performance of the proposed FIR filter and MSP430 processor in the subthreshold regime by determining the maximum operating frequency. By comparing the maximum operating frequency at a fixed voltage, we can estimate the performance benefits due to technologies such as FDSOI and DDC.
Input-referred offset and noise in comparators: As discussed, the comparators used in wake-up receivers need a very low input-referred offset for detecting an ultra-low-power RF signal. Hence, it is important to select a comparator topology with compensation for process variation and mismatch. In the presence of interference signals and noise, it is important to control the offset and set the comparator threshold. We will design comparators with low input-referred offset and an offset compensation scheme to control the comparator threshold. Input-referred noise due to thermal and flicker noise will be used as another metric for the comparator design.

Noise and noise margins: The SNM will help in evaluating circuit stability. Power supply noise or common-mode noise can influence circuit functionality, especially in comparators. Within the available power budget, a high PSRR and CMRR are necessary for rejecting power supply noise and achieving a high overall signal-to-noise ratio (SNR).

Results: Subthreshold processors and digital FIR filters: We implemented a 16-bit MSP430 processor and a 16-bit, 32-tap FIR filter, designed for subthreshold operation, in a 90nm Extremely Low Power (xLP) FDSOI and in a 55nm DDC technology respectively, using logic synthesis and auto-place-and-route (APR) tools. For the MSP430 processor, a library of logic gates and sequential circuits, such as flip-flops and latches, was characterized to operate at 0.36V, and timing closure was achieved at 200 kHz using static-timing-analysis (STA) tools. Silicon measurements show an energy consumption of 5pJ per cycle at 0.4V and 250 kHz while running a QRS peak detection algorithm on ECG data on the processor. If higher performance is needed for overall ECG detection at the system level, the processor can operate at 1MHz at 0.6V, consuming 6.7pJ per cycle. Hence, if higher performance is desired, a 4x performance improvement can be achieved by sacrificing 34% energy. Measured results show a 55% reduction in the V_T variation of the fabricated devices in the xLP FDSOI process as compared to a standard FDSOI process. The measured minimum energy across 8 functional dies shows a σ/µ of 0.0405. Fig. 5.2 shows the measured energy-delay trends of the processor and I_ds-V_gs measurements of 46 PMOS transistors across two wafers. The 3σ variation in V_T was found to be 8mV for a device with channel length L_g = 180nm at V_ds = 0.3V. The reduced variation in V_T was achieved due to the reduced V_T sensitivity to silicon thickness. The absence of random dopant fluctuations and the reduced channel-length sensitivity to source-drain anneal variations further minimize V_T variation. Fig. 5.3 shows the energy vs. delay and the leakage power vs. supply voltage for the FIR filter, with and without applying Reverse Body Biasing (RBB) to the transistors. The minimum energy per cycle for the FIR filter (at 0.36V) is ~5X lower than [54], and applying an RBB of 0.25V gives a further 39.4% reduction due to lower leakage energy.
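To estimate how the input-referred offset targets above scale with device sizing, a Pelgrom-style Monte-Carlo sketch can be used; the mismatch coefficient A_VT, the device sizes and the pre-amplifier gain are assumptions for illustration, not extracted from any of the cited comparators.

# Monte-Carlo sketch of comparator input-referred offset using a Pelgrom-style
# mismatch model; A_VT and the device sizes are assumed illustrative values.

import random, math

A_VT = 3.5e-3      # V*um, threshold-mismatch coefficient (assumed)
W, L = 2.0, 0.5    # input-pair device dimensions in um (assumed)

def offset_sample(gain_preamp=1.0):
    """One sample of input-referred offset: input-pair V_T mismatch dominates;
    a later-stage offset contribution is divided by the pre-amplifier gain."""
    sigma_vt = A_VT / math.sqrt(W * L)
    v_os_in  = random.gauss(0.0, sigma_vt)
    v_os_2nd = random.gauss(0.0, sigma_vt) / gain_preamp
    return v_os_in + v_os_2nd

if __name__ == "__main__":
    n = 10000
    samples = [offset_sample(gain_preamp=10.0) for _ in range(n)]
    sigma = (sum(s * s for s in samples) / n) ** 0.5
    print("input-referred offset sigma = %.2f mV" % (sigma * 1e3))

In this model, reaching offsets in the order of 100s of µV requires either large input devices or an offset-compensation loop, which is consistent with the compensation schemes proposed above.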
Anticipated Results: Ultra-low-power comparator topologies for a ULP wake-up radio We have implemented several topologies of clocked, dynamic and continuous-time comparators for a wake-up receiver system in 130nm CMOS technology. The specifications of the wake-up radio system require it to consume 10nW of total power and to be sensitive to a -60dBm input RF signal. For the comparators, we hope to achieve input-referred offset voltages on the order of 100s of µV and power consumption in the range of ~3-5nW. Ultra-low-power circuit families and logic styles Alternate logic styles and circuit families can hopefully reduce power consumption, especially at battery voltages on the order of 1V, so that an 8-bit correlator designed using such logic styles consumes 1nW or less. Lower power can be achieved by trading off operating frequency (which might be reduced to 1kHz or less), SNM or layout area.
Fig 2.2: Electrical model of a TEG.
Research Questions: Which kind of regulation scheme would provide the peak end-to-end efficiency at sub-µW load power?
Fig 4.3: Variation in RO frequency with supply and measured digital equivalent of injected droop.
Fig 4.4: Measured power of droop measurement unit and measured energy-delay trends of latch-based and flip-flop-based FIR filters.
On non-degenerate Berge-Turán problems Given a hypergraph $\mathcal{H}$ and a graph $G$, we say that $\mathcal{H}$ is a \textit{Berge}-$G$ if there is a bijection between the hyperedges of $\mathcal{H}$ and the edges of $G$ such that each hyperedge contains its image. We denote by $ex_k(n,\text{Berge-}F)$ the largest number of hyperedges in a $k$-uniform Berge-$F$-free hypergraph. Let $ex(n,H,F)$ denote the largest number of copies of $H$ in $n$-vertex $F$-free graphs. It is known that $ex(n,K_k,F)\le ex_k(n,\text{Berge-}F)\le ex(n,K_k,F)+ex(n,F)$, thus if $\chi(F)>k$, then $ex_k(n,\text{Berge-}F)=(1+o(1)) ex(n,K_k,F)$. We conjecture that $ex_k(n,\text{Berge-}F)=ex(n,K_k,F)$ in this case. We prove this conjecture in several instances, including the cases $k=3$ and $k=4$. We prove the general bound $ex_k(n,\text{Berge-}F)= ex(n,K_k,F)+O(1)$. Introduction Given a hypergraph H and a graph G, we say that H is a Berge copy of G (in short: a Berge-G) if there is a bijection between the hyperedges of H and the edges of G such that each hyperedge contains its image. Berge hypergraphs were introduced by Gerbner and Palmer [9] as a generalization of the notion of hypergraph cycles due to Berge. A closely connected area is that of generalized Turán problems. Given graphs H and G, we let N(H, G) denote the number of copies of H in G. Let ex(n, H, F) := max{N(H, G) : G is an n-vertex F-free graph}. The systematic study of this topic was initiated by Alon and Shikhelman [1] after several sporadic results. The connection between Berge-Turán problems and generalized Turán problems was established by Gerbner and Palmer [10], who showed that ex(n, K_k, F) ≤ ex_k(n, Berge-F) ≤ ex(n, K_k, F) + ex(n, F). The upper bound was strengthened by Füredi, Kostochka, and Luo [3] and independently by Gerbner, Methuku and Palmer [8]. To state this result, we need some definitions. A blue-red graph G is a graph with each edge colored blue or red. We denote by G_blue the subgraph consisting of the blue edges and by G_red the subgraph consisting of the red edges. We say that a blue-red graph G is F-free if G does not contain F (here we do not care about the colors). Given an integer k ≥ 3, let g(G) := N(K_k, G_blue) + |E(G_red)|. Let ex_col(n, F) := max{g(G) : G is an n-vertex F-free graph}. A hypergraph Turán problem is called degenerate if the order of magnitude of the extremal function is smaller than the largest possible, i.e. smaller than n^k in our case. By the above bounds, ex_k(n, Berge-F) = o(n^k) if and only if ex(n, K_k, F) = o(n^k), which happens if and only if χ(F) ≤ k by a result of Alon and Shikhelman [1]. Another result of Alon and Shikhelman [1] shows that if χ(F) = r + 1 > k, then ex(n, K_k, F) = (1 + o(1))N(K_k, T(n, r)). In the non-degenerate case, for k ≥ 3 we thus have that ex_k(n, Berge-F) = (1 + o(1))ex(n, K_k, F). We believe that a stronger connection also holds. Conjecture 1.2. If χ(F) > k, then for sufficiently large n we have ex_k(n, Berge-F) = ex(n, K_k, F). The above conjecture is known to hold in the case F has a color-critical edge (an edge whose deletion decreases the chromatic number). The k-uniform expansion F^{+k} of a graph F is the specific k-uniform Berge copy that contains the most vertices, i.e., the k − 2 vertices added to each edge of F are distinct for different edges, and distinct from the vertices of F. Pikhurko [19] showed that for r ≥ k, the Turán number of K_{r+1}^{+k} is equal to N(K_k, T(n, r)) if n is sufficiently large. According to the survey [18] on expansions, Alon and Pikhurko observed that Pikhurko's proof generalizes to the case F is an (r + 1)-chromatic graph with a color-critical edge. A simpler proof for the Berge case can be found in [7].
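Deciding whether a given hypergraph is a Berge copy of G is a bipartite matching question: edges of G on one side, hyperedges on the other, with an edge admissible for exactly those hyperedges that contain it. A minimal Python sketch (brute-force Kuhn-style matching; all function names are my own):

```python
def is_berge_copy(g_edges, hyperedges):
    """Return True if the hypergraph is a Berge copy of the graph.

    g_edges:    list of 2-tuples (edges of G)
    hyperedges: list of sets (hyperedges), same length as g_edges

    A Berge copy is a bijection between edges and hyperedges such that
    each hyperedge contains its image, i.e. a perfect matching in the
    bipartite containment graph.
    """
    if len(g_edges) != len(hyperedges):
        return False
    # adj[i] = indices of hyperedges that could represent edge i
    adj = [[j for j, h in enumerate(hyperedges) if set(e) <= h]
           for e in g_edges]
    match = [-1] * len(hyperedges)  # hyperedge j -> matched edge index

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(len(g_edges)))

# A Berge triangle inside a 3-uniform hypergraph:
triangle = [(1, 2), (2, 3), (1, 3)]
H = [{1, 2, 4}, {2, 3, 5}, {1, 3, 6}]
print(is_berge_copy(triangle, H))  # True
```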
In general, the above observations imply that ex_k(n, Berge-F) = ex(n, K_k, F) + O(n^2). This was improved to ex_k(n, Berge-F) = ex(n, K_k, F) + o(n^2) in [4]. We further improve this bound in our next result. Theorem 1.3. If χ(F) > k, then ex_k(n, Berge-F) = ex(n, K_k, F) + O(1). We show that Conjecture 1.2 holds if F contains a color-critical vertex (a vertex whose deletion decreases the chromatic number). Theorem 1.4. Let χ(F) > k and assume that F contains a color-critical vertex. Then for sufficiently large n we have ex_k(n, Berge-F) = ex(n, K_k, F). We show that Conjecture 1.2 holds in the 3- and 4-uniform case. Furthermore, it holds in any uniformity if the chromatic number of F is sufficiently large. Recall that if χ(F) > k, then the asymptotics of ex(n, K_k, F) is known, thus the asymptotics of ex_k(n, Berge-F) is known. Even if Conjecture 1.2 is true, it only improves the asymptotic result to an exact result in the few cases when ex(n, K_k, F) is known. Besides the case where F has a color-critical edge, we are aware only of the following results. Let 2K_{r+1} denote two vertex-disjoint copies of K_{r+1} and B_{r+1,1} denote two copies of K_{r+1} sharing exactly one vertex. Gerbner and Patkós [12] determined ex(n, K_k, 2K_{r+1}) and ex(n, K_k, B_{r+1,1}). The first of these results was extended by Gerbner [6] to ex(n, K_k, F) in the case each component of F either has chromatic number r + 1 and contains a color-critical edge, or has chromatic number at most r. Gerbner [6] also determined ex(n, K_k, Q_{r+1}) for a class of graphs Q_{r+1} that we do not define here, for most values of k. For the Berge copies of the graphs mentioned above, we can show that Conjecture 1.2 holds. In fact, B_{r+1,1} and Q_{r+1} each have a color-critical vertex, thus we have already dealt with them in Theorem 1.4. Let K_i + T(n − i, r) denote the graph we obtain by adding i vertices to T(n − i, r) and joining them to every vertex. Theorem 1.6. Let us assume that F consists of s components with chromatic number r + 1, each with a color-critical edge, and any number of components with chromatic number at most r. To prove the above theorems, we use the following results on the structure of the extremal graphs, which are interesting on their own. Let us denote by σ(F) the smallest possible order of a color class in a χ(F)-coloring of F. Theorem 1.7. Let χ(F) = r + 1 > k and let G be an n-vertex F-free blue-red graph with g(G) = ex_col(n, F). Then the following hold. (i) For every vertex u of G, the number of blue k-cliques plus the number of red edges containing u is at least (ii) Let ε > 0 be sufficiently small. Then there exist an r-partition of V(G) into A_1, . . ., A_r, a constant K = K(F, ε) and a set B of at most rK(σ(F) − 1) vertices such that the following hold. For each i we have |A_i| = (1 − o(1))n/r, each red edge is between two elements of B, and every vertex of B is adjacent to at least εn vertices in each part and to at least cn vertices in all but one part for some constant c = c(F). Furthermore, every vertex of A_i \ B is adjacent to at most εn vertices in A_i and to all but at most ε(2r^k + 1)n vertices in A_j with j ≠ i. (iii) Let H be an n-vertex k-uniform Berge-F-free hypergraph with ex_k(n, Berge-F) hyperedges. Then every vertex of H is contained in at least Proofs We will use the following stability result due to Ma and Qiu [17]. Theorem 2.1 (Ma, Qiu [17]). Let χ(F) > k and let G be an n-vertex F-free graph that contains ex(n, K_k, F) − o(n^k) copies of K_k. Then G can be turned into T(n, r) by adding and removing o(n^2) edges.
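Several of the extremal values above are clique counts in Turán-type graphs, so N(K_k, T(n, r)) is the basic quantity behind these statements. A minimal Python sketch of computing it exactly (function names are my own):

```python
from itertools import combinations

def turan_part_sizes(n, r):
    """Part sizes of the Turán graph T(n, r): as equal as possible."""
    q, rem = divmod(n, r)
    return [q + 1] * rem + [q] * (r - rem)

def count_cliques_turan(n, r, k):
    """N(K_k, T(n, r)): each k-clique takes one vertex from each of
    k distinct parts, so sum the products of part sizes over k-subsets."""
    sizes = turan_part_sizes(n, r)
    total = 0
    for parts in combinations(sizes, k):
        prod = 1
        for s in parts:
            prod *= s
        total += prod
    return total

print(count_cliques_turan(10, 3, 3))  # T(10,3) has parts 4,3,3 -> 36 triangles
```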
Let us start with the proof of Theorem 1.7, which we restate here for convenience. Theorem. Let χ(F) = r + 1 > k and let G be an n-vertex F-free blue-red graph with g(G) = ex_col(n, F). Then the following hold. (i) For every vertex u of G, the number of blue k-cliques plus the number of red edges containing u is at least (ii) Let ε > 0 be sufficiently small. Then there exist an r-partition of V(G) into A_1, . . ., A_r, a constant K = K(F, ε) and a set B of at most rK(σ(F) − 1) vertices such that the following hold. For each i we have |A_i| = (1 − o(1))n/r, each red edge is between two elements of B, and every vertex of B is adjacent to at least εn vertices in each part and to at least cn vertices in all but one part for some constant c = c(F). Furthermore, every vertex of A_i \ B is adjacent to at most εn vertices in A_i and to all but at most ε(2r^k + 1)n vertices in A_j with j ≠ i. (iii) Let H be an n-vertex k-uniform Berge-F-free hypergraph with ex_k(n, Berge-F) hyperedges. Then every vertex of H is contained in at least hyperedges. We note that the analogous results for ex(n, K_k, F) can be found in [17]. Generalizations to some other graphs in place of K_k can be found in [5] for (i) and in [6] for (ii). Our proof follows the proofs in [5] and [6]. Proof. Observe that G contains at least ex(n, K_k, F) − ex(n, F) blue copies of K_k, thus G_blue can be transformed to a complete r-partite graph by adding and removing o(n^2) edges by Theorem 2.1. Note that there may be several different such complete r-partite graphs on the vertex set V(G) that can be obtained this way; we pick one with the smallest number of edges inside the parts and denote it by G′. It is easy to see that each part has order (1 + o(1))n/r. Let x denote the number of blue k-cliques and red edges that contain u and a vertex from S; then x = O(n^{k−2}). Now we apply a variant of Zykov's symmetrization [23]. If d_G(k, u) < d_G(k, S)/|S| − x, then we remove the edges incident to u from G. Then for every vertex v that is connected to each vertex of S with a blue edge, we connect u to v with a blue edge. For every vertex w that is connected to each vertex of S with a red edge, we connect u to w with a red edge. This way we do not create any copy of F, as the copy would have to contain u, but u could be replaced by any vertex of S that is not already in the copy, to create a copy of F in G. We removed d_G(k, u) blue k-cliques and red edges, but added at least d_G(k, S)/|S| − x blue k-cliques and red edges, a contradiction. Therefore, the number of blue k-cliques plus the red edges containing u is at least This completes the proof of (i). The proof of (iii) is similar. We pick S the same way, but instead of blue k-cliques and red edges, we count the hyperedges containing u; let d_H(k, u) denote their number. Let y denote the number of hyperedges that contain u and a vertex from S. If d_H(k, u) < d_H(k, S)/|S| − y, then we remove the hyperedges containing u and for every hyperedge H that contains exactly one vertex v ∈ S, we add (H \ {v}) ∪ {u} as a hyperedge. Then the same reasoning as above completes the proof of (iii). Let B denote the set of vertices that are adjacent to at least εn vertices in their part A_i. Note that by the choice of G′, vertices of B are adjacent to at least εn vertices in each other part. Let B_i = B ∩ A_i. Claim 2.2. There is a K depending on ε and F such that |B| ≤ K(σ(F) − 1).
The analogous claim for uncolored graphs G_0 with ex(n, K_k, F) copies of K_k is in [17]. However, the proof of that claim does not use that G_0 is extremal, only that G_0 contains ex(n, K_k, F) − o(n^k) copies of K_k. As this holds for G as well, the claim follows. Consider now the set U_i of vertices v such that v ∈ A_i \ B is adjacent to fewer than |A_j| − ε(2r^k + 1)n vertices of some A_j. As there are o(n^2) edges missing between the parts, we have that |U_i| = o(n). Claim 2.3. For each i we have that U_i = ∅. Proof of Claim. Let us delete the edges from each v ∈ U_i to A_i and connect v to each vertex of each V_j, j ≠ i, with a blue edge. We claim that the resulting graph G′′ is F-free. Indeed, consider a copy F_0 of F with the smallest number of vertices in U_i. Clearly F_0 contains a vertex v ∈ U_i, as all the new edges are incident to such a vertex. Let Q be the set of vertices in F_0 that are adjacent to v in G′. They are each from ∪_{j≠i} V_j. Their common neighborhood in V_i is of order n/r − o(n). Therefore, at least one of the common neighbors is not in F_0, thus we can replace v with that vertex to obtain another copy of F with fewer vertices from ∪_{i=1}^r U_i, a contradiction. We deleted at most εn^{k−1} blue k-cliques and red edges for each vertex v ∈ U_i, since each of them contains one of the fewer than εn edges incident to v inside A_i. We claim that we added more than εn^{k−1} blue k-cliques. We consider only those blue k-cliques that contain v, a new neighbor of v in V_j with j ≠ i, and k − 2 other vertices from other sets V_ℓ. We have at least 2r^k εn choices for the neighbor and at least n/r − εn choices for each of the other vertices. If ε is sufficiently small, then indeed we obtain more than εn^{k−1} new blue k-cliques, thus g(G′′) > g(G′), a contradiction unless U_i is empty. Now we show that there is a constant c = c(F) such that each vertex is adjacent to at least cn vertices in all but one part. Assume that v is adjacent to fewer than cn vertices in A_1 and in A_2. Then the number of blue k-cliques containing v is at most It is left to show that each red edge is between vertices in B. Assume that u ∉ B and uv is a red edge. Let us change its color to blue. We will find more than one new blue k-clique greedily. We can assume without loss of generality that u ∈ A_1 and v is in either A_1 or A_2. Let us observe that u and v have at least cn − ε(2r^k + 2)n common neighbors in G_blue inside V_3; we pick one of them. These three vertices have at least cn − 2ε(2r^k + 2)n common neighbors in G_blue inside V_4; we pick one of them, and so on. We can pick k vertices if cn − (k − 2)ε(2r^k + 2)n > 0, which holds if ε is small enough. Clearly we can pick more than one blue k-clique this way, completing the proof of (ii). Theorem 1.3 is easily implied by (ii) of Theorem 1.7, since in an F-free n-vertex blue-red graph, the number of blue k-cliques is at most ex(n, K_k, F), while the number of red edges inside B is O(1). Theorem 1.4 is also implied by (ii) of Theorem 1.7, since a color-critical vertex means that σ(F) = 1, thus |B| = 0, hence there are no red edges. Let us continue with the proof of Theorem 1.5. Recall that it states that if χ(F) > k and k ≤ 4, or if χ(F) is sufficiently large, then Conjecture 1.2 holds. Proof of Theorem 1.5. Let χ(F) = r + 1. We will use Lemma 1.1. Let G be a blue-red F-free graph with g(G) = ex_col(n, F). Assume that there is a red edge uv in G and apply now (ii) of Theorem 1.7. We obtain a partition of V(G) into
A_1, . . ., A_r with |A_i| = (1 + o(1))n/r such that there are o(n) edges inside parts, and there is a set B of vertices with |B| = o(n) such that each vertex outside B is adjacent to all but o(n) vertices in each other part. Assume that u and v have Ω(n) common neighbors in at least k − 2 of the sets A_1, . . ., A_r, say A_1, . . ., A_{k−2}. Then at least Ω(n) of those vertices are not in B; we will use only those vertices. We pick a common neighbor in A_1 \ B; it then has Ω(n) common neighbors with u and v in A_2. Therefore, we can pick a common neighbor in A_2 \ B, and so on. The resulting cliques do not contain any vertex of B, thus by turning uv blue we obtain multiple blue k-cliques, thus g(G) increases, a contradiction. We obtained that u and v have Ω(n) common neighbors in at most k − 3 of the sets A_i, say A_1, . . ., A_{k−3}. In the remaining r − k + 3 sets A_i they have o(n) common neighbors, thus at least one of them, say u, has at most (1 + o(1))(r − k + 3)n/2r neighbors in A_{k−2}, . . ., A_r. Consider now the number of blue k-cliques containing u. There are o(n^{k−1}) blue k-cliques that contain u and an edge inside an A_i that is not incident to u. Therefore, we can focus on those blue k-cliques that contain u and whose other k − 1 vertices are from different parts. Let K be such a blue k-clique and assume that K contains i vertices from A_1, . . ., A_{k−3}. There are at most (1 + o(1)) (k−3 choose i) (n/r)^i ways to pick such an i-set. For the remaining k − 1 − i vertices of K, we have to pick one neighbor of u from k − 1 − i of the remaining r − k + 3 sets, and in total u has (1 + o(1))(r − k + 3)n/2r neighbors in those sets. Then the number of (k − 1 − i)-cliques is at most ex((1 + o(1))(r − k + 3)n/2r, K_{k−1−i}, K_{r−k+4}). A theorem of Zykov [23] states that ex(n, K_s, K_t) = N(K_s, T(n, t − 1)) = (1 + o(1)) (t−1 choose s) (n/(t − 1))^s. We apply (i) of Theorem 1.7, thus we know a lower bound on the number of blue k-cliques and red edges containing each vertex v; comparing it with the upper bound obtained above yields inequality (1), which fails, a contradiction. If r = 5 or r = 4, then one can easily obtain a contradiction as well. This completes the proof of (i). There are several other pairs (k, r) for which we could obtain a contradiction in a similar way. However, if k = r, the left hand side has a term (k−3 choose k−4)/8. If k ≥ 11, then this term alone is larger than the right hand side, thus we do not have a contradiction in general. In fact, one can easily see that for k = r = 5 we do not obtain any contradiction. On the other hand, if k is fixed and r grows, there is only one term on the left hand side of (1) of order r^{k−1}, and it is r^{k−1}/2^{k−1}(k − 1)!. Since the leading term on the right hand side is r^{k−1}/(k − 1)!, we obtain a contradiction for r large enough, proving (ii). Let us continue with the proof of Theorem 1.6, which we restate here for convenience. Theorem. (i) Let us assume that F consists of s components with chromatic number r + 1, each with a color-critical edge, and any number of components with chromatic number at most r. The corresponding generalized Turán results are proved in [6] and we will extend the proofs from there; we omit some details. We remark that changing any blue edge in K_{s−1} + T(n − s + 1, r) to red destroys Θ(n^{k−2}) copies of K_k, thus it decreases g(G); this gives an alternative proof of (i) of Theorem 1.6. Proof. We start with proving (i). Let G be a blue-red F-free graph with g(G) = ex_col(n, F). We apply (ii) of Theorem 1.7. Assume first that there are s independent edges u_1v_1, . . ., u_sv_s inside the parts such that for each i, at least one of u_i and v_i is not in B.
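For reference, a LaTeX restatement of the theorem of Zykov invoked in the proof above, together with the asymptotic form used in the estimates (a standard computation):

```latex
% Zykov's theorem: among K_t-free graphs, T(n, t-1) maximizes the
% number of copies of K_s, so
\[
  \mathrm{ex}(n, K_s, K_t) \;=\; N\big(K_s,\, T(n, t-1)\big)
  \;=\; (1+o(1))\binom{t-1}{s}\Big(\frac{n}{t-1}\Big)^{s},
\]
% since each copy of K_s selects s of the t-1 parts and one vertex
% from each selected part.
```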
Observe that u_i and v_i have Ω(n) common neighbors in each part besides the one containing them. Using this, we can easily extend each edge to an (r + 1)-chromatic component of F, where u_iv_i plays the role of a color-critical edge. We can also find the other components to obtain a copy of F in G, a contradiction. If |B| ≥ s, then we can find s distinct vertices among their neighbors not in B, resulting in the same contradiction. By similar reasoning, there are no s − |B| independent edges inside parts but outside B. Therefore, the edges inside parts that are not incident to any vertex of B form at most s − 1 − |B| stars plus O(1) further edges. Since the vertices outside B are incident to o(n) edges inside parts, there are o(n^{k−1}) k-cliques containing such a vertex. This implies that by deleting all the edges inside parts that are not incident to B, we lose o(n^{|V(H)|−1}) copies of H. If |B| < s − 1, then we can add a vertex to B creating Θ(n^{|V(H)|−1}) copies of H, a contradiction. We obtained that |B| = s − 1 and then there is no edge inside parts but outside B. This implies that G is a subgraph of K_{s−1} + T(n − s + 1, r), completing the proof. (1 − o(1))n/r, otherwise the number of blue k-cliques is at most (r choose k)(n/r)^k − Θ(n^k). Let A_1, . . ., A_r denote the parts and let f(v) denote the number of red edges and blue k-cliques incident to v that are removed this way. Then we have ∑_{v∈V(G)} f(v) = o(n^k). Consider a set S of |V(F)| vertices in A_1 such that ∑_{v∈S} f(v) is minimal. Then by averaging, ∑_{v∈S} f(v) ≤ (|S|/|V_1|) ∑_{v∈V_1} f(v) = o(n^{k−1}). Let us consider blue k-cliques and red edges that contain exactly one vertex s of S, with the other vertices in the common neighborhood of S in G. Let d_G(k, S) denote the number of such blue k-cliques and red edges. Observe that each vertex of S is in d_G(k, S)/|S| such blue k-cliques and red edges. Clearly there are ways to pick the (k − 1 − i)-clique. Funding: Research supported by the National Research, Development and Innovation Office (NKFIH) under the grants SNN 129364, FK 132060, and KKP-133819.
Chloride and Fluoride Contents in Flue Gas During Domestic Lignite Coals Combustion as a Parameter in the Design of Flue Gas Desulphurisation Plant Recently, research in the field of coal combustion has included impurities, specifically the halogen elements (F, Cl, Br, I and At). The emission of chlorides and fluorides from combustion depends on the content and forms of these elements in the coal, on the combustion process and on the emission reduction equipment. Examination of the chloride and fluoride content in coal and in flue gas is particularly important for the design of the flue gas desulphurisation plant, an integral part of modern power plants that ensures compliance with SO2 emission regulations. In flue gas desulphurisation facilities, the presence of HCl may increase sorbent consumption, and HCl and HF influence wastewater treatment. This paper presents the results of chlorine and fluorine content measurements in domestic lignites and of their concentrations in flue gas. The aim of the investigation was to determine the reference Cl and F concentrations in flue gas to be used in the design of the flue gas desulphurisation plant. INTRODUCTION The steady rise in world electricity consumption drives the development of new, and the improvement of existing, coal combustion technologies in large power plants, as well as of installations and other equipment for flue gas treatment, to meet increasingly strict environmental regulations. In past years, research on the coal combustion process has been devoted, among other things, to studying the content and transformation of the halogen elements (F, Cl, Br, I and At). In order of reactivity, the halogens rank F > Cl > Br > I [1]; hence most of the research is related to Cl and F. During combustion the halogens are transferred to the gas phase and under favourable conditions form acids, causing problems both in power plant installations and in their environment. Chlorides have corrosive effects on evaporator and superheater tubes, but also emerge in the flue gas desulphurisation process [2,3,4]. If chlorides escape into the atmosphere, they cause aerosol formation [5]. Fluorides are very toxic and in high amounts may be hazardous to humans and the living world [6]. Chloride and fluoride emissions from combustion depend on the content and form of these elements in the coal, the combustion conditions and the characteristics of the emission reduction installations. Differences in fluorine and chlorine contents are not only a consequence of the original matter structure and of coal transformation during its formation (chloride and fluoride occurrence is not determined by coal rank alone) but also arise within the same coal basin. Chlorine in coal comes from the original matter from which the coal was formed, and research has established that there are three main forms of chlorine in coal [7,8]:
− chlorine ions in saltwater and other water-bound compounds in coal: NaCl, KCl, CaCl2 and sometimes MgCl2 and FeCl2;
− chlorine bound in organic compounds (present in organic macromolecules); and
Recent studies indicate that chlorine is mostly bound in the first of the above-mentioned states, in the form of a crystal water-bound chlorine anion in coal pores. During drying, chlorine precipitates as NaCl [9,10].
Chlorine content in coal varies from several ppm to several thousand ppm, and for the majority of coals it is within the range 50-2000 ppm, although it can be much higher in coals from some basins. Yudovich and Ketris determined mean chlorine concentrations (at the global level) of 120±20 ppm in lignites and subbituminous coals, and 340±40 ppm in anthracites and bituminous coals. Differences in Cl concentration between coal basins can be pronounced and can deviate considerably from the mean value. Also, differences in Cl concentration within a coal basin have been observed to be depth-related: the Cl concentration increases with depth, due to the presence of water [7]. The concentration of these Cl compounds can account for as much as ∼86% of the coal Cl content [8]. The mean fluorine concentration is 90±7 ppm for lignites and sub-bituminous coals, and 82±6 ppm for anthracites and bituminous coals; the corresponding concentrations in ash are 630±50 ppm and 580±20 ppm [16]. Concentrations increase with depth until a maximum value is reached, decreasing thereafter. As a rule of thumb, the mean fluorine concentration rises with coal rank [12]. Depending on the method employed to determine fluorine content, it can vary largely in different types of coal; hence, in some cases it is difficult to compare the coal fluorine contents and combustion behaviour reported in various studies. Swaine [5] found that the fluorine content in coals is in the range 20-500 ppm with a mean value of approximately 150 ppm. A wide range of methods has been developed for determining the chlorine and fluorine content in coal, and these can be classified into two groups: quantitative (standard and non-standard) and extraction methods. Studies of chlorine and fluorine content indicate differences between results obtained by different methods or from different laboratories, which are explained by the different forms of chlorine and fluorine bound in the coal mass. Chlorine emissions from coal-fired plants range from 50 to several thousand ppm, depending on the chlorine content of the coal, the type of boiler and the installed pollution control equipment. Water-soluble chlorine, which is present in coal and bound to the coal structure by weak bonds, is transferred very quickly into the gas phase during pulverized coal combustion. The remaining amount of chlorine, bound in the organic mass of the coal, is released during combustion of the carbon. In practical terms, the whole amount of vaporized chlorine occurs in the combustion products in the form of HCl. Elemental chlorine may occur during the oxidation process in the presence of metal oxides that act as catalysts in fly ash and boiler deposits. Thus the HCl leaving the chimney is in the vapor state, although a certain amount of HCl may be adsorbed on smaller fly ash particles. Fluorine emissions are generally lower than chlorine emissions, because the fluorine content in coal is lower. The largest amount of fluorine in coal passes to HF during the combustion process, while only a smaller amount (<10%) may remain in the slag. Also, a smaller amount of HF may be adsorbed on fly ash particles before leaving the stack. According to the investigations in [17], fluorine is assumed to occur in two different forms that behave differently during combustion: the first form, accounting for 25-50% of total fluorine, is water-soluble and is emitted as gaseous HF; the second (50-75%) is water-insoluble, inert during the combustion process, and remains in the ash.
In flue gas desulphurisation plants, HCl is absorbed from the flue gas faster than sulphur dioxide, so the presence of HCl may increase the sorbent requirements. In the pre-absorber, a larger amount of fly ash and dissolved gases such as HCl and HF is separated, and the wastewater is discharged to a water treatment plant; the wastewater treatment is determined by the HCl and HF contents of the dissolved gases. A high concentration of halogens, particularly chlorine (Cl), in the suspension inside the absorber causes pitting corrosion of the metal. Halogen ions penetrate easily through the protective passive film, particularly at points of microcracks or of physical and chemical heterogeneity of the material. Also, fluorine reacts with aluminium oxide to form the AlF6^3− ion, causing deposits inside the absorber that are difficult to remove. For these reasons, chlorine and fluorine concentrations are always the subject of study in flue gas desulphurisation plant design, with the aim of keeping them below allowable levels [18]. This paper presents the results of analyses of the chlorine and fluorine contents in domestic lignites and of their concentrations in the flue gas produced during pulverized coal combustion in thermal power plants. The aim of the study was to determine the reference Cl and F concentrations in flue gas to be used in the design of the flue gas desulphurisation plant. EXPERIMENTAL TESTS Experimental analyses of the chlorine and fluorine contents in coal and of their transformations during combustion were carried out at the units of the Thermal Power Plant Nikola Tesla (TPPNT), which use low-calorific lignite from the Kolubara open pit mines. The studies were conducted under real conditions and at common operating modes of the coal-fired units, which use coal from different mining pits of the Kolubara coal basin (Tamnava, Vreoci Stari and Vreoci Novi) as well as coal from the power plant's coal depot. Studies were performed at TPPNT unit B1 (620 MW capacity, symbol B) and at TPPNT unit A6 (350 MW capacity, symbol A). Four studies were carried out for each unit (symbols I-IV). Each study included two series of analyses of 3 hours each, so the total duration of each experiment was 6 hours. Coal was sampled from a coal feeder every 30 minutes, and a composite sample was finally formed for each experiment. At units with two flue gas ducts (right and left), simultaneous measurements of the hydrogen chloride and hydrogen fluoride concentrations were performed. Along with the sampling, the unit operating parameters and flue gas characteristics were recorded. To obtain representative results, two different methods were employed for determining the chlorine and fluorine contents in coal. The chlorine content was determined using the ISO 587 [19,20] and ASTM D4208 [21] methods, while the ASTM D5987 [19,20] and ASTM D3761 [21] methods were applied to determine the fluorine content. Hydrogen chloride and hydrogen fluoride concentrations in flue gas were determined by EPA Test Method 320:1999. This method was also employed to determine the oxygen and moisture contents of the flue gas. The results of the coal analyses are presented in Figs 1 and 2, and the unit operating parameters and flue gas characteristics in Tables 3 and 4 [19,20,21]. The results of the measurements of the chloride and fluoride content in flue gas [22] are presented in Figs 3 and 4.
The contents of chlorides and fluorides in flue gas (calculated on a dry flue gas basis, 6 %v/v O2, NPT) for the whole period of each experiment (two 3-hour series) are given in Fig. 5. DISCUSSION Based on the analyses of the chlorine and fluorine contents in coal determined by the different standard methods, significant differences between the results are noted (Figs 1 and 2). Within the framework of the analyzed samples, the chlorine concentration was in the range 24-426 mg/kg according to the ISO 587 method and 113-640 mg/kg according to the ASTM D4208 method, with mean concentrations of 134 mg/kg and 203 mg/kg, respectively. Comparing these concentrations with the mean chlorine concentrations at the global level, the results for coals from the Kolubara coal basin obtained by the ISO 587 method correspond to the literature values for lignites, whereas the ASTM D4208 method produces results falling between the values for lignites and sub-bituminous coals, on the one hand, and bituminous coals and anthracites, on the other. Within the framework of the analyzed coal samples, the fluorine concentration ranged from 1 to 42 mg/kg according to the ASTM D5987 method and from 85 to 171 mg/kg according to the ASTM D3761 method, with mean concentrations of 17 mg/kg and 140 mg/kg, respectively. Comparing these concentrations with the mean fluorine concentrations in lignites and sub-bituminous coals at the global level, the ASTM D3761 method produced values twice as high as the mean fluorine concentrations reported in the literature. The mean fluorine concentration obtained by averaging the samples analyzed according to the ASTM D5987 method is much lower than the means obtained by Yudovich and Ketris for bituminous coals and anthracites. For all experiments, the chlorine concentrations in coal are higher than the fluorine concentrations, which is in agreement with literature data [15,16]. At the same time, the obtained values of the chlorine and fluorine contents in coal agree with the values obtained in the study of coal samples from different pit mines in the Kolubara coal basin [23,24]. For TPPNT B1, the hydrogen chloride concentration is within the range 9.5-11.2 mg/m³ (calculated on a dry flue gas basis and 6 %v/v O2), with a mean value of 9.8 mg/m³ and a coefficient of variation of 4.3%. The hydrogen fluoride concentration is in the range 1.6-2.0 mg/m³ (dry basis, 6 %v/v O2), with a mean value of 1.9 mg/m³ and a coefficient of variation of 10.1%. For TPPNT A6, the hydrogen chloride concentration is within the range 9.5-10.4 mg/m³ (dry basis, 6 %v/v O2), with a mean value of 10.5 mg/m³ and a coefficient of variation of 4.8%. The hydrogen fluoride concentration is in the range 1.8-2.2 mg/m³ (dry basis, 6 %v/v O2), with a mean value of 2.0 mg/m³ and a coefficient of variation of 9.1%. In the flue gas, the hydrogen chloride concentrations are higher than the hydrogen fluoride concentrations. There are only slight differences (<10%) between the hydrogen chloride and hydrogen fluoride mean values measured at the two units.
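All flue gas concentrations above are reported on a dry basis at a 6 %v/v O2 reference; a minimal Python sketch of this standard normalization, assuming ambient O2 of 20.9 %v/v (the example inputs are illustrative, not measurements from this study):

```python
def normalize_to_reference(c_wet_mg_m3, o2_dry_pct, moisture_frac,
                           o2_ref_pct=6.0):
    """Normalize a measured concentration to dry flue gas at a
    reference O2 level (here 6 %v/v), the basis used in this paper.

    c_wet_mg_m3:   concentration measured in wet flue gas [mg/m3]
    o2_dry_pct:    measured O2 in dry flue gas [%v/v]
    moisture_frac: water vapor fraction of the wet flue gas [0..1]
    """
    c_dry = c_wet_mg_m3 / (1.0 - moisture_frac)        # wet -> dry basis
    dilution = (20.9 - o2_ref_pct) / (20.9 - o2_dry_pct)
    return c_dry * dilution                            # dry -> reference O2

# Illustrative values only:
print(f"{normalize_to_reference(8.0, o2_dry_pct=7.5, moisture_frac=0.12):.1f}"
      " mg/m3 (dry, 6% O2)")  # ~10.1 mg/m3
```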
Using the measured chlorine and fluorine contents, the maximum concentrations of chlorides and fluorides in flue gas were calculated, assuming that the total amount present in the coal was transformed into gaseous species in the flue gas (Figs 6 and 7). The calculated HCl concentrations for the IV-B series of the study (based on the Cl results obtained by the ISO 587 method) are, with one exception, higher than those measured. In the TPPNT A6 series of experiments, the trend is that the calculated values are lower than those measured (with two exceptions). For the series in which the fluorine content was determined according to the ASTM D3761 method, the calculated HF concentrations in flue gas are substantially higher than those measured, while for the series based on the ASTM D5987 method no definitive conclusion can be drawn. Bearing in mind the above conclusions, primarily the differences between the measured HCl and HF contents in flue gas and the values obtained by material-balance calculations, future studies must include monitoring of the Cl and F contents in coal and fly ash along with simultaneous parallel measurement of HCl and HF in flue gas using the reference and FTIR methods. Such studies would allow determination of the conversion factor, i.e. of the extent to which Cl and F from the coal are bound in the fly ash.
Figure 1. Chlorine content in coal determined by different study methods (calculated on a dry mass basis).
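The material-balance estimate above converts the halogen content of the coal into a theoretical maximum acid-gas concentration in the flue gas; a minimal sketch, assuming complete conversion of coal Cl to HCl and an illustrative specific dry flue gas volume (the 6 m³/kg figure is an assumption for the example, not data from this study):

```python
# Theoretical maximum HCl in flue gas from a coal chlorine material balance,
# assuming 100% conversion of coal Cl to gaseous HCl.
M_CL = 35.45   # g/mol
M_HCL = 36.46  # g/mol

def max_hcl_mg_m3(cl_in_coal_mg_per_kg, dry_gas_m3_per_kg_coal):
    """Upper-bound HCl concentration [mg/m3, dry basis] from coal Cl content."""
    hcl_mg_per_kg = cl_in_coal_mg_per_kg * (M_HCL / M_CL)
    return hcl_mg_per_kg / dry_gas_m3_per_kg_coal

# Example: 134 mg Cl/kg coal (the ISO 587 mean reported above) and an
# assumed 6 m3 of dry flue gas per kg of lignite burned.
print(f"{max_hcl_mg_m3(134, 6.0):.1f} mg/m3")  # ~23 mg/m3 upper bound
```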
Size and time-resolved growth rate measurements of 1 to 5 nm freshly formed atmospheric nuclei This study presents measurements of size and time-resolved particle diameter growth rates for freshly nucleated particles down to 1 nm geometric diameter. Novel data analysis methods were developed, de-coupling for the first time the size and time-dependence of particle growth rates by fitting the aerosol general dynamic equation to size distributions obtained at an instant in time. Size distributions of freshly nucleated total aerosol (neutral and charged) were measured during two intensive measurement campaigns in different environments (Atlanta, GA and Boulder, CO) using a recently developed electrical mobility spectrometer with a diethylene glycol-based ultrafine condensation particle counter as the particle detector. One new particle formation (NPF) event from each campaign was analyzed in detail. At a given instant in time during the NPF event, size-resolved growth rates were obtained directly from measured size distributions and were found to increase approximately linearly with particle size from ∼1 to 3 nm geometric diameter, increasing from 5.5 ± 0.8 to 7.6 ± 0.6 nm h⁻¹ in Atlanta (13:00) and from 5.6 ± 2 to 27 ± 5 nm h⁻¹ in Boulder (13:00). The resulting growth rate enhancement Γ, defined as the ratio of the observed growth rate to the growth rate due to the condensation of sulfuric acid only, was found to increase approximately linearly with size from ∼1 to 3 nm geometric diameter. For the presented NPF events, the values of Γ had lower limits that approached ∼1 at 1.2 nm geometric diameter in Atlanta and ∼3 at 0.8 nm geometric diameter in Boulder, and upper limits that reached 8.3 at 4.1 nm geometric diameter in Atlanta and 25 at 2.7 nm geometric diameter in Boulder. Nucleated particle survival probability calculations comparing the effects of constant and size-dependent growth indicate that neglecting the strong dependence of growth rate on size from 1 to 3 nm observed in this study could lead to a significant overestimation of CCN survival probability. Introduction Atmospheric aerosols influence climate and climate change on local to global scales by affecting the atmospheric radiation balance directly, through scattering and absorbing incoming solar radiation, and indirectly, as cloud condensation nuclei (CCN) (Charlson et al., 1992). Atmospheric measurement and modeling studies have shown that new particle formation (NPF), through photochemical reactions of gas-phase precursors, greatly increases the number concentration of atmospheric aerosols, and is often followed by rapid growth of the nucleated aerosol to a CCN-active size, significantly increasing the CCN population (Lihavainen et al., 2003; Kerminen et al., 2005; Spracklen et al., 2008; Kuang et al., 2009). This rapid growth, often many times that expected from the condensation of sulfuric acid alone (Weber et al., 1997), is neither well understood nor well represented in regional and chemical transport models (Pierce and Adams, 2007; Wang and Penner, 2009; Spracklen et al., 2010). This lack of understanding limits the ability to realistically assess the impact of NPF on the global surface CCN population and its contribution to the aerosol indirect effect.
Growth rates based solely on the condensation of sulfuric acid vapor significantly underestimate the observed growth rate (Sihto et al., 2006; Riipinen et al., 2007; Iida et al., 2008; Kuang et al., 2010; Nieminen et al., 2010), largely because organic compounds are responsible for up to 95% of the growth (Mäkelä et al., 2001; O'Dowd et al., 2002; Smith et al., 2008, 2010). This enhancement in growth can be characterized by a quantity, Γ, defined as the measured diameter growth rate divided by the diameter growth rate due to the condensation of sulfuric acid, quantifying the contribution of other species to the observed growth. Compiled values of Γ for nanoparticle growth rates measured in diverse environments indicate an average value of 5 to 10, with values as high as 20 to 50, clearly showing that, on a global level, growth rates of freshly nucleated particles are due to the uptake of species other than sulfuric acid (Kuang et al., 2010). Particle growth rates reflect the sum of all gas-to-particle conversion processes that contribute to growth, and therefore include important information on the chemical processes that affect growth. Chemical models are needed to explain observed growth rates, and developing such models will require measurements of the gas-phase species that contribute to growth, measurements of particle composition, and an understanding of the dependence of observed growth rates on the concentrations of the gas-phase precursors. For example, previous research has shown that sulfuric acid condenses on particles at the diffusion limit with an accommodation coefficient close to 1.0 (Jefferson et al., 1997), indicating that the contribution of sulfuric acid to the growth of large particles (>20 nm) can be modeled. Smith and co-workers have shown that alkyl ammonium carboxylate salts, formed by reactions between amines and carboxylic acids, account for 20 to 50% of observed growth in the atmosphere (Smith et al., 2010), but the mechanism for this process is not yet understood, in part because the gas-phase precursors have not been measured. Establishing such chemical models requires accurate information on growth rates. This paper describes methods that can be used to de-couple the dependencies of growth rates on size and time for the smallest (∼1 to 5 nm geometric diameter) nucleated particles. This de-coupling is particularly crucial as it allows the observed size-dependent growth to be interpreted clearly as a consequence of the particular particle growth mechanism at work rather than as a consequence of time-dependent vapor condensation on the growing aerosol. Early aerosol growth studies developed and applied techniques to obtain, from measured size distributions, size-dependent growth rates of 30 to 600 nm particles growing through gas-to-particle conversion (Friedlander, 1977; Heisler and Friedlander, 1977; McMurry and Wilson, 1982; McMurry and Grosjean, 1985). As aerosol sizing and counting instrumentation have improved since then (McMurry, 2000), particle growth rates have been obtained for even smaller sizes, down to 3 nm, the conventional size detection limit for measuring the total aerosol (Stolzenburg and McMurry, 1991).
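The paper's sulfuric acid-limited growth rate follows Fuchs (1964) and accounts for the sizes and motion of both collision partners; as a leading-order orientation, the free-molecule limit already reproduces the familiar magnitudes. A minimal Python sketch, assuming unit accommodation, pure-H2SO4 bulk density, no hydration and no evaporation (all numerical inputs are illustrative):

```python
import numpy as np

K_B = 1.380649e-23  # J/K

def gr_sa_free_molecule(h2so4_cm3, temp_k=298.0,
                        mw_g_mol=98.08, rho_kg_m3=1840.0):
    """Sulfuric acid-limited diameter growth rate [nm/h] in the
    free-molecule limit: dDp/dt = v_bar * m_v * C / (2 * rho)."""
    m_v = mw_g_mol * 1e-3 / 6.02214076e23              # molecular mass [kg]
    c = h2so4_cm3 * 1e6                                # [m^-3]
    v_bar = np.sqrt(8 * K_B * temp_k / (np.pi * m_v))  # mean thermal speed
    gr_m_s = v_bar * m_v * c / (2 * rho_kg_m3)
    return gr_m_s * 1e9 * 3600.0                       # -> [nm/h]

# Illustrative: [H2SO4] = 1e7 cm^-3 gives GR_SA ~ 0.4 nm/h, so an
# observed 5.6 nm/h would imply Gamma ~ 14 with these numbers.
gr_sa = gr_sa_free_molecule(1e7)
print(f"GR_SA = {gr_sa:.2f} nm/h, Gamma = {5.6 / gr_sa:.0f}")
```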
Recently, the development of a scanning mobility particle spectrometer (SMPS) using a diethylene glycol-based ultrafine condensation particle counter (DEG UCPC) has enabled the first mobility-classified measurements of the complete number size distribution for the total aerosol during an atmospheric nucleation event, bridging the size range from vapor molecules and molecular clusters (<1 nm geometric diameter) to nanoparticles and sub-micrometer particles (Jiang et al., 2011b). While earlier studies have presented sub-3 nm size distributions acquired during nucleation events using activation-sizing techniques (Sipilä et al., 2009; Lehtipalo et al., 2011), mobility-classified size distributions of freshly nucleated aerosol were measured in this study using a DEG UCPC to extend SMPS measurements down to ∼1 nm (Jiang et al., 2011a). Such measurements enable the direct determination of not only the rates at which these nuclei are formed, but also the rates at which neutral nuclei as small as 1 nm geometric diameter grow as functions of time and size. Using size distribution measurements of the total aerosol (neutral and charged) down to ∼1 nm geometric diameter, novel data analysis methods were developed in this study to de-couple, for the first time, the size and time-dependence of particle growth rates for freshly nucleated aerosol. While earlier studies have presented evidence for size-dependent growth rates of nucleation mode particles, those results were obtained from size distributions of the ambient ion population and were averaged over particle size and growth time (Hirsikko et al., 2005; Manninen et al., 2009; Yli-Juuti et al., 2011). Methods for obtaining size and time-resolved growth rates are presented, along with insights into the processes of nucleation and growth provided by these measurements. Measurements Analyzed nucleation events were acquired during two intensive measurement campaigns: the nucleation and cloud condensation nuclei (NCCN) study that was carried out in Atlanta, Georgia, during July and August 2009 (Jiang et al., 2011b), and a new particle formation and growth study carried out at the Foothill Laboratory of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, during August and September 2010. Geographic and meteorological conditions at the NCAR site are described in Zhao et al. (2010). Observations of significant particle production at 3 nm followed by rapid growth to potentially CCN-active sizes (∼80 nm diameter at 0.2% supersaturation) have frequently been made at both sites during the summer (Stolzenburg et al., 2005; Kuang et al., 2009). While only a limited number of NPF events are presented in this study, the corresponding results are consistent with the results from other NPF events obtained during both campaigns.
In the NCCN study, freshly nucleated aerosol size distributions down to 1 nm geometric diameter were acquired with a newly developed scanning mobility particle spectrometer (SMPS) utilizing a DEG UCPC as the particle detector (Iida et al., 2009; Jiang et al., 2011a). In this study, uncertainties in the measured size distribution arise from uncertainties in the chemistry-dependent activation efficiency of particles smaller than 2 nm geometric diameter using the DEG UCPC, uncertainties in the chemistry- and size-dependent charging below 2 nm geometric diameter, and uncertainties from particle counting statistics in the DEG UCPC; uncertainties in the measured particle diameter were assumed to be negligible in comparison and were therefore not considered in this analysis. Further details regarding the setup and operation of the DEG SMPS are described in Jiang et al. (2011a). In the NCAR study, a DEG SMPS was also deployed and was operated identically to the system used in the NCCN study, except for the DEG UCPC, which was operated at a higher aerosol flow rate and saturator temperature for increased particle detection efficiency (Kuang et al., 2012). Aerosol number size distributions at larger sizes were acquired with a conventional SMPS system (3 to 500 nm) (Woo et al., 2001) during NCCN and with a commercial TSI SMPS system (10 to 500 nm) during the NCAR campaign. A cluster chemical ionization mass spectrometer (Cluster CIMS) (Zhao et al., 2010) was also deployed during both campaigns to measure concentrations of gas-phase sulfuric acid monomer, [H2SO4], and of neutral molecular clusters formed by nucleation. Systematic uncertainties in the measurement of [H2SO4] lead to upper and lower limits that are a factor of 1.3 above and below the reported concentration, respectively; random relative uncertainties were assumed to be 10% (Zhao et al., 2010; Jiang et al., 2011b). Further details regarding the estimation of uncertainties in the measurement of [H2SO4] can be found in the appendices. During NCCN, direct measurements of the molecular composition of freshly nucleated 10 to 40 nm diameter particles were performed using a Thermal Desorption Chemical Ionization Mass Spectrometer (TDCIMS) (Smith et al., 2004). The TDCIMS measured particles with mobility diameters chosen to correspond to the peak of the growing nucleation mode. Molecular composition was inferred from the ion current detected in both positive and negative ion mass spectra, with uncertainties typically ranging from 10 to 30% (Smith et al., 2008; Smith and Rathbone, 2008).
Data analysis DEG SMPS raw data were inverted to yield aerosol number size distributions (Knutson, 1976; Stolzenburg and McMurry, 2008; Jiang et al., 2011a). While particles were size-classified according to their mobility diameter, results from this study are presented in terms of both particle mobility diameter and particle geometric diameter, where the former is approximately 0.3 nm larger than the latter (Larriba et al., 2011). Estimated uncertainties in the measured size distribution and in [H2SO4] were fully propagated in the subsequent analysis. Size and time-dependent observed particle growth rates, GR_OBS, were estimated using two new methods based on fitting measured size distributions to the aerosol general dynamic equation (GDE) (Gelbard and Seinfeld, 1978). Whether the measured size distributions were consistent with sampling from a regional air mass or with interception of a local plume determined the appropriate method to obtain growth rates from the measured size distributions. The analysis method for plume events utilizes the novel result that size distributions (<∼5 nm) for a nucleating system in the presence of an aerosol achieve pseudo steady-state shortly after the start of nucleation (McMurry, 1983). For regional events, the analysis method is similar in principle to earlier analysis techniques (Lehtinen et al., 2004; Verheggen and Mozurkewich, 2006) that fit size distributions to the GDE. For an aerosol system that is growing through simultaneous gas uptake and coagulation, the aerosol GDE can be integrated to describe the evolution of the number concentration N_Ω between particle diameters D_p1 and D_p2 (D_p2 > D_p1) according to Eq. (1):

dN_Ω/dt = GR(D_p1, t) n(D_p1, t) − GR(D_p2, t) n(D_p2, t) + CoagSrc − CoagSnk,    (1)

where Ω is the size interval defined by D_p1 and D_p2, GR = dD_p/dt, and n = dN/dD_p. On the RHS of Eq. (1), the first and second terms are the condensational fluxes into and out of the aerosol size interval defined by Ω, CoagSrc is the source term defining the production of particles in Ω due to coagulation, and CoagSnk is the sink term defining the removal of particles in Ω due to coagulation. With a measured size distribution n, the only unknown quantities in Eq. (1) are the diameter growth rates at the interval boundaries, GR(D_p1, t) and GR(D_p2, t), which are then obtained as functions of time and particle diameter through an iterative solution of Eq. (1) at various particle sizes. Further details of each method can be found in Appendices A1 (regional event analysis) and A2 (plume event analysis). Previous methods for obtaining growth rates from size distributions typically require either a distinct nucleation mode (Mäkelä et al., 2000; Lehtinen and Kulmala, 2003; Stolzenburg et al., 2005) or a discernible time shift between concentration profiles at two sizes (Weber et al., 1997; Sihto et al., 2006). With the former method, it is not possible to analyze NPF events characterized by sustained periods of particle production, as there is no distinct nucleation mode, nor is it possible to de-couple the size and time-dependence of observed growth rates. With the latter method, NPF events with high growth rates are not amenable to analysis due to the potential lack of a discernible time shift, while the growth rates that can be obtained are often averaged over large size intervals (1 to 6 nm) and time intervals (typically hours). Both methods used in this study were developed to exploit the strong size and time-dependence inherent in freshly nucleated aerosol size distribution measurements down to 1 nm, without the restrictions of earlier methods.
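Eq. (1) can be marched bin by bin: given the measured n(D_p), each bin's dN/dt, and the coagulation terms, GR at one boundary determines GR at the next. A minimal Python sketch of that bookkeeping with synthetic inputs (the coagulation terms are taken as given rather than computed from a kernel, and all numbers are made up):

```python
import numpy as np

def growth_rates_from_gde(dp_edges, n_at_edges, dN_dt, coag_src, coag_snk,
                          gr_first_edge):
    """Solve the integrated GDE, Eq. (1), bin by bin for GR at bin edges.

    Per bin i (edges i, i+1):
        dN_dt[i] = GR[i]*n[i] - GR[i+1]*n[i+1] + coag_src[i] - coag_snk[i]
    so GR[i+1] = (GR[i]*n[i] + coag_src[i] - coag_snk[i] - dN_dt[i]) / n[i+1].

    dp_edges:      bin-edge diameters [nm], length m+1
    n_at_edges:    dN/dDp at the edges, length m+1
    dN_dt:         time derivative of each bin's number concentration, length m
    coag_src/snk:  coagulation source/sink per bin, length m
    gr_first_edge: GR at the smallest edge [nm/h] (boundary condition)
    """
    gr = np.empty_like(n_at_edges)
    gr[0] = gr_first_edge
    for i in range(len(dN_dt)):
        gr[i + 1] = (gr[i] * n_at_edges[i]
                     + coag_src[i] - coag_snk[i] - dN_dt[i]) / n_at_edges[i + 1]
    return gr

# Synthetic example: 1-5 nm edges, made-up distribution and balance terms.
edges = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = np.array([2e4, 1e4, 6e3, 4e3, 3e3])      # dN/dDp, falling with size
dNdt = np.array([1e4, 5e3, 3e3, 2e3])        # per bin, per hour
src = np.zeros(4)
snk = np.array([2e3, 1e3, 5e2, 3e2])
print(growth_rates_from_gde(edges, n, dNdt, src, snk, gr_first_edge=5.0))
```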
Size and time-dependent particle growth rates due solely to the condensation of sulfuric acid, GR_SA, were calculated using measured sulfuric acid concentrations and assuming bulk properties for the sulfuric acid vapor (density = 1.84 g cm−3), explicitly accounting for the dimensions and motion of the collision partners during condensation (Fuchs, 1964; Lehtinen and Kulmala, 2003; Seinfeld and Pandis, 2006); the effects of interaction potentials were neglected. The size-dependent growth rate enhancement Γ (defined as the ratio of GR_OBS to GR_SA) was then obtained, quantifying the size-dependent contribution of species other than sulfuric acid to the observed growth. In this study, size-dependent particle growth rates GR_OBS and growth rate enhancements Γ down to 1 nm geometric diameter will be presented for a regional event measured during the NCAR campaign (19 September 2010) and for a plume event measured during NCCN (7 August 2009). The methods of analysis used to obtain both GR_OBS(D_p) and Γ(D_p) for regional and plume events are described in Appendices 1.2 and 2.2, respectively. Based on the size-dependent growth rates and growth rate enhancements obtained for the events analyzed in this study, implications for nucleation and growth processes will be discussed, along with potential impacts on CCN production from new particle formation. Size-dependent growth rates Observed size-dependent growth rates, GR_OBS(D_p), down to 1 nm geometric diameter and their corresponding uncertainties are presented in Fig. 1a and b. Measurement uncertainties arising from particle counting statistics were included in the calculation of GR_OBS(D_p). The effects of size-dependent uncertainties (in the composition-dependent detection efficiency and in the bipolar charging efficiency) on GR_OBS(D_p) are discussed in the appendices. The results in Fig. 1a are based on measurements in Atlanta on 7 August 2009 during the NCCN campaign. Growth rates increased systematically with size, and tended to be higher during the afternoon at 13:00 (5.5 ± 0.8 to 7.6 ± 0.6 nm h⁻¹) than during the morning at 09:50 (2.1 ± 1 to 6.0 ± 0.4 nm h⁻¹). These values of the growth rate are consistent with results reported in an earlier study at this site (Stolzenburg et al., 2005). We did not attempt to calculate growth rates for particles larger than 3 nm because the steady-state assumption required for analyzing size distributions from intercepted plumes becomes increasingly questionable as size increases (McMurry, 1983). The results in Fig. 1b are based on midday measurements at NCAR on 19 September 2010. Again, growth rates increased approximately linearly with size up to 3 nm geometric diameter during the period of peak particle production (13:00), increasing from 5.6 ± 2 nm h⁻¹ to 27 ± 5 nm h⁻¹ over the size range 0.8 to 3 nm geometric diameter. For the NCAR event, growth rates were higher than those observed in Atlanta and were approximately constant with size for particles in the 3 to 5 nm geometric diameter size range. These high growth rates above 3 nm are comparable with those observed during intense periods of particle production and growth in Mexico City (Iida et al., 2008).
Previous studies have reported growth rate measurements during the initial steps of aerosol formation (Birmili et al., 2003; Kulmala et al., 2004b, c; Hirsikko et al., 2005; Stolzenburg et al., 2005; Iida et al., 2008; Manninen et al., 2009; Kuang et al., 2010), with some studies suggesting that diameter growth rates increase with size for the smallest particles (Kulmala et al., 2004b; Manninen et al., 2009). While those results were obtained from size distributions of the ambient ion population rather than from the total aerosol population (neutral and charged), such as reported here, their reported growth rate size-dependence is largely substantiated by our results in this study. The results from this study de-couple for the first time the size and time-dependence of observed particle growth rates, thanks to the new analysis methods, which obtain size-dependent growth rates at a specified time. This de-coupling allows the observed size-dependent growth to be interpreted clearly as an effect of the particle growth mechanism at work. The size-dependence of the observed growth rates (up to ∼3 nm geometric diameter for both analyzed events) can be compared with the predictions of theoretical aerosol growth laws (Friedlander, 1977), providing information regarding possible mechanisms for aerosol growth. Particle growth laws that are qualitatively consistent with the observed increase in growth rate with size include: (1) nano-Köhler activation of nanometer-sized nuclei (Kulmala et al., 2004a), (2) multi-component diffusion corrected for the effect of particle curvature on vapor pressure (Kelvin effect), and (3) surface- or volume-controlled reaction corrected for the Kelvin effect on surface and volume concentrations, respectively (Friedlander, 1977; Heisler and Friedlander, 1977; McMurry and Wilson, 1982; Kulmala et al., 2004b). The observed size-dependence could very well be the result of a combination of growth mechanisms occurring simultaneously. Size-dependent growth rate enhancements Size-dependent growth rate enhancements, Γ(D_p), down to 1 nm geometric diameter and their corresponding uncertainties are presented in Fig. 2a for the NCCN campaign. At 09:50, Γ(D_p) increases with size, from 1.9 ± 1 at 1.2 nm geometric diameter, where sulfuric acid condensation accounts for ∼50% of the observed growth, to around 8.3 ± 1 at 4.1 nm geometric diameter, where sulfuric acid condensation accounts for ∼10% of the observed growth. During the afternoon at 13:00, a similar size-dependence for Γ was observed, albeit with lower values, ranging from 1.2 ± 0.2 at 1.2 nm geometric diameter, where sulfuric acid condensation accounts for nearly all of the observed growth, to around 2.5 ± 0.3 at 4.1 nm geometric diameter, where sulfuric acid condensation accounts for ∼40% of the observed growth. These relatively low values of Γ are consistent with earlier observations in Atlanta of sulfuric acid-dominated condensational growth (Stolzenburg et al., 2005; Kuang et al., 2010) and of nanoparticles composed primarily of ammonium sulfate (Smith et al., 2005).
During 7 August 2009, the TDCIMS measured the composition of freshly nucleated particles at two sizes: 20 nm particles at the beginning of the NPF event (08:45-11:00), and 40 nm diameter particles later on during the event (13:30-14:30), in order to track the composition of the peak of the quickly growing nucleation mode. At the beginning of the event, measurements of the ion current indicated that dimethyl ammonium sulfate comprised ∼3 % of the total molar composition of 20 nm diameter particles. Measurements of the composition of 40 nm particles later on during the event showed that sulfate salts of ammonium and dimethyl ammonium comprised ∼60 % of the total molar composition.

TDCIMS measurements of freshly nucleated nanoparticle composition provide direct information on gas-phase species that participate in nanoparticle growth (Smith et al., 2008; Smith and Rathbone, 2008). The nanoparticle composition measurements on 7 August 2009 suggest that, at the beginning of the event, the observed growth rate for 20 nm particles is a factor of 30 higher than the corresponding sulfuric acid-limited growth rate, which is consistent with the observation that nanoparticle composition is dominated by species other than sulfate. During the event, the TDCIMS measurements indicate that sulfate contributes an increasing amount to the composition of 40 nm particles, suggesting that the enhancement to growth is a factor of 1.5 for 40 nm particles. This observation of increasing sulfate contribution to nanoparticle composition (20 to 40 nm) measured by the TDCIMS supports the observed decrease in growth rate enhancement (1 to 4 nm) from the morning to the afternoon measured by the DEG SMPS.

The results in Fig. 2b are obtained for the time interval 12:40-13:00 during the period of peak particle production for the NPF event measured on 19 September 2010 during the NCAR campaign. Γ(D_p) increases approximately linearly with size during that time period, ranging from around 3.1 ± 1 at 0.8 nm geometric diameter, where sulfuric acid condensation accounts for ∼33 % of the observed growth (the balance of which could include volume contributions from associated ammonium and water), to 25 ± 4 at 2.7 nm geometric diameter, where sulfuric acid-limited condensation accounts for ∼5 % of the observed growth. Γ then remains approximately constant with size up to 5 nm geometric diameter. Values of Γ as high as 20 to 50 have been observed at other locations (Kuang et al., 2010). A number of studies have presented evidence of sulfuric acid-limited condensation accounting for only a fraction of the observed sub-3 nm growth in ambient (Weber et al., 1997; Fiedler et al., 2005; Sihto et al., 2006) and laboratory (Metzger et al., 2010) experiments. Due to the nature of the methods used to obtain sub-3 nm growth rates in those studies, the reported growth rate enhancements are, by definition, averages over the size and time it takes for a nucleated particle to grow to ∼3 nm. The growth rate enhancement values presented in Fig. 2a and b are the first reported results of size-resolved Γ down to ∼1 nm geometric diameter, providing a direct indication that species other than sulfuric acid can play a significant role in particle growth below 2 nm geometric diameter.
The values of Γ at the smallest particle sizes also provide insights into the nucleation process, namely, upper limit estimates on the critical cluster size. For both analyzed NPF events, Γ decreases with decreasing size, approaching values of 1.9 and 1.2 at 1.2 nm geometric diameter during the morning and afternoon of the NCCN event, respectively, and approaching a value of ∼3 at 0.8 nm geometric diameter during the period of particle production for the NCAR NPF event.

It is worth noting that GR_SA(D_p) is determined assuming zero evaporative flux of sulfuric acid from the particle surface, leading to an upper limit estimate of GR_SA(D_p) and, consequently, a lower limit estimate of Γ in this study. At the size of critical clusters or smaller, evaporation competes with or even overwhelms the sulfuric acid-limited condensation flux, and the resulting Γ is then substantially less than 1. Therefore, a Γ value greater than unity is only expected at sizes greater than the critical cluster size. For the NCCN event at 09:50, Γ = 1.9 at 1.2 nm geometric diameter, indicating that the critical cluster is formed at a smaller size, i.e., that the bottleneck to nucleation must occur below 1.2 nm geometric diameter. This result is consistent with the ambient aerosol size distribution measured at 10:00 (Jiang et al., 2011b), where the steepest drop in the distribution function occurs between the sulfuric acid monomer and trimer, implying that the bottleneck to nucleation occurs below ∼1 nm geometric diameter, the estimated size of the sulfuric acid trimer. For the NCAR event from 12:40-13:00, a Γ value of ∼3 suggests that the critical cluster size is less than 0.8 nm geometric diameter.

Impact on nucleated particle survival probability

This observed size-dependence in growth rate up to 3 nm not only provides constraints on potential growth mechanisms, but also provides, for the first time, realistic growth rate inputs for modeling the nucleated particle survival probability in aerosol microphysical modules used in regional and chemical transport models (Pierce and Adams, 2007; Spracklen et al., 2008; Pierce and Adams, 2009; Wang and Penner, 2009; Spracklen et al., 2010). The nucleated particle survival probability is defined as the probability that a nucleated particle (∼1 nm) will grow to a detectable size (3 nm) before being scavenged by the pre-existing aerosol. This probability is usually parameterized assuming a constant growth rate as the nucleated particles grow to 3 nm (Weber et al., 1997; Kerminen and Kulmala, 2002; McMurry et al., 2005; Lehtinen et al., 2007). Results from this study indicate that this assumption is not valid below 5 nm. The relative impact of a size-dependent growth rate on particle survival probability was modeled by numerically integrating the aerosol general dynamic equation for an aerosol population growing by simultaneous gas-uptake and coagulation (Friedlander, 1977; Gelbard and Seinfeld, 1978; Kuang et al., 2009), explicitly accounting for size-dependent growth below 3 nm. Measured inputs for this calculation were the observed size-dependent growth rates, growth rates derived from observed sulfuric acid concentrations, and the pre-existing aerosol size distribution obtained at 12:40 for the NCAR NPF event. Nucleated particle losses were determined exclusively from coagulation with the pre-existing aerosol. The results from this model calculation are event and environment-specific and are meant to be representative only of regions where sulfuric acid-limited condensation accounts for only a fraction of the observed growth.
Model results for the survival probability as a function of final particle size are shown in Fig. 3 for three growth rate scenarios: (1) constant growth, where the growth rate below 3 nm is assumed to be constant and equal to the observed growth at 3 nm; (2) size-dependent growth, where the growth rate below 3 nm is equal to the observed size-dependent growth rate; and (3) sulfuric acid-limited growth, where the growth rate below 3 nm is equal to the size-dependent growth rate assuming only the condensation of sulfuric acid, a model for condensational growth that has been used in a number of large-scale simulations (Pierce and Adams, 2007; Wang and Penner, 2009; Spracklen et al., 2010).

Under the size-dependent growth scenario, the modeled probability of a nucleated particle growing to 3 nm before being scavenged by the pre-existing aerosol is ∼8 %, orders of magnitude larger than the survival probability assuming sulfuric acid-limited growth, and ∼4× lower than the survival probability assuming a constant growth rate below 3 nm equal to the growth rate at 3 nm. While growth scenarios 1 and 3 have been regularly implemented in aerosol simulations, they can potentially lead to gross over- and under-predictions of survival probability for particle growth to a CCN-active size, since particles are most vulnerable to loss below 3 nm. Accurate representation of the impact of NPF and growth on CCN production will require a mechanistic understanding of the processes and species responsible for this size-dependent growth. A minimal numerical version of this comparison is sketched below.
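The survival-probability comparison in Fig. 3 can be approximated with a much simpler calculation than the full general dynamic equation integration used above: when losses are pure coagulation scavenging, P(D_f) = exp(−∫ CoagS(D_p)/GR(D_p) dD_p). In the sketch below the coagulation-sink profile and the growth-rate scenarios are hypothetical stand-ins, not the measured NCAR inputs.

```python
import numpy as np

def survival_probability(dp_grid_nm, gr_nm_per_h, coag_sink_per_h, dp_start_nm=1.12):
    """Survival probability that a particle starting at dp_start_nm reaches each
    diameter in dp_grid_nm, when losses are pure coagulation scavenging:
        P(Df) = exp( - integral_{D0}^{Df} CoagS(Dp) / GR(Dp) dDp ).
    gr_nm_per_h and coag_sink_per_h are arrays evaluated on dp_grid_nm."""
    mask = dp_grid_nm >= dp_start_nm
    dp = dp_grid_nm[mask]
    integrand = coag_sink_per_h[mask] / gr_nm_per_h[mask]  # (1/h)/(nm/h) = 1/nm
    # cumulative trapezoidal integration along the size grid
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(dp))])
    return dp, np.exp(-cum)

# Illustrative inputs (NOT the measured NCAR values): a coagulation sink that
# decays with size, compared under constant vs. size-dependent growth.
dp = np.linspace(1.12, 3.0, 100)                 # nm
coags = 30.0 * dp ** -1.7                        # h^-1, hypothetical scavenging rate
gr_const = np.full_like(dp, 14.0)                # scenario 1: GR fixed at GR_OBS(3 nm)
gr_sizedep = 14.0 * (dp / 3.0)                   # scenario 2: GR rising linearly with size

for name, gr in [("constant", gr_const), ("size-dependent", gr_sizedep)]:
    d, p = survival_probability(dp, gr, coags)
    print(f"{name:>15}: P(3 nm) = {p[-1]:.3f}")
```

With these assumed profiles the size-dependent scenario yields P(3 nm) ≈ 0.08 and the constant-growth scenario several times more, reproducing the qualitative ordering discussed above.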
Summary

This study presents measurements and analysis methods that de-couple, for the first time, the size and time-dependence of diameter growth rates for freshly nucleated particles down to 1 nm geometric diameter. Data analysis methods were developed to obtain size-dependent growth rates at an instant in time for regional and plume NPF events by fitting the aerosol general dynamic equation to measured size distributions. Observed growth rates were found to increase approximately linearly with size from 1 to 3 nm geometric diameter, consistent with predictions from nano-Köhler theory and from Kelvin-limited diffusion, surface, and volume growth laws. Corresponding growth rate enhancements were also found to increase approximately linearly with size, starting from ∼1 and 3 at the smallest sizes (1.2 and 0.8 nm geometric diameter, respectively) and reaching values as high as 8 and 25 (4.1 and 2.7 nm geometric diameter, respectively) for the events that were analyzed in this study. The contribution of species other than sulfuric acid to the observed growth for the analyzed events is significant below 3 nm, accounting for up to 95 % of the observed growth. For such events where growth is dominated by species other than sulfuric acid, neglecting size-dependent growth could lead to a significant overestimation of the resulting CCN survival probability. Further measurements and analyses of freshly nucleated aerosol number size distributions will help to provide further constraints and insights into ambient nucleation and growth processes, complementing measurements of particle composition by mass spectrometry.

Appendix A

Decoupling the size and time dependence of particle diameter growth rates

Data analysis methods were developed to obtain aerosol population dynamics information from measured size distributions, namely the aerosol diameter growth rate as a function of particle diameter and time during the period of particle production and growth. Two different approaches were used to obtain these growth rates, depending on whether the nucleation event was more regional or plume-like in nature.

A1 Regional event analysis

A1.1 Model development

For a regional event, the state of the aerosol system is characterized as being spatially and chemically homogeneous. In such a system, the evolution of the number concentration between sizes D_p1 and D_p2 (D_p2 > D_p1) for an aerosol system that is growing through simultaneous gas uptake and coagulation is described mathematically by the following population balance equation (Eq. A1) (Gelbard and Seinfeld, 1978) and term definitions (Eqs. A2-A4):

dN_Ω(t)/dt = GR(D_p1, t) n(D_p1, t) − GR(D_p2, t) n(D_p2, t) + CoagSrc − CoagSnk, (A1)

where Ω is the size interval defined by D_p1 and D_p2, GR = dD_p/dt, n = dN/dD_p, β is the Fuchs form of the Brownian coagulation coefficient (Lehtinen and Kulmala, 2003; Seinfeld and Pandis, 2006) entering the coagulation terms, and

N_Ω(t) = ∫ n(D_p, t) dD_p over Ω. (A2)

On the LHS of Eq. (A1), dN_Ω(t)/dt is the time rate of change of the size distribution n integrated from D_p1 to D_p2. On the RHS of Eq. (A1), the first and second terms are the condensational flux into and out of the aerosol size interval defined by Ω, CoagSrc is the coagulation source term defining the production of particles in Ω due to the coagulation of smaller particles, and CoagSnk is the coagulation sink term defining the removal of particles in Ω due to self-coagulation of particles within Ω, coagulation with particles smaller than D_p1, and coagulation with particles larger than D_p2. Equation (A1) is simply the general dynamic equation integrated from D_p1 to D_p2 (Gelbard and Seinfeld, 1978), where CoagSrc and CoagSnk define the total (rather than net) gain and loss, respectively, in particle number in Ω due to coagulation. With a measured size distribution n, the only unknown quantities in Eq. (A1) are the diameter growth rates at the interval boundaries, GR(D_p1, t) and GR(D_p2, t).

The condensational flux at D_p1 is defined as (Heisler and Friedlander, 1977; Weber et al., 1996):

J(D_p1, t) = GR(D_p1, t) n(D_p1, t). (A5)

Equation (A1) can be re-arranged to yield an equivalent expression for the condensational flux at D_p1:

J_bal(D_p1, t) = dN_Ω(t)/dt + GR(D_p2, t) n(D_p2, t) − CoagSrc + CoagSnk, (A6)

where the subscript in J_bal refers to the balance method by which J(D_p1, t) is obtained (Sihto et al., 2006; Riipinen et al., 2007). To facilitate the analysis, the observed diameter growth rate GR(D_p, t) is represented as the product of two terms, as shown in Eq. (A7):

GR(D_p, t) = Γ(D_p) GR_SA(D_p, t), (A7)

where GR_SA(D_p, t) is the diameter growth rate based on the condensation of only sulfuric acid vapor, and Γ(D_p) is the ratio of the observed growth rate GR(D_p, t) to GR_SA(D_p, t), an empirical factor that represents the contribution of other species and processes to the observed growth. Implicit in Eq. (A7) is the assumption that Γ is constant with time at a given size. For the short time interval during the NPF event analyzed with this method, this assumption will be verified in the subsequent section describing how Γ(D_p2) is calculated.
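A minimal sketch of how the balance-method flux J_bal (Eq. A6) could be evaluated from a time series of measured size distributions follows; the array layout and the precomputed coagulation terms are assumptions, not the study's implementation.

```python
import numpy as np

def j_bal(times_h, n_dist, dp_grid_nm, i1, i2, gr_dp2_nm_h, coag_src, coag_snk):
    """Balance-method condensational flux J_bal(D_p1, t) (Eq. A6):
        J_bal = dN_omega/dt + GR(D_p2) n(D_p2) - CoagSrc + CoagSnk.
    times_h   : measurement times (h)
    n_dist    : array (n_times, n_sizes) of dN/dDp (# cm^-3 nm^-1)
    i1, i2    : indices of the interval boundaries D_p1 < D_p2
    gr_dp2_nm_h, coag_src, coag_snk : per-time arrays, assumed precomputed."""
    # N_omega(t): integrate the distribution over the interval (Eq. A2)
    n_omega = np.trapz(n_dist[:, i1:i2 + 1], dp_grid_nm[i1:i2 + 1], axis=1)
    dn_dt = np.gradient(n_omega, times_h)            # # cm^-3 h^-1
    return dn_dt + gr_dp2_nm_h * n_dist[:, i2] - coag_src + coag_snk
```

With J_SA(D_p1, t) = GR_SA(D_p1, t) n(D_p1, t) computed from the measured sulfuric acid concentration, Γ(D_p1) then follows as the through-origin slope of J_bal against J_SA (Eq. A10 below).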
Substituting Eq. (A7) into Eqs. (A5) and (A6), and letting J_SA(D_p1, t) = GR_SA(D_p1, t) n(D_p1, t), yields:

J(D_p1, t) = Γ(D_p1) J_SA(D_p1, t), (A8)

J_bal(D_p1, t) = dN_Ω(t)/dt + Γ(D_p2) J_SA(D_p2, t) − CoagSrc + CoagSnk. (A9)

Equating Eqs. (A8) and (A9) then yields:

J_bal(D_p1, t) = Γ(D_p1) J_SA(D_p1, t). (A10)

Values of J_SA(D_p1, t) and J_bal(D_p1, t) are obtained from the measured size distribution and sulfuric acid concentration, where GR_SA(D_p1, t) is calculated using the measured sulfuric acid concentration and a sulfuric acid monomer volume and diameter assuming a bulk density of 1.84 g cm−3. Estimated uncertainties from the measurement of aerosol and sulfuric acid number concentrations are included and propagated through Eqs. (A1) through (A9). Correlations between individual terms in Eq. (A9) are included in the uncertainty propagation. For the results presented in the main text, uncertainties in the aerosol size distribution and number concentration are estimated from particle counting statistics (where it has been assumed that there is no uncertainty in the measurement of particle diameter), while random relative uncertainties in the measurement of [H2SO4] were assumed to be 10 %. The effects of systematic uncertainties in the measurement of [H2SO4] (see Appendix B) and the aerosol size distribution (due to uncertainties in chemistry-dependent detection efficiencies and charging efficiencies) are detailed in Appendix C.

With values of Γ(D_p2), obtained by a method which will be discussed in the subsequent section, a linear least-squares regression can be performed in which values of J_bal(D_p1, t), J_SA(D_p1, t), and their corresponding uncertainties are fit to Eq. (A10), yielding a best-fit estimate for Γ(D_p1) and its standard error. Standard techniques for applying a least-squares algorithm to data with uncertainties in both coordinates were applied (Williamson, 1968; Neri et al., 1989; Cantrell, 2008). The time interval for this procedure is chosen to be long enough to yield a large enough data set for fitting (∼20 min, 5 data points), but short enough so that the assumption of constant Γ over that brief time interval can reasonably be made. Fixing the upper size boundary at D_p2 (D_p2 = 5.4 nm), this regression procedure can then be repeated at different values of D_p1 to obtain Γ as a function of size below D_p2.

A1.2 Determining Γ at the upper size boundary D_p2

Over a single size bin i (∼0.5 nm bin spacing at D_p2 = 5.4 nm geometric diameter), Eq. (A1) can be rewritten as:

dN_Ω(t)/dt = Γ(i) [GR_SA(D_p1, t) n(D_p1, t) − GR_SA(D_p2, t) n(D_p2, t)] + CoagSrc − CoagSnk, (A11)

where Γ(i) is initially assumed constant over such a small interval Ω bounded by D_p1 and D_p2. This assumption will be relaxed during a subsequent iterative calculation of Γ. Re-arranging Eq. (A11) yields:

A = Γ(i) B, (A12)

where A and B are defined as:

A = dN_Ω(t)/dt − CoagSrc + CoagSnk (A13)

and

B = GR_SA(D_p1, t) n(D_p1, t) − GR_SA(D_p2, t) n(D_p2, t). (A14)

Values of A and B are calculated from the measured size distribution with corresponding uncertainties due to propagation of measurement uncertainties in aerosol and sulfuric acid number concentration. Correlations between terms in Eq. (A13) and between terms in Eq. (A14) are included in the uncertainty propagation. Values of A and B along with their uncertainties are then fit to Eq. (A12) to yield a best-fit estimate for Γ(i) and its standard error using established methods (Williamson, 1968; Neri et al., 1989; Cantrell, 2008). The standard error is equal to the square root of the variance multiplied by the sum of squared residuals weighted by the inverse variances of the individual data points (Cantrell, 2008). An example of such a fit is shown in Fig. A1.
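The through-origin regression used for Eqs. (A10) and (A12) must handle uncertainties in both coordinates. The sketch below uses an effective-variance iteration as a stand-in for the Williamson (1968)/Cantrell (2008) method cited in the text; the data are synthetic.

```python
import numpy as np

def fit_through_origin(x, y, sx, sy, n_iter=20):
    """Weighted least-squares slope for y = gamma * x with uncertainties in both
    coordinates: each point is weighted by 1/(sy^2 + gamma^2 sx^2), and the
    slope is iterated until self-consistent. Returns (gamma, standard error)."""
    gamma = np.sum(x * y) / np.sum(x * x)            # unweighted starting value
    for _ in range(n_iter):
        w = 1.0 / (sy**2 + gamma**2 * sx**2)         # effective inverse variances
        gamma = np.sum(w * x * y) / np.sum(w * x * x)
    # standard error of a weighted through-origin slope, scaled by the
    # weighted sum of squared residuals per degree of freedom
    resid = y - gamma * x
    dof = len(x) - 1
    se = np.sqrt(np.sum(w * resid**2) / dof / np.sum(w * x * x))
    return gamma, se

# Example with 5 synthetic (J_SA, J_bal) points, mimicking the ~20 min window
rng = np.random.default_rng(0)
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0])             # J_SA, arbitrary units
y = 18.0 * x + rng.normal(0, 2.0, size=x.size)      # J_bal with noise
gamma, se = fit_through_origin(x, y, sx=0.1 * x, sy=0.1 * y)
print(f"Gamma = {gamma:.1f} +/- {se:.1f}")
```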
The time interval selected for analysis in Fig. A1 corresponds to a period of strong nucleation and rapid growth during the analyzed NPF event, where there are sufficient particle counts at the smallest bin sizes. This initial best-fit estimate of Γ(i) is then input into Eq. (A10) to determine Γ(i − 1) for the adjacent size bin. Relaxing the assumption of constant Γ(i) over bin i in Eq. (A11), a weighting factor is then introduced into Eq. (A14) in order to match condensational fluxes at the boundary between bins i and i − 1. The subsequent calculation of a new Γ(i) and Γ(i − 1) is iterated until Γ(i) changes by less than a user-set criterion, in this case, 1 %. An example of this iterative convergence is shown in Fig. A2, where Γ(i) at the upper boundary converges to a value of 18 ± 3.5 (one standard error). For this particular calculation, median values of [H2SO4] were used (no systematic uncertainty applied). It should be noted that over a different time interval of similar length, Γ(i) will likely be a different value.

A1.3 Determining GR_OBS(D_p)

The best-fit values of Γ(D_p) obtained by repeating the regression at successive values of D_p1 are shown in the main text in Fig. 2b. Using Eq. (A7) and best-fit estimates of Γ(D_p), the observed growth rates GR_OBS(D_p) and resulting uncertainties can be calculated, results of which are shown in the main text in Fig. 1b.

A2 Plume event analysis

A2.1 Steady state assumption

Previous work has shown that in a nucleating system, steady state concentrations can be achieved for small particles (∼5 nm) in time periods of less than about one hour for typical atmospheric conditions (McMurry, 1983). This steady state is due to the balance between formation from smaller particles by condensation and coagulation, and removal by coagulation with particles of all sizes. Figure A3 shows the simulation results for collision-controlled nucleation in a system that initially contains no particles and with the monomer concentration fixed at 1 × 10^8 cm−3, which is in the range of sulfuric acid vapor concentrations observed in Atlanta during NCCN 2009. Concentrations initially increase rapidly because formation rates exceed coagulation loss rates due to the low particle concentrations immediately following formation. After 30 to 60 min, however, quasi-steady state is achieved. The slow, steady decrease in concentrations with time that is observed after this period is due to the gradual increase in coagulation losses resulting from the increasing aerosol surface area as time progresses.
For free molecular kinetics, which was assumed in obtaining these illustrative results, coagulation losses vary in proportion to the pre-existing aerosol surface area, which varies in proportion to t^(1/5) (McMurry and Friedlander, 1977). Although the time dependence for atmospheric aerosols (which fall in the transition regime) will be quantitatively different, it will follow a qualitatively similar weak dependence on time. It is this weak time dependence that allows the establishment of a quasi-steady state determined by the instantaneous aerosol size distribution.

Fig. A3. Time-dependent particle number concentrations at various particle geometric diameters D_p for collision-controlled nucleation in the free-molecular regime for a system that is initially particle-free. For particles smaller than 5 nm, less than an hour is needed to reach a quasi-steady state. After quasi-steady state is reached, the decrease in number concentration is due to the increase of aerosol surface area (a coagulation sink), and does not depend explicitly on time.

All the new particle formation events observed in Atlanta during the 2009 intensive measurement campaign appear to be due to the impact of plumes from nearby coal-fired power plants. Evidence for this is the correlation between concentrations of freshly nucleated particles and [SO2], and the sharp variability in particle concentrations as the wind transported the plume towards and away from our site. We estimate that, depending on the wind direction and speed, transport times from the stack to the sampling site typically exceeded about two hours. This leads us to conclude that number distribution functions of sub-5 nm particles should be quasi-steady-state.

A2.2 Model development

For each 15 min measurement of the aerosol size distribution, a smooth curve is fitted to obtain a continuous size distribution. Number distributions were obtained by merging Cluster Chemical Ionization Mass Spectrometer (Cluster CIMS) and diethylene glycol scanning mobility particle spectrometer (DEG SMPS) data as described by Jiang et al. (2011b) and shown in Fig. A4. The fit to the measured size distribution was performed for sizes ranging from 0.8 nm geometric diameter (clusters containing 3 sulfuric acid molecules) to 5.0 nm geometric diameter. Particle geometric diameters were estimated from measured mobility diameters according to the method of Larriba et al. (2011). Applying the steady-state assumption to the number concentration in the GDE of Eq. (A1) (Appendix A1.1) yields:

0 = GR(D_p1, t) n(D_p1, t) − GR(D_p2, t) n(D_p2, t) + CoagSrc − CoagSnk, (A16)

where the terms CoagSnk and CoagSrc are defined in Eqs. (A3) and (A4), respectively, n = dN/dD_p, and GR is the particle diameter growth rate. In the plume analysis method, D_p1 and D_p2 are adjacent size bins. GR is estimated by first solving Eq. (A16) for GR through iteration, initially assuming GR(D_p1, t) = GR(D_p2, t). This is a reasonable starting point provided that the diameter bin sizes dD_p are small enough. So for each size D_p1 we first obtain GR(D_p1, t), defined by:

GR(D_p1, t) = (CoagSnk − CoagSrc) / [n(D_p1, t) − n(D_p2, t)]. (A17)

For subsequent iterations, GR(D_p1, t) is recomputed from Eq. (A16) using the value of GR(D_p2, t) obtained in the previous iteration:

GR(D_p1, t) = [GR(D_p2, t) n(D_p2, t) − CoagSrc + CoagSnk] / n(D_p1, t). (A18)

The relative change in GR(D_p1, t) after one iteration is less than 3 %.
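A toy version of this plume-event iteration, using Eqs. (A16)-(A18) as reconstructed above, could look as follows; the bin layout, the precomputed coagulation terms, and the fixed iteration count are all assumptions.

```python
import numpy as np

def plume_growth_rates(n, coag_src, coag_snk, n_iter=5):
    """Steady-state growth rates from Eq. (A16) for adjacent size bins.
    n                  : dN/dDp at the bin boundaries (# cm^-3 nm^-1)
    coag_src, coag_snk : coagulation source/sink per interval (# cm^-3 h^-1)
    The first pass assumes GR(Dp1) = GR(Dp2) (Eq. A17); later passes re-solve
    Eq. (A16) using the neighboring bin's previous-iteration value (Eq. A18)."""
    m = len(n) - 1                                   # number of size intervals
    gr = (coag_snk - coag_src) / (n[:-1] - n[1:])    # Eq. (A17), per interval
    for _ in range(n_iter):
        gr_new = gr.copy()
        for i in range(m - 1):
            # Eq. (A18): flux balance using GR(Dp2) from the last iteration
            gr_new[i] = (gr[i + 1] * n[i + 1] - coag_src[i] + coag_snk[i]) / n[i]
        gr = gr_new
    return gr
```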
A2.3 Calculation of the relative uncertainty

Uncertainties in the number concentration N and the diameter growth rate GR are estimated at each size for which measurements were obtained.

Uncertainties in N

The particle concentration N for each size bin is calculated using Eq. (A19) from the following quantities: C, the number of particles counted by the DEG UCPC; f_detection, the size-dependent DEG detection efficiency (which includes the activation efficiency and fractional diffusional deposition); f_charging, the size-dependent charged fraction; Q_a, the aerosol flow rate through the DEG UCPC; and η_t, the size and flow rate-dependent particle transmission efficiency through the sampling system upstream of the DEG UCPC. The relative uncertainty for C is 1/√C, which is derived by treating particle counting as a Poisson process. Flow rates were calibrated twice daily and were accurate to within 5 %. Uncertainties in flow rate (and η_t) are therefore negligible relative to other uncertainties. Wiedensohler's approximation for Fuchs' diffusion charging theory (Wiedensohler, 1988) is used to calculate the charging fraction f_charging. Since the charged fraction for particles smaller than 2 nm has not been studied, relative uncertainties in f_charging are difficult to quantify and have been neglected in this section. However, the effects of uncertainties in f_charging on the inverted size distributions and subsequent growth rate calculations can be determined and are investigated in Appendix C. The relative uncertainty for f_detection is determined from the activation efficiency of sodium chloride particles (Jiang et al., 2011a). If activation efficiencies of freshly nucleated atmospheric particles are different from those of sodium chloride, this will lead to systematic errors in the inverted size distribution, which have been neglected in this section. The effects of uncertainties in activation efficiency on the inverted size distributions and growth rate calculations are investigated in Appendix C. With these assumptions, the relative error for N (Eq. A20) combines the counting statistics (1/√C) with the relative uncertainty in f_detection.

Appendix B

Systematic uncertainty in the measurement of [H2SO4]

The sulfuric acid concentration is obtained from the Cluster CIMS signals according to Eq. (B1):

[SA] = (cf_m / (k t)) (S_SA / S_Re), (B1)

where cf_m is the mass-dependent transmission factor, k is the ion-molecule rate constant, t is the measured reaction time, S_SA is the counting rate for sulfuric acid summing over m/z 97 (when available) and 160, and S_Re is the counting rate for the reagent ions summing over m/z 125 and 188. The overall systematic uncertainty in [SA] can be estimated from the systematic uncertainties in each term of Eq. (B1). The uncertainty associated with the ion-molecule rate constant k was obtained from the measurements of Viggiano et al. (1997), who estimated a relative systematic uncertainty of ±10 to 15 %. Here, we use a value of k = 1.86 × 10−9 cm3 s−1, the rate constant between sulfuric acid and the reagent ion NO3−·HNO3, assuming it is the predominant reagent ion. However, concentrations of NO3−(HNO3)2 are sometimes elevated, which introduces an additional uncertainty. Therefore, a slightly higher relative systematic uncertainty of ±20 % is used in this analysis. The relative uncertainty associated with the ratio S_SA/S_Re was estimated to be ±10 % from laboratory measurements with relatively stable sulfuric acid concentrations. For ambient measurements, the relative uncertainty is expected to be somewhat larger, ±15 %. Background measurements (usually taken at night) have a counting rate lower than 100 Hz, which corresponds to about 2 × 10^5 cm−3. Given that sulfuric acid concentrations during an NPF event are above 10^6 cm−3, background counts have a negligible impact on the calculation of sulfuric acid concentration. The relative systematic uncertainty associated with the ratio S_SA/S_Re is therefore estimated to be about ±15 %. The relative systematic uncertainty associated with the mass-dependent transmission factor cf_m was estimated to be ±10 % for the Cluster CIMS using positive ions (Zhao et al., 2010), and is a combination of the transmission efficiency through the octopole and quadrupole and the mass discrimination of the electron multiplier. Values for the reaction time t were measured, yielding a relative systematic uncertainty of ±10 %. Propagating the uncertainties in the individual terms in Eq. (B1) yields an overall relative systematic uncertainty of ±30 % for the measurement of [SA].
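The quoted ±30 % follows from combining the individual relative systematic uncertainties in quadrature; a quick check with the percentages given above:

```python
import numpy as np

# Relative systematic uncertainties of the terms in Eq. (B1), from the text:
terms = {
    "cf_m (transmission factor)": 0.10,
    "k (ion-molecule rate constant)": 0.20,
    "t (reaction time)": 0.10,
    "S_SA/S_Re (signal ratio, ambient)": 0.15,
}
total = np.sqrt(sum(u**2 for u in terms.values()))
print(f"overall relative systematic uncertainty: +/-{total:.0%}")
# ~ +/-29%, consistent with the quoted +/-30 %
```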
Appendix C

Effect of systematic measurement uncertainties on Γ

C1 Effect of systematic uncertainties in the measurement of [H2SO4]

A systematic relative uncertainty of ±30 % for the Cluster CIMS measurement of [H2SO4] leads to upper and lower limits that are a factor of 1.3 above and below the reported concentrations. The effect of this systematic uncertainty was propagated through the relevant equations in Appendix A; the results are shown in Fig. C1a for the NCCN event and in Fig. C1b for the NCAR event.

C2 Effect of systematic uncertainties in f_detection and f_charging

We believe the primary sources of uncertainty in calculated values of N are associated with uncertainties in f_detection and f_charging, both of which become more uncertain as particle size decreases and may depend on composition (O'Dowd et al., 2002; Iida et al., 2009; Jiang et al., 2011a; Premnath et al., 2011). We have measured f_detection in the laboratory for particles of known composition (NaCl, Ag, and tetra-alkyl ammonium ions N+[CnH2n+1]4). For the salt and metals, measurements were done for particles of different charges: +1, 0, and −1. Some of these data were reported by Jiang et al. (2011a), and more results will be described in forthcoming publications. The DEG UCPC did not detect Ag particles smaller than 1.7 nm mobility diameter, while atmospheric nuclei as small as 1.1 nm mobility diameter were detected. The tetra-alkyl ammonium ions were even more difficult to detect than the silver. Furthermore, we found that if f_detection is assumed equal to values measured for sodium chloride and f_charging is calculated using bipolar stationary state theory (Wiedensohler, 1988; Reischl et al., 1996; Alonso et al., 1997), distribution functions measured with the DEG SMPS are in good qualitative agreement with distribution functions of neutral molecular clusters measured independently by the cluster chemical ionization mass spectrometer (Cluster CIMS) (Jiang et al., 2011b). Therefore, this is the approach that was used to calculate distribution functions in this study.
C2.1 NCCN event

To estimate the effects of size-dependent uncertainties in the product f_detection · f_charging, we assume that uncertainties increase linearly with decreasing size, from ±10 % at 3 nm to ±50 % at 1 nm, as shown in Fig. C2a. The solid black line corresponds to the value of f_detection · f_charging used in our work, and the dotted lines show our upper and lower bounds for this product. We further assume that uncertainties of the same magnitude are associated with Cluster CIMS measurements of neutral molecular clusters that contained three or four sulfuric acid molecules. With these assumptions for systematic uncertainties in f_detection · f_charging, upper and lower limits on the size-dependent growth rate enhancement factors, Γ, are calculated according to the procedure described in Appendix A2. Figure C3a shows the corresponding range of Γ for measurements at 09:50 on 7 August 2009. The relative uncertainty in Γ is about 15 % at 1.2 nm geometric diameter and 40 % at 3 nm geometric diameter. Furthermore, Γ approaches a value of unity close to 1 nm geometric diameter, and increases monotonically with size above that. It follows that our important qualitative conclusions, that Γ approaches unity for the smallest particles and increases systematically with size above that, are not seriously compromised by uncertainties in f_detection · f_charging. The relative effect of these same uncertainties on GR is identical to the effect on Γ, since GR scales linearly with Γ according to Eq. (A7) (GR = Γ · GR_SA), where GR_SA is the growth rate based on the condensation of only H2SO4 vapor.

C2.2 NCAR event

A similar calculation was performed for the NCAR event assuming the size-dependent product f_detection · f_charging presented in Fig. C2b, which has a slightly higher value compared to that used for the NCCN event, due to the higher values of f_detection resulting from operating the NCAR DEG UCPC at a higher flow rate and instrument super-saturation compared to the NCCN DEG UCPC. For the NCAR event, a size-dependent systematic uncertainty in f_detection · f_charging was assumed, linearly decreasing from ±50 % at 1 nm to ±10 % at 3 nm. The resulting range in Γ due to this assumed systematic uncertainty in f_detection · f_charging is presented in Fig. C3b, where the upper and lower limits in Γ are still seen to increase monotonically with size up to ∼3 nm geometric diameter. The overall trends in Γ(D_p) and the resultant conclusions are maintained given the prescribed systematic uncertainty in f_detection · f_charging.
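The assumed uncertainty envelope and its first-order effect on Γ can be written down directly; note that the study propagates these bounds through the full Appendix A procedure, whereas the sketch below assumes a simple linear scaling of Γ with the relative uncertainty in f_detection · f_charging.

```python
import numpy as np

def rel_uncertainty(dp_nm):
    """Assumed size-dependent relative systematic uncertainty in
    f_detection * f_charging: +/-50 % at 1 nm, decreasing linearly to
    +/-10 % at 3 nm (clipped outside that range)."""
    return np.clip(0.5 + (0.1 - 0.5) * (np.asarray(dp_nm) - 1.0) / (3.0 - 1.0), 0.1, 0.5)

def gamma_bounds(dp_nm, gamma):
    """First-order upper/lower limits on the enhancement factor Gamma when Gamma
    is taken to scale linearly with the efficiency-product uncertainty."""
    u = rel_uncertainty(dp_nm)
    return gamma * (1.0 - u), gamma * (1.0 + u)

dp = np.array([1.2, 2.0, 3.0])
gam = np.array([1.9, 10.0, 25.0])     # illustrative Gamma values, not the data
lo, hi = gamma_bounds(dp, gam)
print(np.round(lo, 1), np.round(hi, 1))
```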
Measurement uncertainties arising from particle counting statistics and random error in the measurement of [H2SO4] were included when calculating the uncertainties in Γ(D_p). The effects of size-dependent uncertainties (in particle detection and charging efficiency) and systematic uncertainties in the measurement of [H2SO4] are discussed in the appendices. The results in Fig. 2a are based on values of GR_OBS(D_p) and GR_SA(D_p) obtained during the morning at 09:50 and the afternoon at 13:00 for the NPF event measured on 7 August 2009.

Fig. 1. Observed growth rates and corresponding uncertainties are plotted as functions of particle mobility diameter D_p^mob (upper abscissa) and geometric diameter D_p^geo (lower abscissa) for NPF events measured on (a) 7 August 2009 (NCCN), where uncertainties are presented as one standard deviation, and on (b) 19 September 2010 (NCAR study), where uncertainties are presented as one standard error, calculated according to Cantrell (2008). Geometric particle diameters were estimated according to Larriba et al. (2011).

Fig. 2. Values of the growth rate enhancement Γ and their corresponding uncertainties are plotted as a function of particle mobility diameter D_p^mob (upper abscissa) and particle geometric diameter D_p^geo (lower abscissa) for the NPF event observed on (a) 7 August 2009 (NCCN), where uncertainties are presented as one standard deviation, and on (b) 19 September 2010 (NCAR), where uncertainties are presented as one standard error, calculated according to Cantrell (2008). A dotted line indicating Γ = 1 is shown for reference. Geometric particle diameters were estimated according to Larriba et al. (2011).

Fig. 3. Survival probability of a nucleated particle (D_p = 1.12 nm) as a function of final particle diameter for three growth rate scenarios: (1) GR(D_p) = GR_OBS(3 nm), where the growth rate below 3 nm is assumed to be constant and equal to the observed growth at 3 nm (14 nm h−1); (2) GR(D_p) = GR_OBS(D_p), where the growth rate below 3 nm is equal to the observed size-dependent growth rate; and (3) GR(D_p) = GR_SA(D_p), where the growth rate below 3 nm is equal to the size-dependent growth rate assuming the condensation of only sulfuric acid. Inputs for this model calculation were the observed size-dependent growth rates, growth rates derived from observed [H2SO4], and the pre-existing aerosol size distribution obtained at 12:40 for the NPF event on 19 September 2010 (NCAR).

Fig. A1. Comparison of A and B calculated at 5.4 nm geometric diameter with a least-squares fit line over the time period 12:40-13:00 during an NPF event observed on 19 September 2010 in Boulder, CO (NCAR). A relative uncertainty of 10 % was assumed for the sulfuric acid concentration measurements when calculating B and its corresponding uncertainty. Median values of [H2SO4] were used in this particular calculation (no systematic uncertainty applied). Uncertainties in A and B are presented as one standard deviation (1σ). The uncertainty in the best-fit value of Γ is presented as one standard error (1se) calculated according to Cantrell (2008).
Fig. A2. Convergence of Γ(i) as a function of iteration step. Values of A and B were calculated at 5.4 nm geometric diameter for the time period 12:40-13:00 during an NPF event observed on 19 September 2010 in Boulder, CO (NCAR). At each step, Γ(i) and its corresponding standard error (1se) were obtained by a linear least-squares fitting method modified for regression through the origin with measurement uncertainties in both axes (Cantrell, 2008). Median values of [H2SO4] were used in this particular calculation (no systematic uncertainty applied).

Fig. A4. Particle size distribution observed at 13:00, 7 August 2009 (NCCN). Filled triangles are data obtained by the cluster chemical ionization mass spectrometer (Cluster CIMS) and filled circles are data obtained from two scanning mobility particle spectrometers (SMPS) (Jiang et al., 2011b). The dashed line is obtained by fitting a second order polynomial using the method of least squares to the data over the size range where growth rates are calculated in this work.

Fig. C1. Effect of systematic uncertainties in [H2SO4] on growth rate enhancement Γ as a function of particle mobility diameter D_p^mob (upper abscissa) and particle geometric diameter D_p^geo (lower abscissa).

Fig. C2. Upper and lower bound estimates (dotted lines) of the quantity f_detection · f_charging (the product of size-dependent detection and charging efficiencies) as a function of particle mobility diameter D_p^mob (upper abscissa) and particle geometric diameter D_p^geo (lower abscissa) obtained for DEG SMPS operation during (a) NCCN and (b) the NCAR campaign. The differences between the two figures arise from differences in the detection efficiency: the DEG UCPC was operated at higher flow rates and super-saturation ratios during the NCAR study. For both campaigns, the same size-dependent systematic relative uncertainty in f_detection · f_charging was assumed, linearly decreasing from ±50 % at 1 nm geometric diameter to ±10 % at 3 nm geometric diameter. The solid black line represents the product of the size-dependent detection efficiency of negatively charged NaCl and the bipolar charging efficiency using the parameterization of Wiedensohler (1988). Geometric particle diameters were estimated according to Larriba et al. (2011).

Fig. C3. Effects of systematic uncertainties in particle detection and charging efficiencies on growth rate enhancement Γ as a function of particle mobility diameter D_p^mob (upper abscissa) and particle geometric diameter D_p^geo (lower abscissa) for the NPF event observed on (a) 7 August 2009 (09:50, NCCN) and on (b) 19 September 2010 (12:40-13:00, NCAR). Values of Γ calculated using the size-dependent detection efficiency of negatively charged NaCl and the bipolar charging efficiency using the parameterization of Wiedensohler (1988) are presented as open circles. Resulting upper and lower limits on Γ, based on assumed systematic uncertainties in the product f_detection · f_charging, are shown for each value of Γ as horizontal bars. A line indicating Γ = 1 is shown for reference. Geometric particle diameters were estimated according to Larriba et al. (2011).

The only effect of this systematic uncertainty in [H2SO4] is to either increase or decrease the sulfuric acid-limited growth rate GR_SA and the resulting Γ(D_p) by a factor of 1.3 at each size D_p (see Eqs. A7 and A24). While the range between the resulting values of Γ(D_p) is somewhat large (1.7×), the dependence of Γ on size is unambiguous since the applied uncertainty is systematic. There is no effect on the observed growth rate GR_OBS(D_p) since the calculation of GR_OBS(D_p) does not depend on [H2SO4].
KPM: A Flexible and Data-driven K-process Model for Nucleosynthesis

The element abundance pattern found in Milky Way disk stars is close to two-dimensional, dominated by production from one prompt process and one delayed process. This simplicity is remarkable, since the elements are produced by a multitude of nucleosynthesis mechanisms operating in stars with a wide range of progenitor masses. We fit the abundances of 14 elements for 48,659 red-giant stars from APOGEE Data Release 17 using a flexible, data-driven K-process model, dubbed KPM. In our fiducial model, with K = 2, each abundance in each star is described as the sum of a prompt and a delayed process contribution. We find that KPM with K = 2 is able to explain the abundances well, recover the observed abundance bimodality, and detect the bimodality over a greater range in metallicity than has previously been possible. We compare to prior work by Weinberg et al., finding that KPM produces similar results, but that KPM better predicts stellar abundances, especially for the elements C+N and Mn and for stars at supersolar metallicities. The model fixes the relative contribution of the prompt and delayed processes to two elements to break degeneracies and improve interpretability; we find that some of the nucleosynthetic implications are dependent upon these detailed choices. We find that moving to four processes adds flexibility and improves the model's ability to predict the stellar abundances, but does not qualitatively change the story. The results of KPM will help us to interpret and constrain the formation of the Galactic disk, the relationship between abundances and ages, and the physics of nucleosynthesis.

INTRODUCTION

After hydrogen, helium, lithium, and beryllium, all other naturally occurring elements are made in stars, supernovae, and the collisions of stars. Stellar surface abundances (the abundances measured by taking a spectrum of a stellar photosphere) are thought to deliver a relatively unprocessed record of the element abundances in the gas from which the star formed (though see, e.g., Pinsonneault et al. 2001; Oh et al. 2018; Souto et al. 2019; Vincenzo et al. 2021b). These birth abundances were set by a combination of nucleosynthetic processes involved in making heavy atomic nuclei, and astrophysical processes involved in delivering atoms from stellar interiors to star-formation sites (e.g., Johnson et al. 2020). Thus nuclear physics and a wide swath of astrophysics are critically intertwined in our understanding of stellar surface abundances, motivating theoretical, experimental, and observational work.

At the present day, stellar surface abundances are not very well explained by purely ab initio, physics-driven models. Theoretical yields vary from data set to data set, as they are dependent on progenitor properties and explosion assumptions (e.g., Rybizki et al. 2017; Blancato et al. 2019; Buck et al. 2021; Griffith et al. 2021b). The wide parameter space of progenitor and supernova models, coupled with uncertainties in reaction rates and explosion physics, hinders the creation of an accurate nucleosynthetic model from theory alone. In the long run, it is incumbent upon us to understand these issues and correct the assumptions or calculations underlying our nucleosynthetic and astrophysical models. In the short run, however, we gather data: tens of millions of abundance measurements on millions of stars in different astronomical surveys such as RAVE (Steinmetz et al. 2006), SEGUE (Yanny et al. 2009), LAMOST (Luo et al. 2015),
Gaia-ESO (Gilmore et al. 2012, 2022), APOGEE/MWM (Majewski et al. 2017), GALAH (De Silva et al. 2015), and H3 (Conroy et al. 2019). This raises the question: can we take a data-driven approach to nucleosynthesis?

In this Article, we build a purely data-driven model for the surface element abundances observed in stars. We treat each star as being a linear combination of nucleosynthetic processes, beginning with one that is primarily responsible for the α-element Mg (prompt enrichment, such as core-collapse supernovae or CCSN, e.g., Andrews et al. 2017), and one that is primarily not responsible for Mg (delayed enrichment, such as Type-Ia supernovae or SNIa). Beyond these up-front assumptions, we try to be agnostic about how the elements are produced.

Prior works have used abundance planes such as [α/Fe] vs. [Fe/H] (e.g., Fuhrmann 1998; Bensby et al. 2003; Adibekyan et al. 2012) to separate stars into populations with high and low SNIa enrichment. As established in Griffith et al. (2019), these populations are referred to as high-Ia and low-Ia, to reflect their enrichment origins, instead of the traditional low-α and high-α nomenclature. We adopt this updated naming convention in this Article. Using the median [X/Mg] vs. [Mg/H] abundance trends, prior works explain data from the GALAH and SDSS-IV APOGEE surveys with a two-process model. Because the median abundance trends in [X/Mg] vs. [Mg/H] space are largely insensitive to aspects of chemical evolution, such as outflows and variations in star formation history (W19), the population abundance trends are set by the nucleosynthetic processes and can be used to empirically constrain Galactic enrichment.

These works, as well as Ting & Weinberg (2022) and Ratcliffe & Ness (2023), find that the Milky Way stellar abundances are well fit by two components, grounded in [Fe/H] and [Mg/Fe], down to residuals of 0.01 to 0.03 dex for the most precisely measured elements and 0.05 to 0.1 dex for elements (such as Na, C, and Ce) with large measurement errors. Simultaneously, Frankel et al. (2018) and Ness et al. (2022) have found that disk abundances are also well described by a two-component model of birth radius and age. Correlations between two-process model parameters and stellar ages and kinematics (W22), as well as the success of a two-component model of [Fe/H] and age in predicting APOGEE abundances (Ness et al. 2019), suggest that these two 2-dimensional models are somehow interconnected.

Beyond standard CCSN and SNIa enrichment, many elements have contributions from additional nucleosynthetic processes, such as the rapid (r) and slow (s) neutron capture processes (e.g., Arlandini et al. 1999; Bisterzo et al. 2014) in asymptotic giant branch (AGB) stars (e.g., Simmerer et al. 2004; Karakas & Lugaro 2016), merging neutron stars (e.g., Kilpatrick et al. 2017), or atypical supernova explosions (e.g., Nomoto et al. 2013).
After predicting stellar abundances from [Fe/H] and [Mg/Fe], Ting & Weinberg (2022) identify correlated abundance residuals that are unexplained by observational uncertainties, indicative of additional nucleosynthetic processes that standard disk CCSN and SNIa enrichment cannot explain. Results from G22 and W22 support this conclusion, and both works attempt to add additional processes to their models to account for non-CCSN and non-SNIa enrichment, though in a restrictive manner. Other sources of abundance scatter, such as stochastic sampling of the Initial Mass Function (IMF), IMF variations, and bursty star formation history, could also cause deviations away from a two-process model (Belokurov et al. 2018; Griffith et al. 2023).

To date, survey abundances have not been fully exploited to create a data-driven model of nucleosynthesis. While works such as Ting et al. (2012), Casey et al. (2019), and Ratcliffe et al. (2020) effectively use clustering algorithms to identify elements with like sources and reduce abundance dimensionality, the results are difficult to translate into a model of nucleosynthesis. Clustering components can be linked to nucleosynthesis sources and enrichment history, but have not yet been used to describe the enrichment of a single star.

In this work, our main innovations are to relax the assumptions made in G22 and W22, to be more agnostic about the nucleosynthetic processes, and to be more principled with the measurements or inferences from data. In the K-Process Model (KPM), we find the intersection between reliable facts about nucleosynthesis and good abundance measurements to build an edifice of Galactic enrichment. The model is hierarchical, in that it learns some parameters (process vectors) that are shared across all stars, but different for each element, and some parameters (process amplitudes) that are shared across all elements, but different for each star. The parameters output by our model can thus be used as de-noised abundance labels; these will sharpen relationships between abundances and stellar parameters (including birth location and time). Our main contribution is to construct a data-driven model for nucleosynthesis that has good statistical properties, enforcing only a small number of constraints to break the degeneracies that arise in models of this form. All other KPM parameters are set by the data with no fixed normalization.

This paper is organized as follows. In Section 2 we present the assumptions and the implementation of KPM. In Section 3 we describe the APOGEE data sample employed in this paper. We apply KPM to the APOGEE data in Section 4 and compare our results to those of W22 in Section 4.2. In Section 5 we explore variations from the fiducial model, changing our assumptions about Fe production as well as the number of model components. Finally, we discuss and summarize our results in Section 6.
THE K-PROCESS MODEL

As in W22 and G22, we propose that all stellar abundances can be generated by a combination of K nucleosynthetic processes. In this picture, each element has K metallicity-dependent process vector components that are shared across the full stellar sample, while each star individually has K process amplitudes, which apply across all elements, such that the expected logarithmic abundance of element j relative to H in star i (m_ij) is defined as:

m_ij = log10 ( Σ_k A_i^k q_{k,j}^Z ). (1)

Each star i has K process amplitudes (A_i^k) and each element j has K metallicity-dependent process vector components (q_{k,j}^Z). The Z superscript denotes the dependence of the process vectors on metallicity, Z, taken to be [Mg/H]. The observed abundance can be expressed as

x_ij = m_ij + noise, (2)

where "noise" represents observational noise and/or other sources of intrinsic abundance scatter that are not included in this model. For detailed examples of a similar model with K = 2, see Section 2 and Figures 2 and 3 of W22, where they demonstrate the vector addition and describe the process parameters for a few example stars.
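To make Eqs. (1)-(2) concrete, the sketch below (Python with numpy; not the released KPM implementation) evaluates the K-process prediction for a set of stars, interpolating log10 q linearly between metallicity knots as described in assumption 6 below. The array shapes, knot values, and example amplitudes are illustrative assumptions, and the tiny q value for the Ia process's Mg component stands in for the exact zero of Eq. (4), which log-space interpolation cannot represent.

```python
import numpy as np

def predict_log_abundances(A, logq_knots, knots_mgh, star_mgh):
    """Expected log abundances m_ij = log10( sum_k A_i^k q_{k,j}^Z )  (Eq. 1).
    A          : (n_stars, K) non-negative process amplitudes
    logq_knots : (K, n_elements, n_knots) log10 q at the [Mg/H] knots
    knots_mgh  : (n_knots,) knot locations in [Mg/H]
    star_mgh   : (n_stars,) per-star metallicity [Mg/H]
    The q vectors vary with metallicity as a linear spline in log-process space."""
    n_stars, K = A.shape
    n_el = logq_knots.shape[1]
    q = np.empty((n_stars, K, n_el))
    for k in range(K):
        for j in range(n_el):
            # interpolate log10 q, then exponentiate (non-negativity for free)
            q[:, k, j] = 10 ** np.interp(star_mgh, knots_mgh, logq_knots[k, j])
    # sum over processes in linear space, then take the log (Eq. 1)
    return np.log10(np.einsum("ik,ikj->ij", A, q))

# Tiny worked example: K = 2, two elements (Mg, Fe), fixed q as in Eqs. (4)-(5)
knots = np.array([-0.8, 0.6])
logq = np.log10(np.array([[[1.0, 1.0], [0.4, 0.4]],      # CC: Mg = 1, Fe = 0.4
                          [[1e-9, 1e-9], [0.6, 0.6]]]))  # Ia: Mg ~ 0, Fe = 0.6
A = np.array([[1.0, 0.1], [0.8, 0.9]])                   # two stars' amplitudes
print(predict_log_abundances(A, logq, knots, np.array([0.0, 0.0])))
```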
In KPM, we adopt the following set of assumptions:

1. K processes - All elements on the periodic table are produced by a combination of nucleosynthetic processes such as CCSN, SNIa, AGB stars, and merging neutron stars (Johnson et al. 2020). The majority of α, light odd-Z, and Fe-peak elements (the elements observed by APOGEE) are dominantly produced by K = 2 sources, with one being a prompt process or mix of prompt processes, and one being a delayed process or a mix of delayed processes. This is substantiated by theoretical yields (e.g., Anderson 2019; Rybizki et al. 2017) and past successful data-driven models (e.g., Ness et al. 2019, G22, W22, Ting & Weinberg 2022; Ratcliffe & Ness 2023). In this paper we therefore assume that K ≥ 2, though KPM could in principle be implemented with K = 1.

2. Linearity - At every metallicity, the (linear) (X/H) abundances of a star can be expressed as a linear combination of K processes. These processes themselves will depend on metallicity, but a linear sum is sufficient to explain all element abundances at any overall metallicity. Because different stars can get to their metallicities by different histories, and because detailed abundances beyond metallicity must matter at some level, the true enrichment mechanism is at least slightly nonlinear; thus this assumption must be at least slightly wrong in detail.

3. Non-negativity - All process vector components for all elements are non-negative and all process amplitudes are non-negative. This assumption implies that the elements considered here are only produced, and never destroyed, by the K processes (relative to hydrogen). This makes the model similar to a non-negative matrix factorization (Blanton & Roweis 2007; Tsalmantza & Hogg 2012). In KPM, this assumption is enforced by requiring that the process vector components and amplitudes are always greater than or equal to zero, such that

A_i^k ≥ 0, q_{k,j}^Z ≥ 0. (3)

4. Mg production - All Mg is produced in a prompt process and no other processes contribute to its production. This is substantiated by theoretical yields in which Mg is purely produced by prompt CCSN (e.g., Woosley & Weaver 1995; Arnett 1996; Anderson 2019; Rybizki et al. 2017). This assumption (along with non-negativity) breaks a set of symmetries in the process space and makes the processes quasi-interpretable in terms of nucleosynthesis sources. Because such a prompt process is likely dominated by CCSN (e.g., Andrews et al. 2017), we label the first process with "CC". In KPM, this assumption is enforced by fixing the Mg process vector components such that

q_{CC,Mg}^Z = 1, q_{k>1,Mg}^Z = 0 (4)

at all metallicities. Equation 4 also imposes that the Mg process is metallicity independent.

5. Fe production - Fe is produced through a combination of a prompt and a delayed process. Because the delayed process is likely dominated by SNIa (e.g., Thielemann et al. 2002; Andrews et al. 2017), we label the delayed process with "Ia" (while other enrichment channels with similar timescales may be included in the respective processes, the "CC" and "Ia" naming convention conforms to the choices in W22 and G22, and avoids the possible confusion of process numbers 1 and 2 with supernova types II and Ia). While the prompt process constraint (Mg) is grounded in nucleosynthesis theory, there is no equivalent nucleosynthesis fact to constrain the delayed process. To break model degeneracies, we also fix the Fe process vector components such that

q_{CC,Fe}^Z = 0.4, q_{Ia,Fe}^Z = 1 − q_{CC,Fe}^Z, q_{k>2,Fe}^Z = 0 (5)

at all metallicities. This assumption places a star with purely prompt enrichment on the low metallicity [Mg/Fe] plateau near 0.4 dex, in agreement with APOGEE observations but in contention with recent results from Conroy et al. (2022), which place the [Mg/Fe] plateau near 0.6 dex. We explore the impact of different q_{CC,Fe}^Z assumptions in Section 5.1.

6. Metallicity dependence - We permit the process vector components for all elements other than Mg and Fe to float as a function of metallicity. The variation is parameterized by a linear spline in log-process space, attached to a set of variable control points, knots, where the piecewise functions are joined. We assume that a particular set of 11 hard-coded knots between [Mg/H] of −0.8 and 0.6 is sufficient to capture the metallicity dependence. We choose knot number and location such that we capture the complex metallicity dependence of the abundance trends while maintaining a sufficient number of stars to fit with each linear component.

7. APOGEE abundances and uncertainties - We assume that the APOGEE abundances and uncertainties can be used for this project. This is not the same as assuming that they are correct, but rather that it is possible and useful to build an interpretable model to explain them. We describe the potential data systematics in Section 3. For our purposes, we care mainly about the statistical observational errors rather than systematics that arise from imperfect modeling of the spectra, such as NLTE effects, though differential systematics across the sample can artificially add abundance scatter. The actual derived values of q_{k,j}^Z will be affected by systematic offsets in the abundances. We add a softening parameter Q (Equation 7) to allow for the possibility that APOGEE observational errors are underestimated, or that there is intrinsic scatter around the KPM predictions.

8. Robust likelihood function - The observed value of [X/H] can be described as the K-process expected value plus observational noise and/or other sources of intrinsic abundance scatter, as described by Equation 2. The expression in this equation can be thought of as the key assumption underlying our likelihood function. In detail, (negative two times) the log likelihood function is given by a chi-squared (χ²) objective

χ² = Σ_ij (x_ij − m_ij)² / σ_ij², (6)
where 1/σ_ij² is the (robust; see below) inverse variance on measurement ij. Because we don't want to be too drawn to or influenced by outlier points, we don't use the observed errors σ_obs,ij in the likelihood, but instead we soften them in the spirit of iteratively reweighted least squares (Holland & Welsch 1977), down-weighting points that lie many observational sigmas from the model via a softening parameter Q (Eq. 7). Our results are largely insensitive to the choice of Q. We find that the predicted abundances of all elements change by less than 0.01 dex for Q between 1 and 10, so we choose to set Q = 5. Very small Q values (e.g., Q = 0.1) will erase some of the abundance structure and produce poorer fits.

9. Implementation and optimization - With the above assumptions in place, the likelihood function can be optimized for a set of stellar abundances. The model is initialized at the Mg and Fe process vector components from Equations 4 and 5. It subsequently optimizes the process amplitudes (dubbed the A-step) using only Mg and Fe at fixed process vector components, and then optimizes the process vector components (dubbed the q-step) for all elements at fixed process amplitudes. The A-step and q-step are alternated, repeating 48 rounds of optimization in the K = 2 case, and updating the best-fit parameters when the objective function improves. We find few differences in the best-fit parameters when we decrease the number of iterations to 32, indicating that the model quickly finds a good solution. In detail, the optimizations are performed with a nonlinear χ² minimization algorithm (Gauss-Newton nonlinear least-squares) from jaxopt.

KPM mirrors the two-process model from prior work (G22, W22) but, unless otherwise noted, the assumptions are weaker, there is a likelihood function in play, and the implementation is more general. In particular, we don't assume anything about the relationships between the process vector components and the morphologies of observed element-abundance ratio diagrams.
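The alternating A-step/q-step structure can be illustrated with a stripped-down toy that works in linear abundance space with ordinary non-negative least squares, instead of the robust Gauss-Newton optimization described above. It omits the metallicity-dependent splines and the fixed Mg/Fe components, so its factorization is only determined up to a rescaling of the processes, which is precisely the degeneracy the Mg and Fe constraints exist to break.

```python
import numpy as np
from scipy.optimize import nnls

def alternating_fit(X, K=2, n_rounds=16, seed=0):
    """Toy alternating A-step / q-step fit of X_ij ~ sum_k A_ik q_kj with
    non-negativity, in *linear* abundance space (X = 10**[X/H]). This strips
    out KPM's metallicity-dependent q splines, fixed Mg/Fe components, and
    robust per-point weights; it only illustrates the alternating structure."""
    n_stars, n_el = X.shape
    rng = np.random.default_rng(seed)
    q = rng.uniform(0.1, 1.0, size=(K, n_el))
    A = np.ones((n_stars, K))
    for _ in range(n_rounds):
        for i in range(n_stars):          # A-step: per-star amplitudes
            A[i], _ = nnls(q.T, X[i])
        for j in range(n_el):             # q-step: per-element process vector
            q[:, j], _ = nnls(A, X[:, j])
    return A, q

# Synthetic check: fit a planted two-process structure (recovered only up to
# a per-process rescaling, absent the Mg/Fe normalization constraints)
rng = np.random.default_rng(1)
A_true = rng.uniform(0.2, 1.5, size=(200, 2))
q_true = np.array([[1.0, 0.4, 0.9], [0.0, 0.6, 0.3]])   # 3 toy "elements"
X = A_true @ q_true * rng.normal(1.0, 0.02, size=(200, 3))
A_fit, q_fit = alternating_fit(X)
print(np.round(q_fit, 2))
```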
DATA

In this paper, we employ stellar abundances from APOGEE DR17 (Abdurro'uf et al. 2022), part of SDSS-IV (Majewski et al. 2017). The APOGEE survey obtains high-resolution (R ∼ 22,500) near-infrared (IR) observations (Wilson et al. 2019) for stars in the Galactic disk, halo, bulge, and nearby satellites/streams. Observations are taken with two nearly identical spectrographs on the 2.5 m Sloan Foundation telescope (Wilson et al. 2019) at Apache Point Observatory in New Mexico and the 2.5 m du Pont Telescope (Bowen & Vaughan 1973) at Las Campanas Observatory in Chile. Spectral data are reduced and calibrated with the APOGEE data processing pipeline (Nidever et al. 2015), after which stellar parameters and abundances are calculated with ASPCAP (APOGEE Stellar Parameter and Chemical Abundance Pipeline; Holtzman et al. 2015; García Pérez et al. 2016). See Jönsson et al. (2020, DR16) and Holtzman et al. (in prep., DR17) for a more detailed description of APOGEE data reduction and analysis, and Zasowski et al. (2013, 2017), Beaton et al. (2021), and Santana et al. (2021) for a discussion of survey targeting. APOGEE DR17 reports stellar parameters, including T_eff and log(g), as well as 20 elemental abundances: C, C I, N, O, Na, Mg, Al, Si, S, K, Ca, Ti I, Ti II, V, Cr, Mn, Fe, Co, Ni, and Ce for 657,135 stars. In DR17, new spectral libraries (Hubeny et al. 2021) are generated using the Synspec code and incorporate NLTE corrections for Na, Mg, K, and Ca (Osorio et al. 2020).

Among the reported elements and ions, some are measured more precisely than others. We exclude Ti from our analysis, as there are large differences between the abundances derived from the Ti I and Ti II lines (Jönsson et al. 2020). We also exclude P and V: the P abundances are measured from a few very weak spectral features, and V abundances are among the least precise and least accurate labels (Jönsson et al. 2020). Both P and V display strong abundance artifacts and large scatter. Among the remaining elements, we note the following concerns: weak Na spectral features, large abundance scatter in S, significant systematic artifacts in Cr abundances at super-solar metallicities, potentially strong unaccounted-for NLTE effects on Mn abundances (Bergemann et al. 2019), and large abundance scatter in Co and Ce. For a more detailed discussion of abundance systematics and their effects on population trends, see Jönsson et al. (2020) and Griffith et al. (2021a).

For our stellar sample, we select a subset of APOGEE DR17 stars with the goal of minimizing statistical errors from poor observations and systematic errors from abundance trends with T_eff and/or log(g), while preserving a sufficient number of stars to conduct a meaningful statistical analysis across the Galactic disk. To remove poor-quality data points, we require that the ASPCAP flags STAR_BAD and NO_ASPCAP_RESULT equal zero. We only include stars from the main survey sample (EXTRATARG = 0) and use named abundances (X_FE), as recommended by Jönsson et al. (2020). In addition to these quality cuts, we apply the following sample selection: to eliminate red clump (RC) stars, which show abundance variations from the RGB sample (Vincenzo et al. 2021a), we cross-match with and remove stars that appear in the APOGEE DR17 RC VAC (Bovy et al. 2014).

These cuts result in a sample of 48,659 stars that span the Galactic disk. We plot their Z vs. R locations, as well as the distributions of distances and eccentricities, in Figure 1, taking distances and kinematics from Queiroz et al. (2023). While our stellar sample extends from the Galactic center, to the outer disk, to the halo, the majority of our stars (75%) are within 3.5 kpc of the Sun. Further, 94% of our stellar sample has an eccentricity less than 0.4, indicative of in situ origin (e.g., Sales et al. 2009). In this paper, we assume that the KPM fits will be consistent across the Galactic disk.

We present abundances for Mg, O, Si, S, Ca, C+N, Na, Al, K, Cr, Fe, Ni, Mn, Co, and Ce. In the analysis of each element X, we drop stars with X_FE_FLAG set. Ce abundances are flagged in the largest number of stars, resulting in ∼700 Ce labels being excluded. While the surface abundances of C and N differ from the stellar birth abundances for RGB stars due to the CNO processes and dredge-up events (Iben 1965; Shetrone et al. 2019), the total C+N abundance remains constant. As in W22, we consider C+N as an element, taking [(C+N)/H] to be

[(C+N)/H] = log10(10^([C/H]+8.39) + 10^([N/H]+7.78)) − log10(10^8.39 + 10^7.78),    (8)

using logarithmic solar abundances for C (8.39) and N (7.78) from Grevesse et al. (2007). We further adopt the error on the [C/Fe] abundance as the error on [(C+N)/Fe], since C typically dominates the abundance ratio. We plot the distributions of all abundances in [X/Mg] vs. [Mg/H] for our sample in the first column of Figures 2 and 3.
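Equation 8 translates directly into code; the sketch below assumes the inputs are [C/H] and [N/H] arrays in dex.

```python
import numpy as np

LOG_EPS_C, LOG_EPS_N = 8.39, 7.78  # solar C and N (Grevesse et al. 2007)

def cn_over_h(c_h, n_h):
    """[(C+N)/H] from [C/H] and [N/H] (Equation 8).

    Works in linear number densities, so the combined C+N abundance is
    conserved along the RGB even as C and N are exchanged by dredge-up.
    """
    num = 10.0 ** (np.asarray(c_h) + LOG_EPS_C) + 10.0 ** (np.asarray(n_h) + LOG_EPS_N)
    den = 10.0 ** LOG_EPS_C + 10.0 ** LOG_EPS_N
    return np.log10(num) - np.log10(den)

print(cn_over_h(0.0, 0.0))  # a solar mixture returns 0.0 by construction
```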
THE FIDUCIAL MODEL

We fit the APOGEE sample with our fiducial model of K = 2, such that the predicted abundances follow Equation 2 with one prompt and one delayed process, with the assumptions from Section 2. This fit produces process vector components q^Z_CC,j and q^Z_Ia,j as a function of [Mg/H] for each element, and process amplitudes A^CC_i and A^Ia_i for each star. From the model parameters, we can calculate fractional contributions from each process as well as a full suite of predicted K = 2 process abundances, shown in the second column of Figures 2 and 3.

Process Parameters and Fractional Contributions

We plot the process vector components as a function of [Mg/H] in the third column of Figures 2 and 3 and provide the values at the [Mg/H] knots in Tables 1 and 2. The process vector components inform us about the relative contributions of the prompt and delayed processes to the formation of the elements, as well as the metallicity dependence of the enrichment. By definition, q^Z_CC,Fe = 0.4 at all metallicities. For Mg and Fe, we also require q^Z_CC,j + q^Z_Ia,j = 1, implying q^Z_Ia,Fe = 0.6. No such constraints are placed on other elements. We note that KPM differs from the previous two-process models in this regard, as G22 and W22 require that the process vector components for all elements sum to 1 at solar metallicity.

In the fourth column of Figures 2 and 3, we plot the distribution of fractional contributions from the prompt process (f^CC_ij) to each element, where

f^CC_ij = A^CC_i q^Z_CC,j / (A^CC_i q^Z_CC,j + A^Ia_i q^Z_Ia,j).    (10)

We generally find that the distributions are bimodal, like the observed abundance patterns, as the high-Ia and low-Ia populations have differing fractional contributions from prompt and delayed sources.
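The fractional-contribution calculation generalizes to any K; a sketch follows, assuming amplitudes A of shape (n_stars, K) and process vectors q of shape (K, n_elements) evaluated at a single fixed [Mg/H] for simplicity (in the full model, q is evaluated at each star's own metallicity).

```python
import numpy as np

def process_fractions(A, q):
    """f^k_ij = A^k_i q^Z_k,j / sum over k' of A^k'_i q^Z_k',j (Equation 10).

    A : (n_stars, K) process amplitudes
    q : (K, n_elements) process vector components
    Returns (K, n_stars, n_elements) fractions, which sum to 1 over k.
    """
    contrib = A.T[:, :, None] * q[:, None, :]   # (K, n_stars, n_elements)
    return contrib / contrib.sum(axis=0)

A = np.array([[1.0, 0.2], [1.0, 1.1]])   # a low-Ia and a high-Ia star
q = np.array([[0.4], [0.6]])              # the fixed Fe column (Equation 5)
print(process_fractions(A, q)[0, :, 0])   # f^CC_Fe: ~0.77 and ~0.38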
We find that the α-elements (O, Si, S, Ca) are best fit with q^Z_CC,j and f^CC_ij > 0.5 at all metallicities. This is in agreement with the theoretical prediction that α-elements are dominated by prompt CCSN production (e.g., Andrews et al. 2017). O, a Mg-like element theoretically produced purely in prompt CCSN, shows f^CC_i,O near 1 from [Mg/H] = −0.75 to solar. At super-solar metallicity, the delayed process contributes to O production, driving the f^CC_i,O value down to ∼0.8 at [Mg/H] = 0.4. S behaves like O, with almost entirely prompt production up to solar metallicity, after which delayed enrichment contributes more significantly. Conversely, we find that Si and Ca are best fit with both prompt and delayed enrichment at all metallicities, though the prompt process always dominates. For Si, the delayed process appears to increase linearly with [Mg/H], while the Ca delayed enrichment increases from [Mg/H] of −0.75 to −0.1 and then decreases from [Mg/H] of −0.1 to 0.5.

The process vector components of the light odd-Z elements Al and K resemble those of the α-elements, such as S. Both exhibit q^Z_CC,j and f^CC_ij near 1 through solar metallicity, with an increase in q^Z_Ia,j and a downturn in f^CC_ij at super-solar metallicities (especially for K). The behavior of the Na process vector components is more complex, with peaks and troughs in q^Z_Ia,Na. We find that Na has the strongest contributions from the delayed process of all α and light odd-Z elements, with q^Z_Ia,Na ≳ 0.5 at almost all values of [Mg/H] and f^CC_i,Na < 0.3 at [Mg/H] > 0. The strong delayed contribution to Na is in agreement with the findings of W22 and G22, and in tension with theoretical yields (e.g., Andrews et al. 2017; Rybizki et al. 2017).

Unlike the α and light odd-Z elements, whose delayed production is dominated by SNIa, C and N are thought to be promptly produced in CCSN with additional delayed enrichment from AGB stars (e.g., Andrews et al. 2017). We find that the prompt and delayed processes both contribute significantly, and nearly equally, across our stellar sample. Though theoretical N yields from AGB stars have a strong metallicity dependence (Karakas 2010; Ventura et al. 2013; Cristallo et al. 2015; Johnson et al. 2022), we observe only a slight positive metallicity dependence in q^Z_CC,C+N and a shallow dip in q^Z_Ia,C+N. We find a population of stars with f^CC_i,C+N near 0.9 and a population near 0.4.

The Fe-peak elements (Cr, Mn, Fe, Co, Ni) are thought to be produced through prompt CCSN production and delayed SNIa production (e.g., Andrews et al. 2017). By construction, q^Z_CC,Fe = 0.4 and q^Z_Ia,Fe = 0.6 at all metallicities. This produces a bimodal distribution in f^CC_i,Fe similar to that observed in abundance space. Because of our choice of q^Z_CC,Fe, only a few stars have f^CC_i,Fe = 1 (see Section 5.1). We instead observe a population with f^CC_i,Fe near 0.8 and a population near 0.4. The process vector components and f^CC_i,Fe distribution for Cr and Ni strongly resemble those of Fe. All three elements have even atomic numbers. At super-solar metallicity, we find that the prompt process dominates Cr production, resulting in an upturn in f^CC_ij. Conversely, Ni displays a dominant, and increasing, delayed process vector component at super-solar metallicities. The process vector components for Mn and Co (odd atomic numbers) show a complex metallicity dependence, more resembling that of Na. Both elements display a strong delayed process, with q^Z_Ia,Mn > 0.5 at all metallicities and > 1 for [Mg/H] > 0.1. Mn is the only element for which f^CC_i,Mn decreases to 0 for [Mg/H] ≳ 0.2. Finally, we find that the delayed process dominates Ce production at intermediate metallicity, with q^Z_Ia,Ce increasing up to [Mg/H] ≈ −0.2 and then decreasing to nearly 0 at [Mg/H] ≈ 0.3. The f^CC_i,Ce values are clustered near 0.25 around [Mg/H] of 0.2, then increase such that the abundances are almost entirely dominated by prompt enrichment at high metallicity.

In addition to process vector components, each star is fit with prompt and delayed process amplitudes, A^CC_i and A^Ia_i respectively (Table 3). All elemental abundances are used in the calculation of these amplitudes, so they can be interpreted as "de-noised" abundance labels that suppress observational scatter by averaging over elements via the data-driven model. The value of A^CC_i traces the metallicity (specifically [Mg/H]). In the left panel of Figure 4, we plot A^Ia_i/A^CC_i vs. A^CC_i. We find a bimodal distribution, similar to the Tinsley-Wallerstein diagram ([Mg/Fe] vs. [Fe/H]; Wallerstein 1962; Tinsley 1979, 1980), as was found in W22 and G22. We stress that the presence of the abundance bimodality was not fed into our model, and yet it is recovered in the best-fit process amplitudes. The stars with larger A^Ia_i/A^CC_i values correspond to the high-Ia population, and those with low A^Ia_i/A^CC_i correspond to the low-Ia population. While in the Tinsley-Wallerstein diagram the two populations blend together at high metallicity, they are more distinguishable in our amplitude space. We plot A^Ia_i/A^CC_i vs. [Mg/H] in the center panel of Figure 4. The high-Ia and low-Ia populations are clearly separable through [Mg/H] of 0.4. This is further shown through the A^Ia_i/A^CC_i distributions in the right panel of Figure 4 for [Mg/H] bins of −0.75 to −0.425, −0.425 to −0.1, −0.1 to 0.225, and 0.225 to 0.55. The three lowest metallicity bins display a bimodal distribution, and the highest metallicity bin is dominated by high-Ia stars.

With the optimized process parameters in hand, we can use Equation 2 to calculate predicted abundances for the fiducial model: the abundances our stellar population would have if the model assumptions were correct and only one prompt and one delayed process contributed. To simulate observational noise, we add an error drawn from a Gaussian distribution with σ equal to the reported error on each abundance for each star. In Figures 2 and 3 we plot the predicted abundances plus estimated noise in the second columns. These distributions can be compared to the observed abundance distributions in the first columns.
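A sketch of the prediction-plus-noise step, assuming the same A and q arrays as above and per-star reported errors sigma_obs; forming the prediction in linear space and converting back to dex is one reasonable reading of Equation 2 rather than its verbatim form.

```python
import numpy as np

def predict_abundances(A, q):
    """Model abundances m_ij in dex from amplitudes and process vectors."""
    return np.log10(A @ q)   # (n_stars, n_elements)

def mock_observed(A, q, sigma_obs, seed=1):
    """Predicted abundances plus Gaussian noise at the reported errors,
    for direct comparison with the observed distributions (Figures 2-3)."""
    rng = np.random.default_rng(seed)
    m = predict_abundances(A, q)
    return m + rng.normal(0.0, sigma_obs, size=m.shape)
```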
Overall, the fiducial model successfully reproduces the observed abundance distributions. It is capable of capturing metallicity dependences and bimodality. The predicted abundances plus estimated noise are not, however, able to reproduce the observed abundance scatter. This is especially noticeable for O, C+N, Na, Al, K, Co, and Ce. For these elements the scatter in the observed abundance distribution is larger than in the predicted distribution, suggesting that the APOGEE observational scatter is underestimated, that there are T_eff- or log(g)-dependent abundance trends (e.g., Griffith et al. 2021a, W22), or that the K = 2 model is insufficient, a likely case for elements produced by AGB stars, such as C+N and Ce. The model performs similarly well when the population is downsampled to 5000, 1000, and 500 stars, though the number of knots then has to be decreased from 11 to 7.

Comparing to W22

As discussed in Sections 1 and 2, KPM is based upon the two-process model developed in W19 and W22, but with increased flexibility, minimal normalization, and no forced dependence upon the [Fe/Mg] vs. [Fe/H] bimodality or population abundance trends. Further, KPM utilizes all stellar abundances in the optimization of A^CC_i and A^Ia_i, whereas only Mg and Fe are used in W19, and only Mg, O, Si, Ca, Fe, and Ni in W22.

In the fiducial model, we adopt K = 2, as in W22, but assume q^Z_CC,Fe = 0.4, which is 0.1 lower than the q^Z_CC,Fe value assumed in W22. In practice, this moves the implied "pure" CCSN enrichment plateau from [Fe/Mg] = −0.3 to [Fe/Mg] = −0.4 (though the W22 plateau value is determined after they apply a global offset of +0.05 to all [Fe/Mg] abundances). Because our model is non-negative, it requires a lower q^Z_CC,Fe to correctly model the stars on the [Fe/Mg] plateau, whereas W22 assigns stars with [Fe/Mg] < −0.3 negative A^Ia_i values. While our stellar samples and model assumptions differ, we plot the W22 q^Z_CC,j and q^Z_Ia,j vector components, as well as the W22 solar-metallicity f^CC_ij values, in Figures 2 and 3 for comparison with our fiducial model. We generally observe similar behavior between KPM and W22. Our q^Z_Ia,j vector components tend to be ∼0.1 greater than those of W22 for elements with significant delayed contributions because of our differing q^Z_CC,Fe assumptions. The metallicity dependencies agree for most elements, with small variations at the high-[Mg/H] end for O, Al, K, and Ce. We also see good agreement between the KPM and W22 solar-metallicity f^CC_ij values, with the W22 points slightly offset to larger values for elements with significant delayed contributions.

To compare the models' ability to reproduce the observed abundances, we identify a subset of ∼23,000 stars in both our sample and the W22 sample. We calculate the predicted abundances for each star under KPM and the two-process model, then determine the χ² value of the fits for each star (summing over the elements) and for each element (summing over the stars). We plot the cumulative stellar log(χ²) distribution and the total χ² for each element in the right and left panels, respectively, of Figure 5. It is important to note that in the calculation of the W22 model residuals, we do not apply the temperature corrections discussed in Section 5.1 of W22.
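The per-star and per-element χ² bookkeeping is just a pair of sums over the same residual matrix; a sketch, assuming observed abundances X, model predictions M, and (softened) variances var, all of shape (n_stars, n_elements), with NaNs marking flagged labels:

```python
import numpy as np

def chi2_tables(X, M, var):
    """Return (chi2 per star, chi2 per element) from model residuals.

    Flagged measurements enter as NaN and are ignored in both sums.
    """
    z2 = (X - M) ** 2 / var
    return np.nansum(z2, axis=1), np.nansum(z2, axis=0)

# chi2 per star can be compared between two models star by star, and
# chi2 per element reveals which elements a model fits poorly (Figure 5).
```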
We find that, overall, the χ² decreases between the W22 two-process model and our K-process model, an indication that we better predict all of a star's abundances. When looking at each element individually, we find that we better predict C+N, Na, K, Ni, Mn, Co, and Ce, with major improvements to C+N and Mn. Our fiducial model is significantly worse at predicting Mg, Ca, and Fe than the W22 model; these are three of the six elements that W22 employ to fit the process amplitudes. Because KPM uses all elements in its optimization, Mg, Ca, and Fe are effectively de-weighted relative to the W22 model, while C+N and Mn influence the model parameters. If we re-fit KPM using only the Mg, O, Si, Ca, Fe, and Ni abundances in the A-step (as in W22), we find that KPM and the two-process model predict the abundances of all elements but Mn with similar accuracy, and that the two-process model better predicts Mn. Our fiducial model's success in predicting C+N and Mn is likely attributable to the inclusion of these elements in the A-step. The choice to include all elements or a subset of elements in the fits should be considered when implementing KPM. If searching for stars with anomalous abundances of element X relative to the expected abundances of the others, one may want to exclude X from the A-step.

Figure 5. Left: cumulative distribution of log10(χ²) for W22 (dashed orange line) and our fiducial model (G23, solid purple line). Right: χ² per element for the same model fits, with elements ordered by atomic number. Overall, our model has a smaller cumulative χ² than the previous two-process model and better predicts the abundances of C+N, Na, K, Mn, Co, Ni, and Ce.

We note that our fiducial model is fit to a stellar sample that spans a wider range of T_eff and log(g) than the W22 sample. If we repeat our analysis on the W22 stellar sample with q^Z_CC,Fe = 0.5, we almost perfectly recover the W22 process vector components, with small deviations at [Mg/H] > 0.1, and more substantially improve upon the stellar and elemental χ² values. Most notably, KPM is better able to predict the abundances of stars with [Mg/H] > 0, where the high-Ia and low-Ia sequences blend together and the W22 categorization of high-Ia and low-Ia stars may be incorrect.

Varying q^Z_CC,Fe

To test assumption (5), we vary the fixed value of q^Z_CC,Fe, fitting models with q^Z_CC,Fe between 0.35 and 0.5, both with and without a metallicity dependence dq^Z_CC,Fe/dZ. The models with q^Z_CC,Fe of 0.5 and 0.45 have an average χ² per star > 90, while the models with q^Z_CC,Fe of 0.4 and 0.35 have an average χ² per star < 55. In both the metallicity-independent and metallicity-dependent cases, the models with q^Z_CC,Fe = 0.4 have the lowest average χ² per star, at 54.54 and 54.47, respectively, though the models with q^Z_CC,Fe = 0.35 have a χ² that is greater by only ∼0.1. Of the seven models explored here, the case with q^Z_CC,Fe = 0.4 and dq^Z_CC,Fe/dZ = 0.15 has the lowest average χ² per star, indicating that the Fe abundances are best fit by a metallicity-dependent prompt process. Introducing this metallicity dependence subtly changes the shape of the predicted low-Ia distribution in a way that achieves better agreement with APOGEE observations.
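In code, the varied assumption is simply a different fixed Fe column; a sketch follows, in which treating dq^Z_CC,Fe/dZ as a linear slope in [Mg/H] is our assumption about the independent variable.

```python
import numpy as np

def fe_process_vector(mg_h_knots, q_cc_fe=0.4, dq_dz=0.15):
    """Fixed Fe process components at the [Mg/H] knots (cf. Equation 5).

    Assumes the metallicity dependence is linear in [Mg/H]; the pair
    always sums to 1 so that Fe remains fully prompt plus delayed.
    """
    q_cc = np.clip(q_cc_fe + dq_dz * np.asarray(mg_h_knots), 0.0, 1.0)
    return q_cc, 1.0 - q_cc

knots = np.linspace(-0.8, 0.6, 11)   # the 11 hard-coded knots (Section 2)
q_cc_fe, q_ia_fe = fe_process_vector(knots)
```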
Though the q^Z_CC,Fe = 0.4 and 0.35 models are similar in terms of their goodness of fit, their nucleosynthesis implications are different. In Figure 8, we plot the median value of f^CC_ij (Equation 10) for the low-Ia population at solar metallicity (−0.05 < [Mg/H] < 0.05), where low-Ia stars are defined by Equation 11, as in W19, W22, and G22. We only show the median f^CC_ij values for the models with dq^Z_CC,Fe/dZ = 0.0, as the solar-metallicity median f^CC_ij values for the metallicity-dependent models are almost identical for matching values of q^Z_CC,Fe. We find that the choice of q^Z_CC,Fe has little impact on the median f^CC_ij values of elements dominated by CCSN enrichment (e.g., O, Al, S, K). As the delayed contribution increases, the median elemental f^CC_ij values decrease more significantly with decreasing q^Z_CC,Fe. The choice of q^Z_CC,Fe most impacts the median f^CC_ij values for Na, Cr, Fe, Mn, and Ce, with the median f^CC_ij for Mn decreasing from 0.42 for q^Z_CC,Fe = 0.5 to 0.22 for q^Z_CC,Fe = 0.35. Because the q^Z_CC,Fe value sets the prompt enrichment plateau, a lower q^Z_CC,Fe model implies a lower f^CC_ij value. While the high q^Z_CC,Fe model can likely be ruled out by its poorness of fit, the true q^Z_CC,Fe value and its metallicity dependence are unknown. It is therefore important not to over-interpret the specific f^CC_ij values of a given model. The f^CC_ij parameter can provide qualitative descriptions of which elements have more or less prompt/delayed enrichment, but the exact values are uncertain.

Increasing the Number of Processes

In our fiducial model, we adopt K = 2, with the two processes representing prompt, CCSN-like enrichment and delayed, SNIa-like enrichment. While a K = 2 model can describe the stellar abundances well (e.g., Figures 2 and 3), the abundance residuals cannot be explained by observational noise alone and hold information about the intrinsic variations from a K = 2 model (Ness et al. 2019; G22; W22; Ting & Weinberg 2022; Ratcliffe & Ness 2023). Potential sources of such scatter include metallicity-dependent SN yields with a bursty star formation history, environmental variations in the IMF, stochastic sampling of the IMF, and more than two distinct processes (e.g., AGB stars, merging neutron stars, and distinct classes of SNIa) with different time delays for enrichment (e.g., Belokurov & Kravtsov 2022; Griffith et al. 2023). Note that the existence of many enrichment channels is not in itself sufficient for producing scatter around a K = 2 model (or even a K = 1 model); one needs star-to-star variation in the relative amplitudes of these channels. For example, in a fully mixed one-zone model, all abundances depend only on time, even if many enrichment channels contribute.
In this section, we explore the impact of adding additional processes to our model, increasing from K = 2 to K = 4. Because KPM is sensitive to enrichment with different time delays, adding components can be interpreted as adding sources with distinct enrichment timescales. For example, if AGB stars and SNIa enrich with the same time delay, the model will fit both sources in one delayed component. If AGB stars and SNIa enrich with different delay times, a third component can pick up delayed AGB enrichment not captured by the original delayed process. Indeed, evidence of a distinct AGB-like process is identified in G22 and W22, where correlated residuals are used to expand the two-process model. However, both works add components in a restrictive manner that requires choosing which elements to assign to the third and/or fourth processes and that does not allow the original two processes to vary.

Our goal is to demonstrate the potential of KPM to flexibly model more than two enrichment channels and improve the accuracy of the abundance predictions. We allow the model to identify the elements best fit with additional components and to modify the K = 2 process parameters. Ultimately, such a method could be used to identify elements with more than two enrichment channels, though our data set may not be capable of doing so robustly. In the K = 4 case, our model becomes the four-process version of Equation 2 (Equation 12), where q^Z_3,j and q^Z_4,j are the third and fourth process vector components and A^3_i and A^4_i are the third and fourth process amplitudes. The model, however, does require some regularization to converge. As in the K = 2 case, where we assume that Mg is a pure CCSN element and fix the q^Z_CC,Fe and q^Z_Ia,Fe values, we need elements to regulate our third and fourth processes. We choose Ce and Mn, two elements with larger residuals that likely have additional nucleosynthetic sources: Ce from AGB stars and Mn from distinct classes of SNIa (e.g., Gallino et al. 1998; de los Reyes et al. 2020; Gronow et al. 2021). To test the impact of our choice of representative elements, we also fit the K = 4 model with the third and fourth processes fixed to C+N and Cr; we find that similar groups of elements are better fit with additional components. We initialize the K = 4 model at the K = 2 model values of q^Z_CC,j, q^Z_Ia,j, A^CC_i, and A^Ia_i, with the added constraints that

q^Z_3,Mg = 0, q^Z_3,Fe = 0, q^Z_3,Mn = 0, q^Z_3,Ce = 1    (13)

and

q^Z_4,Mg = 0, q^Z_4,Fe = 0, q^Z_4,Mn = 1, q^Z_4,Ce = 0    (14)

at all metallicities. We first fit the A-step to only Mg, Fe, Mn, and Ce, and then conduct 32 iterations of the q-step and A-step, as described in Section 2. We again soften the errors according to Equation 7 with Q = 5.

The model converges upon a set of process vector components and amplitudes that can be combined with Equation 12 to predict the stellar abundances and calculate the fractional contribution from each process. In Figure 9, we plot the amplitude distributions for the SNIa, third, and fourth processes. We find that processes 3 and 4 are most prominent in stars at low metallicity and that there is a large population of stars with A^3_i and/or A^4_i ≈ 0. In Figure 10, we plot the observed and predicted abundance distributions, as well as the process vector components and the fractional contribution from each component, for a subset of elements. We note that the model parameters q^Z_3,j and q^Z_4,j should be interpreted in conjunction with the amplitudes, as we set the third and fourth process vector components for Ce and Mn to an arbitrary value with no metallicity dependence.
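Equations 13 and 14 amount to pinning four entries of each new process vector before optimization; a sketch, extending the earlier arrays, with the element indexing being a hypothetical convention for illustration:

```python
import numpy as np

ELEMENTS = ["Mg", "Fe", "Mn", "Ce"]   # plus the remaining labels in practice
IDX = {el: j for j, el in enumerate(ELEMENTS)}

def init_k4(q_k2, A_k2):
    """Start the K = 4 fit at the K = 2 solution (Equations 13 and 14)."""
    K2, n_elem = q_k2.shape
    n_stars = A_k2.shape[0]
    q = np.vstack([q_k2, np.zeros((2, n_elem))])
    q[2, IDX["Ce"]] = 1.0   # third process regularized to Ce
    q[3, IDX["Mn"]] = 1.0   # fourth process regularized to Mn
    # q[2:, Mg] and q[2:, Fe] remain 0, as Equations 13-14 require.
    A = np.hstack([A_k2, np.zeros((n_stars, 2))])
    return q, A
```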
Figure 9. Distributions of the process amplitudes A^Ia_i (left), A^3_i (center), and A^4_i (right). All density plots are logarithmically scaled. The third and fourth processes contribute most to low-metallicity stars.

We find that the third process, regularized to Ce, contributes at a low level to O, Si, S, Al, and K, and more significantly to Ca, Na, Cr, and Ce. The fourth process, regularized to Mn, contributes at a low level to K and more significantly to S, C+N, Na, Cr, Ni, Mn, and Co. These best-fit element groupings resemble, but are not identical to, the elements selected for additional components in W22, where the third process included Ca, Na, Al, K, Cr, and Ce and the fourth process included Ni, V, Mn, and Co.

In the left-most columns of Figure 10, we plot the observed abundances alongside the K = 2 and K = 4 model predictions for Ca, C+N, Na, Cr, Ni, Mn, Co, and Ce. We note that the predicted abundances do not have noise added (unlike Figures 2 and 3), to highlight the differences between the K = 2 and K = 4 predictions. Comparing the predicted abundances from the K = 2 and K = 4 process models, we see that the K = 4 process model is better able to capture the abundance scatter than the K = 2 model, especially at the low-metallicity end of the low-Ia population. This result is expected, as adding more model components will increase the abundance space that KPM is able to reproduce.

In the fourth and fifth columns of Figure 10, we plot the process vector components and the median f^k_ij as a function of [Mg/H] for the low-Ia population (Equation 11), respectively, where

f^k_ij = A^k_i q^Z_k,j / Σ_k′ A^k′_i q^Z_k′,j.    (15)

We include q^Z_CC,j and q^Z_Ia,j, as well as the median f^CC_ij and f^Ia_ij, from the K = 2 model in the respective columns for comparison. We see that the third process contributes significantly to Ca, Na, Cr, and Ce at low metallicity, with decreasing contribution up to [Mg/H] ≈ 0.1. The fractional contribution from the K = 2 prompt and delayed processes to these elements decreases under the K = 4 model. The fourth process behaves in a similar manner, but with the elements C+N, Na, Cr, Ni, Mn, and Co. The fractional contributions from the third and fourth processes are nearly identical in the high-Ia population.

The statistical improvement in KPM between the K = 2 and K = 4 models is evident in the χ² values. In Figure 11, we plot the cumulative log10(χ²) distributions for the fits to each star and the total χ² for each element, for the fiducial K = 2 model and the K = 4 model. We find that the cumulative log10(χ²) distribution decreases with the increase in model components, by greater than two for most stars, as expected for the addition of two degrees of freedom. We also find that the χ² per element is lower for all elements in the K = 4 model. Significant improvements to Ca, C+N, Mn, and Ce are likely due to the additional third and fourth components capturing abundance scatter that the original two processes could not. Notably, we also see a significant improvement in the Fe fit, even though we require q^Z_3,Fe = q^Z_4,Fe = 0. Because all elements influence the K = 2 model fit, the fiducial model was likely pulled away from the best solution for Fe to accommodate another element, like Mn. With the additional components able to account for the non-Fe-like enrichment, the original two processes are better able to capture the Fe enrichment.
Through this investigation, we find that KPM is extendable to K > 2 processes. The additional processes improve the model quantitatively, but additional work is needed to improve the nucleosynthetic interpretability. We provide a discussion of the future science that KPM and the K = 4 model enable below.

DISCUSSION

In this paper, we present KPM, a flexible and data-driven model for inferring nucleosynthesis yields. KPM describes stellar abundances as the sum of K components, where each component is the product of a metallicity-dependent process vector component (fit to each element) and a process amplitude (fit to each star). Combined with a likelihood function and a set of assumptions (Section 2) that make the processes interpretable in terms of nucleosynthetic sources, the best-fit KPM parameters can be used to calculate fractional contributions from each process as well as a full suite of predicted K-process abundances.

We fit KPM with K = 2 to abundance labels for 15 elements and 48,659 RGB stars in APOGEE DR17, selecting a population that minimizes statistical and systematic errors while spanning an [Mg/H] of −0.8 to 0.5. In the K = 2 model, the first process, fixed to Mg, represents prompt CCSN-like enrichment, and the second process, fixed to Fe, represents delayed SNIa-like enrichment, though other nucleosynthetic sources with similar time delays may be mixed into each. Under our adopted assumptions, the prompt process also contributes to Fe, but the delayed process does not contribute to Mg, in accordance with theoretical expectations for CCSN and SNIa. Overall, we find that K = 2 is a good fit to the data and that the model successfully recovers the global abundance patterns in the Milky Way. While KPM does not rely on [Fe/Mg] vs. [Mg/H] bimodality or median abundance trends, it is able to recover the observed bimodal abundance distribution. Further, the fit parameters A^CC_i and A^Ia_i act as combined individual-process abundance labels, revealing a clearer signature of bimodality at high metallicity in A^Ia_i/A^CC_i vs. [Mg/H] space than in [Fe/Mg] vs. [Mg/H]. This suggests that the KPM fit parameters and predicted abundances could be used as higher signal-to-noise tracers of nucleosynthesis, as they use a justified likelihood to condense information from 15 elements into two variables.
To test the assumptions of the fiducial model, we explore the impact of varying the fixed value of q^Z_CC,Fe. We find that high values of q^Z_CC,Fe (0.5 and 0.45) are not able to reproduce the observed [Fe/Mg] vs. [Mg/H] abundance distribution, regardless of the process vector component's metallicity dependence. Our requirement that A^CC_i and A^Ia_i be non-negative makes it impossible for these models to reproduce the lowest [Fe/Mg] values in the APOGEE data. Values of q^Z_CC,Fe = 0.4 and 0.35, with dq^Z_CC,Fe/dZ = 0 or 0.15, produce similarly successful fits. While the predicted abundance distributions appear similar for these models, the implied fractional contribution from the prompt process is dependent upon the Fe assumption for elements with substantial delayed enrichment. Through this exploration, we conclude that the quantitative nucleosynthetic interpretation of KPM is dependent upon the input assumptions, and that there is inherent uncertainty in the f^CC_ij values.

Finally, we expand KPM from K = 2 to K = 4, regularizing the third and fourth processes to Ce and Mn. KPM builds off of the original model, such that the K = 4 model starts at the K = 2 solution and then finds the best-fit parameters for K = 4, altering the original solution and allowing all elements (except Mg and Fe) to have contributions from the additional processes. We find that S, Ca, C+N, Na, Cr, Mn, Co, Ni, and Ce are best fit with a third and/or fourth component, with these processes contributing most significantly at low metallicity. The information constraining the q^Z_k,j values for the third (fourth) process comes from the star-by-star deviations of Ce (Mn) from the K = 2 model predictions and their correlation with the deviations of the other elements X_i. Relative to the approach taken in W22 (Section 8), our K = 4 model requires non-negative q^Z_k,j and A^k_i for all elements and stars, and it starts by tying the third and fourth processes to individual elements rather than groups of elements. The K = 4 model improves the ability of KPM to fit the abundances of all elements, and especially improves the predictions for Ca, C+N, Fe, Mn, and Ce. This successful implementation of a K = 4 model shows that KPM can be extended to K > 2, and it has potential future use in constraining enrichment beyond a single prompt and delayed process, which is critical to understanding enrichment from AGB stars, merging neutron stars, and rarer novae.

KPM is based upon the two-process model developed in W19 and W22. While the two models are identical in format for the K = 2 case, the model assumptions, parameter derivations, and implementations differ. The W22 two-process model derives process vector components from median abundance trends, reliant upon [Mg/Fe] vs.
[Mg/H] bimodality, and fits process amplitudes to a subset of 2−6 α and Fe-peak elements. KPM, on the other hand, employs a likelihood function fit to all stars and all elements to derive both the process amplitudes and vector components. Our more data-driven implementation results in an improved ability of the K = 2 model fit to predict all of a star's abundances. Notably, KPM can better predict C+N and Mn abundances than the W22 two-process model, since all elements are used to constrain the fits. The most significant improvement over the original two-process model, though, is in KPM's flexibility. The flexible implementation of the model allows us to easily vary the assumptions, such as q^Z_CC,Fe, and to increase the number of model components, so that we can study the impact of our assumptions on the results and push the interpretation of KPM beyond standard CCSN and SNIa nucleosynthesis in a less restrictive manner than W22 and G22.

However, KPM is not without its own faults. The assumptions listed in Section 2 may incorrectly skew our results, and the model could benefit from improvements in implementation. While assumptions (4) and (5) on Mg and Fe production are flexible, KPM requires that both elements have fixed process vector components. If our assumptions are incorrect and, for instance, Mg is not a pure prompt element or (in the K = 4 case) Fe has contributions from multiple delayed sources, our nucleosynthetic interpretation of KPM may be wrong. This becomes more challenging as K increases and we have to make more assumptions to break rotational symmetries (the symmetries in which the process amplitudes and the process vectors are transformed in corresponding ways to leave the predictions unchanged). Additionally, assumption (7) states that the APOGEE data products can be used for this project, but we inflate outlying-star abundance errors with a softening parameter, Q, to account for their likely underestimation. It is also possible that Q is accounting for some of the real intrinsic scatter in the data and inflating the observational error on true outlier stars. In future KPM implementations, the development of a more robust method to justifiably down-weight outlier stars in the global fits would be beneficial. This method should account for both non-Gaussian observational errors (e.g., from bad telluric subtraction or unlucky line blends) and physically interesting outliers (e.g., from binary mass transfer). Finally, KPM fits process vector components along a spline with 11 knots (assumption 6), and those knots have fixed locations in metallicity. As this method fits a linear segment between adjacent knots, it can produce sharp features at the knot locations in metallicity regions with few points or large scatter. Fitting process vector components with a differentiable function might be more reasonable, though the results shown in Appendix C.1 of G22 suggest that this change would have minimal impact on the results. In general, this model for the metallicity dependence of the yields is very rigid; a better model could both have more flexibility and be smoother.
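For concreteness, the knotted metallicity dependence of assumption (6) can be evaluated as piecewise-linear interpolation in log-process space; in the sketch below, the choice of natural log and the flat extrapolation beyond the outer knots are our assumptions.

```python
import numpy as np

KNOTS = np.linspace(-0.8, 0.6, 11)   # fixed [Mg/H] knot locations

def q_of_z(mg_h, ln_q_at_knots):
    """Evaluate one element's process vector component at [Mg/H].

    The free parameters are the log-process values at the 11 knots;
    np.interp joins them with linear segments and holds the end values
    flat outside the knot range (an edge-handling assumption).
    """
    return np.exp(np.interp(mg_h, KNOTS, ln_q_at_knots))
```

The sharp features discussed above are visible in exactly this construction: the derivative of q_of_z jumps at every knot, which a smoother basis (e.g., a cubic spline) would avoid.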
Beyond improvements to the underlying model assumptions and implementation, KPM needs to include parameter uncertainties. While the model delivers process vector components and amplitudes, which can be used to calculate f^k_ij and K-process predicted abundances, the current implementation does not return errors on any variable. The best method to derive such errors has not been explored, but one could use the likelihood function or bootstrapping. These methods would encapsulate the uncertainty on the process parameters from the APOGEE abundance errors, but would not capture the uncertainty due to model assumptions, such as q^Z_CC,Fe (Section 5.1).

While such future changes will improve the model, the current form of KPM and its data products can support ongoing research and will enable new science. Most immediately, KPM provides high signal-to-noise abundance labels, A^CC_i and A^Ia_i, as well as de-noised stellar abundances (m_ij). The best-fit values of A^CC_i and A^Ia_i, in particular, are powerful tracers of nucleosynthesis. They show a bimodality at all metallicities, as do some of the de-noised abundances. And, because the model is a maximum-likelihood model, they represent information-theoretically optimal combined measures of α and Fe-peak abundances. That is, these data-driven amplitudes could replace more theory-driven measures of the relative contributions of the CCSN and SNIa enrichment channels.

In Section 4.1, we showed that the high-Ia and low-Ia populations are more clearly defined in A^Ia_i/A^CC_i vs. [Mg/H] space than in abundance space. In amplitude space, the low-Ia population can be re-defined by a dividing line in A^Ia_i/A^CC_i vs. [Mg/H] (Equation 16). Such analyses with KPM parameters will be useful in studying nucleosynthesis, dynamics, disk formation, stellar ages, and much more. However, in this paper we only present fits for a small population with restricted stellar parameters, relative to the full APOGEE sample. While KPM could be fit to the full APOGEE sample, systematic abundance effects with T_eff and log(g), as well as other abundance artifacts (e.g., Jönsson et al. 2020; Griffith et al. 2021a), cause the abundance trends to differ across the Hertzsprung-Russell diagram. The best-fit KPM parameters for the giants would differ from those for the dwarfs. If such systematics could be accounted for (see Sit et al. in prep.), we could fit the full APOGEE stellar sample with KPM, or train KPM on a subset of high signal-to-noise stars and apply the fits to the full sample. This potential future analysis could reveal additional information about the nucleosynthetic history of our Galaxy and would provide higher signal-to-noise abundance labels for the full sample.
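Equation 16 itself is not reproduced in this excerpt, so the dividing line below is a purely hypothetical placeholder; the sketch is meant only to show the shape of such a classifier in amplitude space.

```python
import numpy as np

def is_low_ia(a_ia, a_cc, mg_h, slope=0.0, intercept=0.5):
    """Classify stars as low-Ia by a line in A^Ia/A^CC vs. [Mg/H].

    slope and intercept are hypothetical stand-ins for Equation 16;
    substitute the published cut once it is in hand.
    """
    return a_ia / a_cc < intercept + slope * np.asarray(mg_h)
```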
The success of the two-process model (W19, W22) and of KPM with K = 2 suggests that the distribution of disk stars in APOGEE abundance space is largely two-dimensional (2D), though more dimensions are required to fully explain the data (Ting & Weinberg 2022, W22). In this paper, we have focused on a 2D nucleosynthetic model, with the two dimensions representing prompt CCSN-like enrichment and delayed SNIa-like enrichment. However, another 2D class of theoretical models for the Milky Way exists, describing stars in terms of birth radius and birth date (e.g., Frankel et al. 2018; Ness et al. 2022). Are these two 2D models related? If they are, then the nucleosynthetic parameters from KPM (A^Ia_i and A^CC_i) should predict asteroseismic ages (or masses), up to unpredictable aspects of mass transfer, as well as the guiding-center radius, up to unpredictable aspects of radial migration. While a deeper study of the implications of the disk's two-dimensionality is outside the scope of this work, we show the relationship between asteroseismic age from the APOKASC sample (Pinsonneault et al. 2018) and the process amplitudes in Figure 13. Here we see a clear gradient in age with A^CC_i and A^Ia_i/A^CC_i (as in G22 and W22), though outlier stars are scattered throughout. We predict that the KPM parameters will be better age diagnostics than APOGEE abundances, and that age outliers may be mass-transfer objects.

Finally, because of the flexibility of KPM, new scientific applications are enabled that were not feasible before. Because KPM performs well with a low number of stars and does not rely on [Mg/Fe] vs. [Fe/H] bimodality, non-bimodal populations can now be fit with a multi-component nucleosynthetic model. KPM could be applied to the low-metallicity disk, the halo, the Gaia Enceladus Sausage, Nubecula Major, Nubecula Minor, other Milky Way satellites, and more. KPM can also be easily extended to K > 2 in a much less restricted way than the two-process model. While a K = 2 model describes the global abundance patterns well, intrinsic residual scatter on the scale of 0.01 to 0.02 dex remains (Ting & Weinberg 2022, W22, G22). This scatter could be a signature of enrichment from non-CCSN/SNIa sources, stochastic sampling of the IMF, environmental IMF variations, or metallicity-dependent SN yields with a bursty star formation history (e.g., Belokurov & Kravtsov 2022; Griffith et al. 2023). While it is difficult to identify non-CCSN or non-SNIa enrichment in the APOGEE data alone, where only C+N and Ce are expected to have significant contributions from other sources, there may be signatures in other surveys with better coverage of heavier elements. Applying a K > 2 model to GALAH (Buder et al. 2021), or to an overlapping sample of APOGEE and GALAH stars (Nandakumar et al. 2022), could prove more successful.

In the K = 2 and K > 2 cases, results from KPM will help us disentangle our Galactic formation and enrichment history. This data-driven model opens doors to many new research projects and exciting future scientific results. To use KPM yourself, please reference the KPM GitHub repository or contact the corresponding author.

Figure 1. Left: distribution of our stellar sample in Z (kpc) vs. R (kpc), where (0, 0) is the Galactic center. Center: distribution of stellar distances (kpc). Right: distribution of stellar eccentricities. Our stellar sample spans the Galactic disk, but the majority of our stars are within 3.5 kpc of the Sun and have kinematics consistent with in situ origin.

Figure 2. Abundance distributions and KPM parameters for C+N, α, and light odd-Z elements. First column: observed abundance distributions in [X/Mg] vs. [Mg/H]. Second column: the predicted [X/Mg] vs.
[Mg/H] abundance distribution of the fiducial model plus estimated noise. By comparing the first two columns we can evaluate the success of KPM in reproducing the observed abundance distributions. Third column: process vector components q^Z_CC,j (thin, purple) and q^Z_Ia,j (thick, orange) from this work (G23; solid, dark lines) and W22 (light, dashed lines). Overall offsets between the solid and dashed lines are driven largely by our normalization, which places the [Mg/Fe] plateau at +0.4 rather than +0.3 as in W22. Fourth column: distribution of the fractional contribution from the prompt process (f^CC_ij) predicted by the fiducial model. We plot the median f^CC_ij values of the low-Ia (orange square) and high-Ia (purple circle) populations in the solar metallicity bin from W22 for comparison. All density plots are logarithmically scaled.

Figure 3. Same as Figure 2, but for Fe-peak elements and Ce.

Figure 4. Left: distribution of A^Ia_i/A^CC_i vs. A^CC_i for the fiducial model. This plot is similar to a [Fe/Mg] vs. [Mg/H] distribution, where A^Ia_i/A^CC_i is a proxy for [Fe/Mg] and A^CC_i is a proxy for [Mg/H]. Note that A^CC_i = 1 corresponds to [Mg/H] = 0. Center: distribution of A^Ia_i/A^CC_i vs. [Mg/H]. In the left and center panels, we can clearly see the bimodality to high values of A^CC_i and [Mg/H]. Both density plots are logarithmically scaled. Right: distribution of A^Ia_i/A^CC_i for ranges of [Mg/H], with the metallicity bin increasing from top to bottom. [Mg/H] bins are of width 0.325 dex and span −0.75 to 0.55.

Figure 8. Elemental median values of f^CC_ij at solar metallicity for the low-Ia population, for KPM with q^Z_CC,Fe = 0.35 (darkest purple) to 0.5 (lightest purple) and dq^Z_CC,Fe/dZ = 0.0. Elements are ordered by atomic number. The median f^CC_ij changes most dramatically for elements with strong delayed contributions.

Figure 10. Left: [X/Mg] vs. [Mg/H] abundance distributions for the observed sample (first column), the K = 2 model (second column), and the K = 4 model (third column) for Ca, C+N, Na, Cr, Ni, Mn, Co, and Ce. Observational errors are not added to the model predictions. All density plots are logarithmically scaled. Right: process vector components (fourth column) and median low-Ia fractional contribution from each process (fifth column) for the K = 4 (solid lines) and K = 2 (dashed lines) models as a function of [Mg/H]. We plot the median q^Z_CC,j and f^CC_ij in light purple, q^Z_Ia,j and f^Ia_ij in dark purple, q^Z_3,j and f^3_ij in light orange, and q^Z_4,j and f^4_ij in dark orange. In the final column we plot a dotted grey line at 0 for reference.

Figure 11. Left: cumulative distribution of log10(χ²) for the K = 2 model (dotted light purple line) and the K = 4 model (solid dark purple line). Right: χ² per element for the same model fits. Elements are ordered by atomic number.
Relative to the low-Ia definition of Equation 11 (W19, W22), this new definition re-classifies 647 stars as high-Ia and 224 stars as low-Ia. We show the locations of these stars in A^Ia_i/A^CC_i vs. [Mg/H] and [Mg/Fe] vs. [Fe/H] in Figure 12. Many of the re-classified stars are at [Fe/H] > −0.1. When dividing in [Mg/Fe], it is difficult to correctly separate the populations at high metallicity, as they are blended together. Our new definition also re-classifies many stars near [Fe/H] of −0.3 as high-Ia, suggesting that the W19 and W22 high-Ia definition has too shallow a slope. While only ∼2% of stars are re-classified under the new definition, we suggest that Equation 16 be used to chemically define the low-Ia and high-Ia populations when KPM fits are available, especially when studying stars with [Fe/H] > −0.1. Beyond improving the definition of the high-Ia and low-Ia populations, the KPM parameters and predicted abundances could be used in any current analysis that strives to show trends with abundance labels. We predict that trends of stellar parameters with [X/H] will be clearer when comparing to m_ij, A^CC_i, or A^Ia_i.

Figure 12. Left: distribution of stars in A^Ia_i/A^CC_i vs. [Mg/H], where the black dashed line is the dividing line between the high-Ia and low-Ia populations (Equation 16). Stars that are re-classified as high-Ia are shown in purple (647 stars) and stars that are re-classified as low-Ia are shown in orange (224 stars). Right: same as left, but in [Mg/Fe] vs. [Fe/H] space. The high-Ia and low-Ia populations are labeled in both panels for clarity. The symbols emphasize the stars at the edges of the populations, but the re-classified stars make up only ∼2% of the total population.

Figure 13. A^Ia_i/A^CC_i vs. A^CC_i distribution for stars in the APOKASC sample (Pinsonneault et al. 2018). Each point is colored by the star's asteroseismic log10(age), with younger stars in black and older stars in yellow. A clear gradient of log10(age) with the process parameters exists, with young outliers scattered throughout. This shows that the process amplitudes constitute a good-quality age indicator.

Table 1. Fiducial model q^Z_CC,j values at the [Mg/H] knots for each element.

Table 2. Fiducial model q^Z_Ia,j values at the [Mg/H] knots for each element.

Table 3. Fiducial model A^CC_i and A^Ia_i values for each star.
On the mathematical fluid dynamics of the atmospheric Walker circulation

Starting from the general, governing equations for a viscous, compressible fluid, with an associated description of its thermodynamics, we outline an asymptotic derivation based on the thin-shell approximation. [The details appear in another publication.] This produces a reduced system of equations which retains all the dynamics and thermodynamics of the steady atmosphere, the thin-shell approximation alone being the basis for the construction of the asymptotic solution. The leading order describes the background state of the atmosphere, and the next order provides a simple set of equations that can be used to investigate, for example, the Walker circulation, a particular atmospheric flow which is restricted to the neighbourhood of the Equator across the Pacific Ocean. Our formulation of this problem shows, explicitly and in detail, how the pressure and temperature gradients in the azimuthal direction drive the circulation; this extends the usual physical arguments used to describe the Walker circulation. An initial investigation highlights the rôle of the variable eddy viscosity and then, on the basis of these observations, a solution is obtained which describes in detail the velocity and temperature fields in the Walker cell. In particular, we present an example of the temperature profile and of the streamlines for the flow along the Equator, bounded above by the tropopause. Further details of the Walker circulation are given, together with an identification of the heat sources that drive the motion. Finally, we comment on the changes to the flow pattern that arise during an El Niño event.

Introduction

The Walker circulation, a large-scale overturning of air above the equatorial Pacific, is the main controller of the weather in this region of the Pacific Ocean. Indeed, this motion of the surface, coupled to the Equatorial Undercurrent, is an important element in the dynamics of the equatorial Pacific Ocean. For a general introduction to atmospheric flows, including a discussion of the Walker circulation, see, for example, [9,12]; for a mathematical model of this oceanic flow, with associated wave interactions, see [2,3].

The mathematical description that we explore here is based on the system of equations developed in [6], where a compressible, viscous fluid, with suitable thermodynamic properties, is used to represent the atmosphere. The formulation hinges on the thin-shell asymptotic approximation to describe the atmosphere on a (nearly) spherical, rotating Earth. In this presentation, we will briefly outline how these equations are obtained, and then write down the main system of equations that couple the dynamics and the thermodynamics; the full details are given in [6]. The plan is to take these equations and apply them to the Walker circulation; this will then constitute a special reduction of the general asymptotic formulation. The resulting system is readily analysed, the forcing required to generate the flow in the cell being explicit; it takes a simple form which is easily interpreted and can be related directly to the physical structure of the flow field. A few general observations about the flow structure in the Walker cell were given in [6], but the intention here is to provide far more detail. Indeed, we are able to include the adjustments needed to accommodate the changes associated with El Niño and La Niña events (which are described, for example, in [13]).
Governing equations

The underlying model that we use for our description of the atmosphere is based on the general equations for a compressible, viscous fluid, coupled to an equation of state and a suitable version of the first law of thermodynamics. In mathematical fluid dynamics, this is what constitutes a model; the development then follows the familiar route of non-dimensionalisation, scaling and the construction of an asymptotic solution. (More details about how the atmosphere is modelled, and the general principles that underpin the analysis, can be found in [6].) For this discussion, we choose to work in rotating, spherical coordinates, assuming a spherical Earth; the ellipsoidal approximation of the Earth's geoid, not invoked here, is carefully described in [6]. We allow the eddy viscosity to vary with height above the Earth's surface; indeed, in many models it is taken to be virtually zero beyond about 2 km altitude; see [15].

The development rests on one fundamental parameter: ε = H′/R′, the thin-shell parameter, where H′ is the maximum thickness of the troposphere (about 16 km) and R′ is the average radius of the (spherical) Earth. (We use primes to denote physical (dimensional) variables; we will dispense with these as we move to non-dimensional variables.) In the construction of the asymptotic version of this problem, we keep all other parameters fixed as ε → 0, thereby retaining every physical attribute that contributes, at the same order, to both the dynamics and thermodynamics of the atmosphere. The Earth is rotating at the constant angular speed Ω′ ≈ 7·29 × 10⁻⁵ rad s⁻¹, and using this we non-dimensionalise according to: u′ = Ω′H′(u, v, kw), the velocity vector in spherical coordinates (φ, θ, r), where k measures the strength (in terms of ε) of the vertical velocity component; r′ = R′(1 + εz); p′ = ρ̄′(Ω′R′)²p is the pressure, where ρ̄′ is an average density of the atmosphere; ρ′ = ρ̄′ρ is the density. The coordinates are chosen so that φ is the azimuthal angle, and θ the meridional angle, being zero at the Equator and ±π/2 at the North/South poles, respectively.

For a consistent thin-shell approximation, we choose to set k = ε, and then the governing equations for the steady atmosphere, with error terms indicated, can be written as the three components of the Navier-Stokes equation. The parameters introduced here are a Reynolds number Re (defined using an average value of the dynamic eddy viscosity, μ̄′) and a second parameter which can be interpreted as the ratio of the square of two speeds and typically takes a value of about 0·72 for H′ = 16 km; these are treated as O(1), i.e. fixed, as ε → 0. (This choice of parameter definitions ensures that we have a well-defined background state of the atmosphere, together with a dynamic-thermodynamic coupling which describes its motion; an extensive discussion of this formulation is given in [6].) In addition, we have written the dynamic viscosity as μ′(r′) = μ̄′m(z), and the equation of mass conservation transforms correspondingly. The thermodynamic elements of the atmosphere are described, firstly, by the equation of state, in which we define the temperature T using the gas constant ℛ′ ≈ 287 m² s⁻² K⁻¹; secondly, by the first law, written in terms of the non-dimensional specific heat c_p ≈ 5·25 and κ = κ′/(c′_p Ω′H′²), where c′_p is the specific heat of air, κ′ the thermal diffusivity of predominantly dry air, and Q is the (non-dimensional) totality of heat sources/sinks. These final two parameters are also held fixed as ε → 0.
The normalisation of the temperature, which uses the factor (Ω′R′)²/ℛ′ (about 800 K), produces a temperature variation from approximately T = 0·36 down to T = 0·27 between the bottom and the top of the troposphere. Although it transpires that Re is very large, and κ is very small, there is no necessity to incorporate additional assumptions or approximations: any thin viscous or thermal boundary layers, for example, are automatically included in the solutions. Finally, we observe that the second law of thermodynamics, which sets limits on the transformation between heat energy and mechanical energy, plays no direct rôle in the calculations that we present here.

Asymptotic structure of the solution

To proceed, we seek an asymptotic solution, based on ε, and obtain the first two terms in an expansion of the form q ∼ q₀ + εq₁, where q, and correspondingly qₙ, represents each of u, v, w, p, ρ, T and Q. Further, we assume that the boundary conditions follow this same pattern, so that no terms appear in addition to those in the asymptotic sequence {εⁿ}. A general discussion of the nature and validity of this asymptotic expansion is given in [6]. The leading order is then obtained directly from Eqs. (1)-(3), (5) and (6). This system has a solution in which the pressure depends on the single combination ς = gz − (1/2)cos²θ, p₀ being otherwise an arbitrary function at this stage. The classical solution which describes the stationary background state of the atmosphere, independent of the velocity field, is given by (12), and then Q₀ ≡ 0. This choice of model for the atmosphere shows that there are no external heat sources; the only heat supplied is that passing up from the surface of the Earth into the atmosphere. Further, we note that, although the choice (12) removes any direct coupling to the leading-order velocity field, this velocity field is, in general, a contributor to the solution at leading order (and is determined at the next order, as we now demonstrate). At the next order, O(ε), we obtain the system of equations which connects all the dynamics and thermodynamics of the motion.

At this stage, we have written down the equations that appear, at O(1) and at O(ε), as the relevant descriptions of the general (steady) motion in the atmosphere, invoking only the thin-shell approximation. These equations can be used to describe many different phenomena: the Ekman spiral, geostrophic balance, the thermal wind, the Hadley-Ferrel-polar cell structure, the appearance of jet streams high in the troposphere and the Walker cell; see [6] for a discussion of all the foregoing applications, though the last example is covered there in only the barest outline. We now restrict the application of these equations to a careful and complete analysis of the Walker circulation.

The Walker cell: formulation

We start with a few salient features of the Walker cell. Its strength is attributed to the variation of sea-surface temperature across the Pacific along the Equator: a difference of about 5 °C, with the warmer water to the West and the colder to the East. The cold water is present by virtue of advection northward along the coast of South America by the Humboldt current; see the discussions in [1,10]. This describes the general situation, but it can be disrupted every few years. In particular, the Walker cell weakens during an El Niño year, but it is strengthened in a La Niña year.
So, for example, during an El Niño event, the displacement of the west-Pacific warm pool to the mid-Pacific triggers the appearance of a double cell, with an ascending component in the central Pacific (see [13]). In order to provide a mathematical description of these various phenomena, we must introduce some appropriate simplifications, aimed at producing a suitable set of (correctly asymptotic) equations that represent the type of flow field which supports the Walker cell in the neighbourhood of the Equator. Thus we set θ = 0 and assume no dependence on θ, together with no motion in the meridional direction, so v₀ ≡ 0; this describes flow in the (φ, z)-plane at the Equator (and we interpret the solution as being appropriate to a neighbourhood of θ = 0). Equation (14) is then identically satisfied, Eq. (15) takes a reduced form, and (16) simplifies correspondingly. The general approach that we adopt here is that developed in [6]. So, rather than input the heat sources that drive the motion (the obvious manoeuvre, based on the physical nature of the problem), we aim to input a suitable temperature profile (as we have already done at O(1); see (12)), deduce the associated velocity field, and then we may identify the heat sources required to drive and maintain the motion. This is the best way to proceed, we argue, on two counts: (1) the precise nature of the heat sources, and how to model them, are notoriously difficult problems (see [7]); (2) the temperature field throughout the atmosphere is well known, from ground-based and satellite measurements (see [8]). So, following this philosophy, and the more general development given in [6] (where all the details appear), we write the perturbation temperature in a convenient form, which leads to an equation that can be integrated to give the velocity field, where G is an arbitrary function of integration. Using (25) in (23), and evaluating on z = 0, then yields an expression which, together with (19), produces Eq. (26), with Eq. (27) following from (21). These two equations are our main results, defining the two-dimensional velocity field, given the pressure and temperature gradients; when coupled with (18), which simplifies to give Eq. (28), we are then able to identify the heat sources associated with this motion. We note, in particular, that Eq. (26) shows that the horizontal component of the velocity field is driven by the pressure gradient in the azimuthal direction evaluated on z = 0, and by the corresponding temperature gradient, although this latter contribution also possesses a suitable z-structure. These observations are consistent with the accepted mechanism for the maintenance of the Walker cell (see [8,11]), but our version is quite explicit in providing the details of this forcing. These two gradients which drive the motion are, in our formulation, regarded as given forcing terms, independently assigned. A study of the oceans, however, suggests that the surface wind-stress generates upwelling, bringing colder water to the surface in the East; we therefore have cooler water to the East, and warmer to the West, thereby producing the temperature gradient at the surface. A model for this upwelling, in the presence of the westward surface flow and the Equatorial Undercurrent, can be found in [4]. All the above relates to that region of the flow where the eddy viscosity plays a rôle, most particularly in the lower regions of the troposphere. The upper regions, which are traditionally regarded as inviscid, we treat by using a suitable model for the eddy viscosity.
Some of the evidence (see [15,16], for example) indicates that the eddy viscosity decays rapidly in the upper reaches of the atmosphere, and is virtually zero above about 2 km, although many alternative models for the viscosity appear in the literature; see the overview included in [5]. In our formulation, in conjunction with a variable viscosity, we impose the speed in the azimuthal direction at the bottom of the cell, u₀B(φ) (which is zero if the no-slip condition is invoked). Thus the two-dimensional velocity field describing the Walker circulation is given by the solution of (26) and (27) which satisfies these boundary conditions. Note that the solution that we seek is bounded below by the ocean's surface and above by the tropopause (z = z₀ = 1) or a boundary close to this (z = z₀ < 1), and so we must satisfy w₀ = 0 on these two boundaries; satisfying this condition at the top fixes u₀T(φ). Although we develop just one example, the main purpose here is to emphasise the choices that are available. This will indicate, in particular, how it is possible to introduce suitable adjustments and additions based on observational data, opening the door to further investigations. As we mentioned earlier, we would, in the best of all possible worlds, aim to input the heat sources, determine the temperature field and then obtain the associated velocity field. This, however, is virtually impossible, because neither the background knowledge nor the specific detailed data are available that would enable us to proceed. So we opt for a development based on, in principle, a choice for the temperature field as the starting point. Given this (i.e. T₁(φ, z) or τ₁(φ, z)), we obtain the velocity field directly (by integration), and then we may identify the heat sources. But even this sequence is not completely straightforward or useful, because it is far from clear what precise form of perturbation temperature profile, T₁(φ, z), will generate the flow associated with a cell. Rather, it is better to guide the choice of T₁(φ, z) (or τ₁(φ, z)) by noting the type of velocity field that we need in order to recover a Walker cell. In addition, we must also choose a model for the vertical behaviour of the dynamic eddy viscosity (i.e. m(z)). It is reasonable to adopt this approach since our overall aim is to show that suitable solutions do exist, which provide a description of the Walker cell, and that will also enable us to include the adjustments needed to recover the cell structure that is observed in El Niño years. Equation (26) describes, in some detail, the vertical structure of the solution, and this is to be consistent with the existence of a cell. However, the horizontal structure must be imposed in order to generate cells, the physical property that guides this being the obvious one: the Pacific Ocean is bounded by land masses to the East and to the West. The simplest choice which describes the required property is to set the azimuthal dependence to be a simple trigonometric function, multiplied by a constant, which vanishes at the ends of the cell; the cell sits in φ₀ < φ < φ₁, expressed in degrees. (Clearly many other choices are possible, but we will limit our discussion here to just this one: we are aiming to confirm the existence of suitable solutions.) The horizontal velocity component then takes the form of a vertical profile U(z) multiplying this trigonometric factor (see (31)), and so Eq. (26) becomes Eq. (32). Before we proceed with a more general analysis of Eq.
(32), we carry out a simple check to see if this formulation captures some of the important features of the cell structure: we choose the perturbation-temperature profile to be T̂T₀²(z), with T̂ constant, take m = constant = 1, and use a cubic velocity profile which satisfies the no-slip condition on z = 0, and also U(z_m) = 0, where we take 0 < z_m < z₀ ≤ 1, and α > z₀ with α + z_m = −3T̂, the choice of α fixing the speed at the top of the cell; an example of this profile is shown in Fig. 3. Necessarily the amplitude constant is negative for T̂ > 0, this latter condition ensuring that the flow is westwards low down in the troposphere and eastwards higher up. Thus, from (30), we see the signs of the associated gradients, which correspond to the observed properties of the Walker circulation: the pressure at the bottom of the troposphere is higher to the East, and the temperature is higher to the West. However, this simplistic observation ignores many of the detailed ingredients that make up Eq. (32), most notably the rôle of a variable viscosity. We now take this initial examination a little further. The perturbation-temperature profile used in the preceding calculation, with T̂ constant, is not likely to be relevant to any realistic description of the Walker circulation, although the general form of the velocity profile is what we expect (when we use the no-slip condition). To proceed, and in order to make more transparent the important properties of this flow, we now use a simplified velocity profile which excludes the no-slip condition at the bottom and therefore admits a wind blowing directly over the surface of the ocean. The simplest such profile is linear (see (35)), where U₀ > 0 and 0 < z_m < z₀ are constants. The viscosity that we work with, and the one for which most of the data is available (as mentioned earlier), is the kinematic eddy viscosity; thus we introduce the correspondingly scaled kinematic viscosity, n(z), and then we shall make choices for n(z). Equation (32) is now expressed in terms of n(z), and an important observation follows directly. Evaluation on z = 0 shows that a model for the viscosity in which n(0) = 0 and dn/dz(0) > 0 (see [15], for example) gives a negative value of the amplitude constant, which recovers the result ∂p₁/∂φ > 0, consistent with the observations. On the other hand, if n(z) = constant (> 0), then this constant is positive; if n(0) > 0 and dn/dz(0) < 0, then again it is positive. We conclude that the choice of model for eddy viscosity has a profound effect on the underlying pressure gradient, even though the airflow near the surface of the ocean is always westwards in our formulation. But of course a critical feature of any solution that describes the Walker circulation is the temperature variation and the resulting gradient in the azimuthal direction. The construction of the temperature perturbation, T₁(z) (see (30)), is obtained directly when we choose the velocity profile, U(z) (see (31)), together with the model for the kinematic eddy viscosity. In this first stage of the investigation, we have used n(z) = 1, n(z) = e^(−νz) and n(z) = νz e^(1−νz), where ν > 0 is a constant, these describing a constant viscosity, an exponentially decreasing viscosity, and a viscosity which increases from zero followed by exponential decay, respectively. In each model, we have set the maximum value of n to be 1, which is consistent with a suitable choice of the non-dimensionalisation based on μ̄′.

Table 1. The results of the calculations using the cubic and the linear velocity profiles, and the choice of viscosity model: constant, exponentially decreasing, increasing followed by exponentially decreasing. The signs of the gradients in the azimuthal direction on z = 0 are listed (columns: cubic, cubic, linear, linear).
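To visualise the qualitative shape of such a cubic profile, here is a small sketch; the particular cubic U(z) = −z(z − z_m)(z − α) and the parameter values are our illustrative assumptions, chosen only so that U(0) = 0 (no slip), U(z_m) = 0, and the flow is westwards below z_m and eastwards above it.

```python
import numpy as np

# Illustrative cubic velocity profile with the two zeros described in the
# text: U(0) = 0 (no slip at the ocean surface) and U(z_m) = 0, with
# alpha > z_0 so that no third zero lies inside the troposphere.
# This exact cubic is an assumed form, not the paper's expression.
z_m, alpha = 0.2, 1.5
z = np.linspace(0.0, 1.0, 11)
U = -z * (z - z_m) * (z - alpha)

for zi, Ui in zip(z, U):
    direction = "westward" if Ui < 0 else ("eastward" if Ui > 0 else "zero")
    print(f"z = {zi:.1f}   U = {Ui:+.4f}   ({direction})")
# Output: U < 0 (westward) for 0 < z < z_m, and U > 0 (eastward) above z_m.
```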
We have chosen two velocity profiles for the calculations: the cubic polynomial drawn in Fig. 3 (and see (34)) and the linear model profile in (35). In both cases we have a flow which is westwards in the lower region of the troposphere and eastwards higher up. We tabulate (see Table 1) the resulting signs of ∂p₁/∂φ and ∂T₁/∂φ on z = 0; then, to correspond to the observed properties of the Walker cell, we should expect to use (as mentioned earlier) ∂p₁/∂φ > 0 and ∂T₁/∂φ < 0 there: these are the conditions (38). (The properties listed in Table 1 are the same for all ν > 0.) If we use the conditions in (38) as the guiding principle for selecting suitable solutions, then the Table shows that we may use either a constant viscosity or a decreasing viscosity, in conjunction with the no-slip condition at the surface of the ocean, or the increasing-decreasing viscosity for the linear profile. In the light of these observations, we now examine in detail a simple profile which accommodates both a linear variation and a no-slip condition (but we produce the results for only one of these). First, we normalise Eq. (32) by writing the solution in a scaled form which takes a maximum value of 1 on z = 0 (consistent with our non-dimensionalisation). Thus we obtain Eq. (40), where the prime denotes the derivative with respect to z. We see directly, by evaluating (40) on z = 0, that this determines P, and hence the pressure gradient in the azimuthal direction at the surface of the ocean. Here, we choose to work with the simple velocity profile (42), where z₁, z_m, β and γ are constants. This profile is linear if βγ = −1, and it satisfies the no-slip condition at the ocean's surface if z₁ = 0; two examples are shown in Fig. 4, where the upper boundary of the flow is fixed at the tropopause (z = 1).

Fig. 4. Two examples of the profile given in (42).

Although we investigated the effects of a number of different profiles based on (42), of all the choices that we might make, we opt for a model which describes a wind that blows (westwards) over the ocean, with a profile which is linear. This choice is the one which, in a direct and natural way, captures the important properties of the wind structure in the atmosphere; other profiles are accessible by suitably choosing the values of z₁, z_m, β and γ. Further, we invoke the most reasonable model for the kinematic eddy viscosity: n(z) = νz e^(1−νz), where ν > 0 is a constant. The particular velocity profile that we use for the calculations is shown in Fig. 5, and we set ν = 10 throughout. We find that P ≈ −1·568; the associated temperature perturbation, D(z), is depicted in Fig. 6 and, because D(0) > 0, we see that ∂T₁/∂φ < 0 on z = 0. Furthermore, the temperature profile shows that the perturbation temperature decreases rapidly at higher altitude. This is the type of behaviour that is observed. The vertical velocity component is expressed in terms of ψ(φ, z), the stream function for the flow; this representation ensures that w₀ = 0 on z = 0. However, in addition, we must choose the various parameters that describe the velocity profile, (42), so that we also satisfy w₀ = 0 on z = 1: the flow is bounded below by the surface of the ocean and above by the tropopause. The choices given above (see Fig. 5) satisfy this requirement. We are now able to produce the streamline pattern for this Walker cell, defined by lines ψ = constant, which is shown in Fig. 7; the flow along the streamlines is in the clockwise direction when viewed northward.
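The requirement that w₀ vanish at both z = 0 and z = 1 can be checked numerically. The sketch below assumes a separable flow u₀ = U(z) × (trigonometric factor), together with a schematic incompressible continuity equation in the (φ, z)-plane, so that cell closure reduces to the vanishing of the depth-integral of U; this is a simplification of the actual condition, since the paper's mass-conservation equation carries additional density and metric factors.

```python
import numpy as np

# Schematic check of cell closure: with u0 = U(z) * sin(k (phi - phi0))
# and the simplified continuity relation du0/dphi + dw0/dz = 0, we have
# w0 proportional to the integral of U from 0 to z; so w0 = 0 at z = 1
# requires the depth-integral of U to vanish. (Assumed, simplified form.)
z_m, alpha = 0.2, 1.5
z = np.linspace(0.0, 1.0, 2001)
U = -z * (z - z_m) * (z - alpha)

I = np.trapz(U, z)
print(f"integral of U over [0,1] = {I:+.4f}")
# Nonzero here (~+0.17), so this particular (z_m, alpha) pair would NOT
# close the cell; as in the text, the profile parameters must be tuned
# until the integral, and hence w0 on z = 1, vanishes.
```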
Finally, we may use our solution to provide a representation of the special flow configuration which arises during an El Niño event. This is most easily accomplished by simply adjusting the periodicity in the azimuthal direction; plotting the streamlines associated with this modified flow gives the streamline pattern shown in Fig. 8 (again, viewed northward).

Fig. 8. The streamline pattern that corresponds to the flow during an El Niño event; the flow direction in the right-hand cell is clockwise, and anticlockwise in the left-hand cell. Thus there is an ascending flow in the central Pacific.

This solution can be analysed to extract the properties associated with this flow, such as the pressure and temperature distributions that are required to maintain this structure. The details of the solution that we have obtained can now be used in the version of the first law of thermodynamics which is appropriate at this order, namely Eq. (28). We have obtained, explicitly, the background temperature, T₀(z), and, numerically, its perturbation, T₁(φ, z); we can then determine both ρ₁(φ, z) and p₁(φ, z). All this enables us to find an expression for the heat sources/sinks, Q₁, that are required to maintain the Walker cell. In particular, we note that there is a contribution to the background (i.e. solar) heating from some of the terms, while the other terms are heat sources that move with the fluid and so are associated with latent heat. Although we have not produced the (numerical) details here, they are readily available and, we suggest, are worth exploring if reliable data can be used to produce further properties of the Walker circulation.

Discussion

The development presented here is based on the Navier-Stokes equation for a compressible fluid, with variable viscosity, coupled to an equation of state and a suitable version of the first law of thermodynamics. These general governing equations have been non-dimensionalised, and then an asymptotic solution is constructed which uses only the thin-shell approximation to describe the atmosphere; all other parameters are held fixed in the limiting process. Although the details are not developed here (they are available in [6]), we have presented the main results to aid the reader. These comprise the equations describing the background state of the atmosphere, together with its perturbation, which incorporates all the dynamics and thermodynamics of the steady atmosphere. We have chosen the background state to be that which exists independently of the underlying velocity field; the issue then is to construct suitable solutions which describe the superimposed steady motion. The particular exercise undertaken here is to find a solution which represents and describes the Walker circulation that sits over the Pacific Ocean and along the Equator. The main drivers for this motion are well known; here, we aim to provide a careful mathematical treatment of this phenomenon. This can then be used to investigate, in detail, various properties of the flow and how it might change according to the ambient conditions. The resulting formulation, Eqs. (26) and (27) with (28), constitutes the main theoretical conclusion of the work. In particular, we can be precise about the mechanisms that drive the motion in the cell: the pressure gradient in the azimuthal direction at the surface, and a corresponding temperature gradient. One immediate consequence of our development is the form taken by this temperature-gradient term; this involves a combination of both the background temperature and the gradient of the perturbation temperature, via an integral (possibly not an obvious combination).
Furthermore, this element of the forcing vanishes at the ocean's surface, leaving only the pressure gradient there. The nett result is to produce a simple, very specific differential system which couples the velocity field to the temperature and pressure gradients in the azimuthal direction, together with the opportunity to include any suitable, variable viscosity. An initial investigation, based on cubic and linear velocity profiles, for various models of the kinematic eddy viscosity, was undertaken. This showed that a realistic azimuthal velocity profile (westward lower down and eastward higher up) could be generated even for signs of the temperature and pressure gradients other than those observed. Nevertheless, we used these observations to guide a more comprehensive examination of a solution which used a simple but general velocity profile, one which could accommodate both a linear variation and also a quadratic variant, allowing for a no-slip condition at the surface. The choice which most closely accords with the existence of trade winds (crucial to early sailors) is a flow which does not satisfy the (classical and technically correct) no-slip condition at the surface. (Of course, we would certainly allow a non-zero speed at the surface if we are to model wind-driven waves.) Thus we have used parameter values that produce a westward flow down to the surface of the ocean and an eastward flow at higher altitude. In addition, the vertical velocity component, w₀, must satisfy the requirement that the flow is bounded by the ocean surface below and the tropopause (or something close to this) above. Although we can always impose w₀ = 0 on z = 0, for any horizontal velocity profile, we must choose this profile with care so that we also have w₀ = 0 on z = 1, in order to satisfy mass-flow conservation. This describes the vertical structure of the solution; the horizontal structure required to produce a cell was imposed by invoking a simple trigonometric function that gave u₀ = 0 at the furthest extremities of the cell. With all these ingredients in place, a solution was computed (using Maple) which produced a vertical perturbation-temperature profile through the depth of the troposphere and a streamline pattern for the flow in the Walker cell. This temperature profile, shown in Fig. 6, is quite specific to this solution; on the other hand, given this profile and the maximum speeds at the top and bottom of the cell, the solution for U(z) can be recovered. The speed at the top, we note, is fixed by ensuring that w₀ = 0 along the top of the cell. So the details that we have presented confirm the existence of a suitable solution (this was the main aim of the work) and, more significantly, the formulation provides an opportunity for further detailed investigation. Thus any number of choices for the variable viscosity, the surface pressure gradient and the temperature profile can be made, and solutions representing the Walker cell constructed; in addition, adjustments to the velocity profile can also be made and the consequences investigated. (We also record that the maximum value attained by the temperature perturbation in our calculations, which always occurs on z = 0, is significantly affected by the choice of ν in the viscosity model.) All this is particularly useful if reliable data are available to guide the choice of viscosity model, the temperature profile and the horizontal structure of the cell.
Furthermore, the effects of changes in the ambient conditions (perhaps driven by climate change) can be tested using this system of equations. With this in mind, we introduced a simple modification which models the situation that arises during an El Niño event: we have seen that two cells are easily accommodated, mirroring what happens when the region of warmer water penetrates further to the East. There is no virtue in including an examination of La Niña, because this is simply an enhancement (larger gradients and higher wind speeds) of the Walker circulation that we have described. However, there is one final observation that we can make which suggests, albeit in a numerical sense, that we have captured some important attributes of the Walker circulation. Using the values obtained in our numerical example, together with the parameter values and non-dimensionalisation given earlier, taking R_e = 5 × 10⁵ and assuming that the Walker cell extends over 80° of longitude, we find that a surface wind speed of 5 m s⁻¹ produces a temperature change, εT₁(Ω′R′)²/ℛ, along the extent of the cell at the ocean's surface of about 5°C. Thus our leading term in the perturbation of the background state produces a result that is altogether reasonable, although there is clearly no certainty about the values that should be used in this rudimentary calculation. The higher-order terms are proportional to εⁿ, n = 2, 3, …, and, since there is no suggestion of non-uniformities in the asymptotic expansions, we appear to have captured the main contributor to the temperature perturbation which drives the Walker circulation. In conclusion, we have shown that a careful (asymptotic) approach to the general governing equations that represent the atmosphere has led to a simplified system of equations. These produce a perturbation of a background state, the perturbation combining all the dynamics and thermodynamics of the steady atmosphere. Furthermore, these equations can be used to give a detailed description of the Walker circulation, providing a simple test-bed for the effects of pressure gradient, temperature profile, velocity profile and variable eddy viscosity to be investigated. Associated with this general flow structure, the disruption caused by El Niño and La Niña events can also be examined. There is much, we submit, that can be explored using the equations presented here, particularly if extensive and reliable data are available.
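The closing estimate can be reproduced in outline (a sketch: the scale factors are computed as before, and the size of the azimuthal variation of T₁ is back-calculated here for illustration, since the computed field itself is not tabulated in the text).

```python
# Outline of the closing estimate: the dimensional temperature change along
# the cell is eps * T1 * (Omega' R')^2 / Rgas. The scale factors are known;
# the azimuthal variation of T1 comes from the numerical solution, so we
# simply back-calculate the size of variation that would yield ~5 C.
eps     = 16e3 / 6.371e6   # thin-shell parameter (~0.0025; R' assumed)
T_scale = 751.6            # K, (Omega' R')^2 / Rgas from the earlier sketch
target  = 5.0              # C, the quoted surface temperature change

dT1 = target / (eps * T_scale)
print(f"required variation in T1 along the cell: {dT1:.2f}")  # ~2.6
# An O(1) variation in the non-dimensional perturbation T1 is exactly what
# an expansion of the form q ~ q0 + eps*q1 would lead us to expect.
```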
Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data

Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization to improve in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples to address the over-confidence issue for OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expected calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning.

Introduction

Pre-trained language models have recently brought the natural language processing (NLP) community into the transfer learning era. The transfer learning framework consists of two stages, where we first pre-train a large-scale language model (e.g., BERT, RoBERTa, ALBERT (Lan et al., 2020) and T5 (Raffel et al., 2019)) on a large text corpus and then fine-tune it on downstream tasks. Such a fine-tuning approach has achieved SOTA performance in many NLP benchmarks (Wang et al., 2018, 2019). Many applications, however, require trustworthy predictions that need to be not only accurate but also well calibrated. In particular, a well-calibrated model should produce reliable confidence estimates for both in-distribution and out-of-distribution (OOD) data: (1) For in-distribution data, a model should produce predictive probabilities close to the true likelihood for each class, i.e., confidence ≈ true likelihood. (2) For OOD data, which do not belong to any class of the training data, the model output should produce high uncertainty to say 'I don't know', i.e., confidence ≈ random guess, instead of producing absurdly wrong yet wildly confident predictions. Providing such calibrated output probabilities can help us to achieve better model robustness (Lee et al., 2018) and model fairness (Chouldechova, 2017), and to improve label efficiency via uncertainty-driven learning (Gal et al., 2017; Siddhant and Lipton, 2018; Shen et al., 2018). Unfortunately, Guo et al. (2017) have shown that, due to over-parameterization, deep convolutional neural networks are often miscalibrated. Our experimental investigation further corroborates that fine-tuned language models can suffer from miscalibration even more for NLP tasks. As shown in Figure 1, we present the calibration of a BERT-MLP model for a text classification task on the 20NG dataset. Specifically, we train a TextCNN (Kim, 2014) and a BERT-MLP using 20NG 15 (the first 15 categories of 20NG) and then evaluate them on both in-distribution and OOD data.

Figure 1: The reliability diagrams on in-distribution data (the first row) and the histograms of the model confidence on out-of-distribution (OOD) data (the second row) of CNN (Kim, 2014) and a fine-tuned BERT-MLP classifier. Though BERT improves classification accuracy, it makes over-confident predictions for both in-distribution and OOD data.
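A diagnostic of the kind shown in the second row of Figure 1 can be produced with a few lines of NumPy; the sketch below is our own illustration (variable names are ours, not from the paper's released code).

```python
import numpy as np

def confidence_histogram(probs, bins=10):
    """Histogram of the model confidence max_k p(k|x), as in the second
    row of Figure 1. For OOD inputs, mass piled near 1.0 signals
    over-confidence; a calibrated model concentrates near 1/K instead."""
    conf = np.asarray(probs).max(axis=1)  # confidence for each example
    counts, edges = np.histogram(conf, bins=bins, range=(0.0, 1.0))
    for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
        print(f"[{lo:.1f}, {hi:.1f}): {c}")
    return counts
```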
The first row plots their reliability diagrams (Niculescu-Mizil and Caruana, 2005) on the test set of 20NG 15. Though BERT improves the classification accuracy from 83.9% to 87.4%, it also increases the expected calibration error (ECE, see more details in Section 2) from 4.0% to 9.5%. This indicates that BERT-MLP is much more miscalibrated for in-distribution data. The second row plots the histograms of the model confidence, i.e., the maximum output probability, on the test set of 20NG 5 (the unseen 5 categories of 20NG). While it is desirable to produce low probabilities for these unseen classes, BERT-MLP produces wrong yet over-confident predictions for such OOD data. Such an aggravation of miscalibration is due to the even more significant over-parameterization of these language models. At the pre-training stage, they are trained on a huge amount of unlabeled data in an unsupervised manner, e.g., T5 is pre-trained on 745 GB of text. To capture rich semantic and syntactic information from such a large corpus, the language models are designed to have enormous capacity, e.g., T5 has about 11 billion parameters. At the fine-tuning stage, however, only limited labeled data are available in the downstream tasks. With their extremely high capacity, these models can easily overfit the training data likelihood and be over-confident in their predictions. To fight against miscalibration, a natural option is to apply a calibration method such as temperature scaling (Guo et al., 2017) in a post-processing step. However, temperature scaling only learns a single parameter to rescale all the logits, which is neither flexible nor sufficient. Moreover, it cannot improve out-of-distribution calibration. A second option is to mitigate miscalibration during training using regularization. For example, Pereyra et al. (2017) propose an entropy regularizer to prevent over-confidence, but it can needlessly hurt legitimate high-confidence predictions. A third option is to use Bayesian neural networks (Blundell et al., 2015; Louizos and Welling, 2017), which treat model parameters as probability distributions to represent model uncertainty explicitly. However, these Bayesian approaches are often prohibitively expensive, as the priors of the model parameters are difficult to specify and exact inference is intractable, which can also lead to unreliable uncertainty estimates. We propose a regularization approach to addressing miscalibration for fine-tuning pre-trained language models from a data augmentation perspective. We propose two new regularizers using pseudo samples both on and off the data manifold to mitigate data scarcity and prevent over-confident predictions. Specifically, our method imposes two types of regularization for better calibration during fine-tuning: (1) On-manifold regularization: We first generate on-manifold samples by interpolating the training data and their corresponding labels along the direction learned from the hidden feature space; training over such augmented on-manifold data introduces a smoothness constraint within the data manifold to improve the model calibration for in-distribution data. (2) Off-manifold regularization: We generate off-manifold samples by adding relatively large perturbations along the directions that point outward from the data manifold; we penalize the negative entropy of the output distribution for such off-manifold samples to address the over-confidence issue for OOD data. We evaluate our proposed model calibration method on six text classification datasets.
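The post-processing option mentioned above can be made concrete. The following is a minimal sketch of temperature scaling, using NumPy and a simple grid search rather than the gradient-based fit of Guo et al. (2017); all names here are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T = 1 recovers the raw model probabilities.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(dev_logits, dev_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the single scalar T that minimizes negative log-likelihood on a
    held-out set; a grid search stands in for the usual L-BFGS fit."""
    n = len(dev_labels)
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        probs = softmax(dev_logits, T)
        nll = -np.log(probs[np.arange(n), dev_labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T
```

Because the single scalar T rescales every logit vector identically, it cannot reorder confidences across examples, which is one reason it cannot help with OOD calibration, as noted above.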
For in-distribution data, we measure ECE and the performance of misclassification detection. For out-of-distribution data, we measure the performance of OOD detection. Our experiments show that our method outperforms existing state-of-the-art methods in both settings, and meanwhile maintains competitive classification accuracy. We summarize our contributions as follows: (1) We propose a general calibration framework, which can be applied to pre-trained language model fine-tuning, as well as other deep neural network-based prediction problems. (2) The proposed method adopts on- and off-manifold regularization from a data augmentation perspective to improve calibration for both in-distribution and OOD data. (3) We conduct comprehensive experiments showing that our method outperforms existing calibration methods in terms of ECE, misclassification detection and OOD detection on six text classification datasets.

Preliminaries

We describe model calibration for both in-distribution and out-of-distribution data.

Calibration for In-distribution Data: For in-distribution data, a well-calibrated model is expected to output prediction confidence comparable to its classification accuracy. For example, given 100 data points with prediction confidence 0.6, we expect 60 of them to be correctly classified. More precisely, for a data point X, we denote by Y(X) the ground truth label, Ŷ(X) the label predicted by the model, and P̂(X) the output probability associated with the predicted label. The calibration error of the predictive model for a given confidence p ∈ (0, 1) is defined as

E(p) = | P( Ŷ(X) = Y(X) | P̂(X) = p ) − p |.   (1)

As (1) involves population quantities, we usually adopt empirical approximations (Guo et al., 2017) to estimate the calibration error. Specifically, we partition all data points into M bins of equal size according to their prediction confidences. Let B_m denote the bin with prediction confidences bounded between ℓ_m and u_m. Then, for any p ∈ [ℓ_m, u_m), we define the empirical calibration error as

Ê(p) = | (1/|B_m|) Σ_{i ∈ B_m} ( 1(y_i = ŷ_i) − p̂_i ) |,   (2)

where y_i, ŷ_i and p̂_i are the true label, predicted label and confidence for sample i. To evaluate the overall calibration error of the predictive model, we can further take a weighted average of the calibration errors of all bins, which is also known as the expected calibration error (ECE) (Naeini et al., 2015), defined as

ECE = Σ_{m=1}^{M} (|B_m|/n) Ê(p_m),   (3)

where n is the sample size. We remark that the goal of calibration is to minimize the calibration error without significantly sacrificing prediction accuracy. Otherwise, a random-guess classifier can achieve zero calibration error.

Calibration for Out-of-distribution Data: In real applications, a model can encounter test data that significantly differ from the training data. For example, they come from other unseen classes, or they are potential outliers. A well-calibrated model is expected to produce an output with high uncertainty for such out-of-distribution (OOD) data; formally, P̂(X) ≈ 1/K, where K is the number of classes of the training data. As such, we can detect OOD data by setting up an uncertainty threshold.
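A minimal NumPy sketch of the empirical ECE in Eqs. (2)-(3) follows. Note that we use equal-width bins, which is the common reading of the binning step; equal-mass bins would be the alternative interpretation of "bins of equal size".

```python
import numpy as np

def expected_calibration_error(confidences, correct, M=10):
    """Empirical ECE following Eqs. (2)-(3): bin predictions by confidence,
    then take the size-weighted average of |accuracy - mean confidence|
    over the bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)  # 1 if y_i == yhat_i else 0
    n = len(confidences)
    edges = np.linspace(0.0, 1.0, M + 1)
    ece = 0.0
    for m in range(M):
        hi = edges[m + 1] if m < M - 1 else 1.0 + 1e-9  # keep conf == 1.0
        mask = (confidences >= edges[m]) & (confidences < hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece
```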
Calibrated Fine-Tuning via Manifold Smoothing

We consider N data points of the target task, S = {(x_i, y_i)}_{i=1}^N, where the x_i's denote the input embeddings of the sentences and the y_i's are the associated one-hot labels. Let f(·) denote the feature extraction layers (e.g., BERT); let g(·) denote the task-specific layer; and let θ denote all the parameters of f and g.

Figure 2: The on-manifold and off-manifold samples generated by our calibration procedure (legend: training data, on-manifold sample, off-manifold sample, Mixup sample, interpolation path, data manifold). Mixup adopts a coarse linear interpolation, and the generated data point may deviate from the data manifold.

We propose to optimize the following objective at the fine-tuning stage:

min_θ Σ_{i=1}^{N} ℓ( g(f(x_i)), y_i ) + λ_on R_on + λ_off R_off,   (4)

where ℓ is the cross-entropy loss, and λ_on, λ_off are two hyper-parameters. The regularizers R_on and R_off are for on- and off-manifold calibration, respectively.

On-manifold Regularization

The on-manifold regularizer R_on exploits the interpolation of training data within the data manifold to improve the in-distribution calibration. Specifically, given two training samples (x, y) and (x̃, ỹ) and the feature extraction layers f, we generate an on-manifold pseudo sample (x*, y*) by interpolating both the input and the label (problem (5)), where δ_on and δ_y are small interpolation parameters for the data and the label, D_x is a proper distance for the features extracted by f, such as the cosine distance, i.e., D_x(a, b) = ⟨a/‖a‖₂, b/‖b‖₂⟩, and B(x, δ_on) denotes an ℓ∞ ball centered at x with radius δ_on. As can be seen, x* essentially interpolates between x and x̃ on the data manifold, and D_x(f(·), f(·)) can be viewed as a metric over such a manifold. However, as f(·) is learnt from finite training data, it can recover the actual data manifold only up to a certain statistical error. Therefore, we constrain x* to stay in a small neighborhood of x, which ensures that x* stays close to the actual data manifold.

Algorithm 1: our proposed efficient stochastic optimization algorithm for solving (4). At each training iteration, we sample a mini-batch B = {x_i, y_i} from S, generate the on- and off-manifold samples, and then update θ (d denotes the dimension of the features).

This is different from existing interpolation methods such as Mixup (Zhang et al., 2018; Verma et al., 2019). These methods adopt coarse linear interpolations either in the input space or in the latent feature space, and the generated data may significantly deviate from the data manifold. Note that our method interpolates not only x but also y. This can yield a soft label for x* when x and x̃ belong to different classes. Such an interpolation is analogous to semi-supervised learning, where soft pseudo labels are generated for the unlabelled data. These soft-labelled data essentially induce a smoothing effect, and prevent the model from making over-confident predictions toward one single class. We remark that our method is more adaptive than the label smoothing method (Müller et al., 2019). As each interpolated data point involves at most two classes, it is unnecessary to distribute probability mass to the other classes in the soft label. In contrast, label smoothing is more rigid and enforces all classes to have equally nonzero probability mass in the soft label.
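Assembling Eq. (4) into code, a minimal PyTorch-style sketch of one training step follows. It uses R_on and R_off as defined in the next two paragraphs, abstracts away the pseudo-sample generation (the single-step perturbations described under Model Training), and all names are illustrative rather than taken from the released implementation.

```python
import torch
import torch.nn.functional as F

def batch_objective(f, g, x, y, x_on, y_on, x_off,
                    lam_on=1.0, lam_off=1.0):
    """One mini-batch of the objective in Eq. (4).
    x, y       : labelled batch (y given as class indices)
    x_on, y_on : pseudo on-manifold samples with interpolated soft labels
    x_off      : pseudo off-manifold samples (no labels needed)
    """
    # Supervised term: standard cross-entropy on the real data.
    ce = F.cross_entropy(g(f(x)), y)

    # R_on: KL divergence pulling the model's output on the on-manifold
    # samples toward their interpolated soft labels.
    log_p_on = F.log_softmax(g(f(x_on)), dim=-1)
    r_on = F.kl_div(log_p_on, y_on, reduction="batchmean")

    # R_off: negative entropy of the output on off-manifold samples, so
    # minimizing it pushes those predictions toward the uniform distribution.
    log_p_off = F.log_softmax(g(f(x_off)), dim=-1)
    entropy = -(log_p_off.exp() * log_p_off).sum(dim=-1).mean()
    r_off = -entropy

    return ce + lam_on * r_on + lam_off * r_off
```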
We then define the on-manifold regularizer as

R_on = Σ_{(x*, y*) ∈ S_on} D_KL( y* ‖ g(f(x*)) ),   (6)

where S_on denotes the set of all pseudo-labelled data generated by our interpolation method, and D_KL denotes the KL-divergence between two probability simplices.

Off-manifold Regularization

The off-manifold regularizer, R_off, encourages the model to yield low-confidence outputs for samples outside the data manifold, and thus mitigates the over-confidence issue for out-of-distribution (OOD) data. Specifically, given a training sample (x, y), we generate an off-manifold pseudo sample x* by perturbing x onto the sphere S(x, δ_off) (problem (7)), where S(x, δ_off) denotes an ℓ∞ sphere centered at x with radius δ_off. Since we expect x* to mimic OOD data, we first need to choose a relatively large δ_off, such that the sphere S(x, δ_off) can reach outside the data manifold. Then, we generate the pseudo off-manifold sample from the sphere along the adversarial direction. Existing literature (Stutz et al., 2019; Gilmer et al., 2018) has shown that such an adversarial direction points outward from the data manifold. By penalizing the prediction confidence for these off-manifold samples, we are able to encourage low prediction confidence for OOD data. Hence, we define the off-manifold regularizer as

R_off = − Σ_{x* ∈ S_off} H( g(f(x*)) ),   (8)

where S_off denotes the set of all generated off-manifold samples, and H(·) denotes the entropy of the probability simplex.

Model Training

We can adopt stochastic gradient-type algorithms such as ADAM (Kingma and Ba, 2014) to optimize (4). At each iteration, we need to first solve the two inner optimization problems in (5) and (7), and then plug the resulting on- and off-manifold samples into (4) to compute the stochastic gradient. The two inner problems can be solved using the projected sign-gradient update for multiple steps. In practice, we observe that one single update step with random initialization is already sufficient to optimize θ efficiently. Such a phenomenon has also been observed in the existing literature on adversarial training (Wong et al., 2019). We summarize the overall training procedure in Algorithm 1.

Experiments

To evaluate calibration performance for in-distribution data, we measure the expected calibration error (ECE) and the misclassification detection score. For out-of-distribution data, we measure the OOD detection score. We detect the misclassified and OOD samples by model confidence, which is the output probability associated with the predicted label, P̂(X). Specifically, we set up a confidence threshold τ ∈ [0, 1] and take the samples with confidence below the threshold, i.e., P̂(X) < τ, as the misclassified or OOD samples. We can compute the detection F1 score for every τ, F1(τ), and obtain a calibration curve (F1(τ) vs. τ). Then, we set τ_upper as the upper bound of the confidence threshold, since a well-calibrated model should provide probabilities that reflect the true likelihood, and it is not reasonable to use a large τ to detect these samples. We use the empirical Normalized Bounded Area Under the Calibration Curve (NBAUCC) as the overall detection score,

NBAUCC_{τ_upper} = (1/M) Σ_{i=1}^{M} F1( (i/M) τ_upper ),

where M is the number of sub-intervals for the numerical integration. We set M = 50 throughout the following experiments. Note that the traditional binary classification metrics, e.g., AUROC and AUPR, cannot measure true calibration, because the model can still achieve high scores even when it has high confidence for the misclassified and OOD samples. We provide more explanations of the metrics in Appendix C. We report the performance when τ_upper = 0.5 here, and the results when τ_upper = 0.7 and 1 in Appendix D.
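To make the detection score concrete, here is one plausible NumPy implementation of the NBAUCC described above (our reading of the discretization; the paper's exact Riemann-sum convention may differ slightly).

```python
import numpy as np

def nbaucc(confidences, is_positive, tau_upper=0.5, M=50):
    """Detection-score sketch: positives (misclassified or OOD samples)
    are flagged when the model confidence falls below a threshold tau.
    NBAUCC averages the detection F1 over M thresholds in (0, tau_upper]."""
    confidences = np.asarray(confidences, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    f1s = []
    for i in range(1, M + 1):
        tau = tau_upper * i / M
        flagged = confidences < tau
        tp = np.sum(flagged & is_positive)
        fp = np.sum(flagged & ~is_positive)
        fn = np.sum(~flagged & is_positive)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom > 0 else 0.0)
    return float(np.mean(f1s))
```

The bounded threshold range is the point of the metric: a model that only separates OOD samples at absurdly high thresholds scores poorly here even if its AUROC is perfect.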
Datasets

For each dataset, we construct an in-distribution training set, an in-distribution testing set, and an OOD testing set. Specifically, we use the following datasets: 20NG. The 20 Newsgroups dataset (20NG) contains news articles with 20 categories. We use the Stanford Sentiment Treebank (SST-2) (Socher et al., 2012) as the OOD data. 20NG 15. We take the first 15 categories of 20NG as the in-distribution data and the other 5 categories (20NG 5) as the OOD data. WOS (Kowsari et al., 2017). The Web of Science (WOS) dataset contains scientific articles with 134 categories. We use AGnews (Zhang et al., 2015) as the OOD data. WOS 100. We use the first 100 classes of WOS as the in-distribution data and the other 34 classes (WOS 34) as the OOD data. Yahoo (Chang et al., 2008). This dataset contains questions with 10 categories posted to 'Yahoo! Answers'. We randomly draw 2000 of the 140,000 samples for each category as the training set. We use Yelp (Zhang et al., 2015) as the OOD data. Yahoo 8. We use the first 8 classes of Yahoo as the in-distribution data and the other 2 classes (Yahoo 2) as the OOD data. The testing set for OOD detection consists of the in-distribution testing set and the OOD data. More dataset details can be found in Appendix A. We remark that 20NG 15, WOS 100, and Yahoo 8 are included to make OOD detection more challenging, as the OOD data and the training data come from similar data sources.

Baselines

We consider the baselines detailed in Appendix B.

Figure 3: Our method can achieve high F1 scores starting from a small threshold, which indicates that it indeed provides low confidences for misclassified and OOD samples; the F1 scores of the baselines peak at high thresholds, which indicates that they are poorly calibrated.

Implementation Details

We use ADAM (Kingma and Ba, 2014) with β₁ = 0.9 and β₂ = 0.999 as the optimizer. For our method, we simply set λ_on = λ_off = 1, δ_on = 10⁻⁴, δ_off = 10⁻³, and δ_y = 0.1 for all the experiments. We also conduct an extensive hyper-parameter search for the baselines. See more details in Appendix B.

Main Results

Our main results are summarized as follows. Expected Calibration Error: Table 1 reports the ECE and predictive accuracy of all the methods. Our method outperforms all the baselines on all the datasets in terms of ECE, except for Yahoo, where only ERL is slightly better. Meanwhile, our method does not sacrifice predictive accuracy. Misclassification Detection: Table 2 compares the NBAUCC 0.5 on misclassification detection of the different methods. As shown, our method outperforms all the baselines on all six datasets. Out-of-distribution Detection: Table 2 also reports the NBAUCC 0.5 on OOD detection of the different methods. Again, our method achieves the best performance on all six datasets. The improvement is particularly remarkable on the 20NG dataset, where NBAUCC 0.5 increases from 47.00 to 63.92 compared with the strongest baseline. We also find that detecting unseen classes from the original dataset is much more challenging than detecting OOD samples from a totally different dataset. Significance Test: We perform the Wilcoxon signed rank test (Wilcoxon, 1992) to assess the significance of these improvements.

Table 2: NBAUCC 0.5 on misclassification detection and OOD detection. We report the average performance of 5 random initializations.

Parameter Study

We investigate the effects of the interpolation parameters for on-manifold data, i.e., δ_on and δ_y, and the perturbation size for off-manifold samples, i.e., δ_off.
The default values are δ_on = 10⁻⁴, δ_off = 10⁻³ and δ_y = 0.1. Figure 4 shows the results on the 20NG 15, 20NG, WOS 100, and WOS datasets. Our results are summarized as follows:

• The performance on all metrics versus δ_on is stable within a large range, from 10⁻⁵ to 10⁻². When δ_on is larger than 10⁻¹, the predictive accuracy begins to drop.

• The performance versus δ_off is more sensitive: (1) when δ_off is too small, ECE increases dramatically because the generated off-manifold samples are too close to the manifold and make the model under-confident; (2) when δ_off is too large, the off-manifold regularization is too weak and OOD detection performance drops.

• In general, δ_on should be small, to let x* stay on the data manifold, while δ_off should be large, to let x* leave the data manifold. However, the regularization effect of R_on (R_off) depends on both λ_on (λ_off) and δ_on (δ_off). Therefore, it is not necessary for δ_on to be smaller than δ_off. We can also tune λ_on and λ_off to achieve better performance.

• The performance versus δ_y is relatively stable, except for the metric of ECE. When δ_y is larger than 0.2, ECE begins to increase.

Ablation Study

We investigate the effectiveness of the on-manifold regularizer R_on and the off-manifold regularizer R_off via ablation studies. Table 3 shows the results on the 20NG 15 and 20NG datasets.

• As expected, removing either component of our method results in a performance drop. This demonstrates that the two components complement each other. All the ablation models outperform the BERT baseline model, which demonstrates the effectiveness of each module.

• We observe that the optimal δ_on is different when using only R_on. This indicates that the hyper-parameters of R_on and R_off should be jointly tuned, due to the joint effect of the two components.

• By removing R_off, we observe a severe OOD performance degradation on the 20NG dataset (from 63.92 to 43.87). This indicates that R_off is vital to out-of-distribution calibration. Meanwhile, the performance degradation is less severe on 20NG 15 (from 9.69 to 7.94). This is because R_on can also help detect the OOD samples from similar data sources (20NG 5).

• By removing R_on, the in-distribution calibration performance drops as expected.

Table 3: Ablation study on the 20NG 15 and 20NG datasets. For OOD detection and misclassification detection, we report NBAUCC 0.5. We set δ_y = 0.1 and δ_off = 10⁻³.

Related Works and Discussion

Other Related Works: Lakshminarayanan et al. (2017) propose a model ensembling approach to improve model calibration. They first train multiple models with different initializations and then average their predictions. However, fine-tuning multiple language models requires extremely intensive computing resources. Kumar et al. (2018) propose a differentiable surrogate for the expected calibration error, called the maximum mean calibration error (MMCE), using kernel embedding. However, such a kernel embedding method is computationally expensive and not scalable to the large pre-trained language models. Accelerating Optimization: To further improve the calibration performance of our method, we can leverage some recent minimax optimization techniques to better solve the two inner optimization problems in (5) and (7) without increasing the computational complexity. For example, Zhang et al.
(2019) propose an efficient approximation algorithm, based on Pontryagin's Maximum Principle, to replace the multi-step projected gradient update for the inner optimization problem. Another option is the learning-to-learn framework (Jiang et al., 2018), where the inner problem is solved by a learnt optimizer. These techniques can help us obtain the on- and off-manifold samples more efficiently. Connection to Robustness: The interpolated training samples can naturally promote the local Lipschitz continuity of our model. Such a local smoothness property has several advantages: (1) it makes the model more robust to the inherent noise in the data, e.g., noisy labels; (2) it is particularly helpful in preventing overfitting and improving generalization, especially for low-resource tasks. Extensions: Our method is quite general and can be applied to other deep neural network-based problems besides language model fine-tuning.

Conclusion

We have proposed a regularization method to mitigate miscalibration of fine-tuned language models from a data augmentation perspective. Our method imposes two new regularizers, using generated on- and off-manifold samples, to improve both in-distribution and out-of-distribution calibration. Extensive experiments on six datasets demonstrate that our method outperforms state-of-the-art calibration methods in terms of expected calibration error, misclassification detection and OOD detection.

A Dataset Details

All the data are publicly available, and we offer links to the data sources.

B Implementation Details

For label smoothing, we search the smoothing parameter from {0.05, 0.1} as in (Müller et al., 2019); for ERL, the penalty weight is chosen from {0.05, 0.1, 0.25, 0.5, 1, 2.5, 5}; for VAT, we search the perturbation size in {10⁻³, 10⁻⁴, 10⁻⁵} as in (Jiang et al., 2020); for Mixup, we search the interpolation parameter from {0.1, 0.2, 0.3, 0.4} as suggested in (Zhang et al., 2018; Thulasidasan et al., 2019); for Manifold-mixup, we search from {0.2, 0.4, 1, 2, 4}. We perform 10 stochastic forward passes for MCDP at test time. For hyper-parameter tuning, we run all the methods 5 times and then take the average. The hyper-parameters are selected to obtain the best ECE on the development set of each dataset. The interpolation of Mixup is performed on the input embeddings obtained from the first layer of the language model; the interpolation of Manifold-mixup is performed on the features obtained from the last layer of the language model.

C Metrics of Misclassification and Out-of-distribution Detection

Existing works on out-of-distribution (OOD) detection and misclassification detection (Hendrycks and Gimpel, 2016) use traditional binary classification metrics, e.g., AUPR and AUROC. As we discussed in Sections 1 and 2, the output probability of a calibrated model should reflect the true likelihood. However, AUROC and AUPR cannot reflect true model calibration, because the model can still achieve high scores even when it has high confidence for misclassified and OOD samples. We argue that it is more reasonable to use the Normalized Bounded Area Under the Calibration Curve (NBAUCC), defined as in Section 4. Table 5 shows an illustrative example. As can be seen, h₁ is better calibrated than h₂, since h₁ can detect OOD samples under a wide range of thresholds (0.15 < τ < 0.9), while h₂ requires an absurdly large threshold (0.85 < τ < 0.9). However, if we use the traditional AUPR and AUROC metrics, we will conclude that h₁ is as well calibrated as h₂, since AUPR_{h₁} = AUPR_{h₂} = 0.417 and AUROC_{h₁} = AUROC_{h₂} = 1.
On the other hand, if we use NBAUCC, we will have NBAUCC_{h₁} > NBAUCC_{h₂}. We remark that it is more appropriate to use NBAUCC 0.5 than NBAUCC 1, since a calibrated model should provide low confidences for the misclassified and OOD samples, and it is unreasonable to use a large threshold to detect them.

D Additional Results

Tables 6 and 7 report the NBAUCCs of all the methods on misclassification and OOD detection when τ_upper = 0.7 and τ_upper = 1. Tables 8 and 9 report the ablation study results of all the methods when τ_upper = 0.7 and τ_upper = 1. Figures 5 and 6 report the parameter study results of all the methods when τ_upper = 0.7 and τ_upper = 1.

Figure 6: Parameter study of δ_on, δ_off and δ_y. We use NBAUCC 0.7 for OOD and misclassification detection.
Ikaros-CtIP Interactions Do Not Require C-terminal Binding Protein and Participate in a Deacetylase-independent Mode of Repression

Ikaros and Aiolos are Kruppel zinc finger proteins that play key roles in hemo-lymphoid development and homeostasis. We have previously shown that they can repress transcription through the recruitment of histone deacetylases (HDACs). Here, we provide the first functional evidence that these proteins can also repress gene function in a manner that does not require deacetylase activity. This functionality can be attributed in part to Ikaros interactions with the HDAC-independent corepressor, C-terminal binding protein (CtBP). However, mutations that block Ikaros-CtBP interactions do not abolish Ikaros's repression activity, implicating the involvement of additional corepressors. Consistent with this expectation, we show that Ikaros can interact with a CtBP-interacting protein (CtIP), which has also been linked to a deacetylase-independent strategy of repression. Despite being a CtBP interactor, CtIP's association with Ikaros does not require CtBP but instead relies upon its Rb interaction domain. Significantly, Ikaros can interact with Rb, which itself can repress gene function in a deacetylase-independent manner. A mutation in Ikaros that abrogates CtIP interactions significantly reduces repression, and a double mutation that prevents interaction with both CtIP and CtBP alleviates repression even further. Finally, we show that CtIP and CtBP can interact with the general transcription factors, TATA-binding protein and transcription factor IIB, which suggests a possible mechanism for their deacetylase-independent mode of repression.

It is well established that Ikaros proteins play critical roles during hemo-lymphopoiesis (9-11). Ikaros is required from the earliest stages of hemopoiesis, at the level of the hemopoietic stem cell (12), to the later stages of lymphoid cell fate determination; in addition, Ikaros proteins regulate lymphocyte proliferation and homeostasis (13, 14). Molecular and biochemical studies aimed at understanding the basis for these complex biological roles have revealed that Ikaros, in addition to functioning as an activator (15), can also potently repress gene expression (16). Transcriptional repressors can be categorized in several ways. A common approach is to classify them as long-range or short-range repressors. Members of the former group, such as the Groucho and Sir proteins, are capable of making a promoter resistant to all enhancers regardless of their distance from the promoter, whereas short-range repressors, such as Kruppel and Giant, act in a less general manner to block the activity of locally bound activators (17). An alternative, but not mutually exclusive, approach to repressor classification is based on the utilization, or lack thereof, of the activity of histone deacetylase enzymes (HDACs) for repressor function. HDAC-mediated repression is expected to occur through the removal of acetyl groups from the N termini of histones, which presumably creates a compact chromatin configuration that inhibits transcription. Examples of HDAC-recruiting corepressors include the Sin3 and Mi-2β proteins (18, 19).
Histone deacetylase-independent repressors are believed to function through multiple mechanisms, but the strategy that has been most extensively studied is the interaction of such factors with the basal transcriptional machinery; these interactions affect recruitment of the RNA Polymerase II holoenzyme to the promoter or events associated with promoter clearance and re-initiation (20-22). Examples of HDAC-independent corepressors include the C-terminal binding protein (CtBP) (23) and two CtBP interactors, the CtBP-interacting protein (CtIP) (24) and the histone deacetylase-related protein (HDRP/MITR) (25). CtBP is a 48-kDa phosphoprotein that was first identified as an interactor of adenovirus E1A and more recently of several Dipteran and mammalian repressors (26,27). Interaction between CtBP and these proteins, in most cases, is mediated through a PXDL(S/R) motif (26,27). Investigation of the mechanisms behind CtBP-mediated repression has revealed that, although CtBP can interact with histone deacetylases, it can repress transcription even when deacetylase activity is inactivated, suggesting its ability to use alternative repression mechanisms (26,27). In a search for CtBP-interacting proteins, CtIP, a 125-kDa protein with similarity to DNA repair proteins, was identified (24). In addition to interacting with CtBP, CtIP has also been shown to bind the tumor suppressors, Rb/p130 (28) and BRCA1 (29,34). Rb is a key regulator of the G1/S transition of the cell cycle, and CtIP has been implicated in the deacetylase-independent repression pathway of this key regulator (28). BRCA1 is a 1863-amino acid protein composed of an N-terminal RING domain and two C-terminal BRCT domains whose mechanisms of action are poorly understood. Germline mutations of BRCA1 are responsible for many cases of hereditary breast and ovarian cancers (31). CtIP has been shown to interact with BRCA1 through its BRCT domains and to be a component of a BRCA1·BARD1 complex (32) as well as a BRCA1·LMO4·Ldb1 complex (33). Significantly, mutations in BRCA1 found in breast cancer patients prevent interactions with CtIP, suggesting an important role for CtIP in BRCA1's tumor-suppressive function (34,29). CtIP has been implicated in BRCA1's DNA repair function (35). Upon genotoxic stress, such as γ-irradiation, CtIP becomes phosphorylated by the ATM kinase, which apparently prevents CtIP-BRCA1 interactions, thus allowing BRCA1 to activate genes involved in DNA repair such as p21 and GADD45 (35). These findings have, however, been strongly contested (36). Nevertheless, it is very likely that CtIP plays important roles in regulating the tumor-suppressive functions of BRCA1, Rb, and other regulators. We have previously shown that Ikaros can interact with the HDAC-recruiting factors, Sin3 and Mi-2β (16,37). Consistent with such interactions, Ikaros repression of the adenovirus major late (AdML) promoter is relieved by the deacetylase inhibitor trichostatin A (16). However, Ikaros also interacts with CtBP, which can repress transcription in a deacetylase-independent manner (38). Here, we investigate the HDAC-independent repression potential of Ikaros. We show that Ikaros-mediated repression of the thymidine kinase (tk) promoter, unlike that of the AdML promoter, is insensitive to deacetylase inhibitors.
In addition to CtBP, Ikaros can interact with two corepressors, CtIP and Rb, that can work through a deacetylase-independent pathway. Mutations that abrogate CtIP interactions reduce repression by Ikaros whereas those that prevent both CtBP and CtIP associations even further alleviate repression. We provide evidence to suggest that Ikaros repression through this pathway may involve interactions with the basal transcriptional machinery. EXPERIMENTAL PROCEDURES Plasmids-BXG1, BXG1-Ik1, BXG1-Aio, BXG1-MAD, BXG1-mMAD, the reporters G5tkCAT and G5AdMLPCAT, CDM8-Ik1, CDM8-HA-Ik1, CDM8-FLAG-Aio3, CDM8-FLAG-Helios, CDM8-FLAG-Eos (Daedalus), CDM8-MT-Sin3A, pCMV2-FLAGIk1, pCMV2-FLAGIk1cm, and GST-hCtBP1 have been previously reported (38). Deletion and point mutants of Ik6 were generated with the Stratagene mutagenesis kit using Ik6 in the context of the BXG1 vector, which encodes the Gal4 DBD (amino acids …). Transfections-293T and NIH3T3 cell lines were maintained in Dulbecco's modified Eagle's medium with 10% fetal bovine serum (HyClone). Transfections of these cell lines were carried out using the HEPES-buffered saline-CaPO4 method. For repression assays, 1 µg of the Gal4 fusion plasmid, 10 µg of the Gal4-reporter plasmid, and 0.5 µg of the pXGH5 growth hormone transfection efficiency control plasmid were used. Twenty-four hours after transfection cells were fed with fresh media, and 18-24 h later cells were harvested and processed for CAT assays as described previously (8). In those instances where trichostatin A (Upstate Biotech) was employed, we added the drug to the cells 16-18 h before harvesting. Growth hormone assays were done as recommended by the manufacturer (Nichols Institute). Transfections were typically performed in duplicate and repeated between three and six times. Immunoprecipitation and Western Analysis-Whole-cell extracts from 293T cells transfected with the relevant plasmids were prepared as previously described (8) and pre-cleared using Protein G-agarose beads (Roche Molecular Biochemicals). The pre-cleared extracts were incubated with the antibody of interest or the relevant isotype control on ice for 1 h. 30 µl of Protein G beads was then added to the extract, and the extracts were rotated overnight. The beads were collected by centrifugation and washed four times with TS buffer. The beads obtained after this procedure were treated with SDS sample buffer, boiled at 95°C for 15 min, and loaded onto an SDS-polyacrylamide gel along with 8-10% of the cell extract used for the immunoprecipitation. The proteins were transferred to a nitrocellulose membrane, probed with the relevant antibody, and examined by autoradiography with ECL (Amersham Biosciences, Inc.). FLAGM2 purification of Ikaros complexes has been described before (37). Antibodies used were: Myc-tag, MT (Roche Molecular Biochemicals), HA (BAbCO), FLAG M2 (Sigma), Gal4, Sin3B (Santa Cruz Biotechnology), HDAC2 (Zymed Laboratories Inc.), Rb (Amersham Biosciences, Inc.), and anti-Ikaros and Mi-2, which have been previously described (37). CtIP antibodies were generously provided by Dr. R. Baer and Dr. W.-H. Lee. Anti-HDRP was provided by Dr. X. Zhou and Dr. P. Marks. GST Interaction Assays-GST, GST-TBPN, GST-TBPC, and GST-TFIIB were prepared using previously described protocols (38). 1-2 µg of the GST proteins was incubated with proteins programmed in rabbit reticulocyte lysate (Promega) for 1 h at 4°C and washed extensively with MT/phosphate-buffered saline.
The beads were then boiled in SDS sample buffer and fractionated on an SDS-polyacrylamide gel. The gels were then dried and visualized by autoradiography. Histone Deacetylase Assay-Histone deacetylase assays were performed using tritiated chicken reticulocyte histones as described previously (37). Briefly, immunoprecipitates from 293T whole cell extracts were washed 3× in TS buffer and incubated with 100,000 cpm of tritiated acetylated histones for 45 min at 30°C in HD assay buffer. The reaction was stopped by acidification, and the released tritium was extracted with ethyl acetate. Ikaros Repression of the tk Promoter Does Not Rely on Histone Deacetylase Activity-We have previously shown that Ikaros repression of the adenovirus major late (AdML) promoter is relieved in the presence of the histone deacetylase inhibitor, trichostatin A. Thus, we suggested that Ikaros mediates repression of this promoter through the action of histone deacetylases (HDACs) (Fig. 1, left panel) (16). Subsequently, we found that Ikaros interacts with the corepressor CtBP, which can repress transcription in a histone deacetylase activity-independent manner (38). Based on this finding we claimed that Ikaros, in addition to repressing transcription through HDAC, can also function using HDAC activity-independent mechanisms (38). To address the possible ability of Ikaros to repress in a deacetylase activity-independent manner, we set out to identify promoters that Ikaros might repress using this alternate repression mechanism. In a recent report we showed that the Ikaros corepressor, CtBP, can repress the thymidine kinase (tk) promoter in a deacetylase activity-independent manner (38). To determine whether Ikaros's repression of the tk promoter might also utilize a similar strategy, NIH3T3 cells were transfected with G5tkCAT and expression vectors encoding Gal4 DNA binding domain (DBD) fusions of Ikaros and its family member, Aiolos. As controls, we included the empty vector, the vector expressing the Gal4 DBD alone, and Gal4 DBD fusions to the Sin3 interaction domain of MAD (MAD) or a mutant version of this domain that cannot interact with Sin3 (mMAD); the MAD protein serves as a positive control, because it has been shown to repress the tk promoter in a deacetylase-dependent manner (39), whereas its mutant variant serves as the negative control. Transfectants were either treated with trichostatin A or left untreated. CAT assays revealed that repression of the tk promoter by Gal4-Ik1 and Gal4-Aiolos, unlike Gal4-MAD, was not relieved over background levels in the presence of the deacetylase inhibitor (Fig. 1, right panel). Thus, Ikaros- and Aiolos-mediated repression of the tk promoter, unlike that of the AdML promoter, is not dependent on the activity of histone deacetylases. This observation is highly reminiscent of the corepressors Rb and HDRP, which also repress the tk promoter in a manner that does not rely on HDACs (40,41). This provides the first functional evidence that Ikaros and Aiolos can repress transcription in a manner independent of HDAC activity. Ikaros Can Interact with the HDAC Activity-independent Corepressor CtIP-Ikaros interactions with CtBP can account, in part, for this alternate repression strategy. Nevertheless, it is highly likely that other corepressors are also involved, because mutations that abolish the Ikaros-CtBP interaction still permit significant levels of repression (38).
This supposition is further strengthened by the observation that Aiolos, which cannot interact with CtBP (38), continued to repress the tk promoter in the presence of the deacetylase inhibitor. To identify other Ikaros-interacting factors that can account for its deacetylase-independent repression strategy, we screened a panel of seven well-established corepressors for binding to the full-length Ikaros isoform, Ik1, through a co-transfection/co-immunoprecipitation approach. These studies highlight the specificity that Ikaros shows in its interactions with corepressors. Of all the proteins tested, only CtIP and our positive control, Sin3A, were capable of binding Ikaros at detectable levels (Fig. 2A). Significantly, CtIP has been implicated in effecting HDAC-independent repression through the tumor suppressor, Rb (28). Thus, CtIP interactions with Ikaros may play a role in Ikaros's deacetylase-independent repression mechanism. CtIP Interacts with Ikaros Proteins in Vitro and in Vivo-We next tested whether other Ikaros isoforms, like Ik1, could also interact with CtIP. Co-immunoprecipitation experiments showed that all the tested isoforms, Ik2, -3, and -7, could interact with CtIP (Fig. 3A). To determine whether the CtIP-Ikaros interactions seen in vitro could be recapitulated in vivo, we probed Western blots containing Ikaros complexes immunopurified from resting and cycling T lymphocytes. CtIP was found in complexes from both sources (Fig. 3B). Thus, CtIP appears to be an interactor of Ikaros proteins in lymphocytes. An Rb-binding Motif on CtIP Is Required for Its Interaction with Ikaros-Because both Ikaros and CtIP bind CtBP (24,38), we tested whether the CtIP-Ikaros interaction was mediated through this common corepressor. Wild type Ikaros (Ik1) and a mutant form that cannot interact with CtBP (Ik1cm) were transfected along with CtIP and tested for binding. Interestingly, both Ik1 and Ik1cm were capable of interacting with CtIP, albeit the latter less strongly (Fig. 3C, compare IP lanes 1 and 4). Thus, although CtIP may be recruited to Ikaros through CtBP, it can also be recruited through CtBP-independent means. In addition to interacting with CtBP, CtIP also interacts with the tumor suppressor and corepressor, Rb (24, 28). To determine whether CtIP might be recruited to Ikaros through an Rb-dependent mechanism, we co-transfected Ikaros with FIG. 2. Ikaros interacts with the HDAC-independent corepressor CtIP. In A, 293T cells were transfected with Ik1 and either myc-tagged (MT-) Sin3A, Cabin1, BCoR, Groucho, CoREST, or CtIP. Whole cell extracts were immunoprecipitated with myc antibody and immunoblotted with Ikaros antibody to test for interaction. I, input; C, isotype control IP; B, bound fraction from specific IP. In B, interaction between SMRT and FLAG-Ikaros was tested as in A with the indicated antibodies. FIG. 1. Ikaros represses the tk promoter in an HDAC activity-independent manner. NIH3T3 cells were transfected with 1 µg of the indicated Gal4 fusions (BXG1), 10 µg of G5AdMLPCAT or G5tkCAT, and 1 µg of pXGH5 as a transfection efficiency control plasmid. 16-18 h before harvest, transfectants were treated (+Tricho) with trichostatin A (100 ng/ml) or left untreated (−Tricho). CAT activity was corrected for transfection efficiency using the growth hormone assay. This experiment was done in duplicate three times, and variation between experiments was less than 20%.
Fold derepression upon trichostatin treatment is indicated below the graph and was calculated as the increase in corrected CAT activity upon trichostatin treatment divided by the corrected CAT activity in untreated cells. The left panel has been previously reported (16) and is included to allow comparison. wild type CtIP and CtIP mutants that are defective for interactions with Rb (−Rb) and CtBP (−CtBP). Wild type CtIP interacted strongly with Ikaros whereas CtIP defective for interactions with CtBP showed a reduced interaction (Fig. 3C, compare IP lanes 1 and 2). Interestingly, CtIP that was defective for Rb interactions was significantly impaired in its interactions with Ikaros (Fig. 3C, compare IP lanes 1-3). Taken together, these data show that Ikaros can bind CtIP through a mechanism that relies upon an intact Rb binding domain on CtIP (summarized in Fig. 3D). So, can Ikaros associate with Rb? FLAG-Ik1 and Rb were co-transfected and tested for association by immunoprecipitation. Rb was indeed immunoprecipitated with Ikaros (Fig. 3E), which lends support to our earlier finding that CtIP interactions with Ikaros require a functional Rb interaction motif. In summary, these data indicate that Ikaros interactions with CtIP do not require CtBP but instead require a functional Rb motif on CtIP. CtIP Interacts with All Ikaros Family Members-Consistent with the finding that Ikaros does not require CtBP to interact with CtIP, the Ikaros family members, Helios, Aiolos, and Eos, which do not interact with CtBP, can bind CtIP (Fig. 3F). Further support for CtBP-independent recruitment of CtIP to Ikaros and its family comes from the finding that exon 7 of Ikaros, which lacks a CtBP-binding motif, can also interact with CtIP (Fig. 3A, lane 5). These data suggest that a region in exon 7 is most likely the CtBP-independent domain through which CtIP associates with Ikaros. Mutations That Abolish Ikaros Associations with CtIP Alleviate Repression-To obtain CtIP interaction-defective Ikaros mutants, we targeted several mutations to exon 7 of the Ikaros isoform, Ik6 (Fig. 4B and data not shown). Of these mutants, M8, which contains a 20-amino acid deletion spanning residues 416-435, was found to significantly decrease CtIP binding (Fig. 4, A and B). To determine the role of the CtIP-Ikaros interaction, we tested the effect of this mutation on repression by Ik6, the most potent repressor among the Ikaros isoforms. Like the CtBP interaction mutant (M1), the CtIP mutant (M8) caused a 50% reduction in repression of the tk promoter (Fig. 4B). When both these mutations were incorporated in a single molecule (M9), repression was further reduced to 15% of wild type Ik6 levels (Fig. 4B). These data indicate that CtIP and CtBP are major components of the deacetylase-independent repression strategy of Ikaros on the tk promoter. CtIP Does Not Interact with HDAC2 and Precipitates Low Levels of HD Activity-What is the mechanism of CtIP-mediated repression? Although CtIP has been implicated in deacetylase-independent repression, little is known about its interactions with deacetylases. To determine whether CtIP can interact with endogenous histone deacetylases, we immunoprecipitated CtIP, and as a positive control Sin3A, from 293T cells. CtIP, unlike Sin3A, was not found associated with any significant amount of HDAC2 (Fig. 5A). This was verified by histone deacetylase assays of these immunoprecipitates, which indicated that HDAC activity associated with CtIP was close to background levels.
In contrast, Sin3A, which associates FIG. 3. Ikaros can bind the HDAC activity-independent corepressor CtIP through a CtBP-independent mechanism. A, interaction between CtIP and Ikaros isoforms (Ik1, -2, -3, and -7) or exon 7 (E7) was tested by immunoprecipitation. B, in vivo interaction between CtIP and Ikaros in activated (a) and resting (r) T lymphocytes. Immunopurification of Ikaros-containing complexes was accomplished using a FLAGM2 column as previously described (37). The input (I), final wash (W), and eluate (E) were tested by immunoblot analysis using antibodies to CtIP. C, an Rb but not a CtBP motif on CtIP is critical for interactions with Ikaros. Ik1 wild type (Ik1WT) and Ik1 defective for interactions with CtBP (Ik1cm) were tested for their ability to interact with wild type CtIP, CtIP that cannot interact with CtBP (−CtBP), and CtIP that cannot interact with Rb (−Rb) by IP. The numbers below the immunoblot are included to aid the reader in comparing input and IP lanes. D, a summary of the interaction data obtained in C. E, C33A cells were transfected with Rb and FLAG-Ik1 to determine if the two proteins interact. To this end, whole cell lysates were immunoprecipitated with FLAG and an isotype control antibody. F, association between Ikaros family members and CtIP was tested by IP from whole cell extracts prepared from transfected 293T cells. The asterisks in A, E, and F identify the heavy chain of the immunoprecipitating antibody. strongly with HDACs, supported 18-fold higher activity than background levels (Fig. 5A). These data lend support to the suggestion that CtIP likely utilizes HDAC-independent means to repress gene expression. Ikaros, CtBP, and CtIP Interact with Components of the Basal Machinery-A well-studied HDAC-independent repression strategy involves interactions with the basal transcriptional machinery that negatively affect pre-initiation complex assembly and/or promoter clearance (20,22). Using GST fusions of different components of the basal transcriptional machinery, we found that in vitro translated CtIP can bind TFIIB (Fig. 5B) whereas the other deacetylase-independent Ikaros corepressor, CtBP, can associate with both the N (amino acids 1-128) and C termini of TBP as well as with TFIIB (Fig. 6A). Furthermore, Ikaros itself can interact with the C terminus of TBP (amino acids 128-328) and with TFIIB (Fig. 5B). Taken together these findings raise the possibility that the HDAC activity-independent repression mediated by Ikaros on the tk promoter may occur through interactions with components of the basal transcriptional machinery. DISCUSSION We have previously shown that Ikaros and Aiolos can repress transcription through the recruitment of histone deacetylases (16). In addition, Ikaros interacts with CtBP, which can repress through HDAC activity-independent mechanisms (38). On the basis of this interaction, we proposed that Ikaros might also be capable of this alternate repression strategy. Here, we provide the first functional evidence that Ikaros and Aiolos can effect repression in a manner that is independent of deacetylase activity. However, Ikaros proteins bearing mutations that abolished interactions with CtBP could still repress transcription, indicating the involvement of other corepressors (38). Consistent with this expectation, we show that Ikaros interacts with the corepressors, CtIP and Rb, which are capable of deacetylase-independent repression.
Finally, mutations that abrogate CtIP interactions with Ikaros alleviate repression, and those that prevent both CtBP and CtIP interactions even further reduce repression of the tk promoter. Because both Ikaros and CtIP contain the CtBP penta-peptide interaction module and because both these factors can bind CtBP (24,38), we considered the possibility that their association was mediated via CtBP. However, Ikaros proteins bearing mutations that prevented interactions with CtBP were still able to interact with CtIP. Therefore, interaction with CtBP was not essential for CtIP associations with Ikaros. In support of this finding, a domain of Ikaros lacking a CtBP interaction FIG. 4. A mutation in Ikaros that prevents interactions with CtIP alleviates repression. A, 293T cells were co-transfected with CtIP and the indicated Ik6 mutants (diagrammed in B) and tested for binding as described in Fig. 3A. B, effects of CtIP interaction mutations in Ik6 on repression. The indicated Gal4 fusions (1 µg) were transfected with the reporter G5tkCAT (10 µg) and a transfection control plasmid (0.5 µg). Whole cell extracts prepared from the transfectants were assayed for CAT activity. Fold repression was calculated by dividing the normalized CAT activity of the Gal4 DBD plasmid by that obtained from each Ik6 variant. Transfections were done in duplicate four times. domain (exon 7), as well as the Ikaros family members that cannot associate with CtBP, could bind CtIP. Thus, CtIP is a good candidate for a corepressor that effects repression of the tk promoter by Ikaros and Aiolos. What is the CtBP-independent mode of CtIP association with Ikaros? CtIP interactions with Ikaros require the former's intact Rb interaction motif, suggesting that CtIP may associate with Ikaros through Rb family proteins. In support of this suggestion, both CtIP and Ikaros can interact with Rb. But why might Ikaros need two independent ways to recruit CtIP? It has recently been shown that the binding of CtBP to its interacting proteins is regulated by the levels of nicotinamide adenine nucleotides, NAD+ and NADH; agents capable of increasing NADH levels stimulate interactions between CtBP and its interactors and thereby increase repression (42). Based on these findings, we posit that having a CtBP-independent mode of recruitment would permit Ikaros·CtIP complexes to function under conditions that do not favor Ikaros-CtBP interactions. How does CtIP mediate repression? CtIP cannot interact with HDAC2 and immunoprecipitates very small amounts of deacetylase activity, supporting its classification as a deacetylase-independent repressor. Significantly, CtIP, CtBP, and Ikaros can interact with components of the basal machinery, namely TBP and TFIIB. Several repressors have been shown to effect repression through such interactions. Studies with Rb have shown that it affects the formation of an effective pre-initiation complex, possibly through its interactions with TFIID (22), whereas detailed studies with the nuclear receptor corepressor, NCoR, have indicated that it blocks interactions between TAFII32 and TFIIB, which are crucial for transcriptional initiation (20). NCoR has also been hypothesized to lock interactions with basal transcriptional components into a nonfunctional complex or conformation that abrogates transcription. Future studies using in vitro transcription systems will allow us to address the role, if any, of Ikaros and its corepressors' interactions with basal transcriptional factors in repression.
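The two normalizations quoted in the Fig. 1 and Fig. 4 legends are simple ratios of growth-hormone-corrected CAT activities. A minimal sketch of that bookkeeping follows; the numeric pairs are invented placeholders, not data from this study:

```python
def corrected_cat(cat_activity, growth_hormone):
    """Normalize raw CAT activity to the growth-hormone transfection control."""
    return cat_activity / growth_hormone

def fold_derepression(treated, untreated):
    """Corrected CAT activity with trichostatin A divided by that without (Fig. 1)."""
    return corrected_cat(*treated) / corrected_cat(*untreated)

def fold_repression(gal4_dbd_alone, gal4_fusion):
    """Corrected CAT activity of the Gal4 DBD control divided by the fusion's (Fig. 4)."""
    return corrected_cat(*gal4_dbd_alone) / corrected_cat(*gal4_fusion)

# Hypothetical (CAT counts, growth-hormone signal) pairs:
print(fold_derepression(treated=(900.0, 1.1), untreated=(300.0, 1.0)))          # ~2.7-fold
print(fold_repression(gal4_dbd_alone=(1200.0, 1.0), gal4_fusion=(150.0, 0.9)))  # ~7.2-fold
```

A derepression ratio near 1 upon trichostatin treatment is what marks the tk promoter results as deacetylase activity-independent in Fig. 1.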
Recently, it has also been shown that the CtIP-interacting proteins, CtBP and Rb, can repress gene function in a deacetylase-independent manner through the recruitment of Polycombs (30). This raises the possibility that Ikaros may also utilize this avenue of deacetylase-independent repression. The importance of the Ikaros-CtIP interaction in deacetylase-independent repression was consolidated by mutational analysis. Mutation of the CtIP interaction site on Ik6 significantly reduced repression of the tk promoter by 50% of wild type levels. Thus, CtIP is a component of the deacetylase-independent repression by Ikaros of this promoter. Furthermore, mutations that abrogated CtIP and CtBP interactions reduced repression to roughly 15% of the levels supported by the wild type protein. Thus, CtIP and CtBP appear to collaborate to repress the tk promoter. The fact that repression is not completely abolished suggests the potential role of still other corepressors. In this context, we have recently found that Ikaros can interact with another corepressor, the histone deacetylase-related protein (HDRP) (data not shown). HDRP was first identified as an interactor of a key muscle regulatory protein, MEF2 (25), and was recently shown to bind CtBP and to repress the tk promoter in an HDAC-independent manner (41). In addition, another Ikaros interactor, Sin3, which usually represses using deacetylases, has also been shown to be capable of HDAC-independent repression; for this function, Sin3 appears to target components of the basal transcriptional machinery (21). An enigma resulting from these studies is the basis for why repression of the tk versus the AdML promoters requires Ikaros to utilize two different repression strategies. If histone deacetylation is involved in repression through effecting DNA compaction, one would have expected HDAC recruitment to repress all promoters. One possible explanation for a promoter-selective function of HDACs is that the promoter context, defined by the other trans-acting factors bound to it, may only permit HDAC recruitment on a promoter like AdML but not one like tk. Thus, selective recruitment of co-factors by a DNA binding factor, influenced by its binding context, may underlie a transcription factor's promoter-specific transcriptional functions. Thus far, CtIP has been shown to interact with two tumor suppressors, BRCA1 (29,34) and Rb (28). We have previously shown that dysregulation of Ikaros expression causes rapid development of leukemias and lymphomas (10). CtIP interactions with Ikaros may be involved, in part, in regulating the tumor suppressor function of Ikaros. The availability of the Ikaros-CtIP interaction mutant will allow this hypothesis to be tested. In conclusion, in this report we have presented several lines of evidence to show that Ikaros can function as a deacetylase-independent repressor in addition to its ability to repress through the recruitment of histone deacetylases (Fig. 6). This is a significant step forward in the attempt to molecularly dissect the workings of this key hemo-lymphoid regulator. FIG. 6. Model summarizing repression strategies of Ikaros. A, in this scenario, Ikaros recruits HDACs through Mi-2β and/or Sin3 proteins to a promoter. This recruitment is expected to create a compact chromatin configuration that is not conducive for transcription. B, in this alternate scenario, Ikaros recruits CtBP, CtIP, and Rb to a promoter, all of which can interact with components of the general transcriptional machinery.
Such interactions may underlie the HDAC activity-independent mode of Ikaros repression. The corepressors that Ikaros recruits may be dictated by the promoter context.
6,469.4
2002-06-28T00:00:00.000
[ "Biology", "Chemistry" ]
Investigation of the Status of Unit 2 Nuclear Reactor of the Fukushima Daiichi by the Cosmic Muon Radiography We have investigated the status of the nuclear debris in the Unit-2 Nuclear Reactor of the Fukushima Daiichi Nuclear Power plant by the method called Cosmic Muon Radiography. In this measurement, the muon detector was placed outside of the reactor building, as was the case for the measurement of the Unit-1 Reactor. Compared to the previous measurements, the detector was down-sized, which made it possible to locate it closer to the reactor and to investigate especially the lower part of the fuel loading zone. We identified the inner structures of the reactor, such as the containment vessel, pressure vessel and other objects, through the thick concrete wall of the reactor building. Furthermore, the observation showed the existence of heavy material at the bottom of the pressure vessel, which can be interpreted as the debris of melted nuclear fuel dropped from the loading zone. Introduction Following the investigation [1] made for the Unit-1 reactor of the Fukushima-Daiichi, the same team investigated the Unit-2 reactor of the Fukushima-Daiichi by the same technique of cosmic muon radiography [2], [3]. Although we had successfully demonstrated that the structure of a nuclear power reactor can be visualized by a telescope placed outside of the reactor, we noticed that the obtained images were less clear, especially at the lower part of the reactor. To obtain a close-up view of the area, we constructed a new telescope system and located it as close as possible to the building of the Unit-2 reactor. The new system is much smaller than the previous telescope, smaller than a 1-m cube, so that it enabled us to find an appropriate place for the muon radiography, namely to place it avoiding massive obstacles along the view line of the objects to be investigated while ensuring the necessary elevation angle to the objects. Since the ambient radiation level around the nuclear reactors at the Fukushima-Daiichi has been significantly reduced thanks to the efforts by TEPCO, radiation protection as heavy as that employed for the Unit-1 reactor investigation (10-cm-thick iron) was no longer necessary. The thinner radiation protection plate made the telescope lighter, which helped to ease the installation work, allowing us to place the telescope right next to the building wall. The Muon Telescope In the telescope employed for the Unit-1 reactor, we used 1-m long plastic scintillator bars with a cross section of 1 cm×1 cm [4], [5], [6]. They were arranged in a hodoscope covering an effective area of 1 m². For the new telescope, we instead used the same type of scintillator bars of 50-cm length, covering an area of 50 cm×50 cm, one quarter of the previous telescope's area. We measured the hit positions of particles on the hodoscope plane in two orthogonal coordinates, X and Y. The telescope was composed of a pair of XY hodoscopes. The scintillator bars used for the telescope are shown in Fig. 1. Through the hole at the center, a plastic wavelength-shifting fiber was inserted, which collected the scintillation light produced upon the passage of particles through the scintillator and sent it to a photo-sensor (Multi-Pixel Photon Counter, MPPC [7]) attached to the fiber end.
Although the scintillator hodoscope area was reduced, the acceptance in terms of viewing angle was maintained, since the distance between the XY hodoscopes was also reduced from the 1.5 m of the Unit-1 detector to 0.7 m. In order to have a similar angle resolution for the muon flight direction with the shortened hodoscope distance, the position resolution was required to be halved while using the same scintillator bars. To meet this requirement, we doubled the hodoscope layers, with neighboring layers displaced relative to each other by 0.5 cm, one half of the scintillator bar width, as shown in Fig. 2. The configuration was intended to require a coincidence between the two adjacent layers, so that the area element traversed by the cosmic particles is effectively halved. Therefore, the telescope has in total eight layers of scintillator planes to measure the trajectory of the cosmic muons [1], [4], [5]. The grey cables are used to feed bias voltages to the MPPCs, with the bias voltages adjusted automatically to maintain the same MPPC gains, accounting for the temperature variation. The performance of the XY hodoscope as a telescope was first tested at KEK. The two assembled XY hodoscope units were set horizontally, separated by 50 cm in the vertical direction. An iron block of 20 cm cube was placed above them to examine the sensitivity to cosmic muons. Figure 4 shows the observed rate of cosmic muons plotted as a function of the hit position in the two orthogonal coordinates. The dips in the rates are due to the iron block of 20 cm thickness. The distributions corresponding to the block edges are used to estimate the position resolution, which turned out to be as good as 0.5 cm, as expected. Telescope installation The telescope was placed at the Fukushima Daiichi, facing right at the Unit-2 reactor building, where the radiation level was as low as 100 μSv/h. The ambient radiation level had been significantly reduced since the Unit-1 measurement. The telescope container, made of 2-mm aluminum, was further covered by two 2-mm-thick lead sheets to reduce the in-box radiation level to 20 μSv/h. The lower ambient radiation level, together with the requirement of an eight-layer coincidence in tracking, made it possible to use such a thin radiation shield for the detector. Figure 7 shows the location of the muon telescope viewing the Unit 2 reactor. The red arrow is the direction of the telescope's view center, targeting the center of the reactor. Images of the Unit-2 Reactor We started the measurement on March 17, 2016 and continued till September 17, 2016. The number of events observed by the telescope was about 7 million. The obtained image of the Unit 2 reactor is shown in Fig. 8. The figure shows the amount of material traversed by the cosmic muons in units of density-length (g/cc·m). The procedure for deriving the amount in density-length is described in [1]. The wall of the Primary Containment Vessel (PCV), which is made of at least 1.7-m-thick heavy concrete with a density of about 2.3 g/cm³, is characteristic and was imaged successfully. Compared to the PCV, the wall of the Reactor Pressure Vessel (RPV), made of approximately 14 cm of iron, is much thinner in both physical and density length, especially in the material thickness projected along the muon path. Therefore, the observation does not show a distinct image of the RPV, compared to the calculated distribution in which muon scattering is not included.
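As a quick plausibility check on the geometry just described, the pointing resolution implied by the quoted numbers follows from the naive two-plane estimate σ_θ ≈ √2·σ_x/L. The sketch below is our own back-of-envelope illustration, taking the quoted position resolutions at face value:

```python
import math

def angular_resolution_mrad(sigma_x_cm, lever_arm_m):
    """Naive two-plane pointing resolution: each plane contributes an
    independent position error sigma_x, so sigma_theta = sqrt(2)*sigma_x/L."""
    return math.sqrt(2.0) * (sigma_x_cm / 100.0) / lever_arm_m * 1e3  # mrad

# Unit-2 telescope: 0.5 cm effective resolution, 0.7 m between XY hodoscopes.
print(round(angular_resolution_mrad(0.5, 0.7), 1))   # ~10.1 mrad
# Unit-1 telescope: 1-cm bars (~1 cm resolution), 1.5 m separation.
print(round(angular_resolution_mrad(1.0, 1.5), 1))   # ~9.4 mrad
```

With these inputs the two telescopes come out at roughly 10 mrad each, consistent with the stated goal of maintaining a similar angular resolution despite the shortened hodoscope distance.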
Estimation of the amount of the nuclear debris remaining in the RPV The observed strong attenuation of the cosmic muons could be attributed to the debris of the melted nuclear fuel. We estimated the weight of the remaining substance with two methods. Method-1 calculates the weights outside the RPV based on the drawings of the plant and subtracts their contribution from the measured absorption. Method-2 evaluates the weights by subtraction of the side-band absorption, a procedure essentially identical to that of the Unit-1 investigation [1]. Method-1 Estimation The horizontal density-length distributions were calculated from the plant drawing in the four height slices, as illustrated in Fig. 10. The calculation results are shown in Fig. 11, where the measured density-length values, after subtraction of the amount outside the RPV, are overlaid. In the calculation, the average density inside the RPV was assumed to be either zero (nothing remaining), 2 g/cc, or 6 g/cc. In Slice-1 (upper part of the fuel loading zone), the vast region is consistent with nothing remaining. In Slice-2 and Slice-3, the average density is 1 to 2 g/cc, and substantial material appears to remain inside the RPV. In Slice-4, there seems to remain denser material, although the statistical significance is weaker. We also note that the agreement in the regions outside the investigation is fairly good, demonstrating the reliability of the present method. The same technique was used to evaluate the weight in the RPV, separating the entire RPV into three regions: A) the volume above the loading zone, B) the volume of the loading zone, and C) the volume below the loading zone. The estimation results are summarized in Table 1. Method-2 Estimation We divide the image into several regions. The region |x| < 2 m is named "a", where we investigate the amount inside the RPV, and the region between x = −5 m and x = −2 m, named "b", is the sideband used to evaluate the contributions outside the RPV. This method is identical to that used in the investigation of the Unit-1 reactor [1]; the right sideband was not used, as the influence of the fuel storage pool was significant. The two horizontal regions are further divided vertically, each into three parts according to the height. Note that the definition of volume A) is not identical to that of Method-1; therefore, different calculated values are given in the last column. Table 1 summarizes the estimated amounts in tons for the difference between the areas "a" and "b". Here we employ the "difference" in the evaluation because the main systematic uncertainty, due to the muon momentum spectrum [1], is reduced significantly in the difference. The quoted systematic uncertainties are explained in the following section. The central value for Volume C) is corrected to 156 tons, which is also explained in the next section. As the material in the volume A) above the loading zone is considered unchanged by the accident, the measured amounts and the amounts calculated from the available drawing information can be used to evaluate the reliability of the present evaluations. The calculated "a"−"b" of 114±6(sys) tons is consistent with the measurement. Systematic uncertainty This factor is used to correct for the value measured at the bottom of the fuel loading zone C); see Table 1. The relative difference of the correction factor is added in quadrature to the other uncertainties. As shown in Table 1, the measurement and calculation values are consistent for the upper parts of the loading zone.
This difference is also taken as a systematic uncertainty. All the contributions to the systematic uncertainties are summarized in Table 2. Among these, the muon momentum modeling is related to the limited knowledge of the muon flux. The detector threshold concerns the difference between the detector efficiencies measured at KEK and at Fukushima-Daiichi. The "a"−"b" uncertainty is, as explained above, the difference between the measurements and calculations in the regions above the loading zone. The region-definition uncertainty concerns the definition of the "a" and "b" regions, and is taken as the difference obtained when the regions are defined with a one-bin (=25 cm) offset horizontally or vertically. Summary We investigated the status of the Unit-2 Nuclear Reactor of the Fukushima-Daiichi with cosmic muon detectors placed outside of the reactor building. The detector was down-sized from the system deployed for the Unit-1 observation to almost a 1-m cube. The obtained image is consistent with most of the nuclear fuel assemblies no longer existing in their original location. We evaluated the amount of the material left in the fuel loading zone to be 17-49 tons. The amount found in the lower part of the pressure vessel is about 160 tons. The amount of the fuel assemblies originally at the loading zone is estimated to be 160 tons; therefore, the observation is consistent with most of the fuel debris being located at the bottom of the pressure vessel. We have demonstrated that cosmic muon radiography is very effective for locating heavy objects inside a large complex structure and for exclusively measuring the amount of material by weight. Appendix Weight distribution in the RPV The weight distribution inside the RPV was derived using Method-1 by dividing the area into projective towers with a 50×50 cm cross section at the center of the RPV. The result is given in
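A compact illustration of the Method-2 bookkeeping used above: the sideband "b" is subtracted from the signal region "a", and the individual systematic contributions are combined in quadrature. All numbers below are illustrative placeholders, not the values of Tables 1 and 2:

```python
import math

def method2_estimate(mass_a_tons, mass_b_tons, systematics_tons):
    """Sideband-subtracted mass estimate with systematics added in quadrature.

    mass_a_tons      : integrated material weight seen in the signal region "a"
    mass_b_tons      : weight in the sideband "b", representing contributions
                       outside the RPV (common absorbers cancel in a - b)
    systematics_tons : individual systematic contributions (momentum model,
                       detector threshold, "a"-"b" consistency, region definition)
    """
    central = mass_a_tons - mass_b_tons
    sigma = math.sqrt(sum(s * s for s in systematics_tons))
    return central, sigma

# Illustrative placeholders only:
central, sigma = method2_estimate(270.0, 110.0, [4.0, 3.0, 6.0, 2.0])
print(f"{central:.0f} +/- {sigma:.0f} tons (sys)")  # 160 +/- 8 tons (sys)
```

The quadrature sum is appropriate here because the listed systematic sources are treated as independent of one another.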
2,823.8
2020-05-12T00:00:00.000
[ "Physics", "Engineering" ]
Theoretical and Experimental Insights into the Tandem Mannich—Electrophilic Amination Reaction: Synthesis of Safirinium Dyes Isoxazolo[3,4-b]pyridin-3(1H)-ones are 'spring-loaded' compounds that quantitatively react with iminium salts derived from formaldehyde and secondary amines to yield fluorescent Safirinium dyes. The mechanism and energetics of the above tandem Mannich–electrophilic amination reaction have been investigated experimentally and using theoretical methods. The hybrid B3LYP functional with GD3 empirical dispersion and the range-separated hybrid functional ωB97XD, both combined with a PCM model, were applied to acquire the energetic profiles of the studied reaction with respect to the structure of the secondary amine and isoxazolone used. The diastereoselectivity of the tandem reactions involving the iminium salt derived from L-proline has been rationalized theoretically by means of density functional theory calculations. Introduction The chemical reactions that feature purely kinetic control of the outcome and utilize 'spring-loaded' reactants are of considerable interest in multiple applications that include drug discovery, combinatorial chemistry, target-templated in situ chemistry, proteomics, DNA research and bioconjugation techniques [1]. The commonly recognized high-yielding, thermodynamically favored and wide-in-scope reactions, such as nucleophilic ring-opening reactions of epoxides and aziridines, non-aldol-type carbonyl reactions, and additions to carbon-carbon multiple bonds, have been termed by K. B. Sharpless as "click chemistry" [2]. In the above context we have recently developed the tandem Mannich–electrophilic amination reaction of fluorogenic 4,6-dimethylisoxazolo[3,4-b]pyridin-3(1H)-one or isoxazolo[3,4-b]quinolin-3(1H)-one with formaldehyde and secondary amines that leads to zwitterionic UV-fluorescent Safirinium P and Q dyes, respectively [3,4]. The latter, upon esterification with N-hydroxysuccinimide (NHS), can serve as fluorescent amine-reactive reagents which are useful as fixed-charge derivatization reagents for micellar electrokinetic chromatography (MEKC) and MS proteomic analyses [5], as well as for bioimaging purposes such as staining of bacterial cells and spores [4,6,7]. The tandem reactions of non-fluorescent isoxazolones, formaldehyde and secondary amines, i.e., the syntheses of Safirinium dyes, proceed quantitatively; however, the reaction rates strongly depend on the substitution pattern, which results in reaction times ranging from several minutes to dozens of hours [4]. The aim of the present study was to describe the tandem Mannich–electrophilic amination reactions using commonly recognized theoretical quantum chemical methods [8][9][10] and to identify the steric factors that would limit applications of these processes in a fast and sensitive detection of formaldehyde and fluorescent derivatization of secondary aliphatic amines. Results and Discussion First, we have proved that 4,6-dimethylisoxazolo[3,4-b]pyridin-3(1H)-one (1) undergoes 1,2-nucleophilic addition in a reaction with formaldehyde to afford hemiaminal 2, the structure of which was unambiguously confirmed by single crystal X-ray analysis (Figure 1).
According to our previous studies, the same reaction performed in the presence of a secondary amine (HNR1R2) gives rise to the formation of 2,2-dialkyl-5,7-dimethyl-2,3-dihydro-[1,2,4]triazolo[4,3-a]pyridin-2-ium-8-carboxylates (3a,b), i.e., Safirinium P dyes, by means of the tandem Mannich–electrophilic amination reaction [4]. Hence, acidic isoxazolone 1 in the presence of a base forms salts 1a,b, which further react with formaldehyde to yield iminium salts 1a,b. Furthermore, the ambident nucleophile and the iminium cations give the Mannich addition products (aminals 1a,b) that spontaneously undergo electrophilic amination reactions via the transition states 1a,b, affording products 3a,b in a quantitative manner. In order to get a better insight into the above chemical transformations, we have performed theoretical studies with use of the DFT B3LYP [11] and ωB97X-D [12] methods as well as a Polarizable Continuum Model (IEF-PCM) [13] implemented in the Gaussian 16 software package [14]. The stationary structures that pertain to the chemical entities presented in Scheme 1 were optimized to confirm that all ground structures, except for the transition states, have only real frequencies. The relative energy comparisons in water and methanol solutions are given in Table 1. As a general observation, the pure B3LYP/PCM density functional without empirical dispersion failed to reasonably reproduce the investigated chemical transformations, since the reaction products 2 and 3a,b were found to be thermodynamically unfavorable in water and methanol, with Gibbs free energies for the latter solvent of 0.6, 5.8, and 0.3 kcal/mol, respectively. This theoretical approach also predicted relatively high energy barriers for the electrophilic amination processes, with high Gibbs free energies of 28.4 and 22.7 kcal/mol for the transition states 1a,b in methanol. Since the correct determination of large molecular structures and their properties requires inclusion of the van der Waals interactions between molecules, we have added Grimme's empirical dispersion corrections [15] to the B3LYP/6-31+G(d) functional; such corrections were found to be reliable in describing large molecular systems [16]. Consequently, the results obtained show that the formation of the Safirinium P dye (3a) is thermodynamically favorable, with ΔG values of −4.0 and −3.1 kcal/mol for reactions carried out in water and methanol, respectively. Similarly, the application of dispersion corrections significantly lowered the calculated electrophilic amination barriers, revealing transition states with ΔG values of 19.1 and 19.5 kcal/mol. The reaction of the sterically constrained 1-methylenepyrrolidinium salt 1b is considerably faster and more exothermic than that involving the unconstrained iminium salt 1a (19.5 …). It should be pointed out that in all cases, except for the pure B3LYP functional, the formation of product 3 is thermodynamically favoured over the reversible production of hemiaminal 2. Moreover, the heterocyclic system of 2 is virtually planar, with the amino N7 atom of the 5-isoxazolone fragment showing a pyramidal arrangement of its bonds, the sum of the three valence angles being equal to 334.4°. This value is consistent with the relatively long bonds formed by N7 to O8 and C6 (1.446 and 1.393 Å, respectively). Our survey of the Cambridge Structural Database (CSD) [17] showed that for N-substituted 5-isoxazolones the N-C bond lengths range from 1.33 to 1.43 Å. Such a broad range of values shows that the amino N atom can change its hybridization state with ease. In our previous work, we have reported that isoxazolone 1 reacts with the iminium salt derived from L-proline only in the presence of a base, e.g., triethylamine (Scheme 2) [5]. Hence, the investigated tandem reaction is a base-promoted process, which in the case of the L-proline transformation leads to the single diastereoisomer 3c.
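To relate free energies of this size to observable behavior, the standard thermodynamic and Eyring relations can be applied. The sketch below is our own illustration using textbook formulas and the ΔG values quoted above; it is not part of the reported analysis:

```python
import math

R = 1.987e-3           # gas constant, kcal/(mol*K)
KB_OVER_H = 2.0837e10  # Boltzmann constant / Planck constant, 1/(s*K)

def equilibrium_constant(dG_kcal, T=298.15):
    """K = exp(-dG/RT) for a reaction free energy dG."""
    return math.exp(-dG_kcal / (R * T))

def eyring_rate(dG_act_kcal, T=298.15):
    """Eyring equation: k = (kB*T/h) * exp(-dG_act/RT), in s^-1."""
    return KB_OVER_H * T * math.exp(-dG_act_kcal / (R * T))

# Formation of dye 3a in water, dG = -4.0 kcal/mol -> strongly product-favored:
print(f"K = {equilibrium_constant(-4.0):.1e}")            # ~8.6e+02
# Amination barriers of 19.1 vs 28.4 kcal/mol (with/without dispersion):
print(f"k(19.1) = {eyring_rate(19.1):.1e} s^-1")          # ~6.2e-02
print(f"k(28.4) = {eyring_rate(28.4):.1e} s^-1")          # ~9.4e-09
```

The roughly 9 kcal/mol drop in the barrier upon adding the dispersion correction corresponds to about seven orders of magnitude in rate, which is why the uncorrected functional is incompatible with the observed reaction times of minutes to hours.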
In order to investigate the nature of the diastereospecific reaction, we have completed theoretical calculations for the reaction paths that lead to both products. Surprisingly, it was found that the thermochemical factors do not favor the formation of either of the examined diastereoisomers 3c (Table 2). Scheme 2. The base-promoted tandem reaction of isoxazolone 1 with the iminium salt derived from L-proline and formaldehyde. These results prompted us to investigate the structure of the iminium zwitterion 1c using quantum chemical calculations (B3LYP/6-31+G(d)). As shown in Figure 2, the highest densities of the lowest unoccupied molecular orbital (LUMO) can be found on the iminium carbon atom. However, the carboxylate group strongly affects the topicity of this atom, favoring Re-face reactivity that results in the formation of the 1R,2S diastereoisomer 3c. An extensive literature review has confirmed the proposed reasoning. Thus, 1-[(2-hydroxy-1-naphthyl)methyl]proline, obtained via Mannich-type condensation from β-naphthol, L-proline and formaldehyde, reacts with boron compounds with high diastereoselectivity [18]. The corresponding energies estimated for the reaction involving isoxazolone 4 (Table 3) are ca. 4 kcal/mol lower than those estimated for the transition state 1a. Analogous tendencies can be observed when comparing the estimated Gibbs free energies for products 3a and 6. Scheme 3. The reaction of isoxazolone 4 with formaldehyde and the synthesis of Safirinium Q dye 6 by means of the tandem Mannich–electrophilic amination reaction. It should be pointed out that the obtained theoretical evaluations match the observed chemical experiments. According to our previous report, reactions involving isoxazolone 1 are rather slow and require heat [4,19]. Conversely, tandem transformations involving isoxazolone 4 are fast (Figure 3). Finally, we have evaluated the scope of the tandem reaction in terms of the steric factors that would limit its applications. As shown in Scheme 4, isoxazolone 4 has been subjected to reactions with piperazine, homopiperazine and two N,N'-dialkylethylenediamines.
Hence, the reaction with the most sterically constrained piperazine gave rise to the formation of a mono-derivative, i.e., 1H-spiro[ [1,2,4]triazolo[4,3-a]quinoline-2,1′piperazin]-2-ium-4-carboxylate (7) as a single product. On the contrary, the reactions with less constrained diamines produced double Mannich-amination products 8a,b and 9. In order to rationalize the difference in reactivity of piperazine and homopiperazine we have performed theoretical computations, analogical to the experiments presented above. Finally, we have evaluated the scope of the tandem reaction in terms of steric factors that would limit its applications. As shown in Scheme 4, isoxazolone 4 has been subjected to reactions with piperazine, homopiperazine and two N,N'-dialkylethylenediamines. Hence, the reaction with the most sterically constrained piperazine gave rise to the formation of a mono-derivative, i.e., 1H-spiro [1,2,4]triazolo[4,3-a]quinoline-2,1 -piperazin]-2-ium-4carboxylate (7) as a single product. On the contrary, the reactions with less constrained diamines produced double Mannich-amination products 8a,b and 9. In order to rationalize the difference in reactivity of piperazine and homopiperazine we have performed theoretical computations, analogical to the experiments presented above. Albeit, the energy barriers for transformations A -> 9 and 7 -> B were found to be comparable, the formation of product B (4.8 and −0.5 kcal/mol) was estimated to be thermodynamically unfavored in comparison to the homopiperazine derivative 9 (1.3 and −3.6 kcal/mol) ( Table 4). Albeit, the energy barriers for transformations A -> 9 and 7 -> B were found to be comparable, the formation of product B (4.8 and −0.5 kcal/mol) was estimated to be thermodynamically unfavored in comparison to the homopiperazine derivative 9 (1.3 and −3.6 kcal/mol) ( Table 4). The structure of ethylenediamine derivative 8b has been confirmed by single crystal X-ray analysis (Figures 4 and 5). The symmetrical internal quaternary salt 8b crystallizes as a pentahydrate. The asymmetric part of the unit cell consists of two halves of 8b occupying special positions of C i symmetry, two molecules of 8b adopting a non-crystallographic C i -symmetric conformation and located in general positions and 15 water molecules. The -CH 2 -N-CH 2 -CH 2 -N-CH 2 -fragment of all molecules is fully extended. In crystal, π-π stacking interactions between the quinoline systems of 8b organize the molecules into two symmetry independent columns along the direction. The water molecules forming a 1D assembly via O-H·O hydrogen bond along occupy a channel formed between four such columns and bind to the carboxylate groups of 8b ( Figure 5). Since 1 H NMR spectra of compounds 8a,b and 9 reveal single molecules, the absolute configurations at the quaternary nitrogen atoms in 8a and 9 have been assigned analogously to the meso isomer 8b, for which 2R2 S configuration has been proven by single crystal X-ray analysis. However, it cannot be ruled out that the reaction mechanisms that underlay the formation of compounds 8a and 9 are different to that of 8b, and hence, these derivatives are obtained as pure enantiomers or their racemic mixtures. The structure of ethylenediamine derivative 8b has been confirmed by single cr X-ray analysis (Figures 4 and 5). The symmetrical internal quaternary salt 8b crysta as a pentahydrate. 
Conclusions

In summary, we have shown that isoxazolo[3,4-b]pyridin-3(1H)-ones form hemiaminals with formaldehyde at the N1 nitrogen atoms. The results of theoretical studies carried out using DFT and PCM methods indicate that the same reaction performed in the presence of a secondary amine leads to thermodynamically favored 2,3-dihydro-[1,2,4]triazolo[4,3-a]pyridin-2-ium-8-carboxylates (Safirinium dyes). It was demonstrated that theoretical replication of the previously reported reactivity of isoxazolones, i.e., the tandem Mannich-electrophilic amination reaction, can be accomplished by application of the B3LYP functional augmented with Grimme's empirical dispersion (B3LYP-D3), as well as by utilization of the range-separated hybrid functional ωB97X-D. Furthermore, it was demonstrated that the diastereoselectivity of the tandem reactions involving L-proline results from the asymmetric LUMO distribution within the iminium salt.
Finally, the performed experiments with a set of ethylenediamine derivatives proved that the studied tandem reactivity of isoxazolones with secondary amines is of a general nature and can be hampered only by sterically constrained starting materials such as N-substituted piperazines.

Theoretical Calculations

All theoretical calculations have been completed with the Gaussian 16 [14] package according to the following methodological procedure. For each chemical entity, the ground-state structure has been obtained by a standard geometry optimization using default G16 thresholds and algorithms. Each stationary point was then characterized by a frequency calculation: starting materials, intermediates and products exhibited all-real (positive) frequencies, whereas transition structures featured a single negative (imaginary) frequency. The vibrational mode pertaining to the negative frequency was animated in each case to confirm that it matched the presumed concerted bond-making/breaking mechanism. The transition states were also confirmed by intrinsic reaction coordinate (IRC) calculations. The standard hybrid Becke-3-Lee-Yang-Parr functional (B3LYP) [11], with and without Grimme's empirical dispersion (GD3) [15,16], as well as the range-separated hybrid functional ωB97X-D [12], were utilized for these calculations. The bulk solvent effects were taken into account in the DFT calculations by means of a Polarizable Continuum Model (IEF-PCM) [13]. Standard basis sets, i.e., 6-31+G(d) and 6-311+G(d,p), have been used in the course of this project. The energies reported are given relative to the most stable conformers of the reactants. Gibbs free energies (ΔG), including the zero-point correction, temperature correction, and vibrational energy, were computed for standard conditions (T = 298.15 K, P = 1.0 atm) using the harmonic oscillator approximation.

X-ray Crystallography

Diffraction experiments were carried out at room temperature with an Oxford Diffraction Xcalibur E diffractometer using Mo Kα radiation for 2, and at 131 K with an Oxford Diffraction SuperNova diffractometer using Cu Kα radiation for 8b. Diffraction data were processed with the CrysAlisPro software [21]. In the case of 2, the structure was determined from a twinned specimen. The structures were solved with the program SHELXT [22] and refined by the full-matrix least-squares method on F2 with SHELXL-2018/3 [23] within the Olex2 software [24]. Hydrogen atoms were placed in calculated positions and refined as riding on their carriers, except that of the O-H group in 2, which was freely refined. CCDC 2082365-2082366 contain the supplementary crystallographic data for this paper. These data can be obtained free of charge via http://www.ccdc.cam.ac.uk/conts/retrieving.html (or from the CCDC, 12 Union Road, Cambridge CB2 1EZ, UK; Fax: +44-1223-336033; E-mail: deposit@ccdc.cam.ac.uk). Crystal data for 8b (C38…): 19659 unique reflections (Rint = 0.0341, Rsigma = 0.0394) were used in all calculations. The final R1 was 0.0563 (I > 2σ(I)) and wR2 was 0.1561 (all data). One of the carboxylate groups and one of the ethylene bridges are disordered over two sites. The molecule of 8b is shown in Figure 4.
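The stationary-point bookkeeping described in the Theoretical Calculations section can be illustrated with a short sketch; the frequencies and Gibbs free energies below are hypothetical placeholders, not values from this work.

```python
# Minimal sketch of the stationary-point checks described above.
# Frequencies (cm^-1) and Gibbs free energies (hartree) are hypothetical.

HARTREE_TO_KCAL = 627.509  # kcal/mol per hartree

def classify_stationary_point(frequencies):
    """Minima must have all-real (positive) frequencies; a transition
    state must have exactly one imaginary (negative) frequency."""
    n_imag = sum(1 for f in frequencies if f < 0)
    if n_imag == 0:
        return "minimum"
    if n_imag == 1:
        return "transition state"
    return f"higher-order saddle point ({n_imag} imaginary modes)"

def relative_gibbs(energies_hartree, reference):
    """Gibbs free energies relative to the most stable reactant conformer,
    in kcal/mol (thermal corrections assumed already included)."""
    ref = energies_hartree[reference]
    return {name: (g - ref) * HARTREE_TO_KCAL
            for name, g in energies_hartree.items()}

# Hypothetical example: a reactant minimum and a transition structure.
freqs = {"reactant": [35.2, 78.9, 1650.1], "TS": [-412.7, 55.0, 1600.3]}
for name, f in freqs.items():
    print(name, "->", classify_stationary_point(f))

G = {"reactant": -1025.123456, "TS": -1025.091234, "product": -1025.140000}
print(relative_gibbs(G, reference="reactant"))
```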
4,886
2021-06-14T00:00:00.000
[ "Chemistry" ]
Trimethylamine-N-oxide (TMAO)-induced atherosclerosis is associated with bile acid metabolism

Background Recently, trimethylamine-N-oxide (TMAO) plasma levels have been shown to be associated with atherosclerosis development. Among the strategies aimed at ameliorating atherosclerotic lesions, inducing bile acid synthesis to eliminate excess cholesterol from the body is an effective approach. Individual bile acids, as endogenous ligands for nuclear receptors, have differential effects on regulating bile acid metabolism. It is unclear whether bile acid profiles are mechanistically linked to TMAO-induced development of atherosclerosis. Methods Male apoE−/− mice were fed a control diet containing 0.3% TMAO for 8 weeks. Aortic lesion development and serum lipid profiles were determined. Bile acid profiles in bile, liver and serum were measured by liquid chromatographic separation and mass spectrometric detection (LC-MS). Real-time PCRs were performed to analyze the mRNA expression of genes related to hepatic bile acid metabolism. Results The total plaque area in the aortas increased strongly, 2-fold (P < 0.001), in TMAO-administered mice. The levels of triglyceride (TG), total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) in the TMAO group were also significantly increased, by 25.5% (P = 0.044), 31.2% (P = 0.006) and 28.3% (P = 0.032), respectively. TMAO notably changed bile acid profiles, especially in serum; the most prominent inductions were tauromuricholic acid (TMCA), deoxycholic acid (DCA) and cholic acid (CA). Mechanistically, TMAO inhibited hepatic bile acid synthesis by specifically repressing the classical bile acid synthesis pathway, which might be mediated by activation of the small heterodimer partner (SHP) and farnesoid X receptor (FXR). Conclusions These findings suggest that TMAO accelerates aortic lesion formation in apoE−/− mice by altering bile acid profiles, further activating the nuclear receptors FXR and SHP to inhibit bile acid synthesis by reducing Cyp7a1 expression.

Background Atherosclerosis is one of the most important causes of death and disability throughout the world. Based on careful clinical research, seven prominent contributory causes (risk factors) for atherosclerosis have been identified: increased serum cholesterol, increased blood pressure, diabetes, obesity, a positive family history, smoking, and an atherogenic diet [1]. Recent clinical studies have suggested a correlation between elevated plasma trimethylamine-N-oxide (TMAO) levels and atherosclerosis [2][3][4][5]. It has been shown that TMAO may exacerbate inflammatory reactions of the vascular wall, induce reactive oxygen species production, and impair reverse cholesterol transport, all of which are involved in the development of atherosclerosis [6]. Koeth et al. also showed that TMAO modulated cholesterol and sterol metabolism to promote the progression of atherosclerosis [4]. Bile acid synthesis from cholesterol is the predominant pathway for eliminating excess cholesterol from the body, which contributes to the regression of atherosclerosis [7]. It has been demonstrated that bile acids act as endogenous ligands for the nuclear receptor farnesoid X receptor (FXR), regulating the activity of genes involved in bile acid synthesis, transport, conjugation and excretion [8]. Individual bile acids have differential effects on bile acid signaling in mice, and the activities of individual bile acids vary markedly under physiological and pathophysiological conditions [9].
Studies have shown that tauromuricholic acid (TMCA) is an FXR antagonist [10], whereas unconjugated bile acids (such as chenodeoxycholic acid (CDCA), lithocholic acid (LCA), deoxycholic acid (DCA) and cholic acid (CA)) act as high-affinity ligand agonists of FXR [9,11]. The activation of FXR by bile acids downregulates the expression of Cyp7a1 to limit the synthesis of bile acids in the liver through a feedback mechanism [12]. Therefore, the composition of the bile acid pool plays a key role in cholesterol homeostasis. The aim of the present study was to investigate the association between TMAO-induced atherosclerosis and bile acid metabolism. The bile acid profiles in bile, liver, and serum were examined in apoE−/− mice fed with TMAO. Real-time PCRs were performed to analyze the mRNA expression of genes related to hepatic bile acid metabolism.

Animals and diets Male apoE−/− mice (C57/BL6 background) aged 9 weeks were purchased from Nanjing Qingzilan Co. Ltd. (Nanjing, China). Mice were housed in an air-conditioned room with a 12 h light/dark cycle, a constant temperature of 23 ± 2°C, and a relative humidity of 65 ± 15%. All protocols and procedures followed the guidelines of the ethical committee for experimental animal care at the College of Food Science and Engineering, Ocean University of China. Mice were randomly divided into two groups (n = 8 each): a control group, and a TMAO group fed the control diet supplemented with 0.3% TMAO (Sigma, St. Louis, MO, USA) for 8 weeks. The composition of the diets is shown in Table 1. At the end of the experimental period, mice were sacrificed after a 12 h overnight fast. Serum was collected from blood by centrifugation at 4000 g at 4°C for 10 min and was then stored at −80°C. Fresh tissue samples were fixed for histopathology determinations or were quick-frozen in liquid nitrogen.

Atherosclerotic lesion quantitation and histologic analysis After perfusion with cold PBS (pH 7.4) and 4% paraformaldehyde, the entire aorta was rapidly dissected from the proximal ascending aorta to the iliac bifurcation under a dissecting microscope [13]. The dissected aorta was placed in PBS, and fat and connective tissue adhering to the adventitia were removed as much as possible. The presence of atherosclerotic lesions in the aorta was measured using oil red O staining. Images were recorded using a digital camera (Coolpix 990; Nikon Corp, Tokyo, Japan), and lesion areas were analyzed using the ImageJ program. The aortic sinus was sectioned serially (5-μm intervals) and stained with hematoxylin and eosin (H&E). All images were digitized using a microscope (Olympus AX80; Olympus Optical, Tokyo, Japan) equipped with a high-resolution camera (Nikon D2X; Nikon).

Analysis of bile acid profiles using LC-MS The bile acids were extracted from serum, bile, and liver by protein precipitation using ice-cold acetonitrile. Briefly, 1 mL of ice-cold acetonitrile was added to 100 μL of serum, 100-fold diluted bile, or liver homogenate, and the sample was vortexed and centrifuged at 14800 rpm and 4°C for 10 min. The supernatant was aspirated, evaporated under vacuum, reconstituted in 100 μL of MeOH and deionized water (85:15, v:v), and centrifuged at 14800 rpm and 4°C for 10 min. The subsequent LC-MS analysis was performed as previously described [16]. The mass range in full-scan (FS) mode was recorded from m/z 300 to 600. The data analysis was processed with the Agilent Qualitative Analysis Workstation software.

Quantitative real-time PCR TRIzol reagent (Invitrogen, USA) was used to extract hepatic total RNA. Real-time PCRs were performed as described previously.
Relative mRNA expression levels were determined by the standard curve method, normalized to 18S. The sequences of the primers used are shown in Table 2.

Statistical analysis The results are presented as mean ± standard error of the mean (SEM). All data were subjected to analysis of variance using the SPSS software (version 18.0; SPSS Inc., Chicago, IL, USA). Differences between the means were tested by one-way ANOVA, and all detected significant differences were further evaluated by Student's t-test. The level of significance chosen was P < 0.05.

Results TMAO increased fat mass in apoE−/− mice As shown in Table 3, administration of TMAO for 8 weeks did not change body weight in apoE−/− mice but resulted in a significant increase in fat mass: visceral epididymal and inguinal white adipose tissue increased by 22.8% and 25%, respectively. There were no differences in liver weight between the two groups.

TMAO accelerated atherosclerosis The total plaque area in the aortas increased strongly, 2-fold (P < 0.001), in TMAO-administered mice compared with the control group (Fig. 1a and b). The tissue sections stained with H&E (Fig. 1c) showed unusual medial thickening and high infiltration of macrophages into the adventitia, and identified the accumulation of foam cells in the aortic wall in the TMAO group.

TMAO increased serum lipid concentrations Mice fed with TMAO exhibited significantly higher serum lipid levels. The levels of TG, TC and LDL-C in the TMAO group were significantly increased, by 25.5% (P = 0.044), 31.2% (P = 0.006) and 28.3% (P = 0.032), respectively, compared with those of the control group (Fig. 2a-c). The concentration of HDL-C was unchanged by the TMAO intervention (Fig. 2d). FPLC was used to fractionate the pooled plasma of male apoE−/− mice. The results showed that the increased TC and TG were primarily associated with the VLDL and LDL fractions in the TMAO group (Fig. 2e-f).

TMAO changed bile acid profiles To determine the fate of cholesterol, the bile acid profiles in the liver, bile and serum were measured by quantitative liquid chromatography coupled to mass spectrometry. As shown in Fig. 3a, the percentages of THDCA and TCDCA in bile were increased, while the percentages of TDCA and CA were decreased by TMAO administration. In accordance with bile, TMAO induced a higher proportion of THDCA and a lower proportion of TDCA in the liver (Fig. 3b). In serum, THDCA and DCA were present at higher percentages in the TMAO group than in the control group (Fig. 3c). Notably, in accordance with bile, the proportion of CA in serum was also significantly decreased by TMAO.

TMAO inhibited hepatic bile acid synthesis To determine the basis for TMAO-influenced hepatic bile acid synthesis, mRNA analysis of liver tissue was performed to evaluate the expression of both the classical and alternative bile acid synthesis pathways (Fig. 4a). Whereas Cyp27a1 expression was unaltered, Cyp7a1 expression was reduced by 38.4% in TMAO-treated mice relative to control-diet mice (Fig. 4a), indicating specific downregulation of the classical bile acid synthesis pathway. Treatment with TMAO resulted in significant induction of Abcb11 and Slc10a1 expression, whereas other genes for bile acid transport (Abcc2, Abcc3) and conjugation (Baat, Slc27a5) were unaffected by TMAO. FXR plays a critical role in the regulation of bile acid synthesis and homeostasis. As shown in Fig. 4d, the expression of Nr1h4 and Nr0b2 was upregulated by TMAO administration.
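As a side note on the Methods, the standard-curve quantification normalized to 18S can be sketched in a few lines; all Ct values, dilutions, and gene labels below are hypothetical, not this study's data.

```python
# Minimal sketch of standard-curve qPCR quantification normalized to 18S.
import numpy as np

def standard_curve(log10_quantity, ct):
    """Fit Ct = slope * log10(quantity) + intercept by least squares."""
    slope, intercept = np.polyfit(log10_quantity, ct, 1)
    return slope, intercept

def quantity_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate the starting quantity."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series for a target gene and for 18S.
dilutions = np.log10([1e6, 1e5, 1e4, 1e3, 1e2])
ct_target = np.array([15.1, 18.4, 21.8, 25.2, 28.5])
ct_18s = np.array([9.8, 13.2, 16.5, 19.9, 23.3])

s_t, i_t = standard_curve(dilutions, ct_target)
s_r, i_r = standard_curve(dilutions, ct_18s)

# Relative expression of a sample = target quantity / 18S quantity.
sample_ct_target, sample_ct_18s = 23.0, 15.0
rel = (quantity_from_ct(sample_ct_target, s_t, i_t)
       / quantity_from_ct(sample_ct_18s, s_r, i_r))
print(f"relative expression (arbitrary units): {rel:.3g}")
```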
Discussion Several studies have demonstrated that circulating TMAO levels are associated with obesity, type 2 diabetes and atherosclerosis [17][18][19]. It has been reported that FMO3 expression is linked to lipid and glucose metabolism. The decreased insulin levels in the livers of FMO3−/− mice might result in reduced lipogenesis and further downregulated PPARα expression [20]. Decreased KLF15, which modulates gluconeogenesis [21], together with decreased PPARα expression, promoted inflammation in the livers of FMO3−/− mice. Changes in bile acid metabolism could be linked to these inflammatory effects in the liver, in which increased TNFα was shown to decrease the expression of Cyp7a1 through activation of the MAPK pathway [22]. Dietary choline, a precursor of TMAO, also reduced the bile acid pool size in apoE−/− mice and downregulated the expression of CYP7A1 [23]. Based on these previous studies, however, it is not clear whether bile acid profiles are mechanistically linked to TMAO-induced atherosclerosis. In the present study, we demonstrate the effects of dietary TMAO on bile acid metabolism, especially on bile acid profiles. A series of groundbreaking papers suggested that the plasma TMAO concentration might be a biomarker of atherosclerosis [2,24] and is closely related to dyslipidemia and impaired glucose tolerance [25,26]. TMAO is formed from dietary TMA-containing nutrients, such as lecithin, choline, betaine, or carnitine, which can be metabolized in the gut by bacterial lyases to release TMA; the TMA is then converted to TMAO by hepatic FMO3 [2,4,27]. In our previous study, it was observed that treating mice with choline chloride by oral gavage, or with trimethylammonium chloride by intraperitoneal injection, increased the plasma TMAO level [28]. Wang et al. reported that apoE−/− mice fed diets rich in either choline (0.5% or 1% wt/wt) or TMAO (0.12% wt/wt) for 20 weeks showed increased aortic root lesion size [2]. In accordance with these results, in the present study, apoE−/− mice directly administered TMAO for 8 weeks also showed notable progression of atherosclerosis. Elevated levels of serum cholesterol could aggravate atherosclerosis progression [29]. Herein, we further examined the serum lipid profiles in mice fed the TMAO diet leading to atherosclerosis. The results showed that TMAO intervention caused high serum cholesterol concentrations in normal-diet-fed mice, especially LDL-C. It has been reported that dietary supplementation with high TMAO (1.5% TMAO in water) for 8 weeks could cause hyperlipidemia in normal-diet-fed mice, with elevated levels of serum TC, TG, and LDL-C [4]. However, our previous study showed that there were no differences in serum lipid levels between high-fat-diet mice with or without TMAO intervention [26]. In addition, western-diet-fed apoE−/− mice expressing hCETP and treated with L-carnitine, a TMAO precursor, did not show any differences in lipid content compared with controls [30]. The observed lipid changes may be due to differing responses to high-fat versus normal-fat diets, a hypothesis that needs to be investigated in future research. The hepatic conversion of cholesterol to bile acids and their ultimate excretion into the feces represent the major route for excess cholesterol elimination and are important in whole-body sterol homeostasis [31]. Disruption of normal bile acid synthesis and metabolism is associated with atherosclerosis. Several studies have demonstrated that circulating TMAO is negatively related to bile acid pool size and inhibits bile acid synthesis by decreasing CYP7A1 expression.
The discovery that specific bile acids differentially activate different nuclear receptors (farnesoid X receptor (FXR), pregnane X receptor (PXR) and vitamin D receptor (VDR)) and one G-protein-coupled receptor (TGR5) identified bile acids as hormones that alter multiple metabolic pathways in many tissues [11]. Although the relative importance of individual bile acids in regulating these processes is not completely clarified, several bile acid species, such as CDCA and its conjugated forms, have been identified as FXR agonists [9]. In the present study, the bile acid composition in bile, liver and serum was detected by LC-MS, and the results indicated that the relative proportion of TCDCA was higher in TMAO-supplemented mice, which might contribute to FXR activation and further inhibit bile acid synthesis. The primary bile acids, CA and CDCA, are synthesized in hepatocytes via the cytochrome P450 (CYP)-mediated oxidation of cholesterol [31]. The majority of CDCA is converted to α-muricholic acid (α-MCA) and β-MCA in the liver. When bile acids are excreted into the intestine, β-MCA is 7α-dehydroxylated to form hyodeoxycholic acid (HDCA) [32]. In the present study, the proportion of THDCA was increased and that of TMCA was decreased in the serum of the TMAO group, which indicates that dietary TMAO tends to promote the intestinal formation of HDCA from MCA (an antagonist of FXR). It has been reported that reduced levels of TMCAs could promote FXR-dependent FGF15 expression in the ileum and further inhibit the hepatic expression of CYP7A1 [10]. Previous studies have shown that FXR activation induces SHP, thereby suppressing CYP7A1 expression and ultimately inhibiting bile acid synthesis [33]. Our data showed that the mRNA expression of FXR (encoded by Nr1h4) and SHP (encoded by Nr0b2) was significantly upregulated by TMAO. Notably, the serum proportion of DCA, an FXR activator, was increased by approximately 60% by TMAO compared with the control group. Therefore, the alteration of bile acid composition might be the major cause of the activation of FXR and SHP, further inhibiting bile acid synthesis by repressing the expression of Cyp7a1. To the best of our knowledge, the gut microbiota plays a key role in the pathophysiology of TMAO-induced atherosclerosis. Various studies have investigated the gut microbiota and the key enzymes mediating the formation of TMAO in vivo [23,24]. Here, we focus on the effect of dietary TMAO on bile acid metabolism, demonstrating marked changes in the bile acid profiles of apoE−/− mice. However, one crucial question is how TMAO influences cellular metabolism and whether this effect is direct or indirect. In vivo loss-of-function experiments demonstrated that flavin monooxygenase 3 (FMO3) appears to act as an important regulatory switch integrating cholesterol balance and hepatic inflammatory responses through mechanisms independent of its enzymatic product TMAO [34]. Moreover, a recent paper reported that the gut microbiota could contribute to the conversion of TMAO to TMA in the mouse gut [35]. Therefore, TMAO may not be the "culprit", and attention should be paid to the direct effects of this metabolite in mediating cellular metabolism.

Conclusions These findings suggest that TMAO accelerates aortic lesion formation in apoE−/− mice by altering bile acid profiles, further activating the nuclear receptors FXR and SHP to inhibit bile acid synthesis by reducing Cyp7a1 expression.
3,718.4
2018-12-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
TPS: A Topological Potential Scheme to Predict Influential Network Nodes for Intelligent Communication in Social Networks

The growing popularity of Online Social Networks (OSNs) has prompted an increasing number of companies to promote their brands and products through social media. This paper presents a topological potential scheme for predicting influential nodes in large-scale OSNs to support more intelligent brand communication. We first construct a weighted network model for the users and their relationships extracted from brand-related content in OSNs. We quantitatively measure the individual value of the nodes from both the network-structure and brand-engagement aspects. Moreover, we address the problem of influence decay along with information propagation in social networks and use topological potential theory to evaluate the importance of the nodes by their individual values as well as the individual values of their surrounding nodes. The experimental results show that the proposed method is able to predict influential nodes in large-scale OSNs. We investigate the top-k influential nodes identified by our method in detail; they are quite different from those identified by using pure network structure or individual value. We can obtain an identification result with a higher ratio of verified users and higher user coverage by using our method compared to existing typical approaches.

I. INTRODUCTION

Online Social Networks (OSNs) have become increasingly popular in recent years. With the emergence of the mobile Internet, users are able to enjoy OSNs such as Facebook, Twitter, and Weibo at all times and in all places. Extensive online User-Generated Content (UGC) has been produced on social media and has become an important aspect of electronic Word of Mouth (eWOM). Social media has become an important channel through which companies can release information to and maintain contact with their customers. Therefore, eWOM via social media has become a key driver of brand marketing towards consumers, prompting an increasing number of companies to promote their brands and products through OSNs. From the marketing perspective, the nodes in a large-scale OSN are not equally important. There exist active users in the network who have a certain influence and are also very concerned about particular brands. Obviously, these influential nodes can help companies perform brand communication through social media by affecting other nodes. Therefore, if the influential nodes can be identified within large-scale OSNs, then companies can rely on them for brand communication. Those influential nodes will act as 'bridges' between companies and other consumers. Although there have been a number of previous studies on identifying or predicting influential nodes in OSNs [1], [2], few have addressed the potential significance for brand communication or how to identify influential nodes that are more suitable for promoting brands through social media. Moreover, massive brand-related data have become a kind of big data [3] in OSNs. Therefore, predicting influential nodes within a large-scale OSN for brand communication is still a problem worthy of further study. In this paper, we propose a topological potential scheme for predicting influential nodes in OSNs by considering both network structure and brand engagement factors.
The preliminary results of this study can help companies analyze and discover the characteristics and rules of OSNs to provide decision support for data-driven or intelligent brand communication in social media. In particular, we consider the problem of influence decay in OSNs and apply a topological potential model to identify influential nodes more suitable for brand communication. The major contributions of this study are summarized as follows: 1) We propose to measure the importance of nodes in an OSN by considering both network-structural and content-related metrics, and quantitatively represent it as an individual value. 2) An intelligent topological potential scheme (TPS) is proposed to determine node influence and predict influential nodes in OSNs for brand communication. 3) We collected a real-world dataset from SMZDM.com including more than 40000 users and 60000 social relations. Comprehensive experiments are conducted to validate the effectiveness of our method. The rest of the paper is organized as follows. In Section II, the motivations are introduced and the related works are reviewed. Section III describes the process of predicting influential nodes. Details about the performance evaluation are presented in Section IV. Finally, conclusions and future work are presented in Section V.

II. RELATED WORKS

Currently, many efforts have been made to identify or predict influential nodes in OSNs. In this section, we briefly review the existing works in several categories.

A. Structural Methods

Social network analysis mostly relies on topological metrics [4] such as centrality and community concepts, and many of the terms used to measure these metrics are a reflection of their sociological origin [5]. For example, Freeman [6], [7] illustrates that the centrality of a node indicates the connection ability of the node in the social network structure and can be used as a criterion for measuring the importance of the node. Corley & Sha [8] address the n-most-vital-nodes problem and propose an algorithm for node importance evaluation. Currently, many efforts have been made to discover the most influential nodes for maximizing influence in social networks [9]-[11]. These studies of influence maximization aim to discover nodes that can activate as many nodes as possible, which indicates that the influence of nodes can be propagated as extensively as possible. For example, Zareie et al. [12] introduce two influential-node ranking algorithms that use the diversity of the neighbors of each node [13] to obtain its ranking value. Kumar & Panda [14] propose a coreness-based method to find influential nodes by voting. They also compare the performance of their method with some existing popular methods. Salavati & Abdollahpouri [15] take into account the interactions between users and the network topology in weighted and directed graphs, and consider target users' profit and similarity in identifying influential nodes. Zhang et al. [16] introduce a trust-based influential-node discovery method for identifying influential nodes in social networks. However, their idea of trust between nodes is still based on the topological information of the network. Salavaty et al. [17] develop a formula that integrates the most significant network centrality measures in order to synergize their effects and simultaneously remove their biases to identify the most influential nodes in a complex network. However, their method is mainly used in biological systems. Zhou et al.
[18] intend to solve the problem of finding influential nodes that are able to initiate large-scale spreading processes in a limited amount of time. Amnieh and Kaedi [19] try to use two personality characteristics, openness and extroversion, estimated for network members, to find influential nodes. However, their personality characteristics are still computed based on the network structure. There are also a few methods that take into account the influence of community (or group) structure [20], [21] in the network. Jain & Katarya [22] identify the community structure within the social network and the opinion leader in each community by using a modified firefly algorithm. Srinivas & Rajendran [23] propose an integer linear programming model to detect community structure in real-life networks and also identify the most influential node within each community. Zhao et al. [24] propose an algorithm for identifying influential nodes in social networks with community structure based on label propagation. The proposed algorithm can find the core nodes of different communities in the network through the label propagation process. Generally, these methods identify globally influential users regardless of domain-specific information.

B. Hybrid Methods

The spreading influence of a node on a network depends on a number of factors, including its location in the network, the content of exchanged messages [25], and the character and amount of activity of the node [12]. Therefore, pure network-structural methods are quite insufficient for identifying influential nodes in OSNs. In contrast, hybrid methods combining network structure and content seem to be more suitable for this problem. For example, Aleahmad et al. [1] try to detect the main topics of discussion in a given domain, calculate a score for each user, and then calculate a probability of being an opinion leader by using the scores. Liu et al. [26] take into account the dimensions of trust, domain, and time, and propose a product-review domain-aware approach to identify effective influencers in OSNs. Advertising cost has also been taken into account, in addition to node influentiality, to determine influential users [27], [28]. Zareie et al. [2] measure the interest of users in marketing messages and then propose an algorithm to obtain the set of the most influential users in social networks. Weng et al. [29] propose an extension of the PageRank algorithm called TwitterRank to measure the influence of users on Twitter. They measure influence by taking both the topical similarity between users and the link structure into account. Moreover, many researchers have tried to use ranking models like PageRank for opinion leader detection, especially in combination with topic models, e.g., InfluenceRank [30], OpinionRank [31], Dynamic OpinionRank [32], TopicSimilarRank [33] and others. SuperedgeRank [34] is a mixed framework to find influential users based on supernetwork theory, composed of network topology analysis and text mining. Li et al. [35] develop a ranking framework to automatically identify topic-specific opinion leaders. The score for opinion leadership is computed from four measures: expertise, novelty, influence, and activity. Topic-based methods can also be used to mine influential users in OSNs. For example, Hamzehei et al. [36] propose a topic-based influence measurement approach to integrate user-topic relationships, topic content information, and social connections between users. Fang et al.
[37] address the more important topic-level influence and develop a topic-sensitive influencer mining framework in interest-based OSNs. Although these hybrid methods may gain better performance by combining network-structural features and content-related features, most of them have not addressed the problem of influence decay along with information propagation [38]. In other words, we should consider the influence of users from the perspective of the dynamics of information propagation, rather than as single, static users.

C. Brand Marketing in Social Media

In addition to the methods mentioned above, many efforts have been made in the field of marketing to study how social media can be used to support brand communication or how brands can be promoted in social media. For example, Hajikhani et al. [39] investigate the overall polarity of public sentiment regarding specific companies' products by analyzing content from Twitter. Kabadayi & Price [40] study the factors affecting consumers' liking and commenting behaviors on Facebook brand pages. Schivinski & Dabrowski [41] investigate 504 Facebook users in order to observe the impact of firm-created and user-generated social media communication on brand equity, brand attitude and purchase intention by using a standardized online survey. Jiménez-Castillo & Sánchez-Fernández [42] study how effective digital influencers are in recommending brands via electronic word-of-mouth by examining whether the potential influence they have on their followers may affect brand engagement. Gao & Feng [43] examine the differences in Chinese users' gratifications across different social media and the impact of brand content strategies on the quality of brand-consumer communication via social media. Godey et al. [44] study how social media marketing activities influence brand equity creation and consumers' behavior towards a brand. Veirman et al. [45] explore marketing through Instagram influencers and assess the impact of the number of followers and product divergence on brand attitude in two experiments with fictitious influencer accounts on Instagram. Although many studies have examined brand communication in social media, few existing studies have addressed how to identify and make use of influential nodes for brand marketing on social media.

III. OUR PROPOSED TPS

In this section, we present an intelligent method for predicting influential nodes in OSNs for intelligent brand communication.

A. Weighted Network Model

An OSN can be formally represented as a graph $G = (V, E, W)$, where $V$ denotes the set of people or users that belong to the network and $E$ represents the set of relations between the users. There is an edge between two nodes if they have a social relation. Given two nodes $u_i$ and $u_j$, if $u_j$ follows $u_i$, then there is an edge directed from $u_i$ to $u_j$. Moreover, if the post of a user is commented on by another user, we consider this interaction as another kind of social relation between two users. For example, if $u_j$ comments on a post generated by $u_i$, then there is an edge directed from $u_i$ to $u_j$. If $u_j$ follows $u_i$ or comments on $u_i$'s post, it means that $u_i$ is able to affect $u_j$ or that information can spread from $u_i$ to $u_j$. $W$ indicates a set of weights for the directed edges in $E$. The value of the weights in $W$ denotes the number of relations and interactions between the users.
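As a concrete illustration of this model, the following minimal sketch builds the dual-relation weighted digraph with networkx; the user IDs and interactions are hypothetical.

```python
# Minimal sketch of the weighted, directed network model G = (V, E, W).
# An edge u_i -> u_j means u_j follows u_i or commented on u_i's post;
# its weight counts such relations/interactions.
import networkx as nx

G = nx.DiGraph()

def add_relation(g, u_i, u_j):
    """Add (or reinforce) the directed influence edge u_i -> u_j."""
    if g.has_edge(u_i, u_j):
        g[u_i][u_j]["weight"] += 1
    else:
        g.add_edge(u_i, u_j, weight=1)

# u2 follows u1; u3 comments twice on u1's posts; u1 comments on u3's post.
add_relation(G, "u1", "u2")
add_relation(G, "u1", "u3")
add_relation(G, "u1", "u3")
add_relation(G, "u3", "u1")

print(list(G.edges(data=True)))
# [('u1', 'u2', {'weight': 1}), ('u1', 'u3', {'weight': 2}),
#  ('u3', 'u1', {'weight': 1})]
```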
For a specific brand (e.g., a cell phone or cosmetics), we can extract all posts related to it from an OSN and construct a corresponding weighted network model before we start to identify the influential nodes. Then, the task of mining influential nodes can be constrained to a limited space or community. The detailed process of network model construction is as follows: 1) First, we crawl the posts about the brand within a period of time (e.g., one month) from an OSN; the set of posts is denoted $P$. 2) Then we extract the authors of the posts in $P$ and get a set of users, denoted $U$. 3) The relations between the users in $U$ are further extracted and added to a set $R$. Each $r$ in $R$ can be denoted as $\langle u_i, u_j \rangle$, where $u_i$ and $u_j$ are the two users that have the social relation $r$ in the OSN. For each $r$ in $R$, we create a corresponding weight $w$, set $w = 1$, and add $w$ to a weight set $W$. 4) For each user $u$ in $U$, we get the users who follow $u$ in the OSN as a set $U_{uf}$. We also get all of his/her posts in $P$, marked as $P_u$ (so $P_u \subseteq P$). For each post $p$ in $P_u$, we get all the users who have commented on $p$ as a set $U_{uc}$. Then we have an extended user set. 5) For each user $u_i$ in the extended set, if $\langle u, u_i \rangle$ is not yet in $R$, it means that this is a newly-found social relation. In this case, we add $\langle u, u_i \rangle$ to $R$, create a new weight $w_i = 1$ for $\langle u, u_i \rangle$, and add $w_i$ to $W$. Moreover, we add $u_i$ to a temporary set $U'$. 6) We update the user set $U$ by performing $U \cup U'$. 7) Finally, we get the weighted network model $G = (U, R, W)$ for a specific brand in the OSN. Here $U$ and $R$ can also be represented by $V$ and $E$.

B. Network Structure Characteristics

In this article, we take into account two typical and frequently-used structural metrics to support our method, namely, outdegree and betweenness centrality. These two metrics can be used to measure the scope of nodes' influence and their ability to control the community in the network. Given a network $G = (V, E, W)$, the outdegree of a node can be formally denoted by the following equation:

$od(u_i) = \sum_{u_j \in N} w_{i,j}$ (1)

where $u_i$ and $u_j$ represent two nodes in the network, $r(u_i, u_j) \in E$ represents a directed edge from $u_i$ to $u_j$, $w_{i,j} \in W$ represents the weight of the edge, and $N \subseteq V$ represents the adjacent node set of $u_i$. The outdegree of a node is mainly related to the behaviors of following and commenting. Users can follow others in whom they are interested. For an active user $u_i$, the more other users follow $u_i$, the more attractive $u_i$ is and thus the greater his/her ability to influence others. Users can also comment on the posts about which they are concerned. Given a post $p_j$ generated by user $u_i$, the more comments $p_j$ gets, the wider the scope of influence of $p_j$. The more times $u_i$'s posts are commented on, the greater the influence of the information generated by $u_i$. Given three nodes $u_i$, $u_j$, $u_k$, the control ability of $u_i$ over the communication between $u_j$ and $u_k$ is computed by the following equation:

$b_{jk}(u_i) = g_{jk}(u_i) / g_{jk}$ (2)

where $g_{jk}$ represents the total number of shortest paths between $u_j$ and $u_k$, and $g_{jk}(u_i)$ represents the total number of shortest paths between $u_j$ and $u_k$ passing through $u_i$. Note that we only consider the case in which there exists at least one path between the two nodes $u_j$ and $u_k$.
We can calculate the sum of the control capability of $u_i$ with respect to all node pairs in the network and finally obtain the betweenness centrality of $u_i$ as follows:

$bc(u_i) = \sum_{j \neq i \neq k} g_{jk}(u_i) / g_{jk}$ (3)

The betweenness centrality of a node counts the occurrences of the node on the shortest paths between other nodes. That is, if a node is the only way for other nodes in the network to connect with one another, it has a more important position in the network. Given an active user $u_i$, the larger the betweenness centrality of $u_i$, the more important his/her location in the network. As we have mentioned before, the weight of an edge represents the closeness of the relationship between the two nodes. To simplify the calculation of the distance between nodes, we first determine the maximum edge weight $w_{max}$ in the original network and then use Eq. (4) to update the original weight of each edge, so that closer relationships (larger weights) correspond to shorter distances. In this way, we obtain an updated weight set $W'$ for the network. For any node pair $u_i$ and $u_j$ in the network, we use an improved Floyd algorithm to calculate all the shortest paths and the corresponding shortest distances between the two nodes. Then, we can calculate the betweenness centrality for each node. To avoid the impact of excessive differences between the two metrics, we perform a maximum-minimum normalization on each metric $x$ such that both metrics are mapped to the interval [0, 1]:

$x_{norm} = (x - x_{min}) / (x_{max} - x_{min})$ (5)

Therefore, we can get the overall network structure score for a node $u_i$ by combining the two normalized metrics (Eq. (6)), where $od_{norm}(u_i)$ refers to the normalized value of the outdegree and $bc_{norm}(u_i)$ refers to the normalized value of the betweenness centrality. A larger $score_{network}(u_i)$ value implies that node $u_i$ has a more important location in the network from the structural perspective.
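Continuing the sketch above, the structural score of this subsection can be computed as follows. Two details are our own assumptions where the paper's formulas did not survive extraction: the weight inversion of Eq. (4) is taken as w' = w_max + 1 − w, and the combination in Eq. (6) is taken as a plain sum of the two normalized metrics.

```python
# Sketch of the structural score of Sec. III-B for the graph G built above.
import networkx as nx

def structural_scores(g):
    # Weighted outdegree (Eq. (1)): sum of outgoing edge weights.
    od = {u: sum(d["weight"] for _, _, d in g.out_edges(u, data=True))
          for u in g}
    # Invert weights so strong ties become short distances (assumed Eq. (4)).
    w_max = max(d["weight"] for _, _, d in g.edges(data=True))
    for _, _, d in g.edges(data=True):
        d["dist"] = w_max + 1 - d["weight"]
    # Betweenness centrality over shortest "dist" paths (Eqs. (2)-(3)).
    bc = nx.betweenness_centrality(g, weight="dist")

    def minmax(x):  # Eq. (5): map values onto [0, 1]
        lo, hi = min(x.values()), max(x.values())
        return {k: (v - lo) / (hi - lo) if hi > lo else 0.0
                for k, v in x.items()}

    od_n, bc_n = minmax(od), minmax(bc)
    return {u: od_n[u] + bc_n[u] for u in g}  # assumed combination, Eq. (6)
```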
C. Brand Engagement-Based Value

In the context of brand communication, considering only the network-structural metrics is insufficient to discover the real influential nodes. We should also take into account content-related metrics to measure the individual value of the nodes in an OSN. To identify influential nodes that are suitable for the communication and marketing of a specific brand, we should check whether a user is concerned about the brand. Therefore, we measure the value of nodes from the perspective of brand loyalty [46] or brand engagement [47] in addition to network structure. As brand engagement is directly related to users' behaviors [48] in OSNs, we mainly consider the following four behaviors: 1) Publishing: a user writes or shares posts. 2) Commenting: a user comments on the posts of others. 3) Liking: a user presses the 'like' button below a post. 4) Adding to favorites: a user adds a post to his/her favorites. It is not difficult to quantify the above behaviors. Given a brand $b_j$ and a user $u_i$, we can obtain the number of posts related to $b_j$ that $u_i$ has actively published on his/her personal page. As a potential influential node, he/she should publish and share information related to a certain brand (product, event, etc.) frequently. Moreover, we can also obtain the percentage of positive posts related to $b_j$ published by $u_i$ that are positively commented on, liked and added to favorites by other users. If many users respond positively to the posts, it reflects that $u_i$ is able to evoke the emotional resonance of other users or obtain their support for $b_j$. We illustrate how to measure brand engagement quantitatively by the following steps: 1) Mark the polarity of posts: if the post content is negative about the brand, we mark the post as negative, or with '−'. Similarly, if the post content is nonnegative about the brand, we mark the post as nonnegative, or with '+'. 2) Calculate the support rate of posts: a semantic analysis approach based on a sentiment dictionary is used to evaluate the opinions of other users on specific posts. We evaluate the sentiment polarity of each comment on a post and classify it as negative or nonnegative. Then we calculate the support rate of a post ($p_{support}$) by the following equation:

$p_{support} = (N_{com}^{pos} + N_{favorite} + N_{like}) / (N_{com}^{pos} + N_{com}^{neg} + N_{favorite} + N_{like})$ (7)

where $N_{com}^{pos}$ is the number of nonnegative comments, $N_{com}^{neg}$ represents the number of negative comments, $N_{favorite}$ represents the number of additions to favorites, and $N_{like}$ represents the number of likes. 3) Obtain the brand engagement-based value for a user: we can then obtain the overall brand engagement score for node $u_i$ by using the following equation:

$score_{brand}(u_i) = \sum_{i} post_{polar}^{i} \times p_{support}^{i}$ (8)

where $i$ indexes the $i$-th brand-related post published by $u_i$, $post_{polar}^{i}$ represents the polarity of the $i$-th post, and $p_{support}^{i}$ represents the support rate of the $i$-th post.

D. Measuring a Node's Individual Value

After evaluating each node's characteristics, we can obtain the individual value of each node as the weighted sum of the scores of each factor. We can use entropy theory to determine the weight for the two scores of a node, the so-called entropy weight, and then make a comprehensive and objective evaluation of the individual value of the node. Given $n$ nodes in a network with two scores each, we can construct an $n \times 2$ matrix $R$. Each row in $R$ represents a node, each column represents a score, and item $r_{ij}$ in $R$ represents the $j$-th influence value of the $i$-th node. Let $f_{ij} = r_{ij} / \sum_{i=1}^{n} r_{ij}$ and $\mu = 1 / \ln n$, with the convention that if $f_{ij} = 0$ then $f_{ij} \ln(f_{ij}) = 0$. The entropy value of the $j$-th influence value is defined as follows:

$e_j = -\mu \sum_{i=1}^{n} f_{ij} \ln(f_{ij})$

Then, the entropy weight of the $j$-th influence value is defined as follows:

$w_j = (1 - e_j) / \sum_{k=1}^{2} (1 - e_k)$

We can further measure the individual value of the node as follows:

$value_{indv} = score_{network} \times w_1 + score_{brand} \times w_2$ (12)

As we can calculate the individual value for each node in the network, the individual values of the users can be represented as the weights of the nodes. Therefore, we can obtain a dual-weighted network model for brand communication, formally represented as $G = (V, E, W', A)$, where $W'$ represents the updated weight set for $E$ according to Eq. (4), and $A = \{a_1, a_2, \cdots, a_n\}$ represents the set of individual values for the nodes in $V$.
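The brand-engagement and entropy-weight computations above can be sketched as follows; the support-rate formula mirrors our reconstruction of Eq. (7), and the ±1 polarity encoding is an assumption on our part.

```python
# Sketch of Secs. III-C and III-D: brand-engagement score and
# entropy-weighted individual value.
import math

def support_rate(n_pos_com, n_neg_com, n_favorite, n_like):
    """Reconstructed Eq. (7): share of supportive responses."""
    total = n_pos_com + n_neg_com + n_favorite + n_like
    return (n_pos_com + n_favorite + n_like) / total if total else 0.0

def brand_score(posts):
    """posts: list of (polarity in {+1, -1}, support rate), per Eq. (8)."""
    return sum(polar * p_sup for polar, p_sup in posts)

def entropy_weights(rows):
    """Entropy-weight method over an n x 2 matrix R (n >= 2 assumed)."""
    n, m = len(rows), 2
    raw = []
    for j in range(m):
        col_sum = sum(r[j] for r in rows) or 1.0
        f = [r[j] / col_sum for r in rows]
        e = -(1.0 / math.log(n)) * sum(x * math.log(x) for x in f if x > 0)
        raw.append(1.0 - e)
    s = sum(raw) or 1.0
    return [w / s for w in raw]

def individual_values(score_pairs):
    """Eq. (12): value_indv = w1 * score_network + w2 * score_brand."""
    w1, w2 = entropy_weights(score_pairs)
    return [w1 * net + w2 * brand for net, brand in score_pairs]

# Hypothetical (score_network, score_brand) pairs for four nodes.
print(individual_values([(1.8, 0.9), (0.4, 0.1), (1.1, 0.0), (0.2, 0.7)]))
```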
The ultimate purpose of mining influential nodes in big data is to support more intelligent brand marketing. Thus, influential nodes should have a stronger ability to disseminate marketing information for a brand. Although we have proposed using the individual value to measure the importance of each node in an OSN, we still cannot guarantee that a node with a high individual value always disseminates information efficiently. For example, suppose $u$ is a node with a high individual value, but the individual values of the nodes around $u$ are very low. In this case, the marketing information originating from $u$ may not spread well in the network, as the information dissemination capacity of its surrounding nodes is not strong enough. In other words, although the individual value of $u$ is high, we still cannot consider it an influential node, due to the low individual values of its surrounding nodes. Therefore, when we determine whether a node is an influential node, we should consider not only the individual value of the node itself but also the individual values of its surrounding nodes. Nodes with high individual values can obviously affect their surrounding nodes, but this effect decays as the distance increases [49]. Therefore, we need more relay nodes with high individual values to support more efficient information spreading or dissemination in the network [50]-[52]. To address this issue, we further make use of topological potential theory to determine influential nodes in our method. According to topological potential theory, a node will be affected by other nodes in the network. We improve the typical topological potential equation and calculate the topological potential value as follows:

$\Phi(u_i) = \sum_{j=1}^{n} v_j \times e^{-(d_{ij} / \sigma)^2}$ (13)

where $d_{ij}$ denotes the shortest distance between nodes $u_i$ and $u_j$, the influence factor $\sigma$ is a parameter used to depict the influence range of each node, $v_i$ refers to the individual value of node $u_i$ (contributed by the term $j = i$, since $d_{ii} = 0$), $v_j$ refers to the individual value of node $u_j$, and $\Phi(u_i)$ is the topological potential value of $u_i$. The potential entropy can be calculated as follows:

$H = -\sum_{i=1}^{n} (\Phi(u_i) / Z) \ln(\Phi(u_i) / Z)$ (14)

where $Z = \sum_{i=1}^{n} \Phi(u_i)$ is a normalization factor. If we substitute Eq. (13) into Eq. (14), the potential entropy $H$ is a function of $\sigma$, as illustrated in Fig. 1. According to entropy theory, when the potential entropy is maximal, the uncertainty is also maximal and the network distribution tends to be uniform; in that case, we have $\Phi(u_i)/Z = 1/n$. Therefore, in our method we take the $\sigma$ at which the potential entropy is minimal (see Fig. 1). According to the definition of the potential entropy, we have: 1) When $\sigma \to 0^{+}$, the interaction between any two nodes $u_i$ and $u_j$ vanishes, each node's potential reduces to its own contribution, and the potential entropy approaches the maximum value $\log(n)$. 2) When $\sigma \to +\infty$, the interaction force between two nodes is the same no matter what the distance between them is, so all potentials become equal and, after normalizing by $Z$, the potential entropy again approaches the maximum value $\log(n)$. Therefore, the potential entropy is a function of $\sigma$: the range of $\sigma$ is $(0, +\infty)$ and the range of the potential entropy is $(0, \log(n))$. The potential entropy first decreases monotonically as $\sigma$ increases and, after reaching its minimum, increases monotonically. Thus, we can identify influential nodes in OSNs according to their topological potential values. The details of predicting influential nodes are illustrated in Algorithm I. With the algorithm, we finally get the top n% of items as the recommended influential nodes for brand communication.
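A minimal sketch of the topological-potential ranking follows. The Gaussian kernel is the standard topological-potential field and matches our reconstruction of Eq. (13); the σ grid scanned for the entropy minimum is an arbitrary illustrative choice standing in for the full entropy curve of Fig. 1.

```python
# Sketch of the topological-potential step (Eqs. (13)-(14)).
import math

def potentials(values, dist, sigma):
    """Phi(u_i) = sum_j v_j * exp(-(d_ij / sigma)^2)  (Eq. (13))."""
    n = len(values)
    return [sum(values[j] * math.exp(-(dist[i][j] / sigma) ** 2)
                for j in range(n)) for i in range(n)]

def potential_entropy(phi):
    """H = -sum_i (Phi_i / Z) * ln(Phi_i / Z), Z = sum_i Phi_i  (Eq. (14))."""
    z = sum(phi)
    return -sum((p / z) * math.log(p / z) for p in phi if p > 0)

def rank_nodes(values, dist, sigmas=(0.5, 1.0, 1.5, 2.0, 3.0)):
    # Pick the sigma that minimizes the potential entropy, then rank
    # nodes by their topological potential.
    best_sigma = min(sigmas, key=lambda s: potential_entropy(
        potentials(values, dist, s)))
    phi = potentials(values, dist, best_sigma)
    return sorted(range(len(values)), key=lambda i: phi[i], reverse=True)

# Hypothetical 4-node example: individual values and shortest distances.
v = [0.9, 0.2, 0.7, 0.1]
d = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(rank_nodes(v, d))  # node indices, most influential first
```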
IV. PERFORMANCE EVALUATION AND INDUSTRIAL APPLICATIONS

In this section, we present various experiments to evaluate the performance of the proposed TPS method on a real-world dataset from SMZDM.com.

A. Experimental Setup

To evaluate the proposed TPS, we collected a real dataset from SMZDM.com to carry out the experiments. SMZDM.com is an online shopping guide website in China that integrates product review services (such as those of Yelp) and social network services (similar to Facebook and Twitter). We implemented a crawling program based on Python to crawl brand-related content from the website automatically. The data we extracted are all related to Xiaomi, which is a well-known and typical mobile phone brand in China. We extracted the posts about Xiaomi within a period of time (until August 25, 2019). Thus, we obtained a brand communication dataset for Xiaomi from SMZDM.com to evaluate the performance of our method. We also processed the original dataset by following the steps illustrated in Section III-A. An open-source Chinese language segmentation tool was used to process the posts from the OSN. The number of nodes in the extracted dataset is approximately 40181, and the number of edges is about 60000. Among them, the number of edges with weights greater than or equal to 2 is approximately 37812, accounting for 63% of the total number of edges in the dataset; in addition, the network density is $3.72 \times 10^{-5}$. The noise-reduced dataset has 15895 nodes and 37812 edges in total.

B. Network Characteristics Analysis

We first divide the dataset into several subcommunities and verify the scale-free and small-world properties of these subcommunities. We used the Gephi software to generate an interaction network diagram for the brand communication dataset, as shown in Fig. 2. There exist many subcommunities in this network. We use the modularity function of Gephi to divide the subcommunities. By setting the three parameters Randomize, Use edge weights and Resolution in the software, we find that the modularity of the network and the modularity with resolution are both 0.757, and the number of subcommunities is 1155 (see Fig. 3). As shown in Fig. 3, the number of nodes in most communities is very small; therefore, we only analyze the eight largest subcommunities. As illustrated in Table I, the sum of the internal degrees of each community is much larger than the sum of the external degrees. We further analyze the small-world property of the network for brand communication from an empirical perspective. Table II shows the statistical properties of the eight largest subcommunities; the maximum value of the average path length in the eight subcommunities is 2.5, which means that, on average, one node can reach any other node in a subcommunity in about 2.5 hops. We also obtain clustering coefficients $C \in (0.008, 0.038)$ for the eight subcommunities. In contrast, the clustering coefficients $C_{rand}$ of random networks of the same scale are relatively small. Therefore, it can be concluded that the eight subcommunities demonstrate the characteristics of a small world, and information in a subcommunity can be quickly spread to each part of the subcommunity. The scale-free characteristics of the network are also analyzed through experiments. Fig. 4 and Fig. 5 show the Complementary Cumulative Distribution Function (CCDF) graphs of node indegree and node outdegree, respectively, for the eight subcommunities. By performing a least-squares fit on the node set, we can get the expression for the fitted curve as follows:

$P(K \ge k) \propto k^{-\alpha}$ (15)

According to Eq. (15), we obtain a power-law exponent $\alpha > 0$ for the indegree and outdegree distributions of the eight subcommunities (see Table III), which indicates that there are fewer nodes with a larger indegree and more nodes with a smaller indegree, consistent with the scale-free feature of social networks.
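The scale-free check behind Eq. (15) can be sketched as a least-squares power-law fit of the CCDF in log-log space; the degree sequence below is synthetic, not the SMZDM data.

```python
# Sketch of the degree-distribution fit: P(K >= k) ~ C * k^(-alpha).
import numpy as np

def ccdf(degrees):
    """Empirical complementary cumulative distribution of node degrees."""
    ks = np.sort(np.unique(degrees))
    n = len(degrees)
    probs = np.array([(degrees >= k).sum() / n for k in ks])
    return ks, probs

def fit_power_law(degrees):
    ks, probs = ccdf(np.asarray(degrees))
    mask = ks > 0
    # Linear least squares on log-log scale: log P = -alpha * log k + log C.
    slope, _ = np.polyfit(np.log(ks[mask]), np.log(probs[mask]), 1)
    return -slope  # alpha > 0 indicates a heavy-tailed, scale-free pattern

rng = np.random.default_rng(0)
sample = rng.zipf(2.2, size=2000)  # hypothetical scale-free-like degrees
print(f"estimated alpha: {fit_power_law(sample):.2f}")
```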
In other words, only a few members participate deeply in the network for brand communication, and they are the main promoters of the development of the brand community. The statistical results show a degree correlation coefficient $\gamma < 0$ for the eight subcommunities; that is, the nodes with higher degrees are mostly connected with the nodes with lower degrees. In other words, in the process of information spreading, information tends to flow from influential nodes to common nodes in the network.

C. Influential Node Identification

By using the proposed method, we selected the top 20 nodes from the candidate set as the influential nodes, as shown in Table IV. We further divide the top 20 nodes into two groups. The first group of nodes has high individual values. According to Eq. (13), the nodes with high individual values are more likely to be identified as influential nodes. For example, it can be seen from Table IV that nodes 9339612697 and 6390492327 have the highest topological potential values among the 20 nodes. Their brand engagement scores are also larger than those of the other nodes. This means that they have published many posts related to the Xiaomi brand, which are supported by many other users in the network. The second group of nodes does not have high individual values, and some of them even have low individual values. After investigating these nodes further, we find that they have published few posts about the brand but have often commented on brand-related content. For example, the brand engagement score of 6195251507 is 0, which means that the user has not published any brand-related content or that the content has not received any positive comments. These kinds of nodes are usually ignored by the existing methods and thus will not be identified as influential nodes. Although these nodes rarely publish brand-related content directly, they are very concerned about the brand, and their comments can also be an important part of brand marketing in OSNs. We have also identified the top 20 nodes by using the two different metrics separately rather than their topological potential values (see Table V). The influential nodes identified by using network structure scores and individual values are quite similar, as there are 14 nodes in common between the first and second columns of Table V. Moreover, we can see that the influential nodes identified by using topological potential values are quite different from those identified by purely using network structure scores or individual values. The first and third columns have 6 nodes in common, while the second and third columns have 8 nodes in common. It makes sense that a node with a high individual value is more likely to be identified as an influential node. However, it is insufficient to consider the individual value of a single node. The proposed method also considers the individual values of surrounding nodes by using the topological potential model and thus can obtain a more accurate result compared with using pure individual values.

D. Performance Evaluation

We also compare the performance of the proposed TPS with three existing methods for measuring node importance, namely, Weighted PageRank [53], Weighted HITS [54] and IMUD [2]. The top 20 influential nodes identified by the four different methods are shown in Table VI. There are 10 nodes in common among the result sets of Weighted PageRank, Weighted HITS and IMUD. In the result sets of both Weighted PageRank and IMUD, there are only 6 influential nodes in common with that of TPS.
The result set by Weighted HITS has 8 influential nodes in common with that of TPS, which is a little bit larger than that of Weighted PageRank and IMUD. We have also checked the top 20 influential nodes by the other three methods and find that few of them had published or shared enough content about mobile phone or the Xiaomi brand. For example, the first influential user identified by Weighted Pag-eRank, Weighted HITS and IMUD is the same and it is the user 4077360552, while the first one identified by TPS is 9339612697. Although both users have published many posts about mobile phones, the p support value of 9339612697 is much larger than that of 4077360552. The posts of 9339612697 receive more positive comments than those of 4077360552. Moreover, 9339612697 has also written more comments on others' posts than 4077360552. Therefore, 9339612697 is more suitable for promoting mobile phone brand like Xiaomi on social media. Moreover, 9339612697 is also the third influential user identified by Weighted PageRank, and the second one by Weighted HITS and IMUD. It also depicts that the influential users identified by TPS are reliable. Both Weighted PageRank and Weighted HITS only address the relationship between nodes, but they do not take into account the content features of users' posts. Therefore, most influential nodes identified by simply using either Weighted PageRank or Weighted HITS are not very valuable for brand communication. Although IMUD has taken into account the content-related features like the topics and the related messages exchanged by users, they have neglected the factors like sentiment and the impact of surrounding nodes. According to our investigation, there are no widely accepted metrics used to evaluate the performance of influential node mining. In this article, we use the ratio of verified users and the ratio of user coverage to evaluate the performance. The ratio of verified users refers to the proportion of verified users among the collection of influential users. The ratio of user coverage refers to the proportion of the users that can be covered or affected by the top n% influential nodes among the complete set of users. As seen from Table VII, the ratio of verified users of TPS is much higher than that of the other three methods. Using the proposed TPS, 1461 out of 2000 influential users are verified. The comparison of the user coverage ratio is illustrated in Fig. 6. The curves of the three methods begin to flatten when n ! 1. Therefore, if the top 1% of the influential nodes identified by the four methods are considered separately, the proposed method can directly cover more than 60% of users in the network. However, we can see that TPS can cover more users than the other three methods when n ! 1. Additionally, it can be seen that the proposed method can cover almost 100% of users in the sample set when n ! 40, while IMUD, Weighted PageRank and Weighted HITS can only cover 89.1%, 86.4%, 86.6% with the same n. Therefore, we can see that TPS performs better than the other three methods from the perspectives of both the ratio of verified users and the ratio of user coverage. E. Industrial Applications With the popularity of OSNs in our daily life, mining and discovering key opinion leaders or influential nodes from large-scale social networks has become a research hotspot. Currently, increasing number of companies tend to promote their brands or products (especially some newly released ones) through social media instead of traditional media. 
The method proposed in this article can be applied to support intelligent brand communication or marketing in real-life industrial applications. Traditionally, when carrying out social media marketing (or viral marketing), companies' marketing information will be pushed to a group of consumers in OSNs. This group of IV TOP 20 INFLUENTIAL NODES FOR BRAND COMMUNICATION TABLE V TOP 20 INFLUENTIAL NODES BY USING DIFFERENT METRICS consumers is usually selected by the platform. Although some of them may forward the material to others when they receive marketing information, the effect of information spreading is limited due to their influence on OSNs. Therefore, companies want to choose a set of customers to market to that will maximize their Internet profits (profits from sales minus the costs of marketing). The metrics and algorithm proposed in this article can be used to mine and identify influential nodes or users in OSNs (see Fig. 7). After mining a collection of influential users from OSNs, these users are considered seed users. What companies need to do next is to establish trust relationships with these influential users and engage them to promote their brands or products spontaneously on social media. In social media or online communities, eWOM generation can be achieved by influential users after they have positive consumption experiences. As an influential user can always affect a number of common consumers, marketing information can spread quickly through social networks. In this way, companies can promote their brands or products with less cost but better effects on social media. Moreover, companies can even perform personalized recommendations on OSNs through influential users. For consumer-oriented industries [55], companies increasingly rely on social media to promote their brands and products. The technology presented in this article is able to mine influential users from large-scale OSNs, and companies can then improve their marketing strategies with the help of those influential users. V. CONCLUSION AND FUTURE WORK In this article, we mainly address the problem of predicting influential nodes from OSNs for brand communication. We quantitatively measure the individual value of nodes by considering both the network structure and content-related factors. Moreover, an improved topological potential scheme is proposed for predicting influential nodes in OSNs. In the process of mining influential nodes from OSNs, network structure, brand engagement, and topological potential are combined together in our method to overcome the limitations of the existing methods. The computational results suggest that the proposed method is able to predict influential nodes for brand communication in OSNs. We can find out the nodes that have published few posts about the brand but commented on brand-related content a lot, which are usually ignored by the existing methods. Moreover, we can obtain identification results that have a higher ratio of verified users and user coverage by using the proposed method compared to three existing methods. We also consider some possible future directions of this study. For example, we only used the followship and comment relationship between users to model the weighted network. In fact, there exist more deep or potential relationships among users, which can be discovered by using more complex mining algorithms. Therefore, in the future, we can obtain a more complex network model for predicting influential nodes. 
Additionally, we only investigated the characteristics of users in OSNs statically, but we have not considered the impact of time changes. If we take into account the time factor and study the time-dependent trend of user behaviors in OSNs, we can obtain more characteristic information about influential nodes.
9,853.2
2021-01-01T00:00:00.000
[ "Computer Science", "Business" ]
Genomic and post-genomic analyses of human prion diseases Prion diseases share common features of neurodegenerative disorders, infectious diseases and pathologies linked to misfolded proteins. Whether these aspects are independently and fortuitously present in prion diseases or are somewhat linked together remains unsettled, but the contribution of genomic, proteomic, metabolomic and spectroscopic techniques might give insights into this puzzle, and likely give hope for therapy to patients. Although the prion protein gene (PRNP) governs most of the clinical and pathological features of prion diseases and plays a pivotal role in determining host susceptibility, there are still many uncertainties and unknown risk factors that need to be clarified and identified. Several genes, other than PRNP, have recently been found to be associated with a risk of developing sporadic or variant Creutzfeldt-Jakob disease, but these novel data have been produced in a relatively small number of patients and controls and, therefore, need further confirmation. The same criticism applies to the identification of the over 20 new cerebrospinal fluid or plasma markers of disease. Some of these markers seem related to the massive brain damage that occurs, rather than being specific to prion infection. Nevertheless, genomic and post-genomic approaches have shown that these techniques are very powerful, and the best way to overcome the scantiness of samples would be to encourage strong collaboration between different centers of excellence in prion diseases. In this review, we describe the most recent and outstanding advances offered by genomics and post-genomics analyses in the field of human prion diseases. A Ab bs st tr ra ac ct t Prion diseases share common features of neurodegenerative disorders, infectious diseases and pathologies linked to misfolded proteins. Whether these aspects are independently and fortuitously present in prion diseases or are somewhat linked together remains unsettled, but the contribution of genomic, proteomic, metabolomic and spectroscopic techniques might give insights into this puzzle, and likely give hope for therapy to patients. Although the prion protein gene (PRNP) governs most of the clinical and pathological features of prion diseases and plays a pivotal role in determining host susceptibility, there are still many uncertainties and unknown risk factors that need to be clarified and identified. Several genes, other than PRNP, have recently been found to be associated with a risk of developing sporadic or variant Creutzfeldt-Jakob disease, but these novel data have been produced in a relatively small number of patients and controls and, therefore, need further confirmation. The same criticism applies to the identification of the over 20 new cerebrospinal fluid or plasma markers of disease. Some of these markers seem related to the massive brain damage that occurs, rather than being specific to prion infection. Nevertheless, genomic and post-genomic approaches have shown that these techniques are very powerful, and the best way to overcome the scantiness of samples would be to encourage strong collaboration between different centers of excellence in prion diseases. In this review, we describe the most recent and outstanding advances offered by genomics and postgenomics analyses in the field of human prion diseases. 
Transmissible spongiform encephalopathies (TSEs), or prion diseases, are a group of fatal neurological disorders that affect humans and animals, and for which there is no available therapy [1]. The basic pathogenic mechanism is linked to post-translational changes of the host cellular prion protein (PrP c ) into a pathological conformer (PrP TSE ) that has a strong tendency to aggregate and form amyloid fibrils [2]. As for the Aβ amyloid present in Alzheimer's disease (AD), it is still unclear whether large aggregates of PrP TSE are more or less toxic to neural cells than small oligomers [3]. In humans, the most common form of disease is sporadic Creutzfeldt-Jakob disease (CJD), which equally affects both females and males of all ages, and of all ethnic groups [4]. Sporadic CJD has an overall mortality rate of approximately 1-2 cases per million people per year, with peak incidence in individuals aged between 60 and 70 years [4]. Approximately 10 to 20% of CJD cases appear within families [4,5] and these forms are always (apart from very few exceptions, for example [6,7]) linked to point or insert mutations in the prion protein gene, PRNP, suggesting that these disorders are strongly linked to PRNP and that, unlike other neurodegenerative disorders such as AD, prion diseases are likely monogenetic. Other rare genetic forms of TSEs are fatal familial insomnia (FFI) and Gerstmann-Sträussler-Scheinker syndrome (GSS). Both sporadic and genetic prion disorders are transmissible to a wide range of laboratory animals (rodents, felines, and non-human primates) by the injection of crude brain homogenates. Depending upon the host, the type of inoculum, and the route of inoculation, the lag period between the time of injection and the development of clinical signs may last for weeks, months or years [8-10]. Around one-third into this asymptomatic period, the host starts producing PrP TSE using its own PrP c as a substrate. At the end of the incubation period the host develops clinical, behavioral, and neurological signs, and finally dies, usually after a few weeks of disease. However, after prion infection, mice with ablated prion protein gene (knock-out mice) do not produce PrP TSE or clinical signs of disease, confirming the pivotal role of PrP c in the pathogenesis of prion disorders [11]. In experimental prion models, treatment with a variety of compounds during the asymptomatic phase of disease delays the formation of PrP TSE and the appearance of clinical signs [12]. In some cases, animals do not even develop disease [12]. However, there is virtually no beneficial effect if the treatment is started after the appearance of clinical signs, suggesting that the only possible approach in humans is prevention rather than therapy [13]. Naturally, prion diseases occur also in sheep and goats (scrapie disease), in cattle (bovine spongiform encephalopathy (BSE), and some very rare variants), and in cervids (chronic wasting disease (CWD)) [2]. A BSE epidemic, sustained by feeding cows with infected rendered meat, has produced a serious worldwide economic and health problem. Thousands of cattle have been killed in Europe and elsewhere to further prevent the rise of the epidemic and the possibility that BSE would transmit to humans. Despite these efforts, however, transmission of BSE to humans occurred in the 1990s and approximately 200 people, mostly in their 20s, died of a novel prion disease (variant CJD) [14]. 
Patients with variant CJD were probably infected via contaminated food in the late 1980s or early 1990s, but it is still unknown how many individuals are currently silently incubating the disease [15]. Occasionally, transmission of prion diseases occurs from man to man via improperly decontaminated surgical instruments, use of biological products taken from cadaveric human tissues [16], blood transfusion, or possibly plasmaderived products (so far these two modes of transmission have occurred only for variant CJD) [17,18]. G Ge en ne et ti ic c a an na al ly ys se es s i in n h hu um ma an n p pr ri io on n d di is se ea as se es s In humans, the PRNP gene is the only strong factor that determines both susceptibility and phenotypes of prion diseases. This gene presents several point or insert mutations that are responsible for the appearance of familial forms of prion diseases, and often each specific mutation is associated with a specific clinico-pathological phenotype [19]. The most striking example is the mutation at codon 178 (substitution of the aspartic acid with asparagine), which gives rise to two different prion diseases depending on whether the mutation co-segregates with methionine (FFI) or valine (genetic CJD) in the polymorphic codon 129. On the other hand, there is also evidence that within the same family, mutated carriers either develop different clinical phenotypes [20,21], develop disease at different ages, or do not develop disease at all [5]. These findings suggest that some other factors are involved in determining susceptibility to the disease [22], but no specific genomic studies have so far been conducted to exploit the possible involvement of other genes. The only exception is the finding that in a large kindred of GSS-affected patients with the proline to leucine mutation at codon 102 of the PRNP gene, apolipoprotein E4 (ApoE4) carriers have a delay in the age of onset of approximately 10 years without, however, any influence on the clinico-phenotype of the disease [21]. Whether ApoE4 influences age at onset in other forms of genetic prion diseases remains to be determined. G Ge en no om mi ic c f fi in nd di in ng gs s i in n h hu um ma an n p pr ri io on n d di is se ea as se es s The major host player in controlling susceptibility to prion diseases is the PRNP gene. This was clearly shown in the pre-prion era by the pivotal genetic work carried out by Alan Dickinson and colleagues [26], who called the prion protein gene in mice the sinc gene (after scrapie incubation period) and postulated that other genes would likely be involved in the pathogenesis of experimental scrapie [27]. The involvement of other genes has been subsequently confirmed in different models of scrapie-infected mice [28] but, until recently, there have been no data for human prion diseases. In this respect, an interesting genome-wide study of genetic risk in a human prion disease was recently performed by Mead and colleagues [29] in a relatively large cohort of patients with various forms of prion diseases (variant, sporadic, iatrogenic CJD and historical kuru patients [30]) in comparison with healthy British and South Fore (for kuru) people. Genomic DNA was mostly extracted from peripheral blood, though some samples were extracted from brain tissue. The major result of this study is the confirmation that the risk of developing prion diseases is strongly associated with the polymorphic codon 129 of the PRNP gene. 
The authors also found single nucleotide polymorphisms (SNPs) contributing to disease risk in the intron of PRNP, upstream of the gene RARB, which encodes the retinoic acid receptor-β protein, and upstream of the gene STMN2, which encodes SCG10/stathmin-like 2, a neuronal growth-associated protein. Genetic risk factors for CJD have previously been identified upstream and downstream of PRNP [31,32], while retinoic acid has been shown to regulate the expression of the prion protein in cell cultures [33], and SCG10 to regulate microtubule stability in neuronal cells, which, in turn, might potentially modulate prion neurotoxicity [34]. It is therefore conceivable that a potential deregulation of RARB and STMN2 might be involved in the pathogenesis of prion diseases, and hence lead to an increased susceptibility of variant or iatrogenic CJD from exogenous exposure. However, the authors [29] could not link the presence of SNPs in the upstream regions of RARB and STMN2 to a modification of their expression and, since these genes are not expressed in blood cells, their products cannot be used as possible markers for prion diseases. In two other studies, the same group [35,36] reported two other genes (SPRN and HECTD2) found to be associated with risk of sporadic and variant CJD. SPRN was identified by comparative gene analysis [37]; it encodes Shadoo (Sho, shadow of prion protein), a highly conserved protein that has possible functional links with the prion protein [38], and different genetic variants have been associated with risk for either variant or sporadic CJD [35]. HECTD2 encodes an E3 ubiquitin ligase involved in regulating the incubation time of scrapie-infected mice [36], and a single SNP, located in the intron of the gene, was significantly over-represented in both variant and sporadic CJD [36]. Moreover, a high level of HECTD2 mRNA expression seems to be linked with variant CJD in the UK population [36]. These studies are of great interest but it is somewhat surprising that upregulation of these genes was not found by the same group in their genome-wide association study for the identification of CJD risk-associated factors [29]. In another study, Xiang and colleagues [39] applied global gene expression microarray technology to the frontal cortex of 15 patients with sporadic CJD and compared the global gene expression with frontal cortical samples of patients dying of unrelated diseases without clinical signs of neurological diseases, and with unremarkable neuropathology. They found several upregulated (n = 79) and downregulated (n = 275) genes in sporadic CJD compared to controls. Some of the upregulated genes are clearly linked to the pathological process of degeneration (for example, those encoding GFAP and S100; the latter protein is also increased in cerebrospinal fluid (CSF) and plasma of CJD patients), or to the immune and inflammatory responses that clearly occur in prion diseases [40]. The upregulation of genes encoding cysteine-rich intracellular proteins with a high capacity to bind to zinc and copper (that is, metallothionein-1 and -2) has also been previously reported in human prion diseases [41]. Reduced expression was observed in genes (SNAP-25 and synaptophysin) that are involved in synaptic function and plasticity and that were previously found at decreased levels in the cerebral cortex of CJD patients [42]. 
This work is of great interest in terms of identifying genes that are involved in the pathological process of prion diseases, but it is necessary to validate these results by using control patients with other neurodegenerative disorders, in order to identify prion-specific genes rather than hundreds of genes that are clearly deregulated during massive brain damage. The only marker that is included in the World Health Organization (WHO) diagnostic criteria for sporadic CJD is 14-3-3 protein in CSF. This marker, alone or in combination with other neuron-specific, brain-derived proteins (neuronspecific enolase, Tau and phosphorylated Tau, and the astrocytic protein S100b), has been extensively evaluated and validated in all forms of human prion diseases (for comprehensive reports see [43][44][45]). However, these tests only reach high levels of sensitivity and specificity if a patient is likely, on clinical grounds, to have sporadic CJD [44]; it is thus important to maintain interest in and focus resources on finding novel and more specific markers for prion diseases. In Table 1, we report novel markers that have been identified in the CSF or plasma of patients with various forms of prion diseases. Data are not always comparable, due to the small number of prion patients and to the choice of controls, often taken from healthy individuals without including patients with different neurodegenerative disorders, rather than being due to the techniques used. These critical aspects were taken into serious consideration by Brechlin and co-workers [46], who applied stringent criteria and appropriate neurological controls for the identification of five possible markers for sporadic CJD using two-dimensional differential gel electrophoresis (2D-DIGE) and matrix-assisted laser desorption ionization (MALDI) mass spectrometry. Interestingly, three of these protein spots were subsequently identified as well-known markers for prion diseases (14-3-3, two spots, and neuronspecific enolase) and the fourth as lactate dehydrogenase, previously reported in sporadic CJD by the same group [47]. The other interesting finding in these studies is that variant and sporadic CJD may present some different biochemical Gene is upregulated in the brain of sCJD; mostly synthesized in the CSF; (not specified) reported normal in AD patients it is also localized in glial cells and neurons ↑sCJD NT [63] ↔sCJD, ↔vCJD NT [54] F2-isoprostanes Markers of lipid peroxidation ↑sCJD, ↑gCJD NT [48] and oxidative stress in vivo ↔vCJD NT [49] Gelsolin Regulator of actin filament assembly ↓sCJD NT [63] No difference between CJD and AD ↓sCJD NT [46] H-FABP Belonging to a family of small, ↑sCJD, ↑vCJD ↑sCJD, ↑vCJD [55] CSF of CJD taken post-mortem while in highly conserved, cytosolic controls taken from living individuals; proteins involved in fatty acid plasma levels do not differ between CJD transport and metabolism and AD ↑sCJD ↑sCJD [56] ↑sCJD NT [69] Hp2-α haptoglobin Binds hemoglobin for physiological ↑sCJD Interleukin 4 and 10 Anti-inflammatory cytokine ↑sCJD NT [71] Not altered in the brain of sCJD Proteomic approaches have also been extensively used to investigate the pathogenesis of prion diseases, but the majority of these studies, even those conducted with experimental animal models, were performed in postmortem brain tissues, and it is therefore difficult to determine whether deregulation of identified proteins is a late result of neurodegeneration or specifically linked to prionspecific lesions. 
However, the finding that levels of proteins known to interact with Ca 2+ , or whose function is regulated by Ca 2+ , are significantly modified in the brains of affected animals [51][52][53] clearly deserves further investigation. By analogy with other neurodegenerative disorders such as AD, the presence of oxidative stress has been investigated in different tissues, including CSF and blood from prionaffected individuals (Table 1). These studies have shown the activation of several pro-and anti-oxidative mechanisms in prion disorders, but these pathways are shared by other neurological disorders and cannot be regarded as prionspecific biomarkers. Besides oxidative mechanisms, an atypical inflammatory response is activated in the central nervous system of prion-infected individuals, and consequently a number of (pro)inflammatory mediators are deregulated in the CSF of patients with prion diseases (Table 1) [40]. These mechanisms, however, are often common to other neurodegenerative disorders and may be of limited value as specific prion-disease markers. Very few studies have been conducted for the identification of markers in human blood or urine [54][55][56][57]. Among them, the heart-fatty acid-binding protein (H-FABP) has been found, by two different groups, to be increased in both CSF and plasma of individuals with sporadic and variant CJD [55,56]. Blood manganese is another promising marker Ubiquitin Involved in ATP-dependent ↑sCJD NT [63] Elevated levels in CSF of AD patients selective degradation of cellular proteins, maintenance of chromatin structure, regulation of gene expression, stress response, and ribosome biogenesis. S Sp pe ec ct tr ro os sc co op pi ic c a an nd d i im ma ag gi in ng g t te ec ch hn ni iq qu ue es s Proton magnetic resonance spectroscopy ( 1 H-MRS) has been extensively applied for detecting metabolic alterations in the brain of prion-diseased patients [58,59]. These studies, though conducted in a very limited number of patients, are very consistent and always confirm a reduction of N-acetylaspartate (NAA; a marker of neuronal loss), concomitant increase of myo-inositol (MI; an astrocyte marker), and a reduction of the NAA:creatine ratio. Interestingly, in a single asymptomatic carrier of the pathogenic mutation P102L (linked to GSS), Waldman and co-workers found an increase of MI with no variation of NAA [60], suggesting that gliosis starts before massive neuronal loss, and that this compound may be a valid candidate as a preclinical marker of prion diseases. The novel technology of atomic dielectric resonance spectroscopy (ADRS [61]) has been demonstrated to discriminate between blood of CJD patients and that of neurological and healthy controls, as well as between sporadic and variant CJD patients, with 100% specificity and sensitivity. Though these data were blind-validated in only ten patients (four variant CJD, three sporadic CJD, and three non-neurological controls), they confirm data that have been previously reported for the sera of scrapieinfected rodents investigated by Fourier transform-infrared (FT-IR) spectroscopy [62]. It would therefore be interesting to extend the result obtained by Fagge and co-workers [61] to a larger number of patients and possibly to asymptomatic PRNP mutated carriers, to determine whether the ADRS signal might be useful to identify prion disease during the pre-or subclinical phase. 
C Co on nc cl lu us si io on ns s Sporadic and variant CJD and most of the related prion disorders are relatively easy to diagnose based upon clinical signs and available instrumental and laboratory tools, which include electroencephalography, brain-imaging techniques and detection of the marker 14-3-3 in the CSF, alone or in combination with other neuron-specific, brain-derived proteins. Thus, in clinical practice the search for other markers in diseased patients is of limited extra value. What is missing, however, is highly predictive markers in easily accessible tissues, such as CSF, blood or urine, that would be able to recognize infected but yet clinically healthy individuals. The best candidate marker would be PrP TSE , but this pathological isoform is either not present or difficult to identify in body fluids. Markers of prion infectivity are also essential for the screening of blood for transfusion and for plasma or urine donations before their use for production of medicinal products. As a result, resources have been devoted to the development of markers of infection aimed at screening of animal-and human-derived biological products, improving diagnostic tools to identify infected individuals in their preclinical stage of disease, and controlling disease progression. These two latter goals would most likely enhance the possibility of developing preclinical therapy in prion diseases and having objective tools for measuring the effectiveness of potential treatments. Two other issues in prion diseases that might be solved by the complementary approaches of genomics, proteomics and metabolomics are the search for genes and proteins, other than PRNP and the encoded prion protein, that might increase the susceptibility of developing prion disease. This would apply to the inherited forms of prion diseases, as outlined above, to sporadic CJD, and, of particular importance, to determining why a widespread population exposure to BSE infection has resulted in only approximately 200 cases of variant CJD. Another issue is the identification of genes that influence disease duration. This topic is clearly important for understanding the pathogenesis of prion diseases and might eventually lead to the development of novel anti-prion compounds, but it is also needed in clinical practice to better formulate the prognosis of patients and, finally, to monitor the efficacy of potential drugs in therapeutic trials. The PRNP gene plays an important role in determining survival, as well as the conformational type of PrP TSE that accumulates in the brain [23]. However, these factors do not fully explain the great variability observed in human prion diseases and it is therefore likely that other genetic or environmental determinants are involved. Genomic studies are not yet available on this issue, but their application will certainly be of great utility to add other pieces to the prion puzzle.
5,122.6
2009-06-22T00:00:00.000
[ "Biology", "Medicine" ]
One-Dimensional Plasmonic Sensors Recent advances in surface plasmon sensors have significantly reduced the limitations of conventional optical sensors. With the recent development of micro- and nano-fabrication technology, miniaturized one-dimensional structures become a promising platform for surface plasmon sensors for its compactness and simple structure. In this review, we describe the generation of surface plasmon polaritons and the resonance conditions. Then we categorize surface plasmon sensors by the physical quantities they detect, elaborating their working principle, performance, and current development. Finally, we summarize both limitations and advances of various design methods to provide an outlook on future directions of this field. INTRODUCTION Optical sensors are used for a broad range of applications, ranging from simple distance detection to providing artificial vision for object recognition. One of the critical challenges that modern sensor industry faces are to explore novel nanostructures with designer functions. Among the other nanotechnologies, the idea of utilizing surface plasmon polaritons (SPPs) proves itself useful over other competitors. Metallic nanostructures are promising for the generation and distribution of electromagnetic radiation in unprecedented ways. SPPs, also known in the literature as surface plasma waves (SPWs) [1], are coherent oscillations of free electrons at the interface between metal and dielectric [2]. They possess a series of novel optical properties, such as local electric field enhancement, deep subwavelength confinement of optical fields, etc. The highly confined electromagnetic field could break the optical diffraction limit, making SPP-based sensors exhibit high sensitivity and miniaturized size [3]. Also, the high energy density in the near field of SPPs contributes significantly to the sensor sensitivity for special applications, such as single molecular sensing. Compared to conventional techniques, such as fluorescence analysis, SPP-based sensors are more compatible with analyte and does not involve additional processes like labeling. And the application of SPPs has gained tremendous attention in optical sensing areas since its first gas sensing demonstration [4]. In the visible and infrared region, SPPs can be supported by one-dimensional structures. However, the electromagnetic characteristics of metals in the terahertz band are similar to perfect electrical conductors (PEC), and cannot support SPPs for practical applications [5]. Therefore, pleated subwavelength structures with different geometric features can support spoof SPPs in the terahertz band for sensing applications [6,7]. Compared with these structures, one-dimensional waveguide structure has properties, such as mass production and low cost. Furthermore, one-dimensional structures are important for the integrated plasmonic circuit, which have attracted increasing attentions for flexible and compact applications in optical sensors [8][9][10]. Additionally, one-dimensional waveguide structure can guide SPPs along metal-dielectric interfaces beyond the diffraction limit and confine light to scales < λ/10 along relatively long distance [11], thus high sensitivity can be achieved in one-dimensional sensors. In this review, we start with a brief introduction of the concept of SPPs at the interface of metal and dielectric interface, followed by a description of excitation and coupling schemes used for one-dimensional waveguiding structures. 
Then we give a short discussion on the distinction between localized surface plasmon polariton (LSPP) for small nanoparticles (NPs) and SPP in elongated nanostructures, such as metallic nanowires (NWs). In the third part, some critical applications for 1-D waveguide are presented, and these include a refractive index, pressure, and biochemical sensing. These demonstrations underline the advantages 1-D nanostructures bring to the nanoscience and nanotechnology field. Finally, we summarize the possible future developments of 1-D waveguide sensors, such as metallic nanowires, etc., in various research areas. Optical Excitation of Surface Plasmon Polaritons To describe these peculiar behaviors of SPPs, we start from the description of the motion of a free electron in metal: where x is the displacement of the electron, m is the electron mass, γ is the damping factor, e is the charge of an electron, E 0 is the amplitude of the external electric field, and ω is the angular frequency of the external electric field. By solving Equation (1), we get the Drude model of free electrons in metal as: where ω p stands for the plasma frequency. We assume γ ≪ ω p and then obtain the relation between the dielectric constant of metal and the frequency of the incident light. SPPs are longitudinal waves propagating along an interface as shown in Figure 1A. The confinement is achieved due to the fact that the wave vector of SPPs is much larger than that of light wave in the dielectric. The wave vector of SPPs propagating along the metal surface is given by where ω is the angular frequency, c is the speed of light in vacuum, ε (ω) , and ε m are the dielectric constants of the dielectric and metal, respectively. For a given wavelength, the light line always lies to the left of the SPP dispersion curve as shown in Figure 1B. The phase-matching condition therefore forbids a direct coupling between 3-dimensional light and 2-dimensional SPP. Various techniques utilizing prisms, gratings, highly focused beam, and optical nanofibers, etc., have been proposed to address this issue. SPPs undergoes severe attenuation in the metal film layer, which decreases the intensity of the electromagnetic field. The propagation length of SPPs is defined as: L typically ranges from 10 to 100 µm in the visible regime [31]. It limits the maximum size of SPP-based devices to ensure that the attenuation of energy is reasonable. The propagation length and penetration depth are both dependent on frequency. For frequencies close to the surface plasma frequency, SPPs exhibit strong field confinement to the interface and a short propagation distance at the same time, which is a trade-off between energy confinement and loss for SPP-based devices. The penetration depth is defined to represent the distance from the interface when the amplitude of SPPs decays by a factor of 1/e. According to the z component of wave vector in the metal layer and that in the dielectric layer solved by Maxwell's equation, the penetration depth is: where k z = k spp 2 − ε i ω c 2 , ε i refers to ε m in the metal layer and ε d in the dielectric layer. In most cases, SPPs penetrate deeper into the dielectric layer than that in the metal layer, as indicated in Figure 1A. In SPP-based sensors, the penetration depth in the dielectric layer determines the actual sensing area. 
Optical Excitation of Localized Surface Plasmon Polaritons As is shown in Figure 1C, in contrast to SPPs that propagate along continuous metal surfaces, LSPPs are non-propagating excitations tightly confined to the nanostructure. Conduction electrons in the NPs oscillate collectively and locally with a resonant frequency, which depends upon the composition, size, geometry, dielectric environment, and particle-to-particle separation of NPs [32]. The excitation of LSPR gives rise to field enhancement of local electromagnetic fields on the surface of an NP or "hot spots" between NPs, and results in strong scattering and the absorption of the incident light. LSPP shows more significant potential for sensing analytes with small concentrations and provides an approach in surface plasmonenhanced sensing. Here we can use the quasi-static approximation ( Figure 1D) since the radius of an NP is much smaller than the wavelength of the incident light. According to the boundary conditions and the dipole model, the polarizability of the particle can be written as where ε and ε m are the dielectric constant of the spherical particle and that of the environment, respectively. Further deduction gives the absorption cross-section and the scattering crosssection of the particle as As is shown in Equations (7) and (8), the scattering crosssection and the absorption cross-section is proportional to the 6th power and 3rd power of the radius, respectively. That is, light scattering accounts for the main contribution for relatively large particles, and for small particles, the proportion of light absorption is more substantial. The quasi-static model used here treats plasmonic particles as dipoles and neglects the delay effect as well as the damping effect. However, larger particles, especially particles with the diameter comparable to the wavelength, cannot be considered as dipoles. Higher-order modes must be taken into account when dealing with these problems. The sensible polarizability of metallic particles is calculated by the modified long-wavelength approximation model (MLWA) [3], which explains perfectly why the redshift of the LSPR peak position as the size of NPs increase, is a more sensible solution for polarizability of large metallic particles. PERFORMANCE EVALUATION OF SURFACE PLASMON SENSORS The principle of SPP sensing is based on the change of the SPP's spectra or intensity upon the change of environment. The first parameter we would take into account when designing a sensor is the sensitivity (S). It is determined by the ratio of the change in sensor output to the difference in the measured parameter. In the SPP-based sensors, the quantity measured is generally the refractive index (n), and the output quantity (Y), which could be the resonant angle, resonant wavelength, intensity of guided waves, and phase shift. According to Equation (9), the sensitivity of intensity interrogation can be expressed in the unit of RIU −1 (RIU for Refractive Index Unit). In SPP sensors with wavelength modulation, the sensor output is the coupling wavelength and the sensitivity unit is usually µm/RIU or nm/RIU, which indicates the spectra position shifts vs. the change of analyte's RI. Moreover, the sensitivity of angular or phase modulation sensors is described in terms of rad/RIU or deg/RIU. By detecting the propagation constant differences, researchers can also achieve sensitivity in the form of rad/(µm·RIU). Usually, sensitivity takes the global RI into account in physical sensing approaches. 
But the sensitivity of an SPPbased sensor only considers the RI changes in a local region, as electromagnetic field is confined tightly near the interface of metallic nanostructures, for example, the local RI difference caused by biomacromolecules. It's worth noting that, in LSPP-based biochemical sensors, the distribution of the electromagnetic field is not uniform on the surface of NPs. Generally, the electric field is distributed at locations with small curvature radius, tips, and gaps. Thus, it is essential to attach molecules to these local areas when designing the sensor to enhance sensitivity. Resolution, or detection limit (DL), is another important parameter which is adjusted by the smallest variation in the environmental refractive index that can be detected by the sensor [33]. The noise of the output signal (σ ) and the sensitivity of the sensor (S) determines it together. Therefore, sensors can exhibit high resolution by improving their signal-to-noise ratio or sensitivity. Aside from the above-mentioned parameters, linearity and dynamic ranges are crucial evaluation parameters that describe the stability of SPP-based sensors. The linearity indicates the ratio of the sensor output to the parameter measurement and represents the sensor's stability during the detection process. A high linearity response of the regression line indicates an excellent sensor [34]. The dynamic range describes the span of the values of the measurand that can be measured by the sensor [35]. As for the refractive index sensors, dynamic range refers to the variety refractive index that sensors can measure under specific accuracy. SURFACE PLASMON SENSORS BASED ON ONE-DIMENSIONAL WAVEGUIDE Recent waveguide-based surface plasmon sensors can be categorized based on the physical quantities they measure. Moreover, to achieve high sensitivity and compactness simultaneously, one-dimensional waveguide structures, such as an integrated waveguide, optical fibers, and nanowires are mainly discussed. Refractive Index Sensors Since the invention of the first SPP-based sensor for gas detection [4], these sensors based on Otto structure and Kretschmann structure have been widely used in the fields of physical, chemical, and biological measurements. The refractive index alters when changes in these measured quantities take place. However, the conventional prism SPP-based sensor has bulky optical and mechanical components and has no advantages in integrated applications. Optical Fiber-Based RI Sensors Optical fiber based SPP sensors provide a favorable choice for miniaturized sensing and are incredibly suitable for in vivo applications. In 1993, Jorgenson et al. [15] proposed the first optical fiber-based SPP configuration without the bulk light coupling prism. By partially removing the fiber cladding and depositing a high reflective layer at the exposed position, a fiberbased SPP refractive index sensor was proposed utilizing the interaction of evanescent waves with SPPs. Scientists proposed several approaches [36][37][38] to enhance the sensitivity of fiber-based SPP sensors. Monzón-Hernández et al. [37] deposited a thin metal layer on a single-mode tapered optical fiber, so the fundamental fiber mode can excite different surface plasmon modes to acquire multiple resonance peaks. The fiber-based sensor achieves a RI resolution of 7 × 10 −7 RIU when monitoring the three most profound peaks. Gupta et al. [38] proposed a fiber-based SPP probe consists of a fiber core, silver layer, silicon layer, and sensing medium. 
This SPP sensor has shown a sensitivity increasing from 2.8452 to 5.1994 µm/RIU when employing a 10-nm-thick silicon layer. Additionally, this silicon layer can prevent the plasmonic layer from oxidation and help tune the resonance. Although optical fiber-based SPP sensors possess the advantages of miniaturization and high sensitivity, their sensing range is usually limited. And the necessary for a spectrometer with an expensive and bulk size makes it challenging to realize the low cost and compact of the overall system. Integrated Waveguide-Based RI Sensors Integrated waveguide SPP sensors are particularly promising in the development of miniaturized multi-channel on-chip sensing devices. Suzuki et al. [39] proposed a sensing system with dual LEDs and monitored the differential signal by photodiodes. This system is low-priced and compact since dual LEDs and photodiodes can replace laser and spectrometer, respectively. The silicon-on-insulator (SOI) rib waveguide with a large cross-section has the characteristics of low transmission loss and integratable with optical fiber communication systems [40]. Yuan et al. proposed an SOI rib waveguide-based sensor by coupling light from single-mode fibers to various units of the SOI rib waveguide array [40]. The analyte refractive index are calculated from the shift of the reflection spectrum. Although the refractive index detection limit is higher (5.3 × 10 −5 RIU) comparing with a single SPP sensor (5.04 × 10 −7 RIU), it is more cost-effective and compact. Imprinting techniques that help fabricate these sensors with high throughput speed further lows the cost [41]. Using this fabrication method, Matsushita et al. fabricated polymer sensor chips with a refractive index resolution of 3.8 × 10 −4 RIU and a noise fluctuation of ∼1.2%. Compared with sensors based on the intensity-detection method, SPP interferometry shows a resolution orders of magnitude higher [42,43] be achieved compared to conventional waveguide SPP sensors when a phase bias is applied in one branch [44]. Based on MZI structure, Nemova et al. [45] explored a sensor tool with the phase Bragg grating imprinted in one branch, which serves for excitation of SPPs. The reported refractive index resolution is 3 × 10 −7 RIU. However, the dynamic range is reduced by approximately two orders of magnitude compared to the intensity measuring sensor. Additionally, interferometry configuration can partially suppress unwanted refractive index changes act on both branches, like temperature or pressure variations. Cheng et al. [46] proposed a novel SPP sensor with an extensive dynamic range, high sensitivity, and compact structure numerically. This sensor includes a GaAs curved waveguide surrounding by an outer gold ring waveguide, as shown in Figure 2A [46]. Since the evanescent field changes with the background refractive index, the background refractive index can be obtained by measuring the output power of the waveguide. In Figure 2B [46], high linearity is achieved in the dynamic range of n = 1-2.36, considering the surface roughness of σ = 5 nm. The numerical resolution is as high as 4.53 × 10 −10 RIU and is the same for both gas and liquid situations. Biochemical Sensors SPP biosensors are the primary technology used to study macromolecules and their functions in life science and medical research. Also, SPP biosensors can be implemented in pollutant detection, social health indicators detection, and food toxin detection. 
SPP biosensors are composed of an SPP sensor and a suitable bio-recognition element. The sensor tracks the refractive index change around the surface when bio-interactions take place, thus providing us the bio-information as designed. Noble Metal Nanowire Based SPP Biochemical Sensors Noble metal NW naturally acts as one-dimensional optical waveguide [47]. Despite its miniaturized footprint, NWs can confine light field tightly around the metal interface and to produce confinement beyond the diffraction limit. NWs have become a novel candidate for biochemical sensing in recent years since they are highly sensitive and are observable under an optical microscope [48]. Focusing light with parallel polarization onto the end of a NW could excite SPPs propagating in the NW. Here, Figure 3A Another approach uses the transmission spectra collected from the NW sensor. Gu et al. from Zhejiang University demonstrate a single-nanowire plasmonic sensor for hydrogen and humidity sensing [20]. During the sensing process, light is coupled from a silica fiber taper to the NWs and is collected by another fiber taper. For hydrogen sensing, using Pd-coated Au NW with an 80 nm diameter and a 25 µm length, an intensity change of ∼13 dB is achieved as the hydrogen concentration varied from 0 to 1.2%. For humidity sensing, polyacrylamide film-supported Ag NW is employed to achieve response time of 5 ms when relative humidity jumps from 82 to 70%, for its small interaction area and short length. An NW-assembled MZI has been proposed by Wang et al. [50]. Two Au NWs and two fiber tapers forms the MZI by delicate micro manipulation. One NW is immersed in the measured liquid while the other is used as a reference. Based on the MZI structure, the molar concentration of benzene can be measured by detecting the propagation constant differences, achieving a sensitivity of 5.5π/(µm·RIU) with 660-nm-wavelength probing [17]. Two commercial Y-couplers are connected and an NWassembled fiber-based plasmonic probe is inserted in one arm. Figure 3B [17] shows the spectral shift of the interference fringes when the probe is exposed to ammonia gas (NH 3 ) of 80 and 160 ppm. This sensor shows a detection limit lower than 80 ppm for NH 3 and a response time of 400 ms (rising time) and 300 ms (falling time). Nanoparticle-Nanowire Hybrid Nanostructures Based Biochemical Sensors SiO x NW-Au NP composites have shown interesting plasmonic properties. Wang et al. [51] utilized a single gold-peapodded silica NWs structure and proposed a photo-enhanced oxygen sensing method. Compared to the bare SiO 2 NWs, Au-NP@SiO 2 NWs exhibit a significantly stronger LSPP-enhanced E field around the Au NPs surface for both TE and TM mode. The induced absorption originated from LSPR in NPs provides improved response and 750 s faster recovery time compared to bare SiO 2 NWs. A systematic and quantitative analysis of Au-NP@SiO x NWs structure is presented by Gentile et al. [52]. Metal oxide semiconductors (MOSs), such as SnO 2 [53,54] and iron oxides [55], are regarded as promising building blocks in biochemical sensing because of their sensitivity in gas sensing. Their success comes from the high surface to volume ratio and sensitive band structure dynamics in both oxidizing and reducing gasses. Embedded with NPs, the gas response performance of MOS-based gas sensors is improved. The hybrid NWs with a wrinkled γ-Fe 2 O 3 outer shell and embedded Au NPs [56] exhibit excellent performance in ethanol sensing with high sensitivity and selectivity. 
Another NPs-decorated MOSs-based sensor [25] is presented for bio-sensing by Kim et al. from Dankook University. The sensor is fabricated by growing the ZnO NWs using hydrothermal synthesis and via the immobilization of Au NPs on the NWs. This hybrid structure sensor is especially useful for sensing prostate-specific antigen (PSA), which is a biomarker for prostate cancer detection and has a low reference level. With a sensitivity of 2.06 pg/ml in PSA detection, the hybrid sensor composed of ZnO NWs and Au NPs is expected to have broad applications in real-time label-free biosensors with high sensitivity. Integrated Waveguide-Based Biochemical Sensors In 2001, Dostálek et al. proposed an SPP sensor based on integrated optical waveguide structure, which consists of a channel waveguide covered with layer supporting SPPs [8]. By acquiring the normalized transmitted spectrum of two different sensing medium, variation of resonant wavelength is determined to quantify the RI of the sensing medium. This sensor shows a sensitivity of 2,100 nm/RIU. The integrated waveguide was fabricated by an ion-exchange method on a BK7 glass substrate, and the biosensor was applied in the detection of human choriogonadotropin (hCG). Another SPP sensor based on a miniaturized germanium-doped silicon dioxide waveguide has been demonstrated to show a slightly higher sensitivity (2,500 nm/RIU) [57]. This biosensor was fabricated by using a plasmaenhanced chemical vapor deposition (PECVD) method, which allows to control the RI difference between core and clad precisely. The waveguide-based biosensor works to monitor the interactions of protein A, monoclonal antibody, and avian leucosis virus. Figure 4 [58] shows a novel planar waveguide SPP sensor based on the Otto configuration. The analyte is placed between the core and gold layer, and this configuration does not require any buffer layer, which makes the design of sensor simple. The inset figure [58] illustrates the shift in resonant wavelength for a small change in RI of analyte. The sensitivity of this sensor can then be computed and the value is 4,300 nm/RIU. Researchers have proposed several biosensors for similar structures [8,59,60], which requires light wave to be TM polarized since TE polarized mode cannot excite surface plasma wave. A polarization wavelength interrogation biosensor proposed by Chen et al. [27] can make both TE polarized mode, and TM polarized mode produces surface plasmonic resonance. This biosensor was experimentally demonstrated to sense the medicine for heart disease (beta-blocker), with the sensitivity of 0.027 and 0.08 nm/ppm for TE polarized mode and TM polarized mode, respectively. The double slot hybrid plasmonic waveguide (DSHP) is an integrated waveguide made on a SiO 2 substrate by depositing Ag layer and etching part of it to create nanoscale slots. The plasmonic resonance shifts with the refractive index change of the liquid detected for estimating the presence of substances like diethyl ether ((C 2 H 5 ) 2 O) [16]. Also, this sensor can be used to detect the percentage of biomedical substances, such as hemoglobin in the blood of homosapiens [18]. A maximum sensitivity of 910 nm/RIU is reported. Force and Pressure Sensors Molecular force and pressure waves are used in various areas, including medical diagnosis, tumor ablation and geophysical exploration. To detect these physical quantities, nanostructurebased sensors are proposed. Ma et al. 
[61] demonstrated a nanofiber-based sensor to detect sound, which is an oscillating pressure wave. The sensor is composed of the SnO 2 nanofiber with compressible polymer cladding deposited on the surface and gold NPs decorating the fiber. Acoustic signatures, i.e., the pressure waves, can be detected by the output intensity of the transmitted light or by the scattering intensity of the individual NPs. This sensor exhibits a sensitivity <10 −8 W/m 2 under an audible frequency of 31 Hz and provides a novel method for acoustic signature analysis in miniaturized systems, such as cells or molecular machines. Based on the similar working principle, a SnO 2 nanofiber based force transducer [62] is developed with a distance sensitivity of angstrom-level and a force sensitivity of 160 fN. Researchers further used the transducer to detect sub-piconewton forces from the swimming action of bacteria with a sensitivity of −30 dB. Since the sensor has the ability to detect forces from multiple nanoparticles on a single fiber and the geometry can be inserted into small analytes, the nanofiberbased pressure sensor has great potential in biomechanical and intracellular studies. Taking advantages of the orientational dependence of LSPR of Au nanorods (NRs), Fu et al. [63] developed a novel pressure sensor, which is a pressure-responsive polymer matrix with Au NRs embedded. Under an applied pressure, the deformation of the surrounding polymer takes place and Au NRs change their orientation, subsequently the intensity ratio of TE mode and TM mode of LSPR changes. The unique NR-based pressure sensor can be utilized for recording local distribution and magnitude of pressure and is particularly suitable for sensing in small areas with complex surface geometries. CONCLUSION In summary, we reviewed low-dimensional SPP sensors in this paper. Table 1 presents the characteristics of some well-known low-dimensional plasmonic sensors. Being a label-free technique with small footprint and high sensitivity, micro-and nanowaveguide-based plasmonic sensing have been demonstrated in numerous areas, such as refractive index sensing, pressure sensing and biochemical sensing, especially. For biochemical sensing, plasmonic NW-based sensors and NPs-NWs hybrid structure based sensors are promising since their ultra-compact structure and high sensitivity for environmental changes. When it comes to the detection limit, medical diagnosis is one of the most demanding fields that require this feature, as SPP sensors with low detection limit can be applied in early detection of biomarkers. These nanosensors may probably find their applications in molecular machines and even cells systems. Another performance parameter, the dynamic range, is crucial for industrial applications, such as environmental monitoring. Despite the high sensitivity compared to other sensing methods it acquires, the signal-to-noise ratio still needs some NR based sensor Au NRs Pressure Record the distribution and magnitude of pressure between two contacting surfaces [63] improvement due to the disturbance from the environment. Notably, the simplicity, specificity, and reliability of NW-based biochemical sensors should all be taken into account when considering the practical sensing devices. The main challenge that SPP sensors face is the high-cost platforms, which is not affordable for small research groups or communities to invest. 
Thus, the challenge of designing a portable SPP-based sensor with high sensitivity, a low detection limit, a broad dynamic range, low cost, and high-throughput fabrication still stands for researchers to address. Looking ahead at future trends in SPP sensing, portable sensors that are user-friendly, smart, and convenient for data transmission could be developed. Artificial intelligence could even be involved to simplify signal acquisition and analysis. For biochemical sensing, the disposability of the sample container should be considered properly in fluidic chip technology. Moreover, slower flow rates and smaller sample volumes in real-time detection will contribute to the promising future of biochemical sensing.
6,069
2020-08-14T00:00:00.000
[ "Physics", "Materials Science" ]
Design and Implementation of Autonomous and Non-Autonomous Time-Delay Chaotic System Based on Field Programmable Analog Array Time-delay chaotic systems can have hyperchaotic attractors with large numbers of positive Lyapunov exponents and can generate highly stochastic and unpredictable time series with simple structures, which makes them very suitable as secure chaotic sources in chaotic secure communications. However, time-delay chaotic systems are generally designed and implemented using analog circuit design techniques. Analog implementations require a variety of electronic components and can be difficult and time-consuming. This problem can now be addressed by using an FPAA (Field-Programmable Analog Array), a programmable device for implementing multiple analog functions via dynamic reconfiguration. In this paper, we introduce two FPAA-based design examples, an autonomous Ikeda system and a non-autonomous Duffing system, to show how an FPAA device is used to design programmable analog time-delay chaotic systems, and we analyze the Shannon entropy and Lyapunov exponents of the time series output by the circuits and the simulated systems. Introduction In secure communication, the cryptogram generator is a key device. It has been shown that in chaotic secure communications this cryptogram generator can be a chaotic system. In chaotic secure communications, the utility of the chaotic system is "encryption"; thus it is valuable to construct a proper chaotic system for chaotic secure communication. A time-delayed chaotic system has a simple structure and a hyperchaotic attractor in phase space, which provides a higher level of security in chaotic secure communications [1][2][3][4]. Therefore, the design and implementation of a time-delayed chaotic system is of high practical importance for increasing the safety of secure communication. The FPAA (Field Programmable Analog Array) is a dynamically programmable analog signal processor device. It provides integrator, comparator, amplifier, inverter, multiplier, delay, and other blocks. These blocks are constructed from a combination of conventional and switched-capacitor circuit elements and are programmed by a host processor [5]. We can easily and flexibly use the FPAA's programming software to design an analog circuit, including a time-delay chaotic system, through the combination of blocks. These pre-constructed analog circuits can then be realized by downloading them to the FPAA development board in real time. Therefore, this programmable device is easier, more efficient, and more economical than the individual operational amplifiers, resistors, capacitors, analog multipliers, and other discrete components used for implementing analog circuit systems. The FPAA has become more and more popular recently [6][7][8][9]. Caponetto used FPAAs to design and implement a fully programmable Chua's circuit and highlighted several advantages of the approach: the design and implementation phases are very simple, and the circuit is totally programmable [10]. Further, Recai Kilic realized FPAA-based Chua's circuit models and a jerk circuit using different nonlinear functions in a programmable and reconfigurable form [11,12]. Moreover, Recai Kilic introduced a universal approach to design and implement programmable analog non-time-delay chaotic systems based on FPAA [13]. After that, Fatma Yildirim Dalkiran and J. C. Sprott realized a fourth-order hyperjerk system based on FPAA [14].
Chunbiao Li designed and implemented chaotic systems with complete amplitude control and constructed infinitely many attractors in a programmable chaotic circuit based on FPAA [15,16]. However, the above researchers did not design or implement time-delay chaotic systems based on the FPAA. Since time-delayed chaotic systems can provide a higher level of security in chaotic secure communications compared with non-time-delay chaotic systems, the design and implementation of a time-delayed chaotic circuit based on FPAA can be very helpful for researchers in chaos-based secure communication. Therefore, in this paper, we aim to introduce a universal approach to design and implement programmable analog time-delay chaotic systems based on FPAA. In this context, the design procedure for an FPAA device will be given first, and then FPAA-based design examples including autonomous and non-autonomous time-delay chaotic circuit models will be introduced. At the same time, we further analyze the Shannon entropy and Lyapunov exponents of the time series output by the circuits and the simulated systems. We hope that these design notes will be a useful practical guide for researchers who wish to experimentally study time-delay chaotic systems. FPAA-Based System Designs In this paper we use the latest integrated-circuit FPAA technique to realize time-delay chaotic systems. The FPAA development software AnadigmDesigner2 was used to design the time-delay systems in Windows. An Anadigm QuadApex development board, shown in Figure 1, with four AN231E04 chips was used to construct circuit implementations of the time-delay chaotic systems. Figure 2 shows the basic flow chart of the FPAA designs. The analog signals of the FPAA are limited to the range -3 V to +3 V. Therefore, we test the system with a numerical simulation before using the FPAA design system, and then decide whether to rescale the system according to the simulation results. When anything noticeable arises, we usually use MATLAB to simulate the system, because the simulation tool of the FPAA is slow and its simulation time is short. The system is then designed in the FPAA design software, which is similar to the Simulink module of MATLAB. After setting up the circuit, the configuration information is downloaded to the FPAA development board by clicking the download button in the software. The experimental results are compared with the simulation results; if the results are satisfactory, the implementation is finished, otherwise the FPAA modeling needs to be modified. Autonomous Time-Delay Ikeda System The Ikeda system is a first-order autonomous time-delay chaotic system [17]. It describes the phase shift in nonlinear optics and presents a variety of periodic bifurcations and chaotic behaviors. The Ikeda system is one of the few delayed chaotic systems that have been studied in depth. The Ikeda system is defined by the state equation dx/dt = -αx(t) + β sin x(t-τ), where x denotes the state variable of the system, α and β are system parameters, and τ represents the delay time, which plays an important role in the system's chaos mechanism. These parameters are determined as τ = 2, α = 1, β = 2. Before implementing the autonomous Ikeda system, the system defined by Equation (1) is tested with a numerical simulation tool. The numerical simulation results of the Ikeda system, obtained with the fourth-order Runge-Kutta method in MATLAB, are illustrated in Figure 3.
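For readers who want to reproduce such a simulation outside MATLAB, below is a minimal Python sketch of fixed-step fourth-order Runge-Kutta integration of the Ikeda delay equation in the standard form assumed above; the delayed term is read from a history buffer, and the constant initial history of 0.5 is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Minimal sketch: RK4 integration of the Ikeda delay equation
# dx/dt = -a*x(t) + b*sin(x(t - tau)) with tau = 2, a = 1, b = 2 (from the text).
a, b, tau = 1.0, 2.0, 2.0
h, t_end = 0.001, 200.0
n_delay = int(round(tau / h))          # delay expressed in integration steps

x = np.zeros(int(t_end / h) + n_delay + 1)
x[:n_delay + 1] = 0.5                  # assumed constant history on [-tau, 0]

def f(xi, x_delayed):
    return -a * xi + b * np.sin(x_delayed)

for i in range(n_delay, len(x) - 1):
    xd0 = x[i - n_delay]                                    # x(t - tau)
    xd1 = 0.5 * (x[i - n_delay] + x[i - n_delay + 1])       # ~x(t - tau + h/2)
    xd2 = x[i - n_delay + 1]                                # ~x(t - tau + h)
    k1 = f(x[i], xd0)
    k2 = f(x[i] + 0.5 * h * k1, xd1)
    k3 = f(x[i] + 0.5 * h * k2, xd1)
    k4 = f(x[i] + h * k3, xd2)
    x[i + 1] = x[i] + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```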
Since the maximum value of |x| exceeded 3 V according to the simulation results, System (1) is rescaled via x → 10x as follows: According to Equation (2), the circuit is constructed. The term sin 10x is implemented by a programmable transfer-function module, with which all kinds of nonlinear functions can be designed and realized; compared with analog circuit design techniques, this makes realizing all kinds of nonlinear systems very easy, efficient, and economical. At the same time, the delay of the circuit is implemented by the delay module, so the delay parameter can be changed easily by programming. Besides, attention should be paid to the time-scale conversion when the analog circuit is constructed from Equation (2). The delay time is 2 and the integration constant of the FPAA integrator is usually set to 0.0025 µs⁻¹; therefore, the actual delay module is set to 800 µs. The circuit diagram constructed with the AD2 software is illustrated in Figure 4. Because the chip resources of the FPAA are limited, one chip cannot contain both the transfer-function module and the delay module, so two FPAA chips are needed for the implementation; the system state variable x is output on IO1. The chaotic dynamics and the chaotic attractor are shown in Figure 5, and the parameters are shown in Figure 6. We then further analyze the Shannon entropy and Lyapunov exponents of the time series output by the Ikeda circuit and the simulated system. We sample 130,000 points from the Ikeda circuit at a sample rate of 256 kHz; correspondingly, the number of points from the simulated Ikeda system is also 130,000, with a simulation step of 0.01. The results are listed in Table 1. The experimental results show that the Shannon entropy and Lyapunov exponents of the Ikeda circuit and the simulated system are approximately equal. Therefore, according to both the pictorial and the quantitative results, an autonomous Ikeda chaotic system has been implemented successfully using the FPAA programmable device. Non-Autonomous Duffing System Apart from the autonomous time-delay chaotic system implemented by FPAA in the above section, FPAAs are also fit to implement non-autonomous time-delay chaotic systems by programming and reconfiguring. In this section, we introduce how a non-autonomous time-delay chaotic system can be implemented using an FPAA device. The Duffing system is one of the most typical and important objects in nonlinear dynamics, because it can model large deformations and similar properties in engineering structures. Many engineering systems can be described by Duffing or Duffing-based oscillators to elucidate their complicated dynamical behaviors and mechanisms. The time-delay equation of the Duffing system is taken from [18]. In that equation, x denotes the state variable of the system, α, c, and k are system parameters, and τ represents the delay time. These parameters are determined as k = 1, c = 0.2, α = 0.5, ω = 1.2, τ = 0.5, µ = 0.05, f = 0.5. This system is appropriate for programmable and reconfigurable design and implementation: new designs are implemented easily and inexpensively by flexibly changing the system parameters of the Duffing system through software. The FPAA device has an internal waveform generator that can produce many waveforms, such as sine waves and square waves, and the frequency, amplitude, and other parameters of the waves can be changed easily by programming.
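Both the 800 µs delay-module setting above and the AC-source frequency computed below for the Duffing drive follow from the same time-scale conversion. A minimal sketch of the arithmetic, assuming the 0.0025 µs⁻¹ integrator constant quoted in the text:

```python
import math

# Minimal sketch of the FPAA time-scale conversion. With an integrator
# constant of 0.0025 per microsecond, one dimensionless model time unit
# corresponds to 1 / 0.0025 = 400 us of circuit time.
K_INT = 0.0025                 # integrator constant, 1/us (from the text)
unit_us = 1.0 / K_INT          # one model time unit in microseconds

tau_ikeda = 2.0                # Ikeda delay in model time units
print(tau_ikeda * unit_us)     # -> 800.0 us, the delay-module setting

omega_duffing = 1.2            # Duffing drive frequency, rad per model unit
f_khz = omega_duffing / (2.0 * math.pi) / unit_us * 1e3
print(round(f_khz, 3))         # -> 0.477 kHz, the AC-source block setting
```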
This internal generator makes it unnecessary to use an external AC source in the FPAA implementation of the Duffing system, and the amplitude and frequency parameters of the sine wave can be easily adjusted by programming. As with the Ikeda system above, the necessary system simulation is required before FPAA modeling. The chaotic dynamics and the chaotic attractor of the simulation are shown in Figure 7. According to the simulation results, System (3) is rescaled via x → 2x as follows: The Duffing system is then modeled with the FPAA software tool. The Duffing system also needs an AC source module, which distinguishes it from the autonomous time-delay chaotic system. Attention should be paid to the time-scale conversion when the analog circuit is constructed from Equation (4). Since ω is 1.2 and the integration constant of the FPAA integrator is usually set to 0.0025 µs⁻¹, the frequency of the AC source block is 0.477 kHz. This model is downloaded to the FPAA development board via a serial port, and the experimental measurements obtained from the I/O connections of the FPAA board are illustrated in Figure 8. The experimental results are shown in Figure 9 and the circuit parameters are displayed in Figure 10. After that, we further analyze the Shannon entropy and Lyapunov exponents of the time series output by the Duffing circuit and the simulated system. As for the Ikeda system, we sample 130,000 points from the Duffing circuit at a sample rate of 256 kHz; the number of points from the simulated Duffing system is 130,000, with a simulation step of 0.01. The results are listed in Table 2. The experimental results show that the Shannon entropy and Lyapunov exponent of the Duffing circuit and the simulated system are approximately equal. Therefore, according to both the pictorial and the quantitative results, a non-autonomous Duffing chaotic system has also been implemented successfully using the FPAA programmable device. Discussion We have designed and realized an autonomous time-delay Ikeda circuit and a non-autonomous time-delay Duffing circuit successfully, as confirmed by the analysis of the phase portraits, Shannon entropy, and Lyapunov exponents. We download time-delay chaotic circuits pre-designed in software to the development board; after this, different chaotic circuits can be implemented on the same development board, which greatly reduces design time. At the same time, the range of chaotic circuits realizable on an FPAA is enlarged by constructing chaotic systems with time delay in this paper. This is helpful for researchers in chaos-based secure communication, because the time-delay chaotic source in chaotic secure communication can be realized by programming and, more importantly, the parameters of the chaotic source can be changed easily in a programmable way. Although the FPAA offers many advantages, there are also limitations. First, some complex time-delay chaotic circuits may not be realizable because the chip resources of the FPAA are limited. Second, the parameter range of the FPAA delay module is limited, so some time-delay chaotic systems with large delays may not be realizable. Finally, the delay module implements the time delay in discrete form; therefore, the time-delay chaotic systems realized on the FPAA are finite-dimensional systems. Conclusions In this paper, we chose the Ikeda and Duffing models as autonomous and non-autonomous design examples to introduce a universal programmable time-delay chaos design approach based on FPAA.
Experimental results agree with the results obtained from simulation, which shows that this programmable design approach will be very useful in many applications based on time-delay chaotic systems. Many time-delay chaotic systems based on mathematical modeling will no longer need complex electronic hardware, and the design and implementation of time-delay chaotic systems will become more efficient, simpler, and more economical. We hope that these design notes will be useful for researchers who wish to experimentally study time-delay chaotic systems.
3,088.4
2019-04-26T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impacts of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is effective enough for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign-bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10^-3 to 10^-4. Introduction Low-Density Parity-Check (LDPC) codes are widely used in space communications due to their capacity-approaching capabilities [1]. The outstanding performance of LDPC is based on soft-decoding algorithms [2], which consume a large number of memories. However, the radiation environment gives rise to fault problems for memories when LDPC decoders are used in spacecraft [3]. Such unreliable storage will severely degrade the performance of LDPC codes. Thus, it is important to consider the robustness of LDPC decoders utilizing unreliable memories. There are studies on the effects of unreliable hardware on LDPC decoders. Varshney considered the thresholds and residual errors of LDPC codes with faulty Gallager A decoding at an early stage [4]. Extended studies on faulty Gallager B decoders were then developed in [5][6][7]. Besides these bit-flipping decoding algorithms, the belief propagation (BP) decoding of LDPC on noisy hardware was studied in [8,9], where infinite-precision messages with additive Gaussian noise were considered. Finite-precision messages for the min-sum decoding of LDPC were studied in [10][11][12]. It was shown that quantizing messages with more bits is not always beneficial for LDPC decoders with hardware errors.
In general, the existing works treated each finite-precision message as an integer, while this paper discusses the various impacts of the different bits of the finite-precision message. We develop a discrete density evolution analysis for LDPC decoders with faulty messages. It indicates that the sign bits of the messages play the most important role in the decoding performance of LDPC codes, which means that protecting the sign bits is efficient enough. To protect the sign bit inside each quantized message, the traditional method is the static triple modular redundancy (TMR) scheme, as applied in [13]. However, since two quantization bits are occupied for protecting the sign bit, the TMR scheme is not always beneficial at various storage error levels, due to the loss of quantization precision. Analyzing the convergence process of LDPC decoding, and referring to the results in [12,14], shows that when the magnitude of a message is small, the precision bits, that is, the least significant bits (LSBs), are non-negligible for decoding performance, while when the message has a large magnitude, the sign bit becomes even more critical for the residual errors. Based on the aforementioned observations, we propose an adaptive embedded coding scheme for the unreliable messages to achieve a robust LDPC decoder. First, we put the messages into packages by taking advantage of the parallel message architecture of quasi-cyclic (QC) LDPC decoders. The structure of the message package permits more efficient block coding schemes for the sign bits than the simple TMR method. Then, two LSBs are adaptively employed for sign-bit protection based on the magnitude level of the message package. Moreover, we introduce a construction of a Hamming product code for the adaptive coding, which has a multistage coding structure and outstanding error-correcting capability. We also discuss low-complexity iterative decoding algorithms for the Hamming product code. Both theoretical analysis and Monte Carlo simulations demonstrate that the proposed adaptive message coding scheme outperforms the TMR scheme in both decoding thresholds and residual errors at various storage error levels. The paper is organized as follows. In Section 2, the system models are introduced. Section 3 presents the discrete density evolution analysis of unreliable LDPC decoders. The adaptive message coding scheme and the construction of the Hamming product code are proposed in Section 4. We give the decoding algorithms for the adaptive Hamming product codes in Section 5. Monte Carlo simulations are provided in Section 6. Section 7 concludes the paper.
System Models 2.1. LDPC Decoder. The hardware architecture of the QC-LDPC decoder is shown in Figure 1; it consists of an interleaver (Π_LDPC), variable node units (VNU), check node units (CNU), and data buffers (RAM). Since the matrix of a QC-LDPC code is divided into subblocks, the decoders are usually implemented with a partially parallel architecture [15][16][17], which means the messages of each subblock are calculated by the same VNU or CNU node in pipelined operations. The constraint of the LDPC code is enforced by the interleaver, which delivers the messages between the VNU and CNU based on the parity-check matrix of the LDPC code in the various decoding algorithms [18,19]. To execute the BP decoding of LDPC, the decoder first obtains the log-likelihood ratio (LLR) from the channel. Then, the VNU and CNU perform iterative computations, in which the internal messages v2c and c2v are produced. Specifically, in the VNU, m_v2c = LLR_v + Σ_{c'∈N(v)\c} m_c'2v, while, in the CNU, m_c2v = 2 tanh⁻¹(∏_{v'∈N(c)\v} tanh(m_v'2c/2)), where N(v) and N(c) are defined as the sets of nodes connected to node v and node c, respectively. These messages are stored in memories during the decoding process. To implement LDPC decoders on integrated circuits, all of the messages are quantized into bits. Existing studies [20] have shown that 4-6-bit quantization of the messages provides an ideal compromise between complexity and performance for LDPC decoders. Among the quantized bits, one is used for the sign, while the rest are used for the magnitude value. Error Model of Memory. For the existing studies on LDPC decoders with faulty hardware, there are several widely accepted error models, as shown in Figure 2: the model in Figure 2(a) is adopted in [4,5,7], and the model in Figure 2(b) is adopted in [8,9]. These models both connect the error-free operation results with error channels, such as the binary symmetric channel (BSC) or the additive white Gaussian noise (AWGN) channel. However, these two error models still have limitations for practical LDPC decoders. For example, the BSC error model is mostly utilized in bit-flipping decoding algorithms, such as Gallager A decoding and Gallager B decoding, which make more sense in theoretical analysis. The AWGN error model is adopted in infinite-precision soft-decoding algorithms, where the messages are in the continuous domain and assumed to be corrupted with Gaussian noise by the faulty hardware. In this paper, we consider practical LDPC decoders, where a finite-precision decoding algorithm is utilized. Following the studies in [11,12], we assume a quantized BSC model for the storage errors, as shown in Figure 3. In the quantized BSC error model, the decoding messages are quantized into bits, each of which is assumed to pass through a BSC error channel. The BSC errors for different bits are assumed to be independent, and the error ratios are assumed to be the same. The cross-over parameter of the BSC channel is the flipping probability of the RAM cell, which is related to the radiation level and the service duration. As shown in Figure 1, there are three memories for message storage: the LLR message storage, the V2C message storage, and the C2V message storage. In this paper, we assume the same bit flipping probability p_0 for all message memories.
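To make the storage model concrete, here is a minimal Python sketch of the quantized BSC: each of the q stored bits of a message is flipped independently with probability p_0. The values and the integer packing of the quantized messages are illustrative assumptions, not the exact decoder data path.

```python
import numpy as np

# Minimal sketch of the quantized BSC storage model: every bit of a stored
# q-bit message passes through an independent BSC with crossover p0.
rng = np.random.default_rng(0)

def store_and_read(messages, q=6, p0=1e-3):
    """messages: integer array of q-bit quantized values in [0, 2**q - 1]."""
    flips = rng.random((messages.size, q)) < p0          # one BSC per bit
    masks = (flips * (1 << np.arange(q))).sum(axis=1)    # pack flips into ints
    return messages ^ masks.astype(messages.dtype)

msgs = rng.integers(0, 64, size=10, dtype=np.int64)      # q = 6 bits
print(store_and_read(msgs, q=6, p0=0.2))                 # large p0 to show flips
```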
Analysis on LDPC Decoder with Unreliable Messages 3.1. Discrete Density Evolution. In this section, we define a discrete density evolution method for the analysis of finite-precision BP decoding of LDPC codes, which gives the decoding thresholds and residual error ratios for LDPC decoders with different message protection schemes. It has been proved by Varshney [4] that the symmetry conditions of density evolution are still suitable for faulty LDPC decoders with symmetric hardware errors. Therefore, we can utilize the discrete density evolution, assuming that all-zero sequences are transmitted, to analyze finite-precision LDPC decoders with the BSC storage models. In this paper, only regular (d_v, d_c) LDPC codes are considered for the sake of simplicity. In the density evolution analysis, we define P^l = {p_1, p_2, ..., p_{2^q - 1}} as the probability mass function (PMF) of the corresponding message at iteration l, where q is the number of quantization bits and p_i is the probability of the i-th quantization symbol. For example, P_ch is the PMF vector of the LLR message from the channel shown in Figure 1, while P^l_v2c is the PMF of the message v2c at the l-th decoding iteration. Since the codewords are assumed to be all-zero sequences, to initialize the discrete density evolution, P_ch is calculated by integrating the channel density over the quantization intervals, where q_i is the value of the i-th quantization symbol, q_0 = -∞, and q_{2^q - 1} = +∞. Meanwhile, P^0_c2v is initialized accordingly. After the initialization, the density evolution executes its iterations. Firstly, in the VNU nodes, the output PMF P_v is obtained by convolving the PMFs of the incoming messages. It is worth noting that, after the convolution operations, we shall combine the extra elements of P_v so as to preserve a length of 2^q - 1. Secondly, in the CNU nodes, the magnitude values of the messages are mapped into the log domain by the function f(x) = -log(tanh(x/2)); the corresponding PMF of the magnitude values is mapped by Λ(P_v2c). With a further convolution in the log domain, the output PMF of the CNU is updated, and, similarly, the extra elements of Γ_v2c shall be combined after the convolution operations. Finally, after the maximum number of iterations, the decoding decision is made in the VNU nodes, where the decision PMF is calculated, and the probability of residual error is obtained from the mass assigned to incorrectly decided symbols. The above is the conventional discrete density evolution method for finite-precision LDPC decoders. However, this paper considers the issue of message storage errors, which means each message suffers a transformation of its PMF outside the nodes. In the following, we model the PMF transformation of an unreliable message in density evolution. Define E = {e_1, e_2, ..., e_q} as the quantization bit error vector, where e_k is the error probability of the k-th quantization bit (e_1 for the sign bit, e_q for the LSB). For example, we can set e_1 = e_2 = ... = e_q = p_0 for the VRAM and CRAM error models described in Section 2.2, where all quantized bits experience the same error probability. Further, define the PMF transfer matrix Π(E), whose entry π(i, j) is the transfer probability from the i-th quantization symbol to the j-th one, calculated as π(i, j) = ∏_{k=1}^{q} f_k(i, j), where f_k(i, j) = 1 - e_k if symbols i and j have the same bit in the k-th quantization position, and f_k(i, j) = e_k otherwise. Since π(i, j) = π(j, i), we know that Π(E) is a symmetric matrix. As a result, the PMF transformation between the RAM's input and output can be described as P_out = P_in Π(E). We can set various error vectors E for the corresponding protection schemes for unreliable messages in the discrete density evolution, which gives the asymptotic performance of the different protection schemes.
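A minimal sketch of this transfer matrix, built over the 2^q raw bit patterns (the text's symmetric alphabet of 2^q - 1 symbols merges the two representations of zero; that detail is omitted here for brevity):

```python
import numpy as np
from itertools import product

# Minimal sketch of the PMF transfer matrix Pi(E): entry (i, j) is the
# probability that stored symbol i is read back as symbol j, given per-bit
# error probabilities E = (e_1, ..., e_q), with e_1 the sign bit.
def transfer_matrix(E):
    q = len(E)
    n = 2 ** q
    Pi = np.ones((n, n))
    for i, j in product(range(n), range(n)):
        for k in range(q):
            bit = 1 << (q - 1 - k)          # k = 0 -> sign (most significant) bit
            same = (i & bit) == (j & bit)
            Pi[i, j] *= (1 - E[k]) if same else E[k]
    return Pi

Pi = transfer_matrix([0.0, 1e-3, 1e-3, 1e-3])   # sign bit protected (e_1 = 0)
assert np.allclose(Pi, Pi.T)                    # symmetric, as noted in the text
assert np.allclose(Pi.sum(axis=1), 1.0)         # each row is a valid distribution
# A stored PMF is then transformed as P_out = P_in @ Pi.
```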
Analysis on Various Bit Errors for Finite-Precision Messages. Based on the discrete density evolution method defined in Section 3.1, a threshold analysis is provided to demonstrate the various effects of the finite-precision message bits. It is shown that the sign bits have the most influence on the decoding thresholds of LDPC codes. We execute the discrete density evolution on a (4, 32) regular LDPC code with the 6-bit quantized decoder used in this paper. To analyze the various effects of the quantized bits, we set E = {0, p_0, p_0, p_0, p_0, p_0} for the memory error model with the single highest bit protected, which means the sign bit is assumed to be error-free. Similarly, models with the several highest bits protected can be defined, where E = {0, ..., 0, p_0, ..., p_0}. The decoding thresholds obtained by the discrete density evolution are shown in Figure 4. We can see that if the sign bit is protected, the threshold is not severely affected, while additional protection of the extra bits provides little gain. Triple Modular Redundancy Scheme for Sign Bit Protection. For LDPC decoders, the cost of protecting every message bit is overwhelming. However, as mentioned before, this is not necessary, since the sign bits have been demonstrated to be the most important. Thus, following the idea of unequal error protection [21], we can simply protect the sign bits to keep the complexity low. In this section, we first introduce the traditional TMR protection scheme for the sign bits and then discuss its advantages and disadvantages. As in [13], TMR has been applied to protect the messages of an LDPC decoder on unreliable hardware. However, TMR charges two extra bits for protecting the sign bit, while the messages are typically quantized into only 4 to 6 bits [20]. As a result, if we maintain the number of message quantization bits under the complexity constraint, introducing TMR brings a loss of quantization precision, which is not always beneficial at various storage error ratios. In the following, using the discrete density evolution method described in Section 3.1, we analyze the performance of the TMR scheme for sign-bit protection. We set q = 6 for the number of quantization bits, which is adopted in most practical LDPC decoders. Moreover, to model the storage error of the TMR-protected messages, the error vector is set to E = {3p_0^2 - 2p_0^3, p_0, p_0, p_0}, which corresponds to an actual 4-bit quantization. With the unprotected LDPC decoder set to E = {p_0, p_0, p_0, p_0, p_0, p_0}, the results of the discrete density evolution under different storage error ratios p_0 are shown in Figure 5. From the analysis, it can be observed that when the storage error ratio is high (e.g., p_0 = 10^-3), the LDPC decoder without protection no longer works, while the TMR-protected one works with a dramatic degradation of the decoding threshold. However, when the storage error ratio is low enough, at p_0 = 10^-4 and p_0 = 10^-5, the TMR-protected LDPC decoders show disadvantages in decoding threshold compared with the unprotected ones, due to the loss of quantization precision. Nevertheless, the TMR protection scheme still has its advantages: we can observe that TMR-protected decoders have lower decoding residual errors at all levels of storage error ratios.
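The TMR sign-bit term used in the error vector above is simple to verify numerically: a majority vote over three copies fails when at least two of the copies flip.

```python
# Residual sign-bit error of TMR: P(2 flips) + P(3 flips)
# = 3*p^2*(1 - p) + p^3 = 3*p^2 - 2*p^3,
# matching the first entry of the TMR error vector above.
def tmr_error(p0):
    return 3.0 * p0**2 - 2.0 * p0**3

for p0 in (1e-3, 1e-4, 1e-5):
    print(p0, tmr_error(p0))
```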
Existing Adaptive Message Coding Scheme. We note that a similar adaptive coding scheme for approximate computing with faulty storage has been proposed in [22], where an adaptive message coding scheme for faulty min-sum LDPC decoders is mentioned. In detail, when the messages were written into the RAMs, if the MSB was 1, the last two LSBs were neglected, and the corresponding memory addresses were used for a (3, 1) repetition coding of the sign bit. Otherwise, the messages were stored in the RAMs directly. When the LDPC decoder read a message from the RAMs, the MSB was checked. If the MSB was read as 1, a decoding of the (3, 1) code was executed to obtain the sign bit, while the last two LSBs were selected from {0, 1} randomly. Otherwise, the message was assigned the read values. The aforementioned scheme makes full use of the LSBs in the messages, efficiently protecting the unreliable messages without using any storage redundancy. However, this protection scheme has some disadvantages. Firstly, the adaptive coding is executed inside a single message, which is typically quantized with no more than 7 bits for reasons of complexity [20]. Consequently, there are not enough bits for efficient coding schemes. For example, when the number of quantization bits is from 4 to 6, only simple repetition codes can be utilized, and this scheme is even inapplicable when the messages are quantized into fewer than 4 bits. Secondly, whether the adaptive coding is executed or not is based entirely on the MSB, which is itself subject to storage errors. In such a case, the decoding of the adaptive code may be incorrectly executed, which further degrades the performance of the sign-bit protection. We demonstrate the exact error-correcting performance of the sign bits for this coding scheme as follows. In the first case, where the MSB is 1, the encoding will be executed. If the MSB is read correctly, the (3, 1) code will be properly decoded with an output error ratio of 3p_0^2 - 2p_0^3. If the MSB is read in error, the decoding will be neglected, which results in an error rate of p_0 for the sign bit. That means that the expected error rate for the sign bit is (1 - p_0)(3p_0^2 - 2p_0^3) + p_0^2. In the second case, where the MSB is 0, the error rate should similarly be calculated from the two read outcomes of the MSB. Unfortunately, since the storage error probability is small, when the MSB is 1 this coding scheme cannot achieve the error-correcting capability of the (3, 1) repetition code, while when the MSB is 0, the error probability of the sign bit is even higher than without protection. Adaptive Message Coding Scheme In this section, we first present the architecture of the proposed adaptive coding scheme. Then, a specific construction of a Hamming product code for the adaptive strategy is provided. Next, we analyze the performance of the proposed scheme theoretically. Protecting Sign Bits Utilizing LSBs Adaptively. As analyzed in Section 3.4, protecting the sign bits of unreliable messages by occupying extra bits is not always the best scheme.
The degradation of the decoding threshold is mainly caused by the loss of quantization precision. However, we notice that quantization precision affects decoding performance specifically when the magnitude of the message is small; that is, when the message has a large magnitude, the LSBs are less important. On the other hand, with the convergence of the LDPC decoding process, the sign bits of the messages are shown to have significant effects on the decoding residual errors once most messages have large magnitudes. Based on these observations, and referencing the idea of adaptation as in [23], we introduce an adaptive scheme for protecting the sign bits of unreliable messages. The basic idea is that when the message magnitude is small, the LSBs are used for maintaining quantization precision, while when the magnitude is large enough, the storage space for the LSBs is used to protect the sign bits and ensure a lower residual error. What is more, existing studies execute protection on each single message, where only a simple coding scheme (such as TMR) can be utilized. However, we notice that LDPC decoders are usually implemented with a partially parallel architecture, as described in Section 2.1; in other words, a group of messages is produced simultaneously. This inspires us to put the sign bits into packages so that we can introduce efficient block coding schemes instead of the traditional TMR. As shown in Figure 6, the structure of our proposed adaptive coding scheme is as follows. First, put concurrently produced messages into a package. Then, define T_1 and T_2 as the adaptive thresholds, where 0 < T_1 < T_2 < V_max (V_max is the maximum absolute value of the quantization). Next, when the messages are written into the RAMs, calculate for each message package the average magnitude value m of the messages. Based on the value of m, the adaptive coding is divided into three stages, as below. (i) If 0 ≤ m < T_1, all LSBs of the message package are reserved for quantizing the messages. (ii) If T_1 ≤ m < T_2, the storage space for the last LSB of each message is occupied for coding the sign bits with a code rate of 1/2. (iii) If T_2 ≤ m ≤ V_max, the storage space for the last two LSBs of each message is occupied for coding the sign bits with a code rate of 1/3. Conversely, when the messages are read from the RAMs, adaptive decoding is executed based on the value of m; if the storage of the LSBs has been occupied for the sign bits, the LSBs of the messages are randomly assigned (a minimal sketch of this stage selection follows the code construction below). Construction of Adaptive Hamming Product Code. In this section, we give a specific code construction for the adaptive coding scheme. To adaptively protect the sign bits of the message packages, the ideal block code should have a multistage coding structure, as well as low coding complexity and an appropriate block length. We introduce Hamming product codes as the adaptive package codes based on the following advantages. First, product codes are constructed from several subcodes, so their coding process can easily be designed in multiple stages. Second, Hamming codes have the simplest decoders and encoders among all block codes, consisting of only a few basic logic gates. Moreover, as data are usually operated on in bytes, each containing 8 bits, we choose a modified Hamming product code with (n, k) = (48, 16) to make the package codes suitable for these data operations. It is worth noting that other short algebraic codes could be adopted to constitute the product code, such as the Gray codes in [24], at the cost of complexity.
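A minimal sketch of the stage selection described above; the package size, thresholds, and the 6-bit magnitude range follow the text, while the message values are illustrative.

```python
import numpy as np

# Minimal sketch of the three-stage adaptive package coding: the average
# magnitude of a 16-message package selects how many LSB storage slots are
# reassigned to protect the 16 sign bits.
V_MAX = 31                            # max magnitude for 6-bit sign/magnitude
T1, T2 = 0.4 * V_MAX, 0.8 * V_MAX     # adaptive thresholds from the text

def coding_stage(package):
    """package: array of 16 signed quantized messages. Returns 0, 1 or 2."""
    m_bar = np.abs(package).mean()
    if m_bar < T1:
        return 0    # all LSBs keep quantization precision (no sign-bit code)
    elif m_bar < T2:
        return 1    # last LSB slots reused: rate-1/2 code on the sign bits
    else:
        return 2    # last two LSB slots reused: rate-1/3 code on the sign bits

pkg = np.array([27, -30, 25, -28, 31, 29, -26, 30,
                28, -31, 27, 29, -30, 26, 31, -25])
print(coding_stage(pkg))              # large magnitudes -> stage 2
```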
As shown in Figure 7, the dark points are the sign bits in one message package and the white points are the LSBs. The row and column subcodes are both (8, 4) Hamming codes. For such a multistage (48, 16) Hamming product code with package size N_p = 16, the first coding stage leaves both row and column subcodes inactive when 0 ≤ m < T_1, the second coding stage activates only the row subcodes when T_1 ≤ m < T_2, and the third coding stage activates all subcodes when T_2 ≤ m ≤ V_max. Theoretical Analysis. In this section, we again utilize the discrete density evolution method to analyze our proposed adaptive package coding scheme. As mentioned before, we must deduce the error vector E for the proposed scheme. As defined in Section 4.1, since the stage of the adaptive coding is based on the average magnitude values of the message packages, we first calculate the PMF of the summation of the magnitude values in one message package. First, the PMF of the message magnitudes, |P|, is derived by folding the signed PMF, i.e., adding the probability of each magnitude to that of its negative counterpart. Then, the PMF of the summation of the magnitudes, P_sum, is obtained by repeated convolution of |P|. Based on the PMF of the magnitude summation, we can obtain the probabilities of the three stages of adaptive coding, respectively, by summing P_sum over the corresponding threshold intervals, where mag(i) is the magnitude value corresponding to the i-th element of P_sum. Next, we need to calculate the error ratio for each stage of the adaptive Hamming product codes. For an (n, k, d) block code with a raw error ratio of p_0, an upper bound on the decoded error ratio, P_blk(n, k, d, p_0), is derived by summing the probabilities of the error patterns beyond the guaranteed correction radius. Based on this bound, the error vector for the first coding stage is E_0 = (p_0, p_0, p_0, p_0, p_0, p_0), while for the second stage it is E_1 = (P_blk(8, 4, 4, p_0), p_0, p_0, p_0, p_0, 0.5), and for the third stage it is E_2 = (P_blk(48, 16, 7, p_0), p_0, p_0, p_0, 0.5, 0.5). As a result, the eventual error vector is obtained by weighting E_0, E_1, and E_2 with the respective stage probabilities. We set T_1 = 0.4 V_max and T_2 = 0.8 V_max and analyze the performance of our proposed scheme at p_0 = 10^-3, 10^-4, and 10^-5. Firstly, we simulate with the parameters (d_v, d_c) = (4, 32), the results of which are shown in Figure 8. Then, as a comparison, we set (d_v, d_c) = (4, 8) to verify the effect of different row weights (i.e., coding rates) on the performance; the results are shown in Figure 9. We can see that, with the proposed adaptive package coding scheme, both the decoding threshold and the residual error are significantly improved. What is more, our proposed scheme is effective at different coding rates. Decoding of Hamming Product Codes The Hamming product code we have introduced has an outstanding minimum-distance characteristic. However, its error-correcting capability can only be achieved under maximum likelihood (ML) decoding, which has high complexity and is not practical for LDPC message protection. In this section, we discuss specific decoding algorithms for the Hamming product code that achieve good performance with low complexity.
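As groundwork for those algorithms, here is a minimal sketch of an (8, 4) extended Hamming subcode, assuming the standard construction of a [7, 4] Hamming code plus one overall parity bit; this is consistent with the distance-4, correct-one/detect-two behaviour described in the next subsection.

```python
# Minimal sketch of the (8, 4) extended Hamming subcode: a [7, 4] Hamming
# code extended with an overall parity bit (minimum distance 4).
def encode(d):
    """d: list of 4 data bits. Returns 8 bits [p1 p2 d1 p4 d2 d3 d4 p_all]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    c = [p1, p2, d1, p4, d2, d3, d4]
    return c + [c[0] ^ c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6]]

def decode(r):
    """Returns (data_bits, status): 'ok' (fixed) or 'detected' (block error)."""
    s = ((r[3] ^ r[4] ^ r[5] ^ r[6]) << 2) | ((r[1] ^ r[2] ^ r[5] ^ r[6]) << 1) \
        | (r[0] ^ r[2] ^ r[4] ^ r[6])          # syndrome = error position (1..7)
    p = r[0] ^ r[1] ^ r[2] ^ r[3] ^ r[4] ^ r[5] ^ r[6] ^ r[7]  # overall parity
    r = list(r)
    if s and p:                  # single error among the first 7 bits: correct it
        r[s - 1] ^= 1
    elif s and not p:            # two errors: detectable but not locatable
        return [r[2], r[4], r[5], r[6]], 'detected'
    return [r[2], r[4], r[5], r[6]], 'ok'

cw = encode([1, 0, 1, 1])
cw[5] ^= 1                       # inject one storage bit flip
print(decode(cw))                # recovers ([1, 0, 1, 1], 'ok')
```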
Iterative Decoding of the Hamming Product Code. For the (8, 4) Hamming subcode, the Hamming distance is 4; that is, it can correct any one-bit error in a block. But when there are two error bits, the decoder can only declare a block error without locating the error bits. Based on the Hamming code's capability for both error correction and error detection, we define two states for the output bits of the Hamming product decoder: fixed bits and erasure bits. The decoding algorithm of the Hamming product code is described as follows: (i) Iterative step: the row subcodes and the column subcodes execute their decoding algorithms iteratively. During the decoding, if the Hamming decoder cannot locate the error bits, keep the block unchanged; otherwise, update the block. After several iterations (we set it to 2 iterations here), stop the iterative decoding. (ii) Decision step: firstly, error detection is executed by the Hamming decoders. Define R ⊂ {1, 2, 3, 4} as the set of indices where the row subcodes detect block errors. Similarly, define the index set C for the column subcodes. Then, declare the bits located at (i, j) in the 4 × 4 information-bit matrix as erasure bits, where i ∈ R and j ∈ C. The rest of the bits are declared fixed bits. We utilize a low-order approximation method to evaluate the performance of the Hamming product code with the proposed decoding algorithm. Since two states are defined for the output bits, we use two parameters to describe the decoding performance: the bit erasure ratio P_ers and the bit error ratio P_err. Both parameters can be approximately derived from the most likely error patterns. The probability of a t-th-order error pattern in an n-bit block can be derived as P_t = C(n, t) p_0^t (1 - p_0)^(n-t), where p_0 is the bit flipping probability of the RAM. As listed in Table 1, the error patterns with orders lower than 4 are analyzed. Apparently, if there are no more than two bit errors in a block of the Hamming product code, no erasure or error bit arises. Therefore, only the 3rd-order and 4th-order error patterns are adopted to deduce the approximations. Meanwhile, Monte Carlo simulations of the iterative decoding of the Hamming product code are provided. The performance curves are shown in Figure 10. We can see that the low-order approximations of P_ers and P_err are very close to the results of the Monte Carlo simulations. Moreover, the Hamming product code outperforms the traditional TMR scheme dramatically.
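The low-order approximation above rests on the binomial pattern probabilities; a minimal sketch for one 48-bit product block:

```python
from math import comb

# Probability of a t-th-order error pattern in an n-bit block:
# P_t = C(n, t) * p0**t * (1 - p0)**(n - t).
def pattern_prob(n, t, p0):
    return comb(n, t) * p0**t * (1 - p0)**(n - t)

n, p0 = 48, 1e-3                 # one (48, 16) Hamming product block
for t in range(5):               # orders 0..4, as in the low-order analysis
    print(t, pattern_prob(n, t, p0))
```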
Enhanced Decoding of the Hamming Product Code. In fact, the performance of the Hamming product code can be further improved at the expense of complexity in the iterative decoding. In this section, an enhanced decoding scheme is proposed to obtain better performance by introducing more decision logic. As analyzed in Section 5.1, the 3rd-order error patterns have the most influence on the decoding performance of the Hamming product code. Specifically, the most typical error patterns leading to decoding erasures and errors are depicted in Figure 11. Without loss of generality, we can assume that the row subcodes are decoded first in the iterative step. If there are three check-bit errors (denoted by the dark points in Figure 11(a)) in one column subcode, one of the column subcode's information bits (denoted by the point marked with an asterisk) will be incorrectly decoded. In such a case, when it comes to the decision step, this incorrect information bit, together with the other three incorrect check bits, constitutes a valid codeword for the column subcode; thus, it results in a decoding error. Similarly, as shown in Figure 11(b), if there are three errors (denoted by the dark points) located, respectively, in a row subcode and a column subcode, they will simultaneously disable the decoding of the row and column subcodes. In such a case, the error at the intersection position will be declared an erasure bit according to the decision logic of the aforementioned decoding algorithm. As a matter of fact, the decoding erasure bits and error bits caused by the 3rd-order error patterns all occur in the ways mentioned above. Based on this analysis, an enhanced decoding scheme is proposed by adding the following two decision logics in the decision step: (i) 3rd-order error-bit decision: after the error detection, if the index sets R = {i} and C = {j} are both single-element sets, the bit located at (i, j) is flipped and declared a fixed bit. (ii) 3rd-order erasure-bit decision: if one bit is decoded into different values by the row subcode and the column subcode, it is declared an erasure bit. The performance of the enhanced decoding scheme is shown in Figure 12. There is an improvement in both P_ers and P_err. It should be noted that the additional decision logics are only used to cope with the 3rd-order error patterns; in fact, more decision logics for the higher-order error patterns can be introduced to obtain better performance. Decoding Complexity of the Hamming Product Code. Another key issue is the complexity of decoding the Hamming product code compared with the traditional TMR scheme. As TMR only consumes a majority-decision logic module to decode the duplicate check, it is generally believed that introducing advanced long block codes will definitely increase the hardware complexity. However, in this section, based on a Field Programmable Gate Array (FPGA) implementation, we will see that the hardware complexity of the Hamming product code can even be lower than that of the TMR scheme in some cases. Moreover, there is a flexible tradeoff between hardware consumption and decoding delay for the Hamming product code.
In applications, LDPC encoders and decoders are mostly implemented on FPGAs, which are reconfigurable and widely adopted in communication systems. A major difference between an FPGA and an Application Specific Integrated Circuit (ASIC) is the structure of the combinational logic circuit. In an FPGA, the combinational logic is not composed of actual logic gates. Instead, it is based on a structure called a Lookup Table (LUT), which is actually a small block of RAM. The input of the combinational logic is connected to the RAM's address, and the logical output is pre-synthesized and stored in the RAM. Thus, arbitrary logical operations can be implemented by looking up the stored value for each input logic combination. Conventional FPGAs are mostly equipped with 4-input and 6-input LUTs. As a result, the TMR decision is actually processed by a 4-input LUT on an FPGA. Next, we compare the consumption of LUTs for the TMR and proposed schemes. In our proposed adaptive message coding scheme, every 16 messages are grouped into one package. Consequently, the corresponding consumption for the TMR scheme is 16 four-input LUTs in total. Comparatively, the consumption of the proposed scheme is shown in Figure 13. We can see that, for the (8, 4) Hamming encoder, only four 4-input LUTs are required, while, for the decoder, four 4-input LUTs are utilized to generate the correctors, and then each decoded information bit is output through a 6-input LUT that logically processes the correctors and the original value. To sum up, the total consumption is eight 4-input and four 6-input LUTs for the (8, 4) Hamming encoder and decoder. Based on these analyses, the hardware complexity of the proposed Hamming product code is no more than that of the TMR scheme, and roughly speaking it even consumes fewer resources on an FPGA. Actually, in our iterative decoding algorithm for the Hamming product codes, the cost of improving the error-correcting performance of the unreliable messages is decoding delay instead of hardware complexity. As the subcodes of the Hamming product code are decoded iteratively, the decoding of each message package occupies a certain number of clock cycles. Thus, if the LDPC decoder has a poor timing margin, the iterative decoding of the Hamming product code will severely degrade the decoding throughput. Fortunately, the subcodes of the Hamming product code can be decoded in parallel, which means we can compress the decoding clock count by parallel processing with multiple Hamming decoders. In this case, there is a flexible tradeoff between hardware complexity and decoding delay. The specific space-time resource consumption for various arrangements is shown in Table 2.
Simulations In this section, Monte Carlo simulations are executed on finite-length LDPC codewords. We utilize the (8176, 7154) LDPC code defined by CCSDS in [25], which is publicly available and has outstanding performance. In the simulations, the messages are quantized into 6 bits, and the maximum number of iterations is set to 15. The communication channel is assumed to be the additive white Gaussian noise (AWGN) channel. To demonstrate the effectiveness of our proposed scheme under various storage error levels, the flipping probability of the BSC model is set from p_0 = 10^-3 to p_0 = 10^-4. We compare the adaptive message coding scheme (labeled "proposed") with both the traditional TMR scheme (labeled "TMR") and the one without protection (labeled "no sch"). The results are shown in Figure 14. We can see that when p_0 = 10^-3, the proposed scheme has a gain of 0.2 dB over the TMR scheme, while the unprotected one cannot work at all. When p_0 = 10^-4, the proposed scheme still outperforms the other schemes. Conclusion This paper considered the challenge of implementing LDPC decoders on unreliable memories. We explored the effects of the various message bits on finite-precision LDPC decoders and introduced an effective adaptive coding scheme based on the magnitude level of the messages. We put the messages into packages and proposed a Hamming product code to adaptively correct the sign bits, as well as discussing two low-complexity decoding algorithms. The discrete density evolution analysis showed that the proposed scheme outperforms the traditional TMR scheme in both decoding threshold and residual errors under various storage error levels. Moreover, Monte Carlo simulations showed that the proposed scheme obtains a gain of at least 0.3 dB over the static TMR scheme when the storage error probability is from 10^-3 to 10^-4. Figure 1: The partially parallel architecture of LDPC decoders. Figure 4: The thresholds of protecting various message bits. Figure 5: Performance analysis under different storage error ratios. Figure 6: Structure of adaptive package coding. Figure 10: The low-order approximation of P_ers and P_err. Table 1: Low-order error pattern analysis. Table 2: Tradeoff between complexity and decoding delay.
7,995.4
2018-03-22T00:00:00.000
[ "Computer Science", "Engineering" ]
Anisotropy of Strength and Elastic Properties of Lower Paleozoic Shales from the Baltic Basin, Poland: The paper presents the results of laboratory studies on the strength-strain properties of shales representing four siltstone-claystone lithostratigraphic units occurring in the Baltic Basin. Laboratory studies in a triaxial stress state were conducted as single failure tests on cylindrical samples oriented parallel and perpendicular to lamination within the rocks. Mutually perpendicular samples were cut out from the same drill core sections in order to determine mechanical anisotropy. Samples oriented parallel to lamination were characterised by values of the static Young's modulus twice as high as those of samples oriented perpendicular to lamination. Similar variability was observed in the case of the maximum differential stress values and Poisson's ratio. Samples parallel to lamination registered notably lower axial strains, which resulted in increased values of Young's modulus and Poisson's ratio. The rocks studied are characterised by VTI-type (vertical transverse isotropy) internal anisotropy of the rock matrix, which significantly influences the anisotropy of their geomechanical properties. Introduction In the last twenty years, the dynamic development of geomechanics has made a significant contribution to the research field of prospecting for and exploitation of unconventional hydrocarbons [1]. Technological development, most notably 3D seismic, horizontal drilling, and multi-stage hydraulic fracturing, is crucial for successful unconventional gas extraction [2,3]. Geomechanical properties of gas shales have emerged as critical factors in drilling and production [4,5]. Economic factors are also important during the exploitation of unconventional hydrocarbons: the creation of financial models and the assessment of prices are crucial for the cost-effective exploitation of gas [6]. Challenges related to this development have led to a significant increase in attention to geomechanical models of the rock massif [7][8][9][10], with these models used more frequently during the planning, drilling, and exploration of reservoirs [11]. In shale gas reservoir development, a key step toward optimizing both the stimulation and production stages is to evaluate elastic-plastic and visco-elastic-plastic properties, including the detailed treatment of anisotropy and rock strength [12][13][14][15][16], as these influence the success of hydraulic fracturing and the fracture response during the stimulation and production stages, respectively [17][18][19][20]. Hydraulic fracturing treatments significantly affect the cost of oil and gas extraction from unconventional reservoirs and their global prices. Thus, making decisions on the execution of hydraulic fracturing projects requires a higher level of integration of technical, commercial, and uncertainty analyses [21]. One of the critical aspects of the primary activities in drilling design is the geomechanical study of the rock material in the reservoir and the surrounding rocks. Results obtained during geomechanical studies allow crucial parameters (strength, elastic moduli) to be defined for determining the optimal orientation of the horizontal section of the drilling [22][23][24], the design of the hydraulic fracturing process [25,26], and the assessment of borehole stability [27][28][29][30].
All these elements contribute to a better assessment of the effectiveness of making the deposit available and allow for more economically viable exploitation of the natural gas accumulated in shale complexes. Anisotropy is usually determined through a series of triaxial compression tests on rock specimens cored in different directions, i.e., 0°, 45°, 60°, and 90° [27]. Another useful method for determining textural anisotropy and stress-induced anisotropy is the ultrasonic investigation of shale samples [41][42][43][44]. Ultrasonic methodology was used by Hornby [45] to analyse the influence of porosity and confining pressure on the magnitude of anisotropy. In modern geomechanics, ultrasonic velocity anisotropy in the rock provides information on the variability of the dynamic elastic moduli. For example, Moska et al. [46] calculated the Young's modulus and Poisson's ratio from wave velocities and used these dynamic elastic moduli to determine the brittleness index, which is typically used to predict rock susceptibility to hydraulic fracturing. Sone and Zoback [12,13] analysed the anisotropy through the difference in how the far-field stress is distributed (stress partitioning) among the constituent minerals depending on the loading direction, treating shales as a mixture of soft (clay and organic matter) and stiff (quartz, feldspars, carbonates) components distributed in fine horizontal layers. Sone and Zoback [13] quantified the stress partitioning to analyse the shale elastic anisotropy and to determine the one-dimensional creep behaviour under uniaxial loading. Trzeciak et al. [47] extended the parameters describing creep to three dimensions in order to construct shale creep constitutive relations that are more directly applicable to geomechanical field problems. Furthermore, Rybacki et al. [48] consider that long-term creep experiments are required to estimate in situ stress anisotropy and the "healing behavior" of hydraulically induced fractures. This paper presents the strength-strain parameters of shales from the Baltic Basin. The study focused on determining the mechanical properties of siltstones and claystones, which are significant for gas exploitation from unconventional resources in Poland. This is the first time that detailed mechanical properties of Baltic shales have been published at this scale. The papers available so far describe investigations performed on limited numbers of samples from selected formations only. Baltic shales are extremely variable, so calculating strength-strain parameters and determining anisotropy based on only two samples is not representative of all Baltic shale formations [47]. This work presents the results of 44 strength-strain tests, with each formation represented by several samples. Cutting out samples with a diameter of 1.5 in, perpendicular and parallel to lamination, from the same section of the drill core is also novel. The laboratory geomechanical analyses also include an analysis of mechanical anisotropy based on an assessment of the elastic parameters of the studied rocks. Understanding anisotropy and its causes is very important for the correct interpretation of seismic studies and microseismic monitoring [49][50][51][52][53]. Materials and Methods Laboratory analyses were performed on siltstone-claystones (shales) whose sedimentation took place in the early Palaeozoic Baltic Basin (Figure 1). These rocks are characterised by a high content of clay minerals [54], a significant contribution of organic matter [55,56], and low permeability [57,58].
Due to the low permeability of the shales, hydraulic fracturing is performed within them [59-64]. Hydraulic fracturing causes the development of a dense network of fractures and fissures in the fractured rock layer, allowing for the exploitation of shale gas [16-19,65]. Hydraulic fracturing is the most common fracture stimulation technique. However, this procedure causes significant environmental problems, such as groundwater contamination [66], wastewater treatment [66], air pollution [67], and clay expansion. In contrast, liquid nitrogen (LN2) fracking is considered one of the best alternatives to hydraulic fracturing due to its eco-friendly nature [68]. The contact of LN2 with rock samples sharply decreases the temperature of the rock, thereby producing a large number of microcracks and improving the pore structure and connectivity. The most striking characteristic of liquid nitrogen fracturing is the supercryogenic nature of the fluid, which causes greater damage to the shale in comparison to conventional fracturing technologies [69].

Drill cores, from which the samples were cut for the analyses, came from three boreholes in northern Poland: B-1, M-1, and W-1. The samples were cut from drill cores collected from various depths in the range of 3600-4000 m. The samples were collected from lithostratigraphic units representing the Upper Ordovician (Sasino Claystones Formation) and lower Silurian (Pelplin Claystones Formation, Pasłęk Claystones Formation, Jantar Bituminous Claystones Member). These units span a stratigraphic interval from the Caradocian Stage to the Wenlock Series [72]. The position of the claystone formations from which the samples were collected is presented on the lower Palaeozoic stratigraphic log for the western slope of the East European Craton (Figure 2).
Specimen Characteristics

The mineral composition of particular samples was determined using X-ray diffraction (XRD), based on the Rietveld [73] method, using SIROQUANT software [74]. The organic matter content was determined using Rock-Eval pyrolytic analysis [75]. The results are presented in Table 1.

Samples from the Pelplin Formation are characterised by a similar mineral composition (Figure 3). They contain about 46.4% clay minerals, 42.0% quartz, feldspars and pyrite (QFP), and 11.5% carbonates, and the average content of organic matter (TOC) is about 1.4 wt.%. Samples from the Pasłęk Formation are characterised by an elevated content of clay minerals (57.5%), a lower content of QFP minerals (35%), and a low carbonate content (7.3%). The average TOC content in this formation does not exceed 1 wt.%. Samples from the Jantar Member and Sasino Formation were subdivided into two groups based on the mineral composition. Samples from subgroup 2 have a higher contribution of clay minerals compared to samples from subgroup 1. Samples from Jantar Member 1 have a high content of carbonates (average of 26.5%) compared to samples from Jantar Member 2, which contain much lower levels of carbonates (average of 4.1%). Samples from the Sasino Formation may be distinguished by their QFP content: samples from Sasino Formation 1 contain more QFP minerals (average of 54.3%) than samples from Sasino Formation 2 (average of 40.4%).
Sample Preparation

Triaxial tests required the preparation of cylindrical samples, 1.5 inches in diameter and ~3 inches high. A vertical sample and a horizontal sample were cut from each section of the drill core, in directions perpendicular and parallel to shale lamination, respectively (Figure 4). Horizontal and vertical samples were cut from the same sections of drill cores in order to determine mechanical anisotropy. Polishing and grinding of the cylinder ends ensured that the two surfaces were parallel to one another, according to the ASTM (D 4543-01) standard [76].

Experimental Equipment

Triaxial tests were performed with the application of a servo-hydraulic Material Test System (MTS 815). The increase in temperature in the triaxial cell was obtained by three electrical heaters of 2000 W each. The temperature in the cell was monitored by a thermocouple installed in its centre. Confining pressure in the triaxial cell was achieved using compressed oil. The application of a liquid medium required the surface of the samples to be protected against oil intrusion into pore space and microfractures; therefore, prior to the analysis, each sample was protected with a heat-shrink jacket against surrounding liquids. Two axial transducers measured axial strain, and a chain-type transducer measured lateral strain (Figure 5). Volumetric strain was determined using the following formula:

εv = εz + 2εx,y (1)

where εv is the volumetric strain, εz the axial strain, and εx,y the lateral strain.
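For clarity, Equation (1) can be expressed directly in code. The sketch below is illustrative only: it assumes the standard small-strain relation for a cylindrical sample, with the strain series passed as plain arrays.

```python
import numpy as np

def volumetric_strain(eps_z, eps_xy):
    """Volumetric strain per Equation (1): eps_v = eps_z + 2 * eps_xy.

    eps_z  : axial strain (averaged from the two axial transducers)
    eps_xy : lateral strain (from the chain-type transducer)
    """
    return np.asarray(eps_z) + 2.0 * np.asarray(eps_xy)

# Hypothetical readings at three load steps
print(volumetric_strain([0.001, 0.002, 0.003], [-0.0002, -0.0005, -0.0009]))
```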
Prior to analysis, the samples were kept at room temperature. Saturation was considered low enough for poroelastic effects to be negligible, and the samples were not subjected to additional saturation prior to the analyses.

Experimental Procedure

Triaxial analyses were performed as single failure tests (Figure 6), according to the suggestions of the ISRM [77] and the guidelines of the American (ASTM) [78] and European (Eurocode) [79] standards. The tests were conducted at a constant temperature of T = 85 °C and a stable confining pressure of pc = 50 MPa in order to reflect the temperature and effective stresses under in situ conditions. Confining pressure was applied to the rock sample at a rate of 10 MPa/min. Temperature and confining pressure were achieved in the cell prior to axial compression and maintained at a stable level during the whole test. (Figure 6: ideogram of single failure tests; the green and blue curves show temperature (T) and pressure (pc), respectively, increasing to the expected study level, and the red curve shows compression under stable temperature (T) and pressure (pc) conditions.)

Triaxial tests were performed at a constant strain rate of 10⁻⁵ s⁻¹, up to complete destruction of the sample along the shear surface. The direct measurements yielded the following deformation curves: differential stress (σ1-σ3) versus axial strain (εz), lateral strain (εx,y), and volumetric strain (εv). Based on these, static elastic parameters were determined, including Young's modulus (E) and Poisson's ratio (ν).

Estimation of Strength and Static Moduli

All strength analyses were performed under the same temperature (T) and pressure (pc) conditions. Therefore, the parameter dataset obtained from the analyses in the triaxial cell did not depend on temperature or confining pressure. Consequently, parameter values depended on factors related to the lithology of the rock formations, structural features of individual samples, and the orientation of the rock samples with regard to lamination. Young's modulus and Poisson's ratio are elastic parameters. These parameters were determined using an individual interpretation procedure based on the phenomenological description of rock deformation under loading [80], as well as the guidelines of the American standards (ASTM) [81] and the recommendations of the ISRM [77]. The deformation curves obtained from the strength analyses were used to estimate these parameters (Figure 7).
Average Young's modulus (Eav) was determined on a straight section of the differential stress (σ1-σ3)-axial strain (εz) curve. Average Poisson's ratio (νav) was defined as the ratio (quotient) of the lateral strain (εx,y) to the axial strain (εz) over a straight section of all three stress-strain characteristics (axial, lateral, and volumetric). The ideogram for determining maximal differential stress, Young's modulus, and Poisson's ratio is presented in Figure 7. The results obtained from the strength analyses in a triaxial stress state are presented in Table 2.

Examples of deformation curves for samples cut parallel and perpendicular to lamination from the same drill core section are presented in Figure 8. Comparison of the obtained sets of deformation curves (Figure 8) for samples cut parallel and perpendicular to lamination shows a strong strength anisotropy of the siltstone-claystone rocks. This is reflected in the larger values of maximal differential stress and a steeper differential stress-axial strain curve for the samples cut parallel to lamination compared to the samples cut perpendicular to lamination.
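As a sketch of how Eav and νav can be extracted from the straight sections described above, consider the least-squares fit below. It is illustrative, not the authors' exact procedure: the linear window would in practice be chosen per sample, and the sign conventions depend on how the strains are logged (here lateral strain is assumed negative in compression).

```python
import numpy as np

def static_moduli(diff_stress, eps_z, eps_xy, linear_window):
    """Estimate E_av and nu_av on the linear section of the curves.

    diff_stress   : differential stress (sigma1 - sigma3) in MPa
    eps_z, eps_xy : axial and lateral strain (dimensionless)
    linear_window : boolean mask selecting the straight section
    """
    w = np.asarray(linear_window)
    s = np.asarray(diff_stress)[w]
    ez = np.asarray(eps_z)[w]
    ex = np.asarray(eps_xy)[w]
    E_av = np.polyfit(ez, s, 1)[0] / 1000.0   # slope in MPa/strain -> GPa
    nu_av = -np.polyfit(ez, ex, 1)[0]         # negative lateral/axial slope
    return E_av, nu_av
```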
The Ordovician and Silurian rocks studied are characterised by strength anisotropy (Figure 9). This is confirmed by the results of maximal differential stress. Horizontal samples from all formations reached higher strength values than vertical samples (Table 2). Disintegration of the structure of vertical samples took place under smaller loading in comparison to horizontal samples, which were destroyed only after the application of a much larger load. Strength anisotropy for vertical and horizontal samples cut from the same drill core section is well reflected in the deformation curves obtained from the strain measurements, as presented in Figure 8. These curves show that destruction of a horizontal sample requires the application of a much larger load; this sample therefore attained a higher value of maximal differential stress than the vertical sample, which was destroyed at a stress 20% lower than that applied to the horizontal sample. (Figure 9: variability ranges (minimum, average, maximum) of maximal differential stress (σ1-σ3)max in particular rock units.)

In the current research, the values of Young's modulus were practically twice as high for horizontal samples as for vertical samples. Horizontal samples, parallel to lamination, yielded values of Young's modulus in the range of 33 GPa to 57 GPa, whereas vertical samples, perpendicular to lamination, had values in the range of 15 GPa to 30 GPa (Figure 10). According to Trzeciak et al. [47], the horizontal Young's modulus of the shale layers (Pasłęk, Jantar, and Sasino formations) ranges from 37 GPa to 60 GPa, while the vertical Young's modulus ranges from 21 GPa to 27 GPa. The sizable difference in the values of Young's modulus depending on the measurement direction is caused by the much lower susceptibility of horizontal samples to axial strain compared to vertical samples. Axial strains registered during compression of vertical samples were much larger, resulting in lower values of the elastic modulus.
Comparison of the obtained results for Young's modulus (Table 2) with the mineral composition of particular rock units (Table 1) shows that samples with a higher content of carbonates and QFP minerals attained higher values of Young's modulus than samples dominated by clay minerals and organic matter (e.g., Jantar 1 vs. Jantar 2 and Sasino 1 vs. Sasino 2). These results confirm the studies of Dohnalik et al. [82], performed on the same rock formations from different boreholes in the Baltic Basin. Those studies showed that the values of Young's modulus and Poisson's ratio strongly depend on the mineral composition of the rocks: values of Young's modulus are higher for samples with a higher content of carbonates, and Poisson's ratio correlates well with the clay mineral content of the rock sample.

The present studies also show an anisotropy in the values of Poisson's ratio depending on the sample orientation (Figure 11). Vertical samples are characterised by much lower values of Poisson's ratio than horizontal samples: Poisson's ratio for vertical samples ranges from 0.14 to 0.28, and for horizontal samples from 0.18 to 0.32. There was no positive correlation between Poisson's ratio and the content of clay minerals in the sample. The higher values of Poisson's ratio for horizontal samples are the result of the smaller axial strain in horizontal samples during their compression in the triaxial cell.

The obtained data show that horizontal samples (parallel to lamination) are stiffer than vertical samples (perpendicular to lamination) and that samples with a higher content of stiff minerals (QFP and carbonates) reach higher values of Young's modulus compared to more plastic samples with a higher content of clay minerals.
Based on the analysis of the triaxial tests, it was established that the elastic parameters of shales depend on the orientation of the mineral and organic components in the rock. Horizontal samples are less susceptible to axial strain compared to vertical samples; shales are therefore characterised by large elastic anisotropy.

Anisotropy

The occurrence of mechanical anisotropy in the shale formations was also tested based on three anisotropy ratios, determined from the values of (σ1-σ3)max, Eav, and νav obtained for samples cut parallel and perpendicular to lamination from the same drill core section. It should be emphasised that values of an anisotropy ratio above one indicate the presence of anisotropy. According to Niandou et al. [83], the degree of strength anisotropy for transversely isotropic rocks is determined by the ratio of the failure strength in the parallel and perpendicular bedding orientations, A(σ1-σ3)max (Equation (2)). Additionally, the quantitative assessment of the anisotropy of the rocks studied was performed based on anisotropy ratios for Young's modulus (AEav) (Equation (3)) and Poisson's ratio (deformation) (Aνav) (Equation (4)). They were determined from the values of the particular parameters according to the following formulas:

A(σ1-σ3)max = (σ1-σ3)max,H / (σ1-σ3)max,V (2)
AEav = Eav,H / Eav,V (3)
Aνav = νav,H / νav,V (4)

where the subscripts H and V denote samples cut parallel (horizontal) and perpendicular (vertical) to lamination, respectively.
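A minimal sketch of Equations (2)-(4) for one pair of samples from the same drill core section; the numbers are hypothetical, chosen only to fall within the ranges reported below.

```python
def anisotropy_ratios(horizontal, vertical):
    """Equations (2)-(4): parameter measured parallel to lamination (H)
    divided by the value measured perpendicular to lamination (V).
    Values above one indicate anisotropy."""
    return {key: horizontal[key] / vertical[key] for key in horizontal}

# Hypothetical paired results for one drill core section
h = {"max_diff_stress_MPa": 320.0, "E_av_GPa": 45.0, "nu_av": 0.28}
v = {"max_diff_stress_MPa": 280.0, "E_av_GPa": 22.0, "nu_av": 0.20}
print(anisotropy_ratios(h, v))
# e.g. {'max_diff_stress_MPa': 1.14, 'E_av_GPa': 2.05, 'nu_av': 1.40}
```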
The assessment of strength anisotropy in the analysed rocks was conducted based on the anisotropy ratio of the maximal differential stress. For most clay units (except Sasino 2), the obtained values of the strength anisotropy ratio only slightly exceeded one (Table 3). For each drill core section from the studied units, samples cut parallel to lamination had a higher strength than samples cut perpendicular to lamination, which points to the presence of strength anisotropy. Distinct strength anisotropy was not observed only for Sasino Formation 2; in this case, a single anisotropy ratio equal to one was obtained for the two samples cut vertically and horizontally from one drill core section.

The obtained values of the anisotropy ratio of Young's modulus (AEav) (Table 3) confirm the presence of strong anisotropy of Young's modulus in all shale formations. The lowest values of this ratio, around 1.60, were obtained for the Pelplin Formation, meaning that the value of Young's modulus for a sample cut parallel to lamination is 60% larger than the value obtained for a sample cut from the same part of the drill core but perpendicular to lamination. In turn, the largest value of the ratio, around 3.06, was obtained for Sasino Formation 1, for which the value of Young's modulus of the horizontal sample (cut parallel to lamination) was over three times larger than that of the vertical sample (cut perpendicular to lamination).

Generally, similar trends were observed when analysing the strain anisotropy ratio (Poisson's ratio, Aνav) (Table 3), but in this case differences were observed in both measurement directions, and the values of the anisotropy ratio Aνav were much lower. The highest values of the strain anisotropy ratio, above 1.6, noted in the Pelplin, Sasino 1, and Sasino 2 formations, were half the maximal values of the anisotropy ratio of Young's modulus. Value ranges of the strain anisotropy ratio (Aνav) exceeding one indicate the presence of distinct anisotropy of Poisson's ratio in claystones from the Pasłęk and Sasino 2 formations and in Jantar Member 1. In the remaining units (Pelplin, Jantar 2, and Sasino 1), although the strain anisotropy ratio (Aνav) fell below one for some samples cut from the same section of the drill core, the average value of this ratio, above one, also indicates the presence of distinct anisotropy in these rocks.

Conclusions

The growing demand for hydrocarbons has caused a significant intensification of geomechanical studies. These investigations are focused on determining the strength and strain parameters of the rocks hosting unconventional gas and oil deposits. The results of geomechanical studies are used mainly to design an optimal hydraulic fracturing process, indispensable for economically viable exploitation of gas from deposits characterised by very low permeability. The mechanical properties obtained herein should enhance gas production from shale gas deposits in Poland, and the presented analysis of mechanical anisotropy may be of crucial significance for successful exploitation of gas from unconventional resources.

Triaxial tests were performed on cylindrical samples cut perpendicular and parallel to the lamination characteristic of the studied shales. Based on the performed analyses, it may be concluded that the shales are characterised by strong mechanical anisotropy. The analysed claystone units are characterised by strength anisotropy. This is confirmed by the obtained values of the maximal differential stress required for destruction of the sample. Horizontal samples from all units had a higher strength than vertical samples. These conclusions are also confirmed by the anisotropy ratios determined from the values of (σ1-σ3)max, Eav, and νav obtained for samples cut parallel and perpendicular to lamination from the same drill core sections.
These ratios also confirm the presence of anisotropy in most of the rock units studied. The performed strength and strain tests allowed for the determination of the elastic and strain properties of the studied shales. Horizontal samples (cut parallel to lamination) attained much higher values of Young's modulus, in the range of 33 GPa to 57 GPa, than vertical samples (cut perpendicular to lamination), which were characterised by values of Young's modulus in the range of 15 GPa to 30 GPa. The elastic properties of the analysed shale units depended on the direction of measurement, which is reflected in the obtained values of Young's modulus. A privileged direction is observed, in which the highest values of Young's modulus were noted. This direction is parallel to lamination, where smaller axial strain was registered compared to the direction perpendicular to lamination. The Pelplin, Pasłęk, Jantar 1, Jantar 2, Sasino 1, and Sasino 2 claystone units are thus characterised by strong elastic anisotropy (in Young's modulus).

Based on the single failure triaxial tests, Poisson's ratio (νav) was determined for the analysed rock units. For horizontal samples, Poisson's ratio (νav) ranges from 0.18 to 0.32, and for vertical samples, the average Poisson's ratio (νav) is in the range of 0.13 to 0.28. Analysis of the values of Poisson's ratio for particular lithostratigraphic units shows that in most cases it was much higher for horizontal samples than for vertical samples.

The structure of clay rocks results from the sedimentation and later diagenesis of sheets of clay minerals. The horizontal arrangement of clay minerals in the shale rock led to the development of internal VTI (vertical transverse isotropy) anisotropy of the rock matrix. This anisotropy produces a privileged direction in all clay units, along which the highest values of the analysed parameters were observed. This direction is parallel to lamination, in which much smaller axial strains were observed compared to the direction perpendicular to lamination. The lower Palaeozoic claystone units are thus characterised by strong anisotropy of their geomechanical properties. The laboratory results indicate that the mechanical properties of gas shales are variable. It was also established that the mineral composition has an influence on the strength and strain properties.
8,643.6
2021-05-21T00:00:00.000
[ "Geology" ]
Do interactions cancel associations of subjective well-being with individual-level socioeconomic characteristics? An exploratory analysis using the European Social Survey

Using the European Social Survey (2002–2014, 16 countries, N = 146,579), I examine whether significant associations between self-reported subjective well-being (SWB) and thirteen individual-level socioeconomic characteristics still hold in specific population sub-groups. The determinants are age, gender, children at home, education, work status, religiosity, political orientation, trust towards the parliament and the legal system, meeting friends, marital status, health, and finances. Based on each characteristic's values, I divide the sample into sub-groups and run separate regressions. Compared to regressions using the whole sample, only six of the aforementioned characteristics maintain the same association with SWB. For age, gender, children at home, education, religiosity, and trust, the previous associations with SWB disappear. These results contradict prior theoretical and empirical findings.

Introduction

Since the 1950s, subjective well-being (SWB) has become a very popular research field in many disciplines, including Psychology, Economics, and Sociology. A considerable amount of relevant research had already been conducted by the early 1980s, as discussed in the seminal paper of the pioneering Diener (1984). In recent years, a plethora of reviews and meta-analyses have focused on the topic (e.g. Dolan et al. 2008; Eger and Maridal 2015; Jorm and Ryan 2014; Lane 2017; Lyubomirsky et al. 2005).

Why do researchers study SWB? As Diener and Ryan (2009, p. 392) argue, the main applied goal when studying SWB is improving people's lives 'beyond the elimination of misery'. Research shows that individuals scoring high on SWB are healthier and live longer. They are also more successful regarding marriage, friendships, income levels, and working careers (Lyubomirsky et al. 2005, p. 803). Beyond the individual level, this helps the smooth functioning of work organisations and, in turn, democratic systems. Put differently, high SWB at the individual level can spill over and benefit overall society by making it function more effectively (Diener and Biswas-Diener 2008). Stiglitz et al. (2010) warn, however, that aggregate economic indicators should not be used to measure national well-being. For example, a country's GDP may give an overall picture of a nation's wealth and progress, but cannot effectively capture well-being at the individual level. Instead, SWB helps us to better monitor social progress and relevant policies (Taylor 2011). Thus, by measuring SWB, we can roughly estimate people's quality of life and then, ideally, design and implement policies for improving it.

Why study the factors (characteristics) associated with SWB? A fundamental goal of any democratically elected government is to implement policies that maximise citizen well-being (Fleche et al. 2011, p. 5). The evaluation of such policies interests not only national governments but also international organisations. The Commission on the Measurement of Economic Performance and Social Progress (Stiglitz et al. 2009, 2010) recommended that, alongside economic data, subjective measures of well-being should be used to assess social progress and evaluate relevant policy. Similarly, the World Happiness Report 2016 (Helliwell et al. 2016) highlighted that SWB measurements can be used to effectively assess a nation's progress.
In fact, several economists even propose SWB as a substitute for utility, a central notion in economic theory (Helliwell and Barrington-Leigh 2010). I would, thus, argue that motivations for studying SWB include both pure academic interest and the policy implications of research findings. As Ngamaba (2017, p. 377) explains, to design and implement policies that would maximise SWB, it is first imperative to 'identify the most important factors that are associated with it'. Economists, for example, have been investigating the factors that influence individual-level happiness, and how lower SWB relates to unemployment (Ferrer-i-Carbonell 2013, p. 37). Unemployment can indeed be detrimental to individual SWB, with negative social and health repercussions, whereas a regular salary produces the opposite effect (Cole et al. 2009; Diener and Chan 2011; Headey and Wearing 1990; Kilian et al. 2012; Tay and Diener 2011). Broyd et al. (2016, p. 429) report that maximising SWB produces, besides economic advantages, obvious benefits for special sub-groups of the population, such as people with severe mental illness. Lukaschek et al. (2017) investigated risk factors associated with low SWB in males and females aged 65 and over. Depression, anxiety, and sleeping problems seemed to be associated with low SWB in both sexes. They conclude that increased mental health interventions are required, especially among lone-dwelling females.

Commonly studied factors of SWB

For the last 40 years, the examination of SWB factors has been a favourite topic of many psychologists: the 'Big Five' personality traits of Extraversion, Agreeableness, Openness to experience, Conscientiousness, and Neuroticism (McCrae and Costa 1987, 1997; …)

Goals and hypotheses

… (SI). I selected these countries because they are the only ones (of 36 in total) to have participated in all seven ESS rounds. I examine whether individual-level socioeconomic characteristics previously reported to be associated with SWB still have the same effect and show which have the strongest association. I then analyse whether these associations change when breaking the sample into smaller groups. I hypothesise that some of the previously reported associations will not persist after such division of the data. Hence, using the split data, the goal is to identify those characteristics whose relation with SWB is unaltered and those for which, contrary to prior findings, the association disappears. To this end, I compare the statistical significance of the coefficients of each characteristic between the model for the whole dataset and the sub-group models.

In the following section, I briefly describe the data and the dependent and independent variables selected for statistical analysis. In Sect. 3, I explain the logic behind the regression models and comment on the results. In the same section, I refer to similar studies on the subject to compare my findings with theirs, focusing mostly on research after 1990. In the final section, I summarise and discuss my findings.

Dependent variable

In psychology, SWB is a 'general assessment' of how one feels about one's life (Sumner 1996). Life satisfaction describes a cognitive judgement, whereas happiness refers to an emotional state. Happiness and life satisfaction are, thus, basic components of SWB. In the ESS, they are measured via the following Likert-scale questions: Taking all things together, how happy would you say you are?
(0 = Extremely unhappy to 10 = Extremely happy) All things considered, how satisfied are you with your life as a whole nowadays? (0 = Extremely unsatisfied to 10 = Extremely satisfied)

In previous studies, the correlation between self-reported levels of happiness and life satisfaction has varied. For example, in WVS data for 1981-2005, it was only 0.47 (Eger and Maridal 2015, p. 46). In my dataset, the overall correlation of the two variables was 0.628. Per country, it ranged from 0.393 to 0.672, and per survey round (year) from 0.600 to 0.650. Following Eger and Maridal (2015), I use their mean value as the indicator of (self-reported) SWB.

Independent variables

The ESS includes responses for most of the individual-level variables listed in the Introduction. However, not all were measured in all seven rounds. Also recall that not all countries participated in each round. Due to these limitations, and after considering each variable's relative importance to SWB, I selected the following 13 individual-level socioeconomic characteristics: age, gender, children at home, educational level, (daily) activity/work status, religious activity, political orientation, trust towards the parliament and the legal system, frequency of meeting with friends, marital status, self-reported health status, and coping with finances. In the regression models, I also added the respondent's country, the survey round, and their interaction as control variables.

Descriptives

By only including responses from countries that participated in all seven ESS rounds from 2002 to 2014, the dataset was limited to 16 of the 36 countries included in one or more of the survey rounds. However, the benefit of such a restricted approach is a homogeneous dataset with no missing values. We can, therefore, use both these variables and their interaction as controls in our regressions, comparing the SWB level of each country per round against all others. Furthermore, the data's homogeneity allows models to be generated without missing coefficients in any country-round combination. The variables for which responses were provided on a 1-10 Likert scale were treated as continuous. As per Easterbrook et al. (2016, p. 1273), I restricted the (daily) activity/work status categorical variable to those who were working, unemployed, retired, or stayed home looking after family members. To more easily interpret the resulting coefficients and increase the number of observations in the sub-groups, I also aggregated several other categorical variables. (Frequency of) meeting with friends was reduced from the seven original categories to three. I combined the Bad and Very bad categories of self-reported health status, and the categories Finding it difficult to live on present income and Finding it very difficult to live on present income of coping with finances. I also restricted respondents' age to between 21 and 90 years. The categories of marital status varied between ESS rounds, with three somewhat different definitions used in 2002/2004, 2006/2008, and 2010/2012/2014. I chose to use the categories Married/In civil partnership, Divorced/Separated, Widowed, and Never married/Never in civil partnership. Table 1 presents descriptive statistics of the variables used.
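As an illustrative sketch of this data preparation (in Python rather than the Stata used in the paper; the file name is hypothetical, while happy, stflife, and agea are the usual ESS variable names for happiness, life satisfaction, and age, assumed here):

```python
import pandas as pd

ess = pd.read_csv("ess_rounds_1_to_7.csv")  # hypothetical extract of the 16 countries

# Composite SWB: mean of self-reported happiness and life satisfaction
print(ess[["happy", "stflife"]].corr())     # overall r of about 0.63 reported above
ess["swb"] = ess[["happy", "stflife"]].mean(axis=1)

# Restrict age to 21-90 years, as described above
ess = ess[ess["agea"].between(21, 90)]
```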
Fixed effects (OLS) full models

I initially ran a fixed effects ordinary least squares (OLS) regression with all the aforementioned variables on the right-hand side of the model. I also added the main and interaction effects of the respondent's country and the response round, thereby creating a separate intercept of the dependent variable (SWB) per country per response year (round). After running the basic model (Model 1), I identified outliers using Cook's distance (Cook 1977). From the initial sample of 155,779 observations, I identified 9200 influential observations. I then reran the same model without these observations (N = 146,579: Model 2). I compared the two models using Akaike's information criterion (AIC) and Schwarz's Bayesian information criterion (BIC) (Akaike 1974; Raftery 1995; Schwarz 1978). Model 2 performed better because the values of both criteria decreased. The explanatory power of Model 2 was 41.6%, considerably higher than that of Model 1 (34.6%). The average value of the variance inflation factor (VIF) for both models was smaller than 10, indicating no serious multicollinearity among the independent variables. I visually checked how the error terms were distributed in both models by graphing their kernel densities and their normal probability plots. As expected, the residuals of Model 2, without the outliers, were more normally distributed than those of Model 1. The respective graphs are available upon request. The Breusch-Pagan test (Breusch and Pagan 1979) in Model 1 was statistically significant. Thus, to account for potential heteroscedasticity, I ran Model 2 and all subsequent models with robust standard errors. In those same models, I also applied population weights based on the relevant ESS documentation (https://www.europeansocialsurvey.org/docs/methodology). Based on Model 2, and to compare the strength of association of each predictor with SWB, I also generated standardised (beta) coefficients. These are measured in standard deviations; thus, their magnitudes can be compared (Table 2).

Multilevel (mixed effects) full model

The two fixed effects OLS models indicated the magnitude and significance of the relationships between SWB and the predictors. To test robustness, I analysed the data with another regression method. With individual-level observations per country and per round, one can describe the potential association between the predictors and the dependent variable using multilevel analysis. The chosen dataset is hierarchically nested, with individual responses recorded per round (year) and per country. With seven rounds of ESS data, multilevel analysis accounts for the time series feature of the responses. I used the statistical package Stata version 15.2. For multilevel models, Stata includes the mixed command, which generates a fixed and a random part (see Rabe-Hesketh and Skrondal 2012). Because the dependent variable SWB is in ordinal and discrete form, Wooldridge (2002) suggests that a rank-ordered probit model is most suitable for the analysis. Similarly, Alesina et al. (2004) claim that, when studying happiness, this technique is preferred to OLS estimation (cited in Aassve et al. 2012, p. 76). However, other studies have shown that, in such analyses, there are few differences in the sign and statistical significance of the generated coefficients (Boarini et al. 2012; Ferrer-i-Carbonell and Frijters 2004). Boarini et al. (2012, p. 17) use OLS regressions since 'the interpretation is more straightforward'.
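Before turning to the probit check, here is a sketch of the Model 1 / Model 2 OLS workflow described above (Python rather than the paper's Stata; the predictor names are illustrative stand-ins, and since the exact Cook's distance cutoff is not stated, a common rule of thumb is shown):

```python
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import OLSInfluence

formula = ("swb ~ agea + I(agea**2) + C(gndr) + C(children) + C(edu_level)"
           " + C(work_status) + religiosity + lrscale + trstprl + trstlgl"
           " + C(sclmeet) + C(marital) + C(health) + C(hincfel)"
           " + C(cntry) * C(essround)")

# Model 1: fixed effects OLS with country x round intercepts
model1 = smf.ols(formula, data=ess).fit()

# Flag influential observations via Cook's distance, then refit (Model 2)
# (assumes complete cases so the mask aligns with the fitted rows)
cooks_d = OLSInfluence(model1).cooks_distance[0]
ess2 = ess[cooks_d < 4 / len(ess)]                 # rule-of-thumb cutoff, assumed
model2 = smf.ols(formula, data=ess2).fit(cov_type="HC1")  # robust standard errors
print(model2.rsquared, model2.aic, model2.bic)
```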
I also ran a probit model with the same dependent and independent variables. Confirming the prior findings, the sign and statistical significance of each predictor's coefficient were very similar to those of the OLS models. Hence, I did not further pursue ordered probit estimation.

As the dependent variable, I again used the above-described composite version of SWB. As independent variables, I used the same 13 individual-level predictors: age (and age squared), gender, children at home, educational level, work status, religious activity, political orientation, trust towards the parliament, trust towards the legal system, meeting with friends, marital status, self-reported health status, and coping with finances. These comprised the fixed part of the model. In the random part, I defined the respondent's country as the third-level grouping variable and the response round (year) as the second-level grouping variable. The first level comprised the individual observations. I thus built a three-level random intercept model (Model 3), in which SWB is controlled by the variables in the fixed part but has different intercepts (mean values) for each country and for each round (year) in the data. In this model, the relationships between SWB and all predictors in the fixed part have the same slope (Table 2). The individual-level coefficients in all three models remained stable regardless of the model specification and the regression method: their magnitude, sign, and statistical significance did not change considerably between models. (Table 2. Full models with and without outliers. Dependent variable: SWB. Significance: *p < 0.05; **p < 0.01; ***p < 0.001.)
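The paper fits Model 3 with Stata's mixed command. As an illustrative approximation in Python (continuing the snippets above; statsmodels expresses the round-within-country intercepts as a variance component, and the predictor names remain the hypothetical stand-ins used earlier):

```python
import statsmodels.formula.api as smf

fixed = ("swb ~ agea + I(agea**2) + C(gndr) + C(children) + C(edu_level)"
         " + C(work_status) + religiosity + lrscale + trstprl + trstlgl"
         " + C(sclmeet) + C(marital) + C(health) + C(hincfel)")

# Random intercept per country (groups), plus a round-within-country
# intercept expressed as a variance component
model3 = smf.mixedlm(fixed, data=ess2, groups="cntry",
                     re_formula="1",
                     vc_formula={"round": "0 + C(essround)"}).fit()
print(model3.summary())
```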
Age

The relationship between age and SWB is not linear (Blanchflower and Oswald 2004, 2008; Frijters and Beatton 2012; Steptoe et al. 2015). Age and its quadratic term (age squared) retained their statistical significance and signs in all three models. Since the coefficient was negative for age but positive for age squared, happiness declines as one ages up to a certain point, after which it starts increasing again. Thus, the relationship was U-shaped (see also Gerdtham and Johannesson 2001; Subramanian et al. 2005). However, since both coefficients were very small, the decline and rise of SWB with age were slow.

Gender

Gender was associated with happiness. In all three models, women were happier than men in a statistically significant way. SWB levels were approximately 0.121 units higher for females compared to males. These results are similar to those of previous studies, which also reported females feeling happier than males (Alesina et al. 2004; Helliwell et al. 2015).

Children at home

People with children at home were clearly happier than those without. In all three models, the coefficient for respondents without children at home was negative and statistically significant. This finding is consistent with prior research, with parents consistently reporting greater SWB in activities with children than without (for a comprehensive review, see Musick et al. 2016).

Education

Additional education reduces SWB, but only very slightly. Previous studies have reported the opposite. For example, Chen (2012), Easterbrook et al. (2016), and Kuppens et al. (2015) have shown that educational level is associated with more happiness and beneficial personal and sociopolitical outcomes. While this result is interesting, it is later cancelled in some instances when I divide the data into smaller groups.

(Daily) activity/work status

The coefficients show that working people seem happier than those who do not work (Di Tella and MacCulloch 2006). In their meta-analysis, Paul and Moser (2009) report that several indicators of mental health, such as SWB, were significantly lower among the long-term unemployed compared to workers. However, the three models' results also clearly show that those involved in other daily activities, such as attending to others at home, or those who are retired, reported higher SWB levels than those who work.

Religiosity

In prior studies, people with strong religious beliefs have, on average, reported feeling happier than others (Abdel-Khalek 2011; Clark and Lelkes 2005; Lechner and Leopold 2015; Mollidor et al. 2015). This is confirmed in the present analysis: the respective variable's coefficient had a positive sign and was statistically significant in all three models, as shown in Table 2.

Political orientation

It has been reported that conservatives are, on average, happier than those with other political affiliations (Bixter 2015; Burton et al. 2015; Di Tella and MacCulloch 2005; Napier and Jost 2008; Onraet et al. 2013; Schlenker et al. 2012). This was confirmed in the present analysis with respect to respondents' political orientation. As subjects moved from left to right on the political scale, they became increasingly happier, since the respective coefficient was positive and statistically significant.

Trust in the country's institutions (parliament and legal system)

Hudson (2006) found that trust in the national government and in the law each positively impacts well-being. In the three models, two variables are proxies for such constructs: trust towards the parliament and trust towards the legal system. As in Hudson's research, both correlated positively with SWB.

Meeting with friends/relatives

Regarding respondents' social behaviour, the more often they met with friends and relatives, the greater their reported happiness. The coefficients of the sub-cohorts of the variable were positive, statistically significant, and increased as the frequency of meetings increased. This finding accords with previous research (Gundelach and Kreiner 2004; Leung et al. 2013).

Marital status

Married people have generally been found to be happier than others [see Helliwell et al. (2017, pp. 5-7) for a brief but comprehensive overview]. At the same time, people who are generically happier are more likely to find and attract partners to marry (De Neve et al. 2013). This, in turn, denotes possible selection bias and has been addressed elsewhere by using fixed effects regressions; the positive relationship of marital status with happiness nonetheless remains (Clark and Georgellis 2013). Furthermore, the extent to which married people are 'happier' depends on the comparator group. Based on this study's aggregation of the marital status categories in the data, married and civil-partnered people report significantly higher SWB levels compared to all other groups. That is, the coefficients of the divorced or separated, the widowed, and those who have never cohabited have a negative sign and are statistically significant when the reference category is married and/or civil-partnered respondents.

Self-reported health status

Previous research indicates that sick people are less happy than healthy ones (Deaton 2008; Steptoe et al. 2015). This was also confirmed in the three models' results. All three sub-cohort coefficients had a negative sign, were statistically significant, and decreased relative to the reference group (very good self-reported health status).
As expected, respondents who believed their health to be optimal were also the happiest (linearly).

Coping with finances

Prior research suggests that an individual's financial stability and wealth positively influence their SWB (e.g. Senik 2014; Stevenson and Wolfers 2013). In the three models, those who reported being financially comfortable were happier than those just able to make ends meet, and even more so compared to those experiencing financial difficulties. As for self-reported health status, the coefficients of the sub-cohorts of coping with finances were statistically significant, had a negative sign, and decreased as difficulties in coping with finances became more acute.

Country and round (year)

Although this analysis focused on the effects of individual-level variables, the country, the round, and their interaction were used as controls in the OLS fixed effects models, while in the mixed effects estimation, country and round were respectively used as the third- and second-level grouping variables. Utilising both variables is justified because international comparisons confirm the intuitive hypothesis that people in different countries report varying SWB levels. In the OLS Models 1 and 2, the coefficients of these two variables were statistically significant per country, per round, and for their interaction. Their reported variances were also statistically significant in the mixed effects Model 3. For more on cross-national comparisons, see for example Borooah (2006), Diener and Suh (2003), and Jorm and Ryan (2014).

Beta coefficients

In Model 2, age, self-reported health status, and coping with finances were the independent variables with the largest absolute beta coefficients (|0.20| or more); that is, these predictors had the strongest influence on SWB, thus reaffirming a finding recently reported by Ngamaba (2017).
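Standardised (beta) coefficients can be recovered from the raw Model 2 slopes post hoc as beta_j = b_j * sd(x_j) / sd(y) for continuous predictors; continuing the sketch above, with the same hypothetical variable names:

```python
# Convert raw Model 2 slopes to standardised (beta) coefficients
sd_y = ess2["swb"].std()
for name in ["agea", "religiosity", "lrscale", "trstprl", "trstlgl"]:
    beta = model2.params[name] * ess2[name].std() / sd_y
    print(f"{name:12s} beta = {beta:+.3f}")
```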
As he explains, evidence that younger women tend to be happier than men (especially in richer countries) is offset by the evidence that older women tend to be less happy than men in these same societies, producing very small overall gender differences. This interaction tends to conceal statistically significant and theoretically interesting gender differences in subjective well-being (p. 407). Dolan et al. (2008) find that having children is negatively associated with SWB in specific groups of the population that face greater efforts to raise them, including single mothers. Finally, Layman (1997) reports that political affiliation is correlated with religiosity, which is, in turn, associated with better health (Green and Elliott 2010). The common message in the aforementioned studies is that, in many cases, socioeconomic determinants associate with SWB not only directly but also through interactions among themselves. This poses significant challenges to correctly modelling such relationships statistically. To account for and test such complexity, I assert that when the data of the whole population are broken down (divided) into smaller sub-groups, such associations (and non-associations) with SWB continue to be valid only for some of the predictors (those for which the association is quite robust). For some other predictors, I hypothesise that the previously found associations change. Through this process, I can also identify sub-groups of the population whose socioeconomic characteristics have coefficients similar to those for the whole population and other sub-groups, and sub-groups where these coefficients change their statistical significance and sign when SWB is regressed against them. Put differently, I can reveal which sub-groups behave similarly to the whole population and which behave dissimilarly in relation to SWB.

Deciding between the OLS fixed effects and multilevel (mixed) regression models

To test these hypotheses, I initially compared two models in which data for the whole population are used: Model 2 (OLS fixed effects) and Model 3 (mixed effects). The aim was to choose the most suitable model specification and analysis method for the subsequent tests. Based on the AIC and BIC, Model 3 performed slightly better than Model 2. Nonetheless, when using a mixed (random) model, the number of categories in the grouping variable can potentially be problematic. According to Maas and Hox (2005), having fewer than 50 categories leads to biased estimates of the second-level standard errors. Similar results were reported more recently by Bryan and Jenkins (2013), who recommend avoiding hierarchical/random effects models if there are fewer than 25 higher-level groups. The data utilised here were collected from 16 countries over seven survey rounds, below both thresholds. In any event, the present study focuses on individual-level effects, which came out very similar regardless of the method. Hence, I continued the analysis using only the robust OLS fixed effects regression models.

Interactions of the socioeconomic characteristics in 41 models

Brambor et al. (2006, p. 64) discuss interactions in regression models. Most relevant for present purposes, they state that: Analysts should include interaction terms whenever they have conditional hypotheses. A conditional hypothesis is simply one in which a relationship between two or more variables depends on the value of one or more other variables.
Perhaps the simplest conditional hypothesis is: H1: An increase in X is associated with an increase in Y when condition Z is met, but not when condition Z is absent. Thus, I divided the data into smaller groups based on the values of the 13 individual-level predictors. For each new regression using only one sub-group's data, I essentially interacted a fixed value of the respective predictor with the rest, always in relation to SWB. For each new sub-group, I ran a fixed effects OLS regression with the same dependent variable and the same predictors as in the full Model 2. Since the total number of groups was large (41), the confidence level of all regressions was set to 99.9% (p = 0.001). This compensated for the increased probability of Type I error in repeatedly calculating coefficient estimates from the same sample. My reasons for dividing the sample into smaller groups, rather than running interactions with all the data, include simplicity, clarity of interpretation, and attempting to avoid potential methodological pitfalls encountered in similar prior studies. In fact, Boarini et al. (2012) apply the same breakdown methodology. However, their detailed analysis only includes a small number of sub-groups formulated from the socioeconomic characteristics of individual-level respondents (ibid., p. 26, Table 6; p. 28, Table 7). Finally, to re-check the robustness of both methods, I also ran the 41 sub-group models applying multilevel regressions, using the exact same specification as for Model 3 with the whole dataset. The coefficients of the individual predictors were very similar to those in the OLS fixed effects models in terms of sign, magnitude, and statistical significance. The 41 models in this study are listed in Tables 3, 4, and 5. For comparison, the coefficients of the full Model 2 are presented in the second column from the left.

How the 41 groups were created

For Age, the sample was divided into three groups: 21-30, 31-60, and 61-90 years old. This division contrasts people still studying or beginning their working careers with those who are working and those close to or in retirement. Four groups were created according to educational level: those with basic schooling (9 years or less), those who studied at high school (10-13 years), those with university-level education (14-18 years), and those who studied for more than 18 years. For each of the variables religious activity, political trust, legal trust, and political orientation (all originally measured on a 0-10 Likert scale), the sample was divided into three groups according to their responses: up to 4, from 5 to 7, and from 8 to 10. For the remaining categorical independent variables (gender, children at home, (daily) activity/work status, meeting with friends, marital status, self-reported health status, and coping with finances), sub-groups were created based on their own classification. Dummy variables of the 16 countries, the seven rounds, and their interaction were included in all models.

Interpretation

The results presented in Tables 3, 4, and 5 are interpreted by examining and comparing the behaviour of each predictor in the sub-group models when matched against its counterpart in the full Model 2. This enables the predictors robust to such sub-sampling to be identified, namely, those that retain the statistical significance and sign they have in the full Model 2 across all sub-group models.
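To make the breakdown procedure concrete, the following is a minimal sketch in Python of how the sub-group regressions could be run; the DataFrame `ess`, all column names, and the abbreviated predictor list are illustrative assumptions rather than the actual ESS variable names or the full Model 2 specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Full-sample specification: individual-level predictors plus
# country, round, and country-by-round fixed-effect dummies.
FORMULA = ("swb ~ age + gender + education_years + children_at_home"
           " + activity_status + religiosity + political_orientation"
           " + trust_parliament + trust_legal + meet_friends"
           " + marital_status + health_status + coping_finances"
           " + C(country) * C(round)")

def fit_subgroup(df: pd.DataFrame):
    """Fit the same specification on one sub-group's rows only."""
    return smf.ols(FORMULA, data=df).fit()

# Example: the three age sub-groups used in the paper.
age_bins = {"21-30": (21, 30), "31-60": (31, 60), "61-90": (61, 90)}
for label, (lo, hi) in age_bins.items():
    res = fit_subgroup(ess[ess["age"].between(lo, hi)])
    # Apply the stricter threshold (p = 0.001) used across the 41 models.
    robust = res.pvalues[res.pvalues < 0.001].index.tolist()
    print(label, robust)
```

Checking which predictors stay below the stricter threshold in every sub-group, against their behaviour in the full-sample fit, mirrors the robust-versus-weak distinction drawn in the next section.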
Initial comparison showed that the unemployed were consistently less happy than workers, women were always happier than men, and SWB increased with greater conservatism in political views. SWB was also found to increase the more often the respondent met with friends and relatives, and to be higher for those who were married or in a civil partnership. Finally, SWB increased with better self-reported health and greater ability to cope with finances. For the rest of the predictors, the results were mixed. The non-linear, statistically significant association between Age and SWB ceased among the older (Model 6), the retirees (Model 17), and the widowed (Model 36). In the full Model 2, those with children living at home were generally happier. However, the SWB difference disappeared in nine different sub-groups, namely, when the respondent: was older (Model 6), was female (Model 8), had little or extensive education (Models 11 and 14), was retired or taking care of others at home (Models 17 and 18), met friends and relatives infrequently (Model 31), was married or in a civil partnership (Model 34), or had been widowed (Model 36). Such results may be attributed to various factors. One is the burden of raising children. For example, additional children after the first-born do not increase mothers' SWB, while they increase fathers' SWB (Aassve et al. 2012, p. 82). The burden of having children at home could also be greater for elders, who may lack sufficient energy to take care of them, and for those already caring for others at home. The results for retirees and the widowed might also be indirectly related to age, since they are typically older. On the other hand, this contradicts the notion that having someone at home in old age combats loneliness, hence increasing SWB (Singh and Misra 2009). The result for those who meet friends infrequently is plausible: those less inclined to spend time with friends might also be reluctant to have children around. The results in the Education Level and Marital Status sub-groups are not easily interpretable. It could be that people with little education generally work in lower-paying professions or are unemployed, resulting in financial difficulties that cause them to regard taking care of children as an extra burden. For well-educated individuals, it could be that busy work schedules or extensive other activities prevent them from deriving additional happiness from the presence of children in their household. Finally, co-habiting with either a spouse or civil partner apparently cancels out the additional happiness of having children; it could be that the presence of a co-habiting partner is enough to reduce the SWB difference associated with children at home to statistically non-significant levels. Interesting results emerge on examining the other predictor variables, especially those treated as continuous in the models. Outlier groups seem to behave differently compared to those in the middle of the response distribution. In the full Model 2, Education Level had a small negative association with happiness. As mentioned earlier, this is the opposite of prior research findings. However, when dividing the sample into smaller sub-groups, the coefficients for Education Level changed sign and significance for the least-educated (Model 11) and most-educated (Model 14) respondents. That is, those with basic education do not change their SWB levels with an extra year of study. The same seems to be the case for those with at least 19 years of schooling.
For those in between (with 10-18 years of education), results were similar to those of the full model. This indicates that the relationship between education and SWB is non-linear. We can hypothesise why this is so. One reason might have to do with how well one copes with one's finances, which in turn usually correlates positively with SWB. Low-skilled workers generally earn low salaries, and in many disciplines those with higher degrees (e.g. a PhD) do not necessarily earn more at the margin than those with only a Master's-level education. Religiosity was positively related to SWB overall. On dividing the sample into three sub-groups, a non-linear association with SWB was found. Those reporting little religious activity (0-4) were less happy with a little extra religious activity (Model 19). The coefficient changed sign from negative to positive in the second sub-group (Model 20), and more than tripled in value among the very religious respondents (Model 21).

Table 3: Sub-group analysis of effects on subjective well-being of age, gender, education, children at home, and daily activity. Dependent variable: SWB. *p < 0.05; **p < 0.01; ***p < 0.001

Table 4: Sub-group analysis of effects on subjective well-being of religious activity, political orientation, trust in parliament, and trust in the legal system. Dependent variable: SWB. *p < 0.05; **p < 0.01; ***p < 0.001

Table 5: Sub-group analysis of effects on subjective well-being of social meetings, self-reported health status, and coping with finances

Why might this be? The obvious answer is that another variable (or variables) affects the three religiosity groups and their SWB differently. For example, non-religious people marry less often than religious ones, and we know that those who are married are happier, on average. On the other hand, simply being in a civil partnership is also strongly associated with higher SWB. Hence, all the evidence indicates contradictory and complex associations. A similar non-linear association was identified for the other three continuous predictors, namely, political orientation, trust in parliament, and trust in the legal system. While positive coefficients were found for the overall sample, groups reporting values from 0 to 4 have negative and statistically significant coefficients (Models 22 and 25) and one positive coefficient which is nonetheless statistically insignificant (Model 28). These relationships change and become strongly positive only in the 8-10 value groups (Models 24, 27, and 30, respectively). In attempting to explain these results, I would argue that political orientation is also associated with marital status and coping with finances. Left-wing voters are, on average, less wealthy than conservative ones and less religious, which in turn could mean that they are less inclined towards marriage. Trust in the parliament and in the legal system are, again, positively associated with political orientation and coping with finances, which, as noted earlier, correlate positively with SWB. All in all, some of these out-of-norm results could have been caused by interactions of higher order than two-way among the observable independent variables in the models.

Summary and discussion

In the present study, I analysed data from 16 countries and seven rounds of the ESS (2002-2014).
Since prior research has found significant associations between self-reported SWB and individual-level socioeconomic characteristics, this study's objective was to explore whether these associations persist in different sub-groups of the population. That is, I attempted to identify so-called 'robust' and 'weaker' determinants of SWB. The former group's statistically significant effect on SWB remains intact regardless of the sub-population examined, whereas the latter group's effect changes in several population sub-groups. The sample was divided into 41 sub-groups, based on separate value ranges for the continuous individual characteristics and on cohorts for the other, categorical predictors. The logic behind the approach is simple: a complex phenomenon becomes easier to examine and comprehend once you 'break' it into smaller, more manageable parts. By dividing the data into smaller groups, each of which was homogeneous in a specific characteristic, I was able to study the respective group's behaviour and compare it against the results of other sub-groups as well as those of the total sample. For example, if the results were similar to those obtained when analysing data from the whole sample, I concluded that the characteristic defining that specific sub-group did not modify the associations with SWB. Conversely, whatever differences were found could be attributed to that one characteristic with more confidence; in statistical jargon, this specific characteristic potentially interacted with the other observable factors examined. When running models for each group and applying the same specification as for the whole sample, I identified robust and weak socioeconomic characteristics with respect to their association with SWB. The robust characteristics comprise work (versus unemployment), female (versus male), political orientation, social interaction with friends, marital status, self-reported health status, and coping with finances. All continued to behave as in the full model and were positively associated with SWB. When it comes to the rest of the individual-level characteristics, however, the associations of age, children at home, education, religiosity, and trust with SWB disappeared in several sub-groups. Such results run counter to prior theoretical and empirical findings. Reflecting on why only some individual-level socioeconomic characteristics maintain robust associations with SWB in all 41 population sub-groups, it is apparent that the relation between SWB and some characteristics is not straightforward. As discussed earlier, there are obvious interactions among them. In some cases, such interactions affect how specific characteristics associate with SWB; in other cases, they play no role whatsoever. The nature of the paper is purely descriptive and exploratory. The goal at this stage was to identify characteristics and sub-groups in which previously identified associations with SWB no longer hold. Apart from arguing that the divergences are due to complex interactions, I do not attempt to explain this in a more conceptual way due to space limitations; this will be undertaken in future work. The study has a few limitations, which are linked to the type of data utilised. Although the sample size was sufficient to study sub-groups each based on one characteristic, it was not possible to combine several characteristics in sub-groups. Therefore, more complicated simultaneous interactions were not studied. The existence of such complexity is evident from careful examination of Tables 3, 4, and 5.
For example, the statistical significance of Education Level ceases in the models of sub-groups whose respondents report bad health (Model 41) or poor coping with finances (Model 44). This indicates that interactions likely exist among more than two determinants of SWB. In addition, some might argue that examining data from only 16 countries, and using the multivariate method of analysis with fixed-effects OLS regressions, restricts inferences from the results to the population at hand. Finally, the study uses cross-sectional time series data, since the individuals surveyed are not the same in each ESS round. Responses from the same individuals (a panel) might yield somewhat different results. For example, Ferrer-i-Carbonell and Frijters (2004) report that the income effect on life satisfaction falls by as much as one third when controlling for individual-level fixed effects, compared to other estimation methods. It is, thus, more informative and accurate to measure the SWB levels of the same respondents over time. Ferrer-i-Carbonell and Frijters (2004) also contend that the effect of unemployment on happiness is more accurately estimated by examining individuals' happiness changes when they lose their job, rather than by comparing the happiness reports of unemployed and employed individuals (cited in Ferrer-i-Carbonell 2013, p. 60). The value of panel data in such research is also emphasised in Dolan et al.'s (2008) comprehensive review of the economic literature. They assert that, without panel data, the direction of causality of certain determinants of SWB is sometimes unclear. To conclude, this paper's principal contribution is that some of its results contradict those of previous studies. It is evident that indirect associations and interdependencies exist among the examined socioeconomic characteristics with respect to SWB. In this respect, this research is exploratory in nature. More analysis is warranted in the future to further scrutinise and then conceptually explain the detailed interactions of SWB predictors. One possible approach is to use different datasets, apply the same methodology (that is, use the same predictors and dependent variable, and divide the data into similar sub-groups), and then compare the results. Similar kinds of comparisons have been conducted, for example, by Easterbrook et al. (2016), who analysed data from the British Social Attitudes Survey (BSAS), the British Household Panel Survey (BHPS), and the International Social Survey Programme (ISSP). I would also argue that such exploratory analysis is useful, especially for policy design and implementation. Empirical research on how socioeconomic characteristics are related to SWB has previously been considered in specific public interventions concerning unemployment and health. As Hirschauer et al. (2015, p. 671) discuss: The manner in which evidence from happiness research is to be used towards enlightening policy makers in their quest to find adequate policies, cannot be determined in general but depends largely on the respective policy field and problem under consideration. The identification of robust and weak determinants of SWB in particular sub-groups of the general population is, of course, not the only criterion on which such policies are designed. Nonetheless, when programmes concerning those most in need are implemented, such identification plays a complementary role and can provide valuable feedback for enhancing efficiency and effectiveness.
9,473.8
2019-07-31T00:00:00.000
[ "Sociology", "Economics" ]
Beneficial Effects of Paeoniflorin Enriched Extract on Blood Pressure Variability and Target Organ Damage in Spontaneously Hypertensive Rats

Blood pressure variability (BPV) is associated with the development and progression of severe target organ damage (TOD). This study aims to evaluate the protective effect of paeoniflorin enriched extract from Radix Paeoniae Alba (PG) on BPV and TOD in spontaneously hypertensive rats (SHR). All SHR were orally treated with distilled water, metoprolol (MP, 20 mg/kg), or PG (PG-H, 90 mg/kg, or PG-L, 30 mg/kg), either once or daily for 7 weeks. The 24-hour dynamic blood pressure was monitored, and BPV was then calculated, including long- and short-term systolic blood pressure variability (SBPV), diastolic blood pressure variability (DBPV), mean blood pressure variability (MBPV), and heart rate variability (HRV), as well as the 24-hour-SBP, 24-hour-DBP, and 24-hour-MBP. The protective effects of PG on TOD were observed by histopathologic and biochemical detection. The results indicated that long- and short-term SBPV, DBPV, MBPV, and HRV, as well as 24-hour-SBP, 24-hour-DBP, and 24-hour-MBP, showed no significant changes after single-dose administration of PG and significantly decreased after administration of PG for 7 weeks. PG could also markedly improve the damage to the aorta, heart, kidney, and brain. This study suggested that PG could notably reduce BPV, stabilize blood pressure, and mitigate TOD in SHR.

Introduction

Hypertension is one of the most prevalent risk factors for cardiovascular disease. Sustained systolic blood pressure (BP) ≥140 mmHg or diastolic pressure ≥90 mmHg is defined as hypertension [1]. According to epidemiological research, the prevalence of hypertension is persistent and increasing. The global population suffering from hypertension is predicted to reach 1.56 billion by 2025 [2]. About 25.2% of people over 18 years old were diagnosed with hypertension in 2012 in China, and almost 35.5% in Beijing [3]. The problem lies not only in the large number of cases but also in the severe accompanying symptoms of target organ damage (TOD). Fortunately, studies have been conducted on blood pressure variability (BPV) as an important risk factor inducing TOD [4,5]. BPV is associated with the development and progression of severe target organ lesions of the brain, heart, kidney, and vessels, together with an increased risk of cardiovascular disease and increased mortality. Thus, the treatment of hypertension should aim not only at decreasing BP but also at reducing BPV [6]. In recent decades, drug treatment of hypertension has focused on controlling BPV. Visit-to-visit research has found that the antihypertensive drug combinations of calcium channel blockers (CCB)/diuretics could reduce variability in systolic blood pressure (SBP), and angiotensin-receptor blockers (ARB)/CCB could reduce long-term variability [7]. Radix Paeoniae Alba (RPA), with paeoniflorin as its principal bioactive component, is a traditional Chinese medicine that lowers blood pressure and exhibits anti-inflammatory and antioxidative properties, among others [8]. An increasing number of studies have found that paeoniflorin or extract of RPA exerts a positive effect on cardiovascular function [9][10][11]. The literature reports that the methanol extract of RPA and paeoniflorin could activate the nitric oxide synthase (NOS) pathway, thereby increasing nitric oxide (NO) and NOS levels, as well as relaxing blood vessels in vitro [12].
Paeoniflorin and glucosides in RPA can reduce pressure-induced myocardial vascular remodeling and even myocardial remodeling [13]. In addition, paeoniflorin could reverse guanethidine-induced hypotension in Wistar rats, revealing a bidirectional regulation of BP [14]. In our previous study, we evaluated the potential of paeoniflorin enriched extract from Radix Paeoniae Alba (PG) as an antihypertensive agent in spontaneously hypertensive rats (SHR) [15,16]. We found that PG could decrease blood pressure (SBP, DBP, and MBP) in hypertensive rats, and the mechanism of action was related to its liver-protective activity and improvement of endothelial function by reducing ET-1 and increasing NO concentrations. Our survey of the literature confirmed that no studies of PG had been conducted to explore BPV and target organ damage in SHR. In our study, we further explored the effect of PG on long- and short-term variability by administering PG once or over a long duration to verify the positive effect on BPV. As BPV could cause severe TOD, histopathologic observation of the brain, kidney, heart, and thoracic aorta was also performed.

Animal and Materials

2.1. Animal. Male spontaneously hypertensive rats (SHR) (forty-eight weeks old) were all purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd., and the license number of the experimental animals was SCXK (Jing) 2012-0001. All procedures were performed according to protocols following the guidelines for the Use and Care of Laboratory Animals published by Zhejiang province (2009).

HPLC Analysis of the Purity of Paeoniflorin. Paeoniflorin enriched extract from Radix Paeoniae Alba (PG) was analyzed with HPLC-DAD. Briefly, the PG sample was diluted with methanol and filtered through a 0.22 μm membrane filter before injection into the system. An Agilent HPLC 1200 (Agilent Technologies Inc., Palo Alto, CA, USA) was used to determine the content of paeoniflorin in the extract on a C18 column (250 mm × 4.5 mm). The mobile phase was composed of acetic acid and phosphoric acid solution (19:81, v/v), and the solvent flow rate was 1 ml/min at a column temperature of 25 °C. The injection volume was 5 μl. The photodiode array detector was set at 230 nm with a total runtime of 20 min. The HPLC chromatogram of the extract is shown in Figure 1(a).

Implantable Telemetry Technology to Monitor BPV in SHR. After one week of postoperative recovery, three SHR with successful surgeries were randomly selected for the experiment based on 24-hour dynamic blood pressure (data not shown). Rats were sequentially given water (model group, MG), metoprolol (20 mg/kg, MP), and PG (90 mg/kg, PG-H; 30 mg/kg, PG-L) every 24 hours according to body weight. After administration, implantable telemetry (Data Sciences International, DSI) immediately collected 24-hour blood pressure (BP), including systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial blood pressure (MBP). Long-term BPV (SBPV, MBPV, and DBPV) was defined as the standard deviation of the 48 mean values obtained by dividing the 24-hour BP recording into 48 sections of 30 min each, and short-term BPV (SBPV, MBPV, and DBPV) was defined as the average of the 48 within-section standard deviations. Long- and short-term heart rate variability (HRV) were calculated in the same way as long- and short-term BPV, respectively.
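For concreteness, the following is a minimal numerical sketch of the long- and short-term variability definitions above, under the conventional reading that short-term variability is the average of the 48 within-section standard deviations; the synthetic trace and the one-sample-per-second rate are illustrative assumptions, not the telemetry system's actual output format.

```python
import numpy as np

def bpv(trace: np.ndarray, n_sections: int = 48) -> tuple:
    """Return (long_term, short_term) variability for one 24-h recording.

    long_term  = SD of the 48 half-hour section means
    short_term = mean of the 48 within-section SDs
    """
    sections = np.array_split(trace, n_sections)
    means = np.array([s.mean() for s in sections])
    sds = np.array([s.std(ddof=1) for s in sections])
    return means.std(ddof=1), sds.mean()

# Synthetic SHR-like SBP trace: one sample per second for 24 hours.
rng = np.random.default_rng(0)
sbp = 180 + 10 * rng.standard_normal(86_400)
long_term_sbpv, short_term_sbpv = bpv(sbp)
```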
The operation was carried out following the procedures presented in the literature [17,18]. First, rats were anaesthetized with 3% pentobarbital sodium; then the abdominal cavity was exposed and the abdominal aorta was separated. The second step was to implant the C50-PXT device intraperitoneally. The last step was to suture the wound and connect the signal to the Dataquest software on a computer to monitor BP for 24 hours. During the surgery and recovery, rats were injected intraperitoneally every day with 0.5 ml penicillin (160,000 U/ml) to prevent infection, and rats could take food and water freely. An illustration of the intraperitoneal implant site and the signal-transmitting device is presented in Figure 2.

Conscious and Freely Moving Animals Dynamic Blood Pressure Analysis System to Monitor BPV in SHR. Rats were randomly assigned to four groups of eight rats each, as follows: model group (MG), metoprolol positive group (20 mg/kg, MP), PG high-dose group (90 mg/kg, PG-H), and PG low-dose group (30 mg/kg, PG-L), based on SBP (data not shown). The model group was given water, while the others were given the corresponding drugs according to body weight daily for seven weeks. After postoperative recovery for 24 hours, the conscious and freely moving animals dynamic blood pressure analysis system (Shanghai Alcott Biotech Co. Ltd., China) recorded 24-hour BP, including SBP, DBP, and MBP. The computing method for long- and short-term SBPV, MBPV, DBPV, and HRV was the same as in the former part. The operation was carried out following the procedure presented in the literature [19]. First, rats were anaesthetized with 3% pentobarbital sodium; then the femoral artery was separated and a cannula was inserted to a length of body weight × 1% + 1 cm, about 5 cm. Second, a subcutaneous needle fixed the catheter. Finally, the conscious and freely moving animals dynamic blood pressure analysis system was connected and used to monitor BP for 24 hours. During the surgery and recovery, rats were injected intraperitoneally every day with 0.5 ml penicillin (160,000 U/ml) to prevent infection, and rats could take food and water freely. These procedures are displayed in Figure 3.

Renal Function Detected with Biochemical Indexes. After administration for six weeks, SHR were fasted overnight; then 1.25 ml blood was obtained from the ophthalmic venous plexus. Blood was kept in a water bath for 2 hours at 37 °C and centrifuged at 3500 rpm for 10 min. The serum was separated to detect the biochemical indexes BUN, Cr, and UA with a TBA-40FR automatic clinical chemistry analyzer (Toshiba, Japan).

Histopathology and Immunohistochemistry Observation of Heart, Brain, Kidney, and Aorta. Meanwhile, the aorta, heart, kidney, and brain were put into 4% formalin for fixation. Then, the organs were cut into suitable slices, washed, dehydrated, and embedded into tissue wax blocks (MEIKO EC360 Tissue Embedder, Germany). All the specimens were cut into 4 μm sections (Leica RM2245 microtome, Germany) and stained with hematoxylin and eosin (H&E). In addition, the specimens of aorta, heart, and kidney were also used for Masson's trichrome staining. For immunohistochemistry (IHC), a Mouse and Rabbit Specific HRP/DAB (ABC) Detection IHC kit was used to develop the reactions of eNOS in the aorta, as well as COX-2 in the heart and kidney, and tissues were counterstained with hematoxylin. Histopathological observation was performed with a B5-223IEP biological microscope (Germany).

Statistical Analysis. All data were expressed as the mean ± standard deviation and subjected to t-tests. When compared with the model group before treatment, the data were subjected to paired-sample t-tests.
When compared with the same period of the model group, the data were subjected to independent-sample t-tests. A p value of <0.05 was considered statistically significant. All analyses were performed using SPSS 15.0 software.

Paeoniflorin Enriched Extract Has a Slight Effect on 24-Hour Total Blood Pressure of SBP after Single Administration. First, we evaluated the effect of a single dose of metoprolol or paeoniflorin enriched extract (PG) on 24-hour dynamic BP in SHR. As a result, metoprolol 20 mg/kg could decrease SBP, DBP, and MBP at 0.5, 1, and 2 h compared with the model group before treatment (p < 0.05 and 0.01), and PG 90 mg/kg exhibited a significant influence on SBP at 0.5 h (p < 0.05) (Figures 4(a)-4(c)). In contrast, PG 30 mg/kg showed no statistical difference in SBP, DBP, or MBP at any time. Meanwhile, the data for 24-hour total SBP, DBP, and MBP showed that a single administration of PG or metoprolol had no significant effect on those parameters (Figures 4(d)-4(f)).

Paeoniflorin Enriched Extract Does Not Aggravate Long- and Short-Term Blood Pressure Variability of SBPV, DBPV, MBPV, and HRV after Single Administration. After a single dose, compared with the model group before treatment, metoprolol 20 mg/kg notably increased SBPV, DBPV, and MBPV (p < 0.05), while PG had no visible influence on long- and short-term SBPV, DBPV, MBPV, and HRV (Figure 5).

Paeoniflorin Enriched Extract Ameliorates 24-Hour Total Blood Pressure of SBP, DBP, and MBP after Administration for Seven Weeks. As the former experiment showed that single administration of PG had only a slight effect on blood pressure, the effect of long-term administration was examined further. The results of 24-hour dynamic BP suggested that both PG and metoprolol significantly improved BP at the monitored time points, while PG (30 and 90 mg/kg) clearly provided high BP stability (Figures 6(a)-6(c)). The data for 24-hour total SBP, DBP, and MBP showed that PG 90 mg/kg could significantly lower those parameters (p < 0.05, 0.01), and PG 30 mg/kg could also remarkably decrease 24-hour total SBP and MBP (p < 0.05), compared with the model group (Figures 6(d)-6(f)).

Protective Effect on Aorta Pathological Changes after Administration with Paeoniflorin Enriched Extract for Seven Weeks. Masson's trichrome staining was used to estimate overall collagen deposition in the aorta, as indicated by the density of blue staining (Figures 8(a) and 8(b)). Collagen deposition significantly increased in the model group, while PG mitigated it in a dose-dependent manner. In addition, we also examined pathological changes of the aorta with H&E staining; the model group manifested aortic endothelial shedding, increased media thickness, and vascular smooth muscle cell (VSMC) hypertrophy (Figure 8(c)). To further confirm the aortic endothelial lesions, we examined the expression of eNOS in the aortic endothelium with IHC, which revealed that eNOS expression in the aortic endothelium decreased in the model group (Figure 8(d)). PG could reverse those lesions. Together, these data provided favorable evidence that PG could alleviate the hypertension-induced histopathological injury of the aorta.

Protective Effect on Heart Pathological Changes after Administration with Paeoniflorin Enriched Extract for Seven Weeks.
As shown by the heart H&E and Masson's trichrome staining, in the model group the cardiac muscle cells were wider and the nuclei were larger, occasionally accompanied by inflammatory cell infiltration, and there was extensive collagen deposition (Figures 9(a) and 9(b)). To further demonstrate inflammatory lesions, we examined the expression of COX-2 in the heart with IHC. The results showed that COX-2 was highly expressed in the model group (Figure 9(c)). In contrast, PG could markedly alleviate those symptoms, which hinted that PG protected SHR against hypertension-induced cardiac injury.

Protective Effect on Kidney Pathological Changes and Function after Administration with Paeoniflorin Enriched Extract. In this part, we defined the impact of PG on kidney injury by biochemical analysis of the serum BUN, UA, and Cr levels in SHR. Compared with the model group, PG 90 mg/kg and 30 mg/kg had a significant effect on decreasing the level of UA (p < 0.01) (Figure 10(e)). However, PG had no significant effect on serum BUN and Cr. Then, we performed histological analysis with H&E and Masson's trichrome staining of renal sections. H&E staining revealed the presence of glomerular wall thickening (Figure 10(a)) and luminal stenosis in the arterioles (Figure 10(b)). Moreover, obvious collagen deposition in the glomerular wall, but no intertubular fibrosis, was noted in the model group by Masson's trichrome staining (Figure 10(c)). The expression of COX-2 in the kidney was similar to that in the heart, being highly expressed in the model group (Figure 10(d)). Of note, PG significantly attenuated these pathological changes. Collectively, our data provided convincing evidence that PG protected SHR against hypertension-induced kidney injury.

Protective Effect on Brain Pathological Changes after Administration with Paeoniflorin Enriched Extract for Seven Weeks. In this section, we examined the protective effect of PG on brain injury with H&E staining. In the model control group, the cortical cells were arranged in disorder and reduced in number (Figure 11(a)). The cortical vascular endothelial cells were swollen (Figure 11(b)). Meanwhile, in the hippocampal CA1 area, the pyramidal cell layer was thinner, sparser, and disordered, and the neurons were obviously degenerated and necrotic (Figure 11(c)). Compared with the model control group, PG (90 mg/kg) improved the cortical cell lesions to varying degrees, alleviated the swelling of cortical vascular endothelial cells, and attenuated the lesions of the pyramidal cell layer in the hippocampal CA1 area. Although we only used H&E staining to observe brain lesions, severe brain injury could be clearly observed in the model control group, and PG had a significant protective effect on the brain injury in SHR.

Discussion

Hypertension is a disease characterized by high arterial pressure. Sustained high blood pressure leads to cerebral embolism, cardiac failure, renal failure, and other complications. TOD is caused not only by hypertension but also by BPV, independently of the mean systolic blood pressure [4,20]. Therefore, the treatment of hypertension should focus not only on the effective control of blood pressure but also on the protection of target organs to reduce complications [6]. Many researchers have confirmed that BPV could cause TOD independently, even within the normal blood pressure range [21,22]. There is emerging evidence that BPV is an independent predictor of hypertensive TOD and cardiovascular events [23]. An illustration of TOD is presented in Figure 12.
In the early stage of our research, PG exhibited a definite antihypertensive effect on SHR through liver-protective activity and improvement of endothelial function by regulating serum NO and endothelin (ET) levels. However, the effect of PG on blood pressure fluctuation had not been evaluated. On the basis of the previous research, single-dose and long-term administration of PG were conducted to investigate its effect on BPV in SHR in this study. The experimental results showed that single-dose administration of PG, unlike metoprolol, initially reduced blood pressure in rats without aggravating long- and short-term BPV. Long-term administration of PG could not only significantly reduce the 24 h blood pressure but also decrease BPV (SBPV, DBPV, and MBPV). By contrast, metoprolol significantly reduced the 24 h blood pressure; however, the trend fluctuated, showing no significant effect on SBPV, DBPV, and MBPV. This effect might be the advantage of using PG as an antihypertensive. The effect of β-blockers on blood pressure fluctuation remains inconclusive. β-blockers might enhance BPV [20], which may be attributed to nonselective β-blockers; highly selective β-blockers exert no such effect [24]. Vascular cell proliferation, apoptosis, inflammation, fibrosis, and other complex processes change the vascular structure in patients with BPV [25]. The SHR is a stable model for examining the development and complications of hypertension. Increased collagen deposition, endothelial cell abnormalities, and abnormal proliferation of aortic wall cells of the media were observed in 42-week-old SHR [26]. The SHR exhibited a significant reduction of endothelial nitric oxide synthase (eNOS) protein expression in the aortic endothelium [27]. An earlier study suggested that paeoniflorin could promote blood vessel wall function by releasing the relaxing factor NO in isolated thoracic aorta rings of SD rats [12]. Likewise, our previous research proved that PG could upregulate serum NO in SHR [15]. Endothelial NOS is the main limiting factor of NO generation; NO forcefully causes vasodilation and inhibits the proliferation of vascular smooth muscle cells [28]. The findings in the current study suggested that PG could improve endothelial shedding, relieve hypertrophy of smooth muscle cells, improve collagen fiber hyperplasia, and increase eNOS expression in the aorta, which hinted that PG could alleviate the hypertension-induced histopathological injury of the aorta. Enhanced BPV could trigger pathological cardiac hypertrophy via mechanical stress fluctuations in the cardiomyocytes [29]. That is to say, increased BPV may be responsible for the pathogenesis of the hypertrophic cardiac response [30]. Clinical testing showed that an increase in BPV over a 24-hour evaluation period with ambulatory blood pressure monitoring was linked to a higher degree of hypertensive cardiovascular complications [31,32]. Studies indicated that BPV was highly correlated with cardiovascular complications, and that short-term variability could predict early left ventricular systolic dysfunction [33,34]. BPV also changes the myocardial structure [35]. SHR aged 56 weeks developed end-stage hypertensive heart disease, cardiomyocyte enlargement, and fibrosis [36], which were also found in SHR aged 20 weeks [37].
Moreover, the expression of COX-2 in cardiomyocytes was significantly correlated with their size [38], and COX-2 might be one of the important indicators of systemic inflammation [39,40]. Paeoniflorin has been reported to prevent upregulation of the proinflammatory mediator COX-2 in ischemia-induced brain damage and in rats with rheumatoid arthritis [41,42]. The results obtained in this study revealed that PG could improve myocardial inflammation, ameliorate collagen deposition, and decrease COX-2 expression in the heart. The kidneys are target organs that are not only prone to hypertension-induced injury but also involved in exacerbating the development of hypertension. An increase in short-term BPV may be positively correlated with impaired renal function, as determined by microalbuminuria or glomerular filtration rate [27,28]. Glomerular, tubulointerstitial, and renal vascular lesions were significantly increased in 12-week-old SHR [43,44]. Renal COX-2 expression was also increased in hypertensive mice [43]. In this study, histopathologic observation of renal tissues indicated glomerular arterial stenosis, thickening of the glomerular capsule wall, collagen deposition, and increased COX-2 expression in the SHR model. PG could improve the aforementioned symptoms, suggesting that PG exhibited a renoprotective effect in SHR. BPV could also lead to brain damage, including cerebral vascular lesions and histomorphological changes in the brain [45]. Cerebrovascular lesions are mainly manifested as hypertrophic and remodeling lesions. Histomorphological changes mainly occur in the frontal lobe, occipital lobe, and hippocampus. Previous studies have indicated that 22-week-old SHR exhibit characteristic behavioral and neuropathological changes [46]. Paeoniflorin could attenuate brain damage in rats and mice via inflammatory signaling pathways [47,48]. In this study, PG could variably increase the number of cortical cells, alleviate the swelling of cortical vascular endothelial cells, and attenuate the lesions of the pyramidal cell layer in the hippocampal CA1 area. The animal models currently used to study hypertension are mainly hereditary hypertensive animal models (spontaneously hypertensive rats (SHR) [49,50], stroke-prone spontaneously hypertensive rats [51], Dahl salt-sensitive hypertensive rats [52], etc.), renovascular hypertensive animal models (2-kidney-1-clip, 2K1C [53], and 2-kidney-2-clip, 2K2C [54], etc.), drug-induced hypertensive animal models (angiotensin-induced hypertension [55], L-nitroarginine methyl ester induced hypertension [56], etc.), metabolic hypertensive animal models (excessive alcohol intake and high-fat diet induced hypertensive rats [16], high-purine diet induced hypertensive rats [57], high-glucose/fat diet induced hypertensive rats [58], etc.), and so on. SHR, with a spontaneous hypertension rate of 100%, were bred from Wistar rats by Okamoto and Aoki in 1963 [59] and are internationally recognized as the model most comparable in characteristics to human essential hypertension (EH). With the development of the disease, SHR present heart [37], brain [46], kidney [43,44], blood vessel [26], and other types of target organ damage. The classic BPV animal model, produced by surgical sinoaortic denervation (SAD), was successfully created by Krieger in 1964 [60], but it has the limitations of a high mortality rate and a purely neurogenic origin [61]. However, the BPV of SHR is positively correlated with its age.
The BPV in 40-week-old SHR is higher than that in 16-week-old SHR [62], and the BPV in 7-month-old and 5-month-old SHR is higher than that in 3-month-old SHR [63]. Therefore, we selected 48-week-old SHR to evaluate the protective effect of PG on BPV and TOD in the present research. Continuous 24-hour blood pressure monitoring is the basic prerequisite for evaluating BPV. The main methods are the noninvasive telemetry system [64], implantable telemetry technology [17,18], and the conscious and freely moving animals dynamic blood pressure analysis system [19]. The noninvasive telemetry system is mainly used for monitoring the 24-hour blood pressure of humans and large animals such as dogs and monkeys, with a vest and no surgery. Implantable telemetry technology, with the signal transmitter and the pressure signal device embedded in the abdominal cavity, can collect data from conscious and freely moving animals over the long term once the animals have returned to normal after surgery. Using implantable telemetry technology could reduce the pain and stress of animals and reduce the number of animals needed by improving data accuracy and enabling self-controlled comparisons [65]. The conscious and freely moving animals dynamic blood pressure analysis system, which requires arterial catheterization and continuous heparinization to ensure signal transmission, produces a certain degree of pain and stress and cannot collect data over the long term [63]. Therefore, to evaluate the protective effect of PG on BPV, implantable telemetry (Figure 2) was used to monitor the 24-hour BPV in 3 SHR in a self-controlled design, and the conscious and freely moving animals dynamic blood pressure analysis system (Figure 3) was used to monitor the 24-hour BPV in 32 SHR. In conclusion, abnormal BPV and TOD of the heart, brain, kidney, and aorta were observed in the SHR of this study, which is consistent with other researchers' reports. Paeoniflorin enriched extract (PG) could reduce BPV, stabilize blood pressure, and reverse eNOS or COX-2 expression to mitigate target organ damage (TOD) in SHR. These findings provide convincing evidence that PG, with its protective effect on BPV and TOD in SHR, could be used to treat hypertension. However, the mechanisms by which increased BPV results in TOD were not elucidated in the present study; they may be related to chronic inflammation [66] and microcirculation [67].
5,600
2017-01-24T00:00:00.000
[ "Biology", "Medicine" ]
Effects of swimming in cold water on lipolysis indicators via fibroblast growth factor-21 in male Wistar rats

This study aimed to investigate the effects of swimming in cold water on the release of FGF21 from various tissues and its impact on fat metabolism. Twenty Wistar rats were randomly divided into three groups: untrained (C), trained in thermo-neutral water (TN, 30 °C), and trained in cold water (TC, 15 °C). The training groups swam in intervals (2-3 min) until exhaustion, with 1 min of rest, three days a week for six weeks, carrying a load of 3-6% of body weight. The mRNA expression of the variables was determined in white fat tissue (WAT), and FGF21 protein was also measured in the liver, brown fat tissue (BAT), serum, and muscle. The experimental protocols resulted in lower body weight gain, associated with reduced WAT volume; the most remarkable improvement was observed in the TC group. Swimming significantly increased FGF21 protein levels in WAT, BAT, and muscle tissues compared to the C group; the most substantial increases were in the TC group. Changes in FGF21 were highly correlated with the activation of genes involved in fat metabolism, such as CPT1, CD36, and HSL, and with glycerol in WAT. The findings indicate a positive correlation between swimming in cold water and the activation of genes involved in fat metabolism, possibly through FGF21 production, which was highly correlated with fat-burning genes.

Introduction

Extreme environments disrupt the body's homeostasis, leading to increased sympathetic nervous system activation [1]. Exposure to a cold environment stimulates the release of cortisol and norepinephrine (NE) [2], increasing the basal metabolic rate and enhancing the mobilization and oxidation of glucose and free fatty acids [3]. Repeated exposure to a cold environment may result in adaptive changes helping organisms resist stress-induced damage [4]. Cold-stimulated cytokines such as fibroblast growth factor 21 (FGF21) may mediate these changes [5]. A 12-h exposure to a mildly cold environment (19 °C), compared to a moderate ambient environment (24 °C), led to increased plasma FGF21 levels in healthy adults. FGF21 was correlated with increased energy expenditure and lipolysis [6]. Therefore, the degree of cold exposure would determine the magnitude of responses and acclimatization [7].
FGF21 is a polypeptide involved in energy balance, glucose uptake, and lipid metabolism [8]. The predominant source of serum FGF21 is the liver, but it is also released from white adipose tissue (WAT), brown adipose tissue (BAT) [9], and skeletal muscles [10]; however, it has been reported that the predominant source of serum FGF21 in a cold environment is BAT [11]. FGF21 functions in an autocrine, paracrine, and endocrine manner by binding to its co-receptor β-Klotho (KLB) [12], causing tissue cross-talk. It enhances glucose uptake by inducing glucose transporter 1 (GLUT1) expression via ERK1/2 activation in adipocytes [13,14] and myocytes [15]. Thus, it is involved in insulin-independent lowering of blood glucose [15], improving the lipid profile (increasing HDL and decreasing LDL) [16], and increasing adiponectin and bone formation markers [17]; FGF21 may therefore be effective for the treatment of metabolic disorders such as obesity and diabetes. In addition, it has been shown that FGF21 stimulates the expression of genes involved in lipolysis by increasing hormone-sensitive lipase (HSL) [18]. In this regard, increased lipolysis and lipogenesis in the Siberian hamster in response to treatment with FGF21 have been reported [14]. On the other hand, it has been reported that short-term FGF21 treatment resulted in a marked increase in AMP-activated protein kinase (AMPK) and peroxisome proliferator-activated receptor (PPAR) δ/γ signaling, which in turn stimulates the expression of carnitine palmitoyltransferase I (CPT1) and fatty acid translocase (CD36) [19] in BAT, hence increasing fat oxidation. However, less is known about the metabolic roles of FGF21 in cold exposure.

Acute and chronic exercise also affect the release of FGF21 from various tissues [12,20]. Research has shown that acute endurance exercise increases serum FGF21 levels in mice and healthy men [10,20]; in contrast, it was reported that the FGF21 protein content of the systemic circulation and skeletal muscle was unchanged in response to eccentric exercise [21]. Findings for regular exercise were contradictory and depended on the type of exercise, the duration of the intervention, and the exercise intensity. In this regard, an animal study showed that eight weeks of moderate-intensity training was more effective than high-intensity training at enhancing FGF21 and β-Klotho (KLB) expression in the liver, BAT, and muscle at both the mRNA and protein levels [12]. In contrast, some studies reported that endurance training led to no change or decreased serum and muscle FGF21 levels [22,23]. In rodents, it was reported that a period of moderate-intensity treadmill running did not significantly change serum FGF21 levels [24,25]. However, FGF receptor and co-receptor KLB expression were upregulated by exercise training in WAT and BAT [24,25] in obese mice. Therefore, there are conflicting findings regarding the effect of exercise training on FGF21 release.
Swimming in cold water is a stressful physiological condition that could exacerbate the body's response to exercise [26]; repeated cold-water swimming may result in beneficial adaptive changes in organisms [26]. Increased metabolism and a thermogenic effect characterize exercise in a cold environment [26]. However, the potential effects of long-term swimming in cold water on physiological adaptations have not been studied, and most analyses used mildly cold water [27]. It was reported that a 5-week period of swimming in 24 °C water led to increased expression of mitochondrial biogenesis-related genes in the soleus muscle and inguinal WAT of mice [28]. In addition, da Silva et al. (2020) showed that eight weeks of swimming in mildly cold water (20 °C) does not exacerbate the independent effects of mild cold exposure and swimming on browning-related markers in WAT and BAT in mice [27].

Therefore, it might be assumed that repeated cold-water swimming may amplify the potential effects of cold exposure and swimming on FGF21-stimulated metabolic indices. To verify this hypothesis, we examined the effects of swimming in cold water on FGF21 from secretory tissues (BAT, WAT, the liver, and active muscle) and on FGF21-stimulated fat metabolism factors such as AMPK, CPT1, CD36, and HSL in white fat tissue, to determine whether there is a correlation between the tissues that primarily secrete FGF21 (the liver, muscle) and WAT. Such a strategy might be used to manage overweight.

Animals

Twenty male Wistar rats were housed in conventional cages and kept on a 12-h light/dark cycle under temperature-controlled conditions. After two weeks of orientation, they were randomly divided into three groups: untrained (C, n = 6), trained in thermo-neutral water (TN, n = 7), and trained in cold water (TC, n = 7). The C group, as the control group, was sedentary during the intervention. The ambient temperature of the laboratory was 25 ± 2 °C. The TN group was kept at normal room temperature (25 ± 2 °C) and swam in water at 30 ± 2 °C, three days per week. The TC group was also kept at normal room temperature and swam in cold water three days per week. Using a few pieces of ice, the water was cooled to 15 ± 1 °C for the TC group. A waterproof digital thermometer was embedded in the water to display the target temperature. Animals were allowed ad libitum access to water and standard commercial chow. The same amount of chow (50 g per rat) was put into the cages every time. Body weights were measured weekly with a digital scale.

Incremental test

At the end of the protocol, an incremental swimming test was adapted from Almeida et al.'s study [29]. The test consisted of 3-min swimming intervals with increasing loads, separated by 1-min rests. Swimming started with external loads of 1, 2, and 3% of the rat's body weight, fastened to the rat's tail, for the first three stages, respectively; the load then increased by 0.5% of body weight per stage until animal exhaustion. Exhaustion was determined by the frequent submergence of the rats. The percentage of body weight (%BW) that the rats could bear while swimming was used for statistical analysis.
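The load progression of the incremental test can be written out as a short sketch; the function name and the stopping point in the example are illustrative, and treating the last completed stage as the tolerated %BW is our reading of the protocol.

```python
def incremental_loads(body_weight_g: float, max_stages: int = 20):
    """Yield (%BW, grams) for each 3-min stage of the incremental test."""
    percents = [1.0, 2.0, 3.0]               # loads for the first three stages
    while len(percents) < max_stages:
        percents.append(percents[-1] + 0.5)  # +0.5 %BW per subsequent stage
    for pct in percents:
        yield pct, body_weight_g * pct / 100.0

# Example: a 300 g rat exhausted during the 5.5 %BW stage would be scored
# at 5.0 %BW, the last stage it completed.
for pct, grams in incremental_loads(300.0):
    print(f"{pct:.1f} %BW -> {grams:.0f} g load")
    if pct >= 5.5:
        break
```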
Exercise protocol

The swimming protocol was conducted in a glass aquarium with a length of 100 cm, a width of 50 cm, and a water depth of 50 cm. The training protocol consisted of 2-min swimming intervals until exhaustion, separated by 1 min of rest. The initial load was 3% of the rat's body weight, and it increased by 1% if they could swim ten successful repetitions. In addition, if they reached ten repetitions at 6% of their body weight, the work interval durations were increased to 3 min. They trained three times per week for six weeks [30]. The TN group could only swim ten repetitions with 3-6% of their body weight and swam 3-min intervals in the last two weeks. The TC group swam with only 3% of their body weight, and the number of intervals increased from four to eight.

Blood and tissue sampling

The animals were anesthetized 48 h after the last session with ketamine (100 mg/kg) and xylazine (5 mg/kg). The rats were deprived of food 8 h before they were sacrificed, but they were allowed ad libitum access to water. Then, blood was immediately withdrawn intracardially. Blood samples were centrifuged for 10 min at 4000 rpm, and serum was collected and stored at −20 °C. Subcutaneous white fat, interscapular brown fat, the liver, and the soleus muscle were excised; a slice of the fat tissues was fixed in 10% formaldehyde, processed, and embedded in paraffin. A portion was frozen in liquid nitrogen and stored at −80 °C.

Histological analysis

The white fat tissue was excised and fixed in 4% paraformaldehyde. The tissue samples were then dehydrated in graded concentrations of alcohol, cleared with xylene solvent, and embedded in paraffin. Paraffin blocks were sectioned (5-10 μm) with a microtome (Leica, Germany), mounted on slides, deparaffinized, and stained with hematoxylin and eosin according to the instructions. The stained tissue samples were visualized under a light microscope (Nikon, Tokyo, Japan), and the resulting images were analyzed using ImageJ software. The average diameter of fat cells was measured using a graduated lens and suitable measurement software, such as DinoCapture or ImageJ (Fiji). To calculate the histomorphometric parameters, 6 images (n = 6) were used to obtain the results for each group. Also, the number of adipocytes was counted in a 1 mm² area of each tissue section.

Gene expression

Total RNA was extracted from 100 mg of WAT with Trizol solution according to the instructions (Invitrogen). RNA purity and quantity were confirmed by spectrophotometry using a NanoDrop ND-1000 (VWR, Radnor, PA, USA). A Qiagen cDNA synthesis kit (cat: K1622) was used for cDNA synthesis. qRT-PCR using SYBR Green dye (Amplicon, 4309155) was performed to determine the relative mRNA expression of GLUT1, HSL, AMPK, CPT1, CD36, and KLB. The thermal cycling program was as follows: 95 °C for 15 min, followed by 40 cycles of 95 °C for 0.5 min, 60 °C for 1 min, and 72 °C for 0.5 min. GAPDH mRNA was used as the normalizing gene. The sequences of the PCR primers are presented in Table 1. The 2^-ΔΔCt formula determined the fold-change expression.
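As a minimal illustration of the 2^-ΔΔCt calculation named above (with GAPDH as the reference gene), the following sketch uses invented Ct values rather than study data.

```python
def fold_change(ct_target_trt: float, ct_ref_trt: float,
                ct_target_ctl: float, ct_ref_ctl: float) -> float:
    """Relative expression by the 2^-ddCt method."""
    d_ct_trt = ct_target_trt - ct_ref_trt   # normalise treated sample to GAPDH
    d_ct_ctl = ct_target_ctl - ct_ref_ctl   # normalise control sample to GAPDH
    dd_ct = d_ct_trt - d_ct_ctl             # treated relative to control
    return 2.0 ** (-dd_ct)

# A target amplifying one cycle earlier (after normalisation) in a trained
# group than in the C group corresponds to a ~2-fold up-regulation:
print(fold_change(24.0, 18.0, 25.0, 18.0))  # -> 2.0
```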
Statistical analysis
The descriptive data are presented as mean ± standard deviation (SD). We used one-way analysis of variance (ANOVA) to analyze the main effects of the interventions on the variables. If significant results were obtained, Tukey post-hoc tests were performed. To determine the magnitude and direction of the linear relationship between the serum marker and other variables, the bivariate Pearson correlation coefficient (r) was calculated. We considered a significance level of p ≤ 0.05 for the main effects. The Statistical Package for the Social Sciences (SPSS, IBM, v19) was used to analyze the data.

Body weight and fat tissue changes
At the start of the study, there were no differences in body weight between the groups (F = 0.38, p = 0.687). Body weight changed significantly during the study. Fig. 1 shows the trend of body weight changes during the orientation and the 6-week intervention period. Significant differences emerged from the fifth week until the end of the protocol (p < 0.001). In the fifth week (F = 4.75, p = 0.030), the significant difference was between the TC group and the two other groups. In the sixth week (F = 20.41, p = 0.001) and seventh week (F = 26.24, p = 0.001), the difference was between the TC group and the other groups (p < 0.001). The weight of the TC group decreased and remained constant, while the weight of the other two groups increased. In the last week (F = 36.45, p = 0.001), the significant difference was between the C group and the two trained groups (p < 0.001).

The changes in white adipose volume were in line with the body weight changes. The diameter of white fat cells differed between groups (F = 39.43, p = 0.001, R² = 0.83). As shown in Table 2, the largest diameter of white fat cells belonged to the C group (Fig. 2). The Tukey post-hoc test showed that the volume of white fat cells in the C group was significantly different from the other groups (p < 0.05). There was no significant difference in the number of white fat cells between groups (F = 3.57, p = 0.061, R² = 0.12). In addition, there was a significant correlation between body weight and the diameter of white fat cells (r = 0.64, p = 0.015).

To determine training efficiency, we used the incremental swimming test; the weights that rats could bear while swimming are presented in Table 2. The durations of swimming to exhaustion in the C, TN, and TC groups were 13:00, 26:00, and 21:00 min, respectively. There was a significant difference between groups in the swimming-to-exhaustion test (F = 29.82, p = 0.001, R² = 0.92). The differences were between both training groups (TN and TC) and the untrained C group (p < 0.05).

Protein levels
Table 2 presents the serum concentrations of NE, glucose, and white fat glycerol in the groups. There was no significant difference between groups in NE levels (F = 2.53, p = 0.122, R² = 0.29). There was also no significant difference between the groups in serum glucose levels (F = 0.14, p = 0.962, R² = 0.04). Glycerol levels differed significantly between groups after the interventions (F = 139.90, p < 0.001, R² = 0.96). The Tukey post-hoc test showed that glycerol levels in the two experimental groups had increased significantly compared to the C group (p < 0.01). The TC group also showed a noticeable difference in glycerol levels compared to the TN group (p < 0.01) (Table 2).
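The following minimal Python sketch illustrates the pipeline described under "Statistical analysis" above (one-way ANOVA, Tukey post-hoc comparisons, Pearson correlation), using SciPy and statsmodels; all numeric arrays are placeholders, not the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements for the three groups (not the study data)
c = np.array([51.2, 50.8, 52.1, 51.9, 50.5, 52.7])
tn = np.array([37.0, 38.1, 36.5, 37.9, 37.2, 36.8, 38.0])
tc = np.array([31.5, 30.8, 31.9, 30.2, 31.1, 31.8, 30.7])

# One-way ANOVA for the main effect across groups
f_stat, p_val = stats.f_oneway(c, tn, tc)

# Tukey HSD post-hoc comparisons if the ANOVA is significant (p <= 0.05)
if p_val <= 0.05:
    values = np.concatenate([c, tn, tc])
    labels = ["C"] * len(c) + ["TN"] * len(tn) + ["TC"] * len(tc)
    print(pairwise_tukeyhsd(values, labels))

# Pearson correlation between two variables (placeholder body weights vs diameters)
body_weight = np.array([310, 305, 298, 322, 315, 300])
r, p_r = stats.pearsonr(body_weight, c)
print(r, p_r)
```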
Fig. 3 presents the FGF21 protein levels in the selected tissues. One-way ANOVA revealed significant differences between groups in FGF21 levels in some tissues. In soleus muscle (F = 41.47, p = 0.001, R² = 0.93), the FGF21 protein levels in both trained groups, TN (p = 0.002) and TC (p = 0.001), had increased significantly compared to the C group. In serum (F = 8.48, p = 0.018, R² = 0.74), there was a significant increase in the TC group compared to the other two groups (p < 0.05). In WAT (F = 14.04, p = 0.005, R² = 0.82), a significant difference was found between the two trained groups, TC (p = 0.006) and TN (p = 0.016), and the C group. In BAT (F = 19.24, p = 0.003, R² = 0.86), the FGF21 protein levels in the two trained groups, TN (p = 0.015) and TC (p = 0.002), increased significantly compared to the control. However, there was no significant difference in liver FGF21 levels between the groups (F = 1.09, p = 0.395, R² = 0.27).

Gene expression
As shown in Fig. 5a, there was a significant difference in KLB mRNA levels in WAT (F = 9.93, p = 0.001, R² = 0.65). The Tukey post-hoc test showed substantial differences between the C group and the TN group (p = 0.003) and the TC group (p = 0.019). KLB mRNA content was significantly increased in both training groups. Fig. 5b shows the GLUT1 mRNA levels in WAT. One-way ANOVA revealed significant differences in GLUT1 gene expression in WAT (F = 4.07, p = 0.044, R² = 0.40). Post-hoc analyses showed an upregulation of the GLUT1 gene in the TC group, which differed significantly from the C group (p = 0.037). The AMPK mRNA levels in WAT are presented in Fig. 5c. We found significant differences between groups in AMPK mRNA levels in WAT (F = 14.75, p = 0.001, R² = 0.71). The post-hoc test showed that AMPK mRNA expression increased significantly in the TC (p = 0.001) and TN (p = 0.002) groups compared to the C group.

Fig. 6 presents the mRNA levels of genes involved in fat metabolism in WAT. There was a significant difference between groups in CPT1 mRNA levels (F = 3.98, p = 0.048, R² = 0.40), as shown in Fig. 6a. The post-hoc test showed that CPT1 mRNA expression increased significantly in the TC group compared to the C group (p = 0.039). There were significant differences in CD36 mRNA levels between groups (F = 11.55, p = 0.002, R² = 0.66). The post-hoc test demonstrated that the increase in the TC group differed significantly from the C group (p = 0.001) and the TN group (p = 0.033) in WAT. Expression of HSL mRNA differed markedly between groups (F = 41.76, p = 0.001, R² = 0.87), as shown in Fig. 6c. The post-hoc test demonstrated that the upregulation of HSL expression in the TC group differed significantly from the C group (p = 0.001) and the TN group (p = 0.018); in addition, there was a significant difference between the TN and C groups (p = 0.001).
Discussion
This study hypothesized that six weeks of cold-water swimming might magnify the potential effects of cold exposure and exercise on FGF21 production from various tissues and on fat metabolism factors. The findings demonstrated that swimming resulted in lower body weight gain and reduced white adipose volume, and swimming in cold water amplified these effects. Swimming in tepid water significantly increased FGF21 protein levels in WAT, BAT, and muscle tissue compared to the C group. In all tissues, the TC group showed a relatively greater increase, although only serum FGF21 levels differed significantly between the TC and TN groups. High correlations were observed between serum FGF21 and increased expression of the KLB, AMPK, GLUT1, HSL, CPT1, and CD36 genes in WAT. In addition, a positive impact of swimming in cold water on the upregulation of genes involved in fat metabolism was observed. Therefore, our findings partially confirmed the primary hypothesis that swimming in cold water might intensify the secretion of FGF21 from various tissues and activate genes involved in fat metabolism in WAT. A longer duration (>6 weeks) may be needed for a meaningful effect.

Swimming in tepid water significantly increased FGF21 levels in BAT, WAT, and muscle tissues compared to the C group. Hence, it seems that exercise training is an important factor in activating FGF21 in muscles and adipose tissues. In this regard, Xiong et al. (2020) showed that eight weeks of endurance training increased the expression of FGF21 in BAT, but not WAT, of obese mice [12]. The effects of exercise on BAT activation are still unclear; however, some studies have proposed that exercise training increases FGF21 secretion through repeated activation of the sympathetic system and NE secretion, activating BAT via the FGF21/PGC1α/UCP1 pathway [31,32]. In our study, we showed for the first time that swimming in tepid water led to an increase in FGF21 protein in BAT and WAT, and also in KLB expression in WAT. In this regard, Xiong et al.
(2020) also reported increased KLB expression in adipose tissues and muscles of obese mice following endurance training. In addition, the significant increase in the expression of the AMPK and HSL genes in WAT following swimming in tepid water indicates that swimming might primarily stimulate lipolysis in WAT through the FGF21/KLB signaling pathway. In contrast, the lack of significant upregulation of the CPT1 and CD36 genes in WAT probably indicates that lipid oxidation in WAT may need a longer training duration or cold stress; moreover, adipose tissue is relatively inactive during exercise training.

Compared to tepid water, swimming in cold water with a shorter training duration caused a significant increase in FGF21 protein levels in muscle, WAT, BAT, and serum, but not the liver, compared to the C group. The increases in circulating FGF21 were also significantly different from the TN group. When swimming in cold water, the body must overcome the cold stress in addition to the exercise load. With exposure to cold stress, stimulation of FGF21 secretion occurs through an NE-cAMP-dependent mechanism [33]. Exposure to cold is a stressful situation which releases stress hormones such as catecholamines [34] by activating the sympathetic nervous system, leading to increased metabolism and non-shivering thermogenesis. Thus, swimming in cold water is a potent stimulus, through the release of NE, for the production and release of FGF21 into the bloodstream. The considerable increases of FGF21 in muscle, BAT, and WAT can be considered the source of the serum FGF21 concentration in the TC group. However, a decrease [23], no change [35], and an increase [36] in serum FGF21 concentration have all been reported following prolonged training. Taken together, there is a positive association between swimming in cold water and increased FGF21 in tissues and its release into the circulation.

Although the liver is the primary source of FGF21 release, the effect of regular exercise on hepatic FGF21 expression is unclear. Research has reported increased hepatic FGF21 expression immediately after acute exercise [20], but the chronic effects of training on hepatic FGF21 expression have not been studied. In this study, regular swimming had no impact on hepatic FGF21 levels. The lack of a significant correlation between serum FGF21 and the liver may therefore reflect that the release of FGF21 from the liver did not respond to the interventions and that the liver produces a relatively constant amount of FGF21. In addition, swimming in cold water increased KLB gene expression in WAT compared to the C group. Studies have reported that KLB gene expression is up-regulated via activation of PPARγ in WAT [24,25] in obese mice following moderate- to high-intensity endurance training, and that KLB gene expression is up-regulated in the liver, muscle, and BAT, but not WAT, after moderate endurance training [12]. Therefore, the duration and intensity of exercise and cold stress affect KLB gene expression in fat tissues.
The findings showed that GLUT1 expression was increased in the TC intervention compared to the C group. Swimming alone had only a modest effect on GLUT1 expression in WAT, whereas swimming in cold water significantly increased GLUT1 expression in fat tissue. Researchers have shown that three weeks of training was associated with increased GLUT1 expression in WAT [37]. In our study, however, swimming in cold, but not tepid, water increased the expression of the GLUT1 gene in WAT. Given the lack of research exploring GLUT1 expression in WAT following swimming in cold water, more studies are needed to provide a definitive result.

Moreover, swimming in cold water significantly enhanced the expression of HSL, CPT1, and CD36 in WAT compared to the C group; hence, lipid metabolism might be increased. As the metabolic rate increases almost threefold during exposure to cold water [38], and FGF21 is one of the stimulants of increased metabolism and energy homeostasis [13,15,19] during exposure to cold stress, the upregulated HSL, CD36, and CPT1 may be attributed partly to the increased FGF21 [19,39]. This is consistent with our observation of the high correlation between FGF21 levels and these genes. The high glycerol levels in this group also confirm the occurrence of lipolysis. Therefore, the lower weight gain observed in the TC group could be attributed to a high fat-burning rate.

The findings showed that, following the training protocol, there was a significant increase in swimming repetitions to exhaustion in the training groups compared to the untrained group. Regardless of water temperature, swimming training increases aerobic capacity, delays fatigue, and prolongs exercise duration; this finding is consistent with previous animal studies [27,29]. Moreover, rats swimming in lukewarm water (30 °C) improved their swimming to exhaustion more than those in cold water. Swimming in cold water is a stressful situation; substantial energy is spent on overcoming cold stress, through higher oxygen demand, temperature maintenance, and increased metabolism [40], so fatigue occurs earlier. As a result, the magnitude of the aerobic fitness gain is not the same as with training in thermo-neutral water.

We acknowledge that there were some limitations in the present study. First of all, we did not have access to devices to measure the body composition of rats, which could have been of great help in interpreting the data. Secondly, blood fatty acids were not measured in this study. The lack of measurement of our variables at the end of the first week was another limitation, because we observed acute weight loss in the first week in the group exposed to the cold environment. Therefore, in a future study, it is suggested to measure the acute effect of such interventions on the lipolysis rate by measuring fatty acids and glycerol in the blood.
Conclusion
Overall, our findings showed that swimming in cold water had a favorable impact on managing body weight by burning fat tissue. In addition, the TC intervention increased the release of NE-induced FGF21 from various tissues, especially into serum, which was positively related to indicators of lipolysis, fat transfer and oxidation, and glucose transport in WAT. Although swimming in cold water may release many cytokines involved in fat metabolism, we highlighted the FGF21 signaling pathway as one of the possible pathways in this study. Therefore, swimming in cold water may increase the secretion of FGF21 from all tissues over a longer duration. Overall, given the findings in the TC group despite its lower swimming volume, interval swimming in cold water is recommended for weight loss.

Ethical approval
The Sport Sciences Research Institute of Iran (approval number: IR.SSRI.RE.1400.964) approved all research procedures. This study was conducted in accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals.

Fig. 1. The rats' body weight during the intervention. C: untrained; TN: trained in thermo-neutral water; TC: trained in cold water; a: significant difference with the C group; b: significant difference with the TN group.

Fig. 2. Subcutaneous white fat tissue in the research groups. C: control; TN: trained in thermo-neutral water; TC: trained in cold water.

Fig. 3. Levels of FGF21 in the selected tissues in the groups. C: untrained; TN: trained in thermo-neutral water; TC: trained in cold water; WAT: white adipose tissue; BAT: brown adipose tissue; a: significant difference with the C group; b: significant difference with the TN group.

Fig. 5. Gene expression of KLB (a), GLUT1 (b), and AMPK (c) in the white fat tissue. C: untrained; TN: trained in thermo-neutral water; TC: trained in cold water; WAT: white adipose tissue; KLB: beta-klotho; GLUT1: glucose transporter 1; AMPK: AMP-activated protein kinase; a: significant difference with the C group; b: significant difference with the TN group.

Fig. 6. Gene expression of CPT1 (a), CD36 (b), and HSL (c) in the white fat tissue in the groups. C: untrained; TN: trained in thermo-neutral water; TC: trained in cold water; WAT: white adipose tissue; CPT1: carnitine palmitoyltransferase; CD36: fatty acid translocase; HSL: hormone-sensitive lipase; a: significant difference with the C group; b: significant difference with the TN group.

Table 2. Serum glucose, norepinephrine, and physical features of rats.

                                 C              TN               TC
White fat diameter (μm/mm²)   51.53 (5.24)   37.35 (1.79) a   31.14 (3.30) a,b

C: untrained; TN: trained in thermo-neutral water; TC: trained in cold water; %BW: body weight percentage; a: significant difference with the C group; b: significant difference with the TN group.
Controlling COVID-19 outbreaks in the correctional setting: A mathematical modelling study

Correctional centres (termed here 'prisons') are at high risk of COVID-19 and have featured major outbreaks worldwide. Inevitable close contacts, frequent inmate movements, and a disproportionate burden of co-morbidities mean these environments need to be prioritised in any public health response to respiratory pathogens such as COVID-19. We developed an individual-based SARS-CoV-2 transmission model for the prison system in New South Wales, Australia, incorporating all 33 correctional centres, 13,458 inmates, 578 healthcare and 6,909 custodial staff. Potential COVID-19 disease outbreaks were assessed under various mitigation strategies, including quarantine on entry, isolation of cases, rapid antigen testing of staff, as well as immunisation. Without control measures, the model projected a peak of 472 new infections daily by day 35 across the prison system, with all inmates infected by day 120. The most effective individual mitigation strategies were high immunisation coverage and prompt lockdown of centres with infected inmates, which reduced outbreak size by 62-73%. Other than immunisation, the combination of quarantine of inmates at entry, isolation of proven or suspected cases, and widespread use of personal protective equipment by staff and inmates was the most effective strategy. High immunisation coverage mitigates the spread of COVID-19 within and between correctional settings but is insufficient alone. Maintaining quarantine and isolation, along with high immunisation levels, will allow correctional systems to function with a low risk of outbreaks. These results have informed public health policy for respiratory pathogens in Australian correctional systems.

Introduction
Correctional facilities have featured several major COVID-19 outbreaks during the SARS-CoV-2 pandemic. For instance, the first case of COVID-19 recorded at a main jail complex in New York City spread to over 200 cases within the facility in the next 2 weeks [1]. A similar situation was observed at a jail in Chicago, with approximately 350 cases diagnosed in April 2020 [1]. This highlights the high risk of transmission of COVID-19 and other respiratory infections within prisons (note that the term 'prisons' is used here to describe correctional facilities, including gaols/jails, prisons, and other custodial settings). Inmates are particularly vulnerable due to close living quarters, the challenges of implementing public health control measures, and the high prevalence of underlying health conditions [2,3]. Given this context, inmates, as well as correctional and healthcare staff, and even visitors, are at risk of infection during an outbreak in a prison system.
It is well recognised that prisons should be prioritised in the public health response to the COVID-19 pandemic, and for similar respiratory pathogens [4][5][6][7]. Previous analyses of observational datasets have identified risk and mitigation factors associated with COVID-19 outbreaks in prisons. Time-series analysis of data from the California state prisons showed a positive correlation between prison transfers and COVID-19 case rates [8]. Another analysis of data from US prisons also revealed an association between the spread of COVID-19 in the community and a growing prison population [9]. These studies highlight setting-specific factors such as over-crowding and intra-system prison transfers.

Mathematical models have been widely used to inform regional and national policies and public health responses during the COVID-19 pandemic [10][11][12][13][14][15]. Previous modelling studies have quantified the potential effectiveness of individual interventions, or circumscribed sets of control measures, such as regular screening of staff to reduce this portal of viral entry [16], decarceration or immunisation of prisoners to reduce the size of the susceptible population [17], as well as quarantine of all newly incarcerated individuals and use of personal protective equipment (PPE) [17]. However, these models largely lacked real-world data for calibration and validation, and did not consider the differing transmissibility and virulence characteristics of the SARS-CoV-2 variants of concern. In addition, these models have generally focused on a limited number of individual prisons rather than considering the whole prison system. Previous models have also disregarded the complexities of varied person-to-person interactions within a prison setting, the diverse physical structures within prisons, and individual vulnerabilities that may influence COVID-19 infection outcomes [10,15,18].

For this study, we developed an individual-based mathematical model representing the prison system within the Australian state of New South Wales (NSW), the most populous state in the country. We collaborated closely with the correctional and prison health authorities in NSW in developing the model. The model incorporated data provided by the sector, including inmate and staff populations, close contact rates, and inmate movements, and was validated using data from outbreaks that occurred prior to immunisation scale-up. The model was then used to describe outbreak characteristics for SARS-CoV-2 strains (alpha, delta, and omicron) and to explore the efficacy of a range of integrated COVID-19 public health mitigation strategies at both the individual prison and the prison system level.
Methods
We developed an individual-based model using C++, adapting an existing model of hepatitis C transmission [19], to simulate SARS-CoV-2 transmission in the NSW prisons. There were 33 correctional centres in NSW at the time, spread over 800,000 square kilometres. Eleven of these centres included facilities with more than one security classification, but with discrete boundaries, and so were considered separately. The model therefore included 27 minimum security prisons, 11 medium security prisons, and 18 maximum security prisons. The prisons included 14 'reception' centres which receive newly incarcerated individuals from the community. All centres housed both individuals who had been sentenced and those not yet sentenced (i.e., on remand). Modelled individuals were inmates, correctional staff, healthcare staff, or family visitors. All individuals were assumed to be of the same gender. The model simulated daily SARS-CoV-2 transmission over 120 days, tracking individual characteristics which changed probabilistically each day (Table 1). To account for stochasticity, a total of 100 simulations were run for each scenario. Results were obtained by taking mean/median values of key indicators and a 95% confidence interval (CI) from the 100 simulations. The model code is available via an online repository under an open access license [20].

Population and prison system structure
The model simulated 13,458 inmates, 6,909 correctional staff, and 578 healthcare staff, based on population data as of December 2019. It reflects the real-world structure of the NSW prison system, where each prison consists of areas, which are composed of units (or 'wings'), which in turn are composed of cells, which house up to two individuals (Fig 1). Inmates can interact with each other if they are in the same area of the same prison [21], and can be transferred to another prison or visit a court [21]. Inmates also interact with correctional staff during patrols, escorting of inmates, and security interventions (e.g., breaking up fights) [21]. Healthcare staff interact with correctional staff and inmates when they are delivering medical services [21]. The probabilities of interactions between individuals were estimated from data provided on the average number of contacts per day for each individual type (S14-S16 Tables and section V in S1 File). The model recorded each inmate's location and movement between centres, to and from court, and into the community, using probabilities estimated from provided inmate movement data (S2-S13 Tables and sections II-IV in S1 File). When in transit, the model allowed transmission between individuals sharing transport (Table 1; sections I and VI in S1 File). Acquisition of SARS-CoV-2 occurs among those who have never been infected with SARS-CoV-2, following data-specified contacts with those in the same prison area who are not currently in isolation. This was implemented as an event using a uniform probability distribution with range 0·02-0·05 for the SARS-CoV-2 alpha strain [22], multiplied by 2 for the delta strain, and multiplied by 4 for the omicron strain (Table 1; section I in S1 File) [23]. Staff who became infected were removed from the centre within 24 hours of onset of symptoms or diagnosis. As the simulation runs for only 120 days, infected individuals who recovered from infection were deemed not susceptible to reinfection.
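As a rough sketch of the per-contact acquisition step described above, the following Python fragment draws a daily transmission probability from the stated uniform range for the alpha strain and applies the delta (×2) and omicron (×4) multipliers; the contact-count value is an illustrative placeholder, and the model itself is implemented in C++.

```python
import random

STRAIN_MULTIPLIER = {"alpha": 1.0, "delta": 2.0, "omicron": 4.0}

def daily_infection(n_infectious_contacts, strain, rng=random):
    """Return True if a susceptible individual is infected today.

    Per-contact transmission probability is drawn uniformly from
    0.02-0.05 (alpha strain), then scaled by the strain multiplier.
    """
    p = rng.uniform(0.02, 0.05) * STRAIN_MULTIPLIER[strain]
    # Infection occurs unless the individual "escapes" every contact
    return rng.random() > (1.0 - p) ** n_infectious_contacts

# Illustrative: 3 infectious contacts in the same prison area, delta strain
print(daily_infection(3, "delta"))
```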
Model parameterization
Parameters describing new entrants, movements between prisons, and release to the community were set to match the NSW inmate population within each prison using a grid search method. This resulted in a stable prisoner population for the duration of the simulations (sections VI-VII in S1 File). Age-dependent mortality rates were adjusted using the same method to match published infection fatality ratios (S18-S19 Tables and sections VI-VII in S1 File). Simulated outbreaks resulting from the alpha, delta, and omicron strains were produced, with the delta strain parameters used for simulation of mitigation strategies.

Interventions incorporated
Mitigation strategies (including a no-mitigation 'baseline' scenario) were co-developed with correctional and health authorities to match NSW prison resources and organisational procedures. These included: personal protective equipment (PPE), quarantine on reception, isolation of proven or suspected infected cases, rapid antigen testing (RAT) of staff and of inmates before transfer, prison-to-court transit restrictions, lockdown of individual prisons (i.e., no prisoner movements from centres with cases), and immunisation (see Table 2). For PPE, we modelled the use of standard and N95 masks. For each scenario, the virus entered the prison system via an infected individual (prisoner, healthcare staff, or correctional staff member) on day 1. A two-sample Z-test with a p-value threshold of 0·05 was used to compare the distributions of daily new infections for the three SARS-CoV-2 strains. A prison outbreak was defined as the occurrence of >5 infections per prison within the 120 days. A system-wide outbreak was defined as the occurrence of >2 prisons meeting the prison outbreak criteria. These definitions are conservative versions of the US CDC definitions [24]. The probability of an outbreak was estimated by counting the number of simulations meeting outbreak criteria out of 100 simulations.

Model validation
In August and September 2021, the Metropolitan Remand and Reception Centre in NSW experienced a sustained delta variant outbreak following multiple entries of infected staff. This was despite the implementation of several mitigation measures including PPE, quarantine, isolation, and initial vaccination rollout. We compared the actual daily prison-acquired case data among inmates to 100 simulations of the model with the corresponding interventions in place. The model produced outbreaks with similar daily case rates (see section VIII, S1 Fig in S1 File).

Table 2. Scenario descriptions.

Baseline: COVID-19 entry via an infected inmate on day 1; inmates can intermingle with other inmates in the same prison area; SARS-CoV-2 delta strain disease-related parameters applied.

Standard mask: Standard face masks in use for all inmates, correctional staff, and healthcare staff. This applies a 5% reduction in the probability of onward transmission from the source and a 67% protection from infection for the recipient [31]. This scenario assumes 100% PPE compliance.

PPE + Quarantine + Isolation: Standard face masks are used by inmates everywhere, including outside quarantine; new inmates are quarantined for 14 days with PCR tests at day 1 and day 12. If the PCR test returns a positive result (assuming 100% accuracy), the inmate is put into isolation for 14 days; N95 masks are used by staff in the isolation area, with an 18% reduction in the probability of onward transmission for the source and an 85% protection from infection for the recipient [31]. This scenario assumes 100% PPE compliance.
Entry via inmate, daily RAT: COVID-19 entry assumed to be from 1 infected inmate on day 1; correctional staff and healthcare staff are subjected to RAT testing every day before entering the prison. A pooled RAT sensitivity of 71% and a specificity of 99% was applied [32]. Prison and healthcare staff returning a positive RAT result are assumed to be sent home and subjected to PCR testing within 24 hours.

Entry via correctional staff, daily RAT: COVID-19 entry assumed to be from 1 infected correctional staff member on day 1; correctional staff and healthcare staff are subjected to RAT testing every day before entering the prison from day 1. A pooled RAT sensitivity of 71% and a specificity of 99% was applied [32]. Prison and healthcare staff returning a positive RAT result are assumed to be sent home and subjected to PCR testing within 24 hours.

Entry via healthcare staff, daily RAT: COVID-19 entry assumed to be from 1 infected healthcare staff member on day 1; correctional staff and healthcare staff are subjected to RAT testing every day before entering the prison from day 1. A pooled RAT sensitivity of 71% and a specificity of 99% was applied [32]. Prison and healthcare staff returning a positive RAT result are assumed to be sent home and subjected to PCR testing within 24 hours.

Entry via correctional staff, second daily RAT: COVID-19 entry assumed to be from 1 infected correctional staff member on day 1; correctional staff and healthcare staff are subject to RAT testing every second day before entering the prison. A pooled RAT sensitivity of 71% and a specificity of 99% was applied [32]. Prison and healthcare staff returning a positive RAT result are assumed to be sent home and subjected to PCR testing within 24 hours.

Results
The three outbreak scenarios associated with the alpha, delta, and omicron strains revealed different epidemic curves, with essentially all inmates infected by day 120 and similar numbers of deaths (Fig 3A, 3B and S20-S23 Tables in S1 File). The corresponding daily peak of new infections among inmates was 376 for alpha (339-416; on day 46), 472 for delta (430-517; on day 35), and 565 for omicron (519-614; on day 28). Similar outbreaks occurred when initiated by an inmate, healthcare staff, or correctional staff (Fig 3C, S24-S27 Tables in S1 File) (inmate vs correctional staff p = 0·87, inmate vs healthcare staff p = 0·33). Given these closely comparable outbreaks, all subsequent simulations were based on entry of the delta variant via an individual prisoner. Without any mitigation strategy (the baseline scenario), most infections were concentrated in minimum security prisons, consistent with the more lenient movement restrictions placed on prisoners in these centres. There was also a sustained pattern of new daily infections among inmates in maximum security prisons, reflecting the fact that all reception prisons are designated as maximum security and continue to accept new, susceptible prison entrants from the community (S1 Video, S28 in S1 File).

PPE + Quarantine + Isolation scenarios
In the standard mask scenario (Table 2), a peak of 284 new infections (252-319) occurred among inmates at day 52 (Fig 4A and S30 Table in S1 File). This equates to an average 21·8% (20·0%-23·5%) reduction in cumulative inmate infections compared to the baseline scenario (S31 Table in S1 File). The model projected only small outbreaks among inmates in the PPE + Quarantine + Isolation scenario.

RAT
Four staff RAT scenarios were modelled (Table 2), with a peak of 290 new infections (258-325) among inmates at day 56 and an average 14·8% (13·7-15·9%) reduction in cumulative inmate infections for the Entry via inmate, daily RAT scenario, compared to 0 new infections among inmates for the Entry via correctional staff, daily RAT scenario and the Entry via healthcare staff, daily RAT scenario (Fig 4B and S33-S35 Tables in S1 File). Second daily testing was less effective (Fig 4B and S33 Table in S1 File).

Transit interventions
Four scenarios evaluating the impact of control measures applied during transit of inmates within the prison system were modelled (Table 2). In the Standard mask during transit scenario, a peak of 418 (379-460) new inmate infections at day 40 was recorded, with no reduction in cumulative inmate infections (S36-S37 Tables in S1 File). There were fewer infections and an earlier peak for the N95 mask during transit scenario [a peak of 372 (336-412) new inmate infections at day 37, no reduction in cumulative inmate infections] (S36 and S38
Tables in S1 File). In the RAT pre-transit scenario, there was a peak of 146 (124-172) new inmate infections at day 75 and an average 42·9% (39·7%-46·0%) reduction in cumulative inmate infections (S36 and S39 Tables in S1 File). In the Restrict prison transfers scenario, there was a small peak of new inmate infections [11 (5-19)] and a substantial reduction in cumulative infections.

Isolation strategies
Increasing the size of the population who were put in isolation due to an identified case, from cells to units to areas, progressively reduced the size of the projected outbreaks, noting that, as confirmation of infection in a case is not instantaneous, larger isolation boundaries prevent transmission outside the boundary.

Prison lockdown
Four prison lockdown scenarios were modelled in which prisons with a confirmed case were locked down with varied timelines while the remaining prisons in the system operated normally (Table 2). In the Prison lockdown with no delay scenario, the model projected a peak of 69 (54-88) new inmate infections at day 76 (Fig 5B and S2 and S3 Videos), and an average 73·3% (68·6%-77·5%) reduction in cumulative infections compared to the baseline scenario (S45 Table in S1 File). Increasing the delay until lockdown beyond 1-2 weeks increased the peak size of new inmate infections and reduced the impact on cumulative infections (see Fig 5B and S46-S49 Tables in S1 File), with a 6-week delay resulting in a peak of 255 (224-288) new inmate infections at day 52 and an average 21·7% (19·7-26·5%) reduction in cumulative infections (S46-S49 Tables in S1 File).

Immunisation strategies
Five scenarios involving immunisation along with standard face masks used by all inmates and staff were explored (Table 2). The magnitude of outbreaks was greatly reduced even with low vaccination coverage (Fig 6, S50-S55 Tables in S1 File). For the Low coverage immunisation for inmates and staff scenario, there was a peak of 100 (81-121) new inmate infections at day 99 and an average 68·1% (63·7-72·0%) reduction in cumulative infections (S50-S51 Tables in S1 File). High vaccination coverage amongst inmates and staff substantially reduced outbreaks across the prison system but was insufficient to completely prevent them [High coverage immunisation for inmates and staff scenario: projected peak 54 (41-71) new inmate infections at day 111, average 84·5% (82·6-86·2%) reduction in cumulative infections] (S50 and S54 Tables in S1 File). The addition of quarantine and isolation along with high vaccination coverage was sufficient to completely prevent outbreaks (S50 and S55 Tables in S1 File).

Outbreak probability analysis
The probabilistic model allowed investigation of the number of infection incursions which generate a system-wide outbreak, the average number of prisons that have an outbreak, and the average number of peak inmate infections (Fig 7). As shown in Table S56 in S1 File, implementation of the PPE + Quarantine + Isolation strategy markedly reduced the probability of a system-wide outbreak (6 out of 100 simulations only) and limited the spread to within a single prison. The High coverage immunisation for inmates and staff strategy also reduced the probability of a system-wide outbreak (49 simulations), as did the Restrict prison transfers and Prison lockdown (no delay) strategies (below 50 simulations). The Restrict prison transfers strategy was also able to limit potential outbreaks to only one prison. The High coverage immunisation for inmates and staff + Quarantine strategy prevented outbreaks occurring in any of the 100 simulations of the model.
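The outbreak definitions used in the probability analysis above translate directly into a counting rule; the minimal sketch below applies the stated criteria (>5 infections per prison; >2 prisons in outbreak; fraction of 100 simulations), with placeholder per-prison infection counts.

```python
def is_prison_outbreak(infections_in_prison):
    """Conservative US CDC-derived definition used above: >5 infections per prison."""
    return infections_in_prison > 5

def is_system_wide_outbreak(infections_by_prison):
    """>2 prisons meeting the prison outbreak criterion within the 120 days."""
    return sum(is_prison_outbreak(n) for n in infections_by_prison.values()) > 2

def outbreak_probability(simulations):
    """Fraction of simulations producing a system-wide outbreak."""
    hits = sum(is_system_wide_outbreak(sim) for sim in simulations)
    return hits / len(simulations)

# Illustrative: 3 simulated runs, each mapping prison ID -> cumulative infections
runs = [{"A": 12, "B": 7, "C": 9, "D": 1},   # 3 prisons in outbreak -> system-wide
        {"A": 2,  "B": 6, "C": 0, "D": 0},   # only 1 prison in outbreak
        {"A": 0,  "B": 0, "C": 0, "D": 3}]   # no prison in outbreak
print(outbreak_probability(runs))  # 1/3
```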
Discussion
We developed an individual-based model that represents the whole prison system of NSW. Using this model, we utilised real-world data from correctional and health services in Australia to analyse potential COVID-19 outbreaks in a prison system. We also used it to evaluate the effectiveness of potential mitigation strategies. In the absence of control measures or a rapid outbreak response, our stochastic model projected that 100% of inmates would become infected over 120 days regardless of the SARS-CoV-2 variant (alpha, delta, or omicron). The most effective individual mitigation strategies were high immunisation coverage and prompt lockdown of centres with cases, which could reduce the ultimate number of cases by more than 60%. Other than immunisation, the simplest and most effective combination strategies included quarantine of inmates on entry, isolation of proven or suspected cases, and widespread use of PPE by staff and inmates.

The simulated scenarios highlight the impact of inmate movements on the spread of COVID-19 within a prison system, reiterating how critical mobility is for COVID-19 transmission [8,14]. Strategies which restricted prison transfers, promptly locked down a centre where a case had been identified, controlled transmission arising from the entry of infected new inmates via quarantine, and isolated proven or suspected cases were shown to be among the best at mitigating outbreaks. This finding is concordant with a recent study showing how this approach can successfully contain an outbreak [25]. These strategies, however, require major changes to usual custodial operations, and may markedly restrict social contact, worsen mental health, and result in violence including riots [26]. Consulting with the appropriate correctional and prison health authorities enabled us to identify feasible and realistic mitigation strategies that can be implemented within the NSW prison system.

It is also important to note that strategies restricting prisoner mobility were found to be time-sensitive. There was a 10% system-wide reduction in the efficacy of a prison lockdown strategy if there was even a one-week delay in implementing this control measure once a COVID-19 case had been found. Moreover, delays of 6 weeks or longer were futile in preventing a major system-wide outbreak. These results were averaged over 100 simulations to factor in variation and uncertainty in the number of contacts and duration of contact.
Prisons are typically complex structures primarily built to ensure secure incarceration, but they are also commonly overcrowded at the expense of both physical and mental health [27]. Of necessity, prisons incorporate areas where congregation occurs, such as shower blocks, cafeterias, and exercise yards, as well as the cells, which typically sit within a multi-layered physical structure. This structural organisation is represented in the model with cells housing either one or two inmates, organised into units or wings which share some common facilities, and which in turn are organised into areas which may typically share an exercise yard. Although these internal structures at first glance may appear to prevent the spread of COVID-19, our modelling suggests that isolating a whole prison via a prompt lockdown will likely contain an outbreak within that centre, whereas isolation within internal structures is less effective. Similarly, isolation of an area is likely to be more effective than isolation of a unit or a cell, likely reflecting the fact that healthcare and correctional staff may interact with inmates across these structures, and that transmissions between inmates within the smaller structures are likely to have occurred prior to, or concurrent with, an initial case detection.

Strategies such as the widespread use of PPE may not disrupt prison procedures but have only limited efficacy in outbreak control when implemented alone (even assuming face masks are correctly used 100% of the time). When combined with quarantine of all those newly incarcerated and with isolation of proven or suspected cases, PPE was highly effective in controlling outbreaks. These combined measures were comparable to prompt prison lockdowns and high coverage immunisation (where the probability of an outbreak occurring remained over 40%).

While costly, daily RAT testing of all staff was shown in the model to be very effective in preventing entry of COVID-19 into the prison system via a staff member. Interestingly, reducing the testing frequency to second daily was far less effective in preventing this portal of entry and the consequent substantial outbreaks among the prisoner population. Further, although an effective control strategy for transmission from staff members, RAT testing of all inmates prior to movement between centres only afforded a 43% reduction in cumulative infections, perhaps reflecting the larger number of daily movements of inmates (of the order of 250 movements per day in the system) and the high probability of transmission during transit.

Our study shows that high coverage immunisation of both staff and prisoners is effective in mitigating COVID-19 outbreaks. This highlights the need to include prisoners and correctional staff as priority populations in vaccination efforts against COVID-19 [28]. Regardless of the coverage, this strategy is comparable to timely implementation of a prison lockdown strategy. Importantly, however, the modelling indicates that high coverage immunisation alone is insufficient to prevent COVID-19 outbreaks. This concern may become increasingly evident if additional new variants emerge and vaccine-conferred cross-protection wanes [29]. The best outcome was achieved when high vaccination coverage was implemented in combination with the use of PPE, quarantine, and isolation.
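A back-of-envelope calculation helps illustrate why second-daily testing was so much weaker than daily testing in the model: with the pooled RAT sensitivity of 71% used above, the chance that an infectious staff member is caught before entering falls quickly with the number of tests they face. The sketch below makes this explicit; the number of working days while infectious is an illustrative assumption, and independence between tests is a simplification.

```python
SENSITIVITY = 0.71  # pooled RAT sensitivity used in the model

def detection_probability(infectious_days, test_every_n_days):
    """Probability an infectious staff member returns >= 1 positive RAT
    during their infectious working period, assuming independent tests
    with fixed sensitivity (a simplifying assumption)."""
    n_tests = infectious_days // test_every_n_days
    return 1.0 - (1.0 - SENSITIVITY) ** n_tests

# Illustrative: 4 working days while infectious
print(detection_probability(4, 1))  # daily testing:  ~0.993
print(detection_probability(4, 2))  # second daily:   ~0.916
```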
While this model presents a detailed and sophisticated representation of potential COVID-19 outbreaks and the effectiveness of mitigation strategies in the prison system, it is important to note its limitations. First, the data utilised in this model represent the NSW prison system, which might not necessarily reflect other prison systems. However, our simulation represents common key factors present in most prison systems worldwide, including prison transfers, enclosed living quarters, and interactions with staff. While this study is applied to the NSW prison system, the model is available online under an open access license and can be modified to represent other prison systems. Second, although the outbreak size in the baseline scenario was comparable between the alpha, delta, and omicron strains, the scenarios were based on the COVID-19 delta variant and its parameterisation. It is also important to note that as evidence grows around transmission rates for COVID-19, the rates may differ from those applied in this model. The comparison between variants reported here validates the relationship between transmission rates and the epidemic curve (a higher transmission rate results in a higher number of people infected over fewer days). Third, while the structure of the prison system was modelled in moderate detail, there are elements that have not been incorporated, such as ventilation and random mixing, which may impact the spread of the virus. The model also omits the possibility of inmates becoming infected by non-prison entities such as civilians in the courts. Fourth, the growing impact of immunisation and prior episodes of infection on reducing morbidity and mortality has not been included in the parameterisation used here. Lastly, while the selection of interventions was done in close consultation with prison authorities, our study did not include a cost-effectiveness analysis of the implementation of the interventions considered. Incorporating financial constraints might affect how these interventions would be implemented in the real world. Future work should consider this.

Conclusion
While known measures to prevent and control COVID-19 outbreaks have been adopted in the general population, such measures are not necessarily feasible in many prison systems across the globe. This is due to many differences between the community and the prison setting, including mobility and access to healthcare services. Based on the findings of this model, a range of effective mitigation strategies can be selected for deployment in prisons and similar high-risk enclosed settings in response to outbreaks of COVID-19 or other respiratory pathogens. These modelling outputs have been used to inform public health policy and practice in several Australian prison jurisdictions. By carefully representing the real-world structure of the NSW prisons, the model can also be extended to study emerging SARS-CoV-2 variants of concern, as well as other similarly transmissible respiratory pathogens.

Fig 1. Structure of the model. The model represents the NSW prison system consisting of prisons with varying security settings. Each prison consists of areas, which consist of units, which consist of cells. The model considers the possibility of transfers between prisons, as well as visits to 38 courts via 20 transfer buses. https://doi.org/10.1371/journal.pone.0303062.g001

Fig 3.
Simulation results according to SARS-CoV-2 variants and type of individual. Panel A shows a comparison of the number of new cases based on SARS-CoV-2 alpha, delta, and omicron strain transmission probabilities. Panel B shows a comparison of the number of cumulative cases and deaths using the three COVID-19 variants. Panel C shows a comparison of the number of new cases given different entry points for the virus. https://doi.org/10.1371/journal.pone.0303062.g003

Fig 4. Simulation results using face mask, rapid antigen testing, and movement restriction strategies. Panel A shows a comparison of the number of new cases using various strategies involving face masks. Panel B shows a comparison of the number of new cases using different entry points for the virus and modifications in rapid antigen testing (2 lines superimposed). Panel C shows a comparison of the number of new cases using various strategies related to movement. https://doi.org/10.1371/journal.pone.0303062.g004

Fig 5. Simulation results using isolation and lockdown strategies. Panel A shows a comparison of the number of new cases using various lockdown strategies. Panel B shows a comparison of the number of new cases using various delays in the implementation of prison lockdown. https://doi.org/10.1371/journal.pone.0303062.g005

Fig 7. System-wide outbreak probabilities and system-wide outbreak magnitude for varied control strategies. A prison outbreak was defined as >5 infections per prison; a system-wide outbreak as 2 or more prisons that meet the prison outbreak criteria. The high coverage vaccination + quarantine strategy did not result in any secondary infections based on 1 infected new inmate over 100 simulations. https://doi.org/10.1371/journal.pone.0303062.g007

S1 Video. An animated visualisation of the number of new cases. The y-axis represents the number of new cases while the x-axis represents the simulation time from day 1 to 120. (MOV)

S2 Video. An animated visualisation of the average number of new cases under the lockdown scenario. The data shown refer to the number of new cases among inmates from representative prisons of varying security classification in NSW prisons. The y-axis represents the number of new cases while the x-axis represents the simulation time from day 1 to 120. (MOV)

S3 Video. An animated visualisation of the number of new cases under a single simulation of the lockdown scenario. The data shown refer to the number of new cases among inmates from representative prisons of varying security classification in NSW prisons. The y-axis represents the number of new cases while the x-axis represents the simulation time from day 1 to 120. (MOV)

Table 2.
(Continued)

Areas with symptomatic inmates are locked down until no one is actively infected with COVID-19; N95 masks are used by staff interacting with isolated inmates; standard PPE masks are used by isolated inmates; inmates within the area are free to move within the area but there is no travel to or from another area within the centre; isolation occurs on a rolling basis while prison interactions outside isolated areas continue as normal.

N95 masks are used by staff interacting with isolated inmates; standard PPE masks are used by isolated inmates. Inmates within the centre are free to move within the prison but there are no movements to or from other prisons (which operate as normal); prisons are locked on a rolling basis.

Prison lockdown (no delay): Symptomatic inmates are tested using PCR with results by the next day. Prisons with symptomatic inmates with confirmed COVID-19 infection are locked down until no one is actively infected with COVID-19; N95 masks are used by staff interacting with isolated inmates; standard PPE masks are used by isolated inmates; prisons are locked on a rolling basis; inmates within the centre are free to move within the prison but there are no movements to or from other prisons (which operate as normal).

Prison lockdown (1-week delay): Symptomatic inmates are tested using PCR with results by the next day. Prisons with symptomatic inmates with confirmed COVID-19 infection are locked down after 1 week until no one is actively infected with COVID-19; N95 masks are used by staff interacting with isolated inmates; standard PPE masks are used by isolated inmates; prisons are locked on a rolling basis; inmates within the centre are free to move within the prison but there are no movements to or from other prisons (which operate as normal).

Prison lockdown (3-week delay): Symptomatic inmates are tested using PCR with results by the next day. Prisons with symptomatic inmates with confirmed COVID-19 infection are locked down after 3 weeks until no one is actively infected with COVID-19;

Table 2. (Continued)

Assumes standard PPE is in place; 80% of the inmate population are assumed to have had double-dose immunisation; the same vaccination rate is applied for correctional and healthcare staff.
We assumed the use of the Pfizer mRNA vaccine [33]. After the first dose, we applied a 46% reduction in onward COVID-19 transmission and a 30% reduction in COVID-19 infection. After two doses, we applied a 65% reduction in onward COVID-19 transmission and a 79% reduction in COVID-19 infection [33].

High coverage immunisation for inmates and staff: Assumes standard PPE is in place; all healthcare staff and all correctional staff are assumed to be double-dose immunised; 80% of the inmate population are assumed to have had double-dose immunisation. We assumed the use of the Pfizer mRNA vaccine. https://doi.org/10.1371/journal.pone.0303062.t002
The Λ2 limit of massive gravity

Lorentz-invariant massive gravity is usually associated with a strong coupling scale Λ3. By including non-trivial effects from the Stückelberg modes, we show that about these vacua, one can push the strong coupling scale to higher values and evade the linear vDVZ-discontinuity. For generic parameters of the theory and generic vacua for the Stückelberg fields, the Λ2-decoupling limit of the theory is well-behaved and free of any ghost or gradient-like instabilities. We also discuss the implications for nonlinear sigma models with Lorentzian target spaces.

Introduction and summary
As an effective field theory on Minkowski space, Lorentz-invariant massive gravity with generic interactions is strongly coupled and breaks perturbative unitarity at a scale Λ* with Λ* < Λ3 = (M_Pl m^2)^{1/3} [1]. When the graviton mass m is taken to be of the current Hubble scale, this is a very small scale phenomenologically. Moreover, all the interactions that arise strictly below the scale Λ3 are associated with the nonlinear Boulware-Deser (BD) ghost [2][3][4]. This makes the Vainshtein mechanism [5] in all these massive gravity theories untrustworthy as a resolution of the linear vDVZ-discontinuity (van Dam-Veltman-Zakharov [6,7]). As a result, none of the theories of massive gravity with a strong coupling scale Λ* < Λ3 have a smooth massless limit to General Relativity within the regime of validity of their effective field theory. Fortunately, all the interactions below Λ3 can be eliminated by a unique graviton potential [8,9], and this coincides with the elimination of the BD ghost [9][10][11]. In ghost-free massive gravity [8,9] gravitational waves carry 5 modes, as expected for a massive spin-2 particle in four dimensions, and the Vainshtein mechanism operates in a much more controlled way [12]. See [13] for a recent review of massive gravity and [14] for an introduction to the Vainshtein mechanism.

Λ2-limit of massive gravity. The scale Λ3 = (M_Pl m^2)^{1/3} is usually considered as the highest possible strong coupling scale in a Lorentz-invariant theory of massive gravity (bearing in mind we consider m ≪ M_Pl). This usually comes from analyzing ghost-free massive gravity around the trivial Lorentz-invariant vacuum g_{µν} = η_{µν}, φ^A = x^A, where the φ^A are the Stückelberg scalar fields that ensure that the theory of massive gravity is diffeomorphism invariant. However, about non-trivial vacua of the form
\[
  g_{\mu\nu} = \eta_{\mu\nu} + \mathcal{O}(m^2)\,, \qquad \phi^A = \bar\phi^A(x) \neq x^A\,, \tag{1.1}
\]
which still preserve approximate Lorentz-invariance for the metric (in the limit where m → 0) but not for the Stückelberg fields, the associated strong coupling scale can be parametrically higher than Λ3. In unitary gauge, the metric for the non-trivial vacuum configuration (1.1) is still approximately Minkowski (and hence Lorentz-invariant) but in a different coordinate form, g_{µν} = ∂_µφ̃^A ∂_νφ̃^B η_{AB} + O(m^2), with φ̃^A(x) being the inverse function of φ̄^A(x). We will show this in a couple of different ways. First of all, writing the metric as g_{µν} = η_{µν} + h_{µν}/M_Pl, we note that if all the vector and scalar modes obtain a kinetic term without needing to rely on a mixing with h_{µν}, then one can define a Λ2-decoupling limit for ghost-free massive gravity, by sending
\[
  m \to 0\,, \quad M_{\rm Pl} \to \infty\,, \quad \text{with } \Lambda_2 \equiv (m\,M_{\rm Pl})^{1/2} \text{ and } T^{\mu\nu}/M_{\rm Pl} \text{ held fixed}\,, \tag{1.2}
\]
which leads to
\[
  \mathcal{L}_{\Lambda_2} = -\tfrac{1}{4}\, h^{\mu\nu} \hat{\mathcal{E}}^{\alpha\beta}_{\mu\nu} h_{\alpha\beta} + \tfrac{\Lambda_2^4}{4}\, \mathcal{L}_{\rm MG\text{-}NLS}[\phi^A] + \tfrac{1}{2 M_{\rm Pl}}\, h_{\mu\nu} T^{\mu\nu}\,, \tag{1.3}
\]
where the first term is the linear Einstein-Hilbert term, T^{µν} is the stress-energy tensor of the matter fields, and we have defined the massive gravity nonlinear sigma model as
\[
  \mathcal{L}_{\rm MG\text{-}NLS}[\phi^A] = \sum_{n} \alpha_n\, \varepsilon^{\mu_1\cdots\mu_4}\varepsilon^{\nu_1\cdots\nu_4}\, \mathcal{K}_{\mu_1\nu_1}\cdots\mathcal{K}_{\mu_n\nu_n}\, \eta_{\mu_{n+1}\nu_{n+1}}\cdots\eta_{\mu_4\nu_4}\,, \tag{1.4}
\]
with K^µ_ν = δ^µ_ν − X^µ_ν, where X^µ_ν = (√(η^{−1}Σ))^µ_ν and Σ_{µν} = ∂_µφ^A ∂_νφ^B η_{AB}.
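A schematic way to see where the scale Λ2 comes from (a sketch under the normalizations above, not an equation reproduced from the paper) is to note that the graviton potential carries an overall prefactor m^2 M_Pl^2, which is precisely Λ2^4:

```latex
% Schematic scaling argument (our sketch under the stated normalizations):
% with g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}/M_{\rm Pl}, the graviton
% potential term is weighted by
\[
  m^2 M_{\rm Pl}^2 \,\mathcal{V}(\partial\phi,\, h/M_{\rm Pl})
  \;=\; \Lambda_2^4 \,\mathcal{V}(\partial\phi,\, h/M_{\rm Pl}),
  \qquad \Lambda_2 \equiv \left(m\, M_{\rm Pl}\right)^{1/2},
\]
% so holding \Lambda_2 fixed while m \to 0 and M_{\rm Pl} \to \infty
% retains the full nonlinear \phi^A self-interactions but switches off
% their mixing with h_{\mu\nu}, leaving the nonlinear sigma model (1.4).
```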
The interesting properties of this nonlinear sigma model and its generalization have been discussed in [15] and will also be mentioned later in this paper. We emphasize that the massive gravity nonlinear sigma model (1.4) does not amount to simply setting g_{µν} := η_{µν} in ghost-free massive gravity, which would be an inconsistent procedure. Rather, we take a well-defined Λ2-decoupling limit which preserves the total number of degrees of freedom along the flow M_Pl → ∞, and hence automatically carries desirable properties of ghost-free massive gravity (such as the absence of the BD ghost) over to the decoupled theory. This fact alone is sufficient to guarantee that L_MG-NLS[φ^A] does not carry more than 3 propagating degrees of freedom (in D = 4 dimensions), while the full action (1.3) still carries all 5 propagating degrees of freedom. The very existence of such a decoupling limit relies on configurations of φ^A for which all 3 propagating degrees of freedom in L_MG-NLS[φ^A] are active.

In what follows we will first perform a full nonlinear Hamiltonian analysis for this massive gravity nonlinear sigma model. That is, we run a Dirac-Bergmann algorithm for the model, finding all the constraints and checking their consistency. We stress again that since we are taking a consistent decoupling limit, it is guaranteed that the number of degrees of freedom is not more than three, since h_{µν} accounts for the additional two. For technical reasons, we will limit ourselves to the so-called minimal model, although our results hold in all generality for generic sets of parameters. As expected, this Hamiltonian analysis concludes that in four dimensions, 3 out of the 4 Stückelberg fields are dynamical degrees of freedom. In other words, both the vector and scalar modes in φ^A are dynamical. Interestingly, even though 'gravity' is entirely decoupled, the BD ghost mode is still eliminated. This of course is due to the matrix square root structure and the anti-symmetrization scheme of the ghost-free graviton potential [16] and was guaranteed by taking the decoupling limit.

Having proven that the nonlinear sigma model (1.4) contains 3 degrees of freedom, one can then search for backgrounds where the longitudinal mode is dynamical. In principle most vacua of the theory will excite all 3 DoFs, but the trivial one φ^A = x^A and any Lorentz-invariant generalization are special in that at linear order they exhibit an accidental U(1) gauge symmetry. For the isolated nonlinear sigma model, the longitudinal mode is thus infinitely strongly coupled on these trivial vacua and their regime of validity is null. For massive gravity, however, the coupling to gravity breaks the accidental U(1) and provides a kinetic term for all the relevant degrees of freedom. This implies that vacua where the Stückelberg fields preserve Lorentz-invariance are acceptable vacua for massive gravity, and the strong coupling scale on these vacua is lowered to Λ3, but these vacua are not acceptable for the nonlinear sigma model. Instead, for the nonlinear sigma model and for massive gravity with a Λ2-decoupling limit, one needs to consider non-trivial (weakly Lorentz-breaking) vacua for the Stückelberg fields. (Of course, for the nonlinear sigma model alone, Λ2 is a free tunable dimensionful parameter.) Finding exact vacua may be generically challenging from a purely technical viewpoint. Plane waves are exact solutions which play the role of instructive toy-models.
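Before turning to explicit vacua, the degree-of-freedom count quoted above can be summarized by the standard Dirac formula; the sketch below assumes, as the text notes later, that exactly two second-class constraints arise:

```latex
% Dirac counting for the D=4 massive gravity nonlinear sigma model
% (our summary sketch, assuming two second-class constraints):
% 4 fields \phi^A give a 2 x 4 = 8 dimensional phase space;
% each second-class constraint removes one phase-space dimension.
\[
  \#\,\mathrm{DoF} \;=\; \tfrac{1}{2}\left(2\times 4 \;-\; 2\right) \;=\; 3\,,
\]
% i.e. the would-be BD mode tied to the negative target-space direction
% is projected out, leaving the 2 vector + 1 longitudinal scalar modes.
```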
More generic vacua can be constructed perturbatively, either by performing a small field expansion about JHEP04(2016)188 the trivial vacuum or by performing a local expansion about a given point in spacetime. The latter expansion will prove convenient to establish the full stability of the DoFs and derive the corresponding strong coupling scale. A nontrivial backgroundφ a will necessarily introduce some characteristic energy scale L −1 (it may of course introduce more scales and when that happens, the relevant energy scale for this discussion is the smallest one). When taking the decoupling limit (1.2) we maintain the scale L −1 fixed and the resulting strong coupling scale ends up being Λ 2 dressed by some positive powers of L −1 . This scale L −1 plays a similar role as the anti-de Sitter (AdS) curvature when considering massive gravity on AdS [17][18][19][20][21]. Note however that unlike massive gravity on AdS, we will focus this discussion to the case where the spacetime curvature vanishes (at least up to order m 2 corrections). Absence of linear vDVZ-discontinuity. 1 The previous Λ 2 -decoupling limit of ghostfree massive gravity has another virtue: namely the absence of coupling between matter fields and the Stückelberg fields. Indeed in the decoupled limit (1.3), only the standard tensor modes h µν couple to matter as in General Relativity while the additional three degrees of freedom and specifically the longitudinal mode fully decouple. This immediately implies that already in the linear regime, i.e. already at large distances compared to L and Λ −1 2 but smaller than m −1 , the phenomenology of ghost-free massive gravity on these vacua is very close to General Relativity, without even needing to invoke any explicit Vainshtein mechanism (or in other words the non-trivial vacua already automatically implement the Vainshtein mechanism). Beyond this decoupling limit we expect corrections suppressed by positive powers of Λ * /M Pl , and fifth forces will also be suppressed by a similar amount (see ref. [22] for relevant discussions). The decoupling of the longitudinal mode also implies that the theory is free from the standard vDVZ-discontinuity at the linearized level about these non-trivial vacua, similarly as for massive gravity on AdS [17][18][19] (or a general FLRW background [23,24]). A crucial distinction with massive gravity on AdS is that in our approach the gravitational (or geometric) sector is insensitive to the scale L in the decoupling limit and the background metric is Minkowski-like (or can be taken to be de Sitter or FLRW if the relevant cosmological constant or matter fields are included). For massive gravity on AdS on the other hand, the gravitational sector is strongly sensitive to the AdS curvature scale L even in the decoupling limit. For massive gravity on AdS, setting a limit where the metric is Minkowski requires sending L −1 → 0 and therefore leads to an arbitrarily low strong coupling scale (see figure 1). Our approach also differs from standard Lorentz-violating theories of massive gravity (see ref. [25] for a classification), where the strong coupling scale can be Λ 2 (or even higher when considering Lorentz-breaking generalizations of the Einstein-Hilbert term [26]). JHEP04(2016)188 Indeed in these theories, the Lagrangian manifestly breaks Lorentz invariance. In the model we consider here, the fundamental theory preserves Lorentz invariance and the latter is only broken spontaneously about the vacua we consider. 
Nonlinear sigma models with Lorentzian target spaces. The potential of massive gravity can be seen as a non-standard nonlinear sigma model for the four Stückelberg fields φ A , mapping from the spacetime metric g µν (or η µν in the absence of gravity) to the target space (the reference metric [27]). For a standard nonlinear sigma model, a typical requirement is that the target space be Riemannian (its metric being positive definite) to avoid ghost DoFs (see e.g. [28][29][30]). From this point of view, it is not surprising that generically massive gravity is plagued by the BD ghost, as the internal space of the Stückelberg fields is Lorentzian (pseudo-Riemannian with signature (− + · · · +)). Ghost-free massive gravity then acts as a unique and special case that evades the Riemannian requirement. For a symmetric target space, the Lorentzian nature translates to non-compactness of the associated symmetry group. At the technical level, the reason why the BD ghost is eliminated in ghost-free massive gravity is due to the existence of a second-class constraint [8]. Taking the decoupling limit (1.2) of ghost-free massive gravity, the nonlinear sigma model decouples from the gravitational tensor DoFs. Since a decoupling limit never changes the number of DoFs (if taken appropriately 2 ), the absence of the sixth BD mode in ghost-free massive gravity ensures the absence of ghost in the nonlinear sigma model. As a result and as we mentioned above, the nonlinear sigma model that arises from massive gravity is free of the ghost associated with the negative direction of the target space. This is in contrast with the other known ways to avoid the Riemannian requirement of the target space which all rely on invoking some gauge DoFs. This is for instance the case of the string Polyakov/Nambu-Goto action [31][32][33][34][35][36], or more generally for p-brane actions [37], where the target space is the spacetime itself, thus Lorentzian. Another known mechanism is to invoke normal gauge fields that are auxiliary, that is, without a kinetic term for the gauge field. This mechanism is used in supergravity model building (see e.g. [38,39]). All these known exceptions with a Lorentzian target space do not compromise the spirit of the Riemannian requirement in the sense that once the auxiliary gauge/diffeomorphism DoFs are fixed by making use of the auxiliary field equations of motion and gauge choices the target space becomes manifestly Riemannian. On the other hand, the massive gravity nonlinear sigma model and its generalization relies on two second class constraints to project out the would-be ghost associated with the negative direction. Since the ghost-free graviton potential is unique, up to a few free parameters, it follows that the massive gravity nonlinear sigma model (1.4) in D dimensions -with the sum starting from n = 1, the internal space metric η AB replaced by f AB (φ) and the coefficients α n generalized to be functions of the Stückelberg fields α n (φ) [15,40] -is the only nonlinear sigma model where the target space is Lorentzian. We emphasize that the target space can be higher-dimensional than that of the spacetime (that is N > D). The case of N < D JHEP04(2016)188 is more subtle and will be discussed in section 8. See [15] for a bi-gravity braneworld interpretation of this generalized nonlinear sigma model and more discussions on nonlinear sigma models with Lorentzian target spaces. Outline. 
The rest of the manuscript is organized as follows: we start by introducing ghost-free massive gravity and a generalization of the Nambu-Goto action in section 2, derive the value of the strong coupling scale about the trivial vacuum on Minkowski and AdS, and explain the origin of the vDVZ-discontinuity on Minkowski and its absence on AdS. We then perform the full nonlinear Hamiltonian analysis in section 3 for the massive gravity nonlinear sigma model and confirm the existence of two second class constraints that remove the BD ghost associated with the negative direction of the target space. Motivated by this result we first provide in section 4 an explicit exact nonlinear example of vacuum solution where all the DoFs are manifest. Although that vacuum turns out to be unstable, it corresponds to a useful explicit proof-of-principle. In section 5 we then derive more general classes of backgrounds by expanding the background itself and by adopting a local coordinate expansion. We find a family of stable vacua where all the DoFs are manifest and healthy. The related strong coupling scale on these stable vacua is established in section 6. These results are valid in dimensions larger than two. In two-dimensions we show in section 7 that the U(1)-symmetry is preserved to all orders and the corresponding nonlinear sigma model hence propagates no DoFs. In section 8, we give a short summary of our main results. 2 Ghost-free massive gravity and nonlinear sigma model In this section, we introduce the ghost-free graviton potential in a conceptually novel way: as a non-standard nonlinear sigma model with a Lorentzian target space. In this formulation, the importance of the scale Λ 2 is manifest. Nambu-Goto action for non-compact space We start by considering a theory of N scalar fields φ A living on a D-dimensional flat spacetime metric η µν . These N scalar fields may be thought as coordinates of a non-trivial target (field space, or internal) manifold specified by the metric f AB (φ). This corresponds to a nonlinear sigma model whose action can typically be written as Nonlinear sigma models [41] are effective field theories for multiple fields φ A with applications in various areas of physics (see, e.g., [28][29][30] and references therein for a review). The JHEP04(2016)188 nonlinear sigma model of eq. (2.1) is well-defined and free of ghost if the internal space metric f AB is positive definite, i.e., the target space has to be Riemannian (as opposed to pseudo-Riemannian). If the target space is symmetric, this means that the associated isometry group needs to be compact. When considering a non-compact space, the internal space metric f AB typically has a negative eigenvalue and the sigma model (2.1) has a ghost. One possible way out is to ensure that the mode associated with the negative direction is in fact not dynamical or a gauge mode. This is indeed the resolution for the Polyakov action for a p-brane 3 where the spacetime metric η µν is promoted to an auxiliary field g µν (x) and diffeomorphism invariance ensures that the would-be ghost DoF associated with the negative direction of the internal space is a gauge mode: If the internal space has signature (− + · · · +), then naively the field φ 0 (x) behaves as a ghost. But this action is invariant under the diffeomorphisms and the naive ghost is merely a gauge degree of freedom. This is obvious in the 'static' gauge where φ µ = x µ , for µ = 0, . . . 
, p, and the left-over target space is manifestly positive definite for the remaining φ A with A = p + 1, . . . , N − 1. An alternative way to see this is to write the auxiliary field metric g µν in ADM form [42] g µν dx µ dx ν = − N 0 2 dt 2 + γ ij dx i + N i dt dx j + N j dt , and then the lapse N 0 plays the role of a Lagrange multiplier that imposes a first class constraint projecting out the would-be ghost DoF, with p A = ∂L Polyakov /∂φ A , and we have accounted for the entire dependence on the lapse in the Hamiltonian. Actually, for the p = 1 string case, we see that for this procedure to work it is essential that the internal space metric f AB be not sign definite, otherwise the constraint would fix more than one phase space variable. In addition to this Hamiltonian constraint, there are D − 1 additional first class constraints generated by the shifts N i but only the Hamiltonian constraint is required to remove the would-be ghost in this Lorentzian space. Since the metric g µν is not dynamical in this model and merely plays the role of auxiliary variables, we can integrate it out without changing the number of DoFs, and we are then left with the well-known Nambu-Goto action for the p-brane: (2.4) The Nambu-Goto action still enjoys the same gauge symmetry, and static gauge can still be chosen to make the target space manifestly positive definite. JHEP04(2016)188 On the other hand, if the D-dimensional tensor X µ ν defined as 4 is diagonalizable, then the Nambu-Goto action may also be re-written as where our anti-symmetrization convention is with the averaging factor 1/n! in front. In this language, the absence of ghost for this non-compact target space can be traced back to signaling that not all of the N scalar fields φ A are dynamical. Generalization of Nambu-Goto Inspired by the expression (2.6) for the Nambu-Goto action, it is now natural to extend it to the following Lagrangians for n ≤ D, We may also consider a fully equivalent representation of theL n by taking linear combinations of them and defining the following Lagrangians So long as N ≥ D, all of these Lagrangians for any 0 ≤ n ≤ D satisfy the same relation (2.7) as the Nambu-Goto action, namely, which ensures the absence of ghost in any of these theories. While this generalization seems to be natural mathematically or at a superficial level, there is a crucial difference between the Nambu-Goto action and the generalized Lagrangians considered in (2.9): for the Nambu-Goto action, the rank of the matrix H AB = ∂ 2 L n /∂φ A ∂φ B is N − D, while for the L n the rank of the associated matrix H AB is N − 1. Also as we have seen, the removal of the degrees of freedom for the Nambu-Goto action is associated with a gauge symmetry, while for the other L n no symmetry is present and the removal of the ghost is related to second-class constraints. Nevertheless, for each one of these Lagrangians the vanishing of the Hessian is what signals the absence of the would-be ghost for these Σ-models on the target space. Therefore, the generalized Nambu-Goto Lagrangian (which we will refer to as the massive gravity nonlinear sigma model for reasons to become clear shortly) is given by (2.12) and N ≥ D. Intriguingly, this generalization of the p-brane Nambu-Goto action exactly gives rise to the graviton potential of ghost-free massive gravity when N = D. To consider in the context of a curved spacetime, we note that, instead of eq. 
(2.6), the Nambu-Goto action can equivalently be casted as The generalization of this action to terms with fewer factors of X is exactly the ghost-free graviton potential. The difference again is that, while the Nambu-Goto term is diffeomorphism invariant, the terms with fewer factors of X are not. In what follows we will also consider embedding these models in a gravitational setup, i.e., coupling to the dynamical part of g µν . This leads to ghost-free massive gravity in D dimensions (we shall consider only N = D in the following, as our main interest is in the context of massive gravity) [8,9] (2.14) 16) and the fields φ A play the role of Stückelberg fields that restore diffeomorphism invariance. In the gravitational setup, L 1 is a tadpole term and X µ 1 [µ 1 X µ 2 µ 2 · · · X µ D µ D ] acts as a cosmological constant so we do not consider their contributions. Without loss of generality we may always set α 2 = 1 and β 1 = −1. The constants α n and β n are related via and β 1<n<D are two sets of equivalent free parameters of ghost-free massive gravity. Linearized theory on Minkowski "Σ-model". Before considering the effects of gravity, we first focus on the "potential" term of massive gravity as a Lagrangian for the scalar fields φ A in their own right living on a flat Minkowski spacetime, decoupled from the gravitational sector. Note that a priori it is not certain that this "potential" scalar theory from massive gravity is actually continuously connected to massive gravity, which would require the existence of a decoupling limit of JHEP04(2016)188 some sort. We will see that such a decoupling limit indeed exists. At any rate, for now, one may consider the "potential" action of massive gravity as a scalar field theory on its own. Let us, for instance, consider the following Lagrangian As will be shown in section 3, non-perturbatively, this Lagrangian carries D − 1 degrees of freedom (the constraint that removes the ghost in ghost-free massive gravity remains active even in the absence of gravity). However, perturbatively about the trivial vacuum the Lagrangian L 2 is a Maxwell theory for V A and enjoys a U(1) gauge symmetry. In dimensions N = D > 2, that symmetry is an artifact of the linearized theory and does not survive at the nonlinear level. This realization has a profound impact not only for the scalar theory (2.18), but also for massive gravity as we shall see later. Indeed, for the scalar theory (2.18), the fact that one DoF fails to be dynamical on the trivial vacuum φ a = x a implies that this vacuum is infinitively strongly coupled and cannot be trusted (its has no regime of validity). This means that the theory (2.18) only makes sense if considered about different non-trivial vacua which excites all D − 1 degrees of freedom. Implications for massive gravity. In the context of massive gravity the situation is more positive for the vacuum φ A = x A . Indeed the mixing with gravity breaks the U(1) gauge symmetry and all D − 1 DoFs in the fields φ A are dynamical. The trivial vacuum φ α = x α has then an interesting non-trivial regime of validity. In this case one of the DoF in φ α only becomes dynamical (at the linearized level) through its mixing with gravity. This implies that, at the linear level, this DoF directly couples to matter with the same strength as gravity, which is at the origin of the linear vDVZ discontinuity. 
To see this explicitly, let us start with the ghost-free massive gravity Lagrangian (2.14) and set the cosmological constant Λ c = 0 so as to have Minkowski as a vacuum solution. When splitting the fields φ α = x α + A α + η αβ ∂ β χ and the metric as g µν = η µν + h µν /M Pl , at the linear level, the only place where the kinetic term for χ enters is through its coupling with h µν . Symbolically, this is given by where T µν [ψ i ] is the stress-energy tensor of the external fields ψ i coupled to gravity. The mixing term can be taken care of by performing the field space rotation, symbolically, h µν =h µν +χη µν with Λ 3 3 = m 2 M Pl andχ the canonically normalized helicity-0 mode, At the linear level, the coupling between χ and any non-conformal matterχT is insensitive to the graviton mass m and does not vanish in the massless limit. This is of course at the JHEP04(2016)188 origin of the well-known linear vDVZ-discontinuity and its resolution lies in the nonlinear interactions which become increasingly important in the small mass limit as pointed out by A. Vainshtein in [5]. In the context of nonlinear massive gravity the implementation of this Vainshtein mechanism was considered for instance in [12,14,43,44]. At the nonlinear level the theory involves interactions of the form h(∂ 2χ ) n+1 /Λ 3n 3 , which implies that the theory is strongly coupled at the scale Λ 3 [8,27]. Linearized theory on AdS "Σ-model". When applied to AdS, the previous analysis has a rather different outcome: consider again the Lagrangian L 2 in (2.18) in its own right (i.e. separated from its gravitational context) in N = D dimensions but on an AdS spacetime, so that the tensor K and X now read is the AdS metric with curvature L −2 , so that its associated Ricci tensor is . Then the AdS curvature is sufficient to break the U(1) gauge symmetry already at the linear level. Indeed at the linear level about the trivial vacuum g µν = γ where all the contractions and covariant derivatives are with respect to the AdS spacetime metric. The appearance of a mass term for A µ on AdS implies that the theory enjoys no accidental U(1) and the helicity-0 mode χ acquires a kinetic term A 2 µ ⊃ (∂χ) 2 . It follows that on AdS the trivial vacuum φ a = x a is a perfectly well defined and acceptable vacuum for the sigma model (2.18) of N = D fields, out of which D − 1 are dynamical. Naturally, this result holds true for any generalization of that model L 2 + D n=3 α n L n . Implications for massive gravity on AdS. This result propagates to the case of gravity where it was shown that the linearized vDVZ is absent on AdS [17][18][19][20][21]. Indeed, in the limit where the AdS curvature is larger than the graviton mass m L −1 , the canonically normalized field is nowχ = Λ 3 * χ with and the coupling betweenχ and matter now goes as which makes the massless limit of the linearized theory well-defined already at the linear level about AdS. This massless limit seems to occur without the need of a Vainshtein mechanism but we stress that JHEP04(2016)188 1. The Vainshtein mechanism is actually (secretly) active through the AdS background and this absence of discontinuity is in fact a direct implementation of the Vainshtein mechanism. 2. Strong coupling is still present in that theory. Indeed, the nonlinear theory includes interactions of the form (∂χ) 2 (∂ 2χ ) n−1 /Λ 3n * implying that the theory is then strongly coupled at the scale Λ * as given in (2.23). 
As shown in figure 1, taking the limit m → 0 and L −1 → 0 leads to the same scaling as if one had started straight from massive gravity on Minkowski and taken the massless limit. However, for a finite mass m the strong coupling scale can be pushed higher if the AdS curvature is sufficiently large m L −1 , although this comes at the price of working about a non-Minkowski reference metric. In what follows we will show how one can capture some of these features of massive gravity on AdS (namely the absence of linearized vDVZ-discontinuity and a higher strong coupling scale) while maintaining the reference metric nearly Minkowski. What we will consider instead is a non-trivial Lorentz-violating vacuum for the Stückelberg fields. Nonlinear Hamiltonian analysis In the rest of this manuscript, we focus on the case where N = D, f µν = η µν and no longer distinguish between spacetime and target space indices. In this section we run the Dirac-Bergmann algorithm for the nonlinear theory (2.11). We will see rigorously that even when decoupling gravity, the BD ghost is eliminated, as argued above, and in general there is no gauge symmetry to further reduce the number of DoFs in the fields φ α . Lorentz-invariant vacua are hence special as they re-introduce an accidental U(1)-symmetry at linear order, but that U(1) is not a symmetry of the full sigma model and does not survive at higher order. Therefore in D dimensions, φ α involves D − 1 dynamical DoFs. For simplicity, and without loss of generality, we focus in this section on the minimal model JHEP04(2016)188 given in (2.11). The general model yields the same result. To explicitly perform the Hamiltonian analysis, it is convenient to work with an equivalent form of the minimal Lagrangian: where the auxiliary variable λ µν is a symmetric tensor with inverseλ µν . See appendix A for the equivalence between this Lagrangian and −TrX. To derive the Hamiltonian, we perform an ADM-like split for the symmetric tensor λ µν where latin indices are for now lowered or raised with σ ij or its inverse σ ij respectively. The conjugate momenta for φ α and σ ij are defined as where the Lorentz index α is lowered with η αβ . After the Legendre transform, the Hamiltonian becomes quadratic in µ k and linear in λ 0 . Integrating out µ k , we get where we have introduced the new set of Lagrange multipliers µ ij to impose the relation (3.5), and we have defined 7) C (1) = π α π α + 1 = 0, (3.8) where C (1) and C (1) ij are primary constraints. If now one further integrates out σ ij , one can see that it is not possible to have any further constraints apart from the secondary associated with C (1) . But to be prudent, we show this explicitly by keeping σ ij . Since C (1) and C ij contain only conjugate momenta but not the fields themselves, it is clear that we have ij (y)} = 0, (3.11) kl (y)} = 0, (3.12) and thus the time preservation of C (1) and C ij generate secondary constraints (3.14) One can indeed check that all λ 0 and µ mn are determined by this system of linear equations. This is more easily performed in a specific number of dimensions. For example, in D = 4 dimensions, one can show that the rank of the system of linear equations is 7, which corresponds to the number of λ 0 and µ mn . Thus, all λ 0 and µ mn are determined. The Dirac-Bergmann algorithm ends here and all constraints are second class. Counting the phase space DoFs, we have, in D = 4 dimensions, (4 + 6) × 2 − (6 + 1) − (6 + 1) = 6 = 3 × 2, (3.21) meaning that the number of physical DoFs is indeed 3. 
This result was proven for the minimal model L 1 , but by continuity it holds for a general theory of (2.11). We will re-confirm this result with a couple of different methods in the following. Exact non-trivial vacuum solution Having shown that the massive gravity nonlinear sigma model also propagates two constraints that remove the BD ghost, and thus has 3 DoFs on generic backgrounds in D = 4 dimensions, we shall now present an explicit example where this occurs. In order to separate ourselves from the precise matter content of the model we work in the vacuum. In this sense our approach is different from, say, massive gravity on AdS, which requires a negative cosmological constant to source the background configuration. For the sake of simplicity, we focus once again on the minimal model, although our conclusions remain the same for any linear combinations of the Lagrangians L n . Plane-waves One of the difficulties in solving this equation for generic configurations of the fields φ a lies in evaluating the square-root that enters in X µ ν . In what follows we will evaluate this JHEP04(2016)188 square-root by performing perturbative expansions about the trivial vacuum, but for now we may consider the particularly simple -yet instructive-example of plane waves. 5 Take for instanceφ where we have used the notation x 0 = t, x 1 = x and the index I labels the orthogonal directions, I = 2, · · · , D − 1. This solves the vacuum equations of motion for arbitrary combinations of the Lagrangians L n defined in (2.9) and for arbitrary analytic functions F I and G I . Indeed the tensorX µ ν associated with these plane wave configurations (4.1) satisfies ∂ µ (X n ) µ ν = 0 and ∂ µ TrX n = 0 no matter what the power n is. This implies that the background configuration (4.1) satisfies the equations of motion for the fields φ α for arbitrary combinations of the Lagrangians L n . For instance, without loss of generality, we can set G I = 0 for any I = 2, · · · , D − 1, F I = 0 for any I = 3, · · · , D − 1 and write F 2 (t − x) = F (t − x). Then, if for simplicity, we work in D = 3-dimensions and havē While the square root matrix X µ ν has many branches of solution, it is understood that one should choose the branch that connects with the identity matrix when F (t − x) → 0. So the matrixX µ ν associated with the non-trivial vacuum (4.2) is where the prime denotes a derivative with respect to the function's argument, and one can indeed check that this matrix satisfies Tr[X n ] = 3 for any power n, and so we have ∂ µ TrX n = 0. Furthermore, we can explicitly check that ∂ µX µ ν = ∂ µ X 2 µ ν = ∂ µ X 3 µ ν = 0, so (4.3) satisfies the vacuum equations of motion for arbitrary combination of Lagrangians L 1 + α 2 L 2 + α 3 L 3 . This result is independent of the number of dimension and remains valid for arbitrary configurations of the form (4.1). Degrees of freedom Having established that the plane wave configurations (4.1) are exact vacuum solutions, we now proceed to evaluate the number of perturbative DoFs. To establish the number of DoFs on that vacuum, it is sufficient to look at fluctuations of the form where we introduced a dimensionless parameter ε to count the order in perturbations. Focusing on the minimal model L 1 , then to quadratic order in V (quadratic order in ε), we have where F µνρσ are functions of F . 5 Despite the terminology these solutions do not need to exhibit an oscillator behavior and the functions F I and G I are arbitrary. 
JHEP04(2016)188 The Hamiltonian analysis performed in section 3 confirms that this model only has D − 1 DoFs. About the trivial vacuumφ α = x α (F ≡ 0), V 0 is indeed an auxiliary variable. On more generic vacua, the auxiliary variable is instead a linear combination of the fields V µ , and to simplify the derivation we can perform a rotation in field space V µ = W µ + R µ ν W ν so that W 0 is identified as the appropriate auxiliary variable. In D = 3-dimensions, the appropriate rotation is given by with so thatẆ 0 entirely disappears from the resulting Lagrangian and there are only two conjugate momenta given by: The Hamiltonian is then (to quadratic order in ε) where A n are functions of the background configuration F and are n th order in the remaining phase space variables W i , π i . The exact expressions for A 0 and A 1 are given in (B.8) and (B.8) of appendix B but are irrelevant to this discussion. A 0 is given by and vanishes on the Lorentz-preserving vacuum where F ≡ 0. About this trivial vacuum, W 0 is a Lagrange multiplier that generates a first-class constraint associated with an accidental U(1)-symmetry. Here we see explicitly that this symmetry is broken on generic backgrounds and while W 0 is still an auxiliary variable, it no longer generates a constraint for the phase space variables W i , π i . Then all the D − 1 remaining DoFs are dynamical and the resulting Hamiltonian (after integrating out the auxiliary variable W 0 ) is given by 6 This provides an explicit example of vacuum where all the expected DoFs are excited as they should. Unfortunately, in this specific example, A 0 > 0 and the resulting Hamiltonian is not bounded from below. As a result, in this specific example, the background solution turns out to be unstable. However, it represents an explicit proof-of-principle that nontrivial vacua can excite all the dynamical DoFs without needing to resort to a mixing with 6 For the trivial vacuum where F ≡ 0, one has A1 = ∂iπ i and A0 → 0. This means that deviating from the surface A1 = 0 would cost an infinite amount of energy and the fields are forced to live on the constrained surface where A1 = ∂iπ i ≡ 0. However, as soon as A0 = 0, one is allowed to deviate from that surface, and this deviation is encoded by the existence of an additional DoF. JHEP04(2016)188 the tensor (gravitational) fields. In what follows we will show how to construct a more general class of stable vacua by considering solutions for the Stückelberg fields which are perturbative about the trivial one. We emphasize that looking for perturbative vacua is only used as an approximate tool to derive explicit vacua, but the theory also contains much more general classes of vacua. General perturbative backgrounds We now present a different way to derive an acceptable non-trivial vacuum by relying on a perturbative approach. This will allow us to derive the Hamiltonian for a large class of vacua, confirming the DoF counting result of the full Hamiltonian analysis in section 3 and 4, and determining the absence of ghosts and gradient instabilities for a subclass of these vacua. Hamiltonian of fluctuations As considered previously, we look at fluctuations V µ in a non-trivial vacuumφ µ , where as before ε is a small dimensionless parameter which keeps track of the order in perturbations about the vacuumφ µ . 
Now for convenience and ease of the presentation, the vacuum configuration itself is treated perturbatively, and we will be considering the background to be perturbative in the dimensionless parameter (in what follows 'barred' quantities will represent quantities that only involve the background). For concreteness, we focus on a specific Lagrangian in what follows and choose with K µ ν = δ µ ν − X µ ν , so L α 2 NLS differs from the minimal model L 1 in (2.11). Including higher α n terms will add some computational complexity, but as we shall see below the α 2 term is sufficient for our purposes. In what follows we look at the Hamiltonian for the fluctuations V α living on top of the perturbed backgroundφ µ . We therefore wish to compute the Hamiltonian quadratic in ε and perturbatively in . We will see that working up to second order in is sufficient for this analysis. The resulting quadratic Lagrangian for V µ is given (symbolically) by As expected, to lowest order in , we recover the Maxwell term for V µ and the theory enjoys an accidental U(1)-symmetry. The exact expressions at linear and quadratic order in in arbitrary dimensions are given in appendix C. JHEP04(2016)188 We now follow the same procedure as in the previous section, see eq. (4.6), and perform a field space rotation so as to identify the auxiliary variable W 0 , and set the elementsT i perturbatively in so that the resulting Lagrangian does not involve anyẆ 0 (after appropriate integrations by parts). This procedure can be performed in arbitrary dimensions and if we focus for simplicity in D = 3 dimensions, we get where we have definedF µν ≡ 2∂ [µBν] . After substitutingT i into eq. (5.4), we can confirm that W 0 is manifestly an auxiliary variable. To pass to the Hamiltonian formulation, we therefore define the conjugate momenta π i = ∂L/∂Ẇ i and get where G 1 and G 2 do not depend on W 0 and their exact expressions is not relevant to the discussion here. In D = 3 dimensions the termĀ is given bȳ One important point to notice is that the term quadratic in the auxiliary variable W 0 only enters at quadratic order in . This means that up to leading and first order in the background expansion (zero and first order in ), the variable W 0 still acts as a Lagrange multiplier which generates the accidental U(1)-symmetry and removes one additional DoF. Indeed, had we truncated the theory to first order in , W 0 would then act as a Lagrange multiplier that enforces a primary constraint C ( ) 1 = ∂ i π i + G 2 ≈ 0 and one can show that this constraint is first-class since it Poisson-commutes with itself On the other hand, when the O( 2 ) corrections are included,Ā does not vanish for the background chosen and W 0 still remains an auxiliary variable but ceases to be a Lagrange multiplier. To that order, integrating out W 0 we then get Therefore, we can see that all the D − 1 DoFs are now activated. The reason why the Hamiltonian is non-analytical in after integrating out W 0 is simply because our background itself is a perturbation around the trivial background φ α = x α , where there is an accidental gauge symmetry and only D − 2 DoFs are active. The non-analyticity in the Hamiltonian (5.12) reflects the fact that a DoF activated by a perturbative background is very weakly coupled, as we shall see more explicitly in what follows. It is straightforward to construct backgrounds for whichĀ does not vanish and is positive, and we shall construct approximate solutions below. 
The longitudinal mode In the last subsection, we have derived the quadratic Hamiltonian for the field W i on a generic backgroundB µ . Around the trivial backgroundB µ = 0 (orφ α = x α ), the longitudinal mode of W i is only a gauge mode. But, around a generic background (at least including the O( 2 ) terms), this mode becomes dynamical and there are in total D − 1 DoFs. Since the leading order (O( 0 )) of the Hamiltonian (5.9) is just the Maxwell theory, D − 2 of these DoFs are just the transverse modes of an Abelian gauge field, thus totally free of ghost or gradient instabilities. Therefore, to study the linear stability of this theory, we only need to focus on the longitudinal mode π i ∝ ∂ i χ, W i ∝ ∂ i ψ. From the Hamiltonian (5.12), we see that the leading contribution to the longitudinal momentum mode χ comes from the term ∂ i π i 2 /4 2Ā . We shall scale it with so as to make the kinetic term of O( 0 ): Note that this is not yet the canonical normalization for the kinetic term, as there is still a characteristic scale inĀ. Up to O( ) neither G 1 nor G 2 contribute to the longitudinal mode ψ. This is because, up to O( ) in the Hamiltonian (5.9), there is still a gauge symmetry, enforced by a first class constraint C ( ) 1 , as we mentioned above. To see this explicitly, note that, at order O( ), the contributions in G 1 and G 2 which are independent of π i are given by These expressions are clearly independent of the longitudinal mode since G ij vanishes for the longitudinal mode W i ∝ ∂ i ψ. So the leading gradient terms, i.e., ψ 2 terms, come from the next order pieces in G 1 and G 2 . Thus, to make the leading gradient terms of O( 0 ), we can define the longitudinal mode as JHEP04(2016)188 The first term always comes in as squared, so we may definẽ (5.18) and regardχ as the new conjugate momentum. Therefore, the leading Hamiltonian is The linear stability of the longitudinal mode is guaranteed if one can find a background B µ , such thatĀ is positive and the gradient term for ψ is positive definite at least for a local patch of spacetime. Local backgrounds free of ghost and gradient instabilities For a smooth Λ 2 -decoupling limit to be well-defined, it is essential that there are some stable background solutions in the massive gravity nonlinear sigma model. For the perturbative backgrounds being considered, we have come to the conclusion that the background is stable if the longitudinal mode is stable, that is, H L NLS is bounded from below. While one requires an exact solution to be stable across the whole spacetime, it is not necessary for a perturbative background to be stable globally, as the perturbative background may only be a good approximation of the underlying exact solution within a coordinate patch. Thus, to facilitate the stability analysis, we will expand a generic perturbative background within a local spacetime patch. Within this approach, it is easy to give explicit examples where ghost and gradient instabilities are both absent. SupposeB α has a characteristic length scale L, we can at least expect that within the spacetime patch x < L,B α is smooth and analytical, and approximates the underlying exact solution to a sufficiently good extent. Thus, we Taylor expandB µ around the coordinate origin and substitutē into the Hamiltonian (5.19). HereH µ ρ andM µ ρσ are constant. 
To leading order, both in and x/L, we have where now we havē (5.23) JHEP04(2016)188 Now, sinceF µν andĀ are just constants, we can move ∂ i and √ ∇ 2 around by partial integration, so we may re-write eq. (5.21) as The gradient terms in this expansion are rather simple. In fact, they are manifestly positive definite. Thus, there are no gradient instabilities for any perturbative background within a local patch L. To determine the consistency of a perturbative background, one only needs to check for ghost instabilities, which amounts to checking whether or not a perturbative background gives rise to a positiveĀ. The equations of motion for φ µ in this approach becomes, to lowest and sufficient order, In this section we have established the existence of stable vacua for the longitudinal mode. Since on this perturbative vacua, the other DoFs simply behave as an Abelian gauge theory (with small corrections), these DoFs are obviously free of ghost and gradient instabilities. Moreover, the longitudinal mode does not mix with the gauge modes to leading order. Thus, at least within our perturbative approach, there are backgrounds in the massive gravity nonlinear sigma model that are entirely free of ghost and gradient instabilities. 6 The Λ 2 -decoupling limit In section 2.3, we have seen that around the trivial background the longitudinal mode of ghost-free massive gravity only acquires a kinetic term via mixing with the tensor modes. Thus, around the trivial background, the theory is strongly coupled at the scale Λ 3 . In the previous sections, we have shown that the massive gravity nonlinear sigma model (2.11) has D − 1 DoFs and there are non-trivial backgrounds where all of these D − 1 DoFs are excited and are stable, at least perturbatively. This means that on these generic vacua, ghost-free massive gravity admits a Λ 2 -decoupling limit: which leads to where JHEP04(2016)188 We emphasize that directly setting g µν = η µν in ghost-free massive gravity would be an inconsistent procedure. Rather, the correct way to obtain the massive gravity nonlinear sigma model is through the Λ 2 -decoupling limit defined above. In this way, the healthy properties of ghost-free massive gravity can be carried over to the resulting scaled theory, i.e., the massive gravity nonlinear sigma model. To prove a smooth Λ 2 -decoupling limit exists, we need to make sure the would-be decoupled theory has the right DoFs and there are backgrounds where these DoFs are well-behaved, which we have proven in the previous sections. In what follows we can therefore work in this Λ 2 -decoupling limit and determine how the strong couplings scale gets redressed by the scale L −1 . Generic operators In section 5, we have shown that there are healthy backgrounds that are a small deviation from the trivial oneφ µ = x µ . It may well be the case that there are healthy backgrounds far away from the trivial solution which could in principle be written as whereφ α is an exact background andQ α ρ ∼ O(1) is assumed to have a characteristic length scale L. One might also considerQ α ρ not to be O(1), but that simply amounts to redefining graviton mass m and tuning dimensionless parameters α n (or β n ) away from O(1). Schematically, the spacetime derivative of the background goes as The matrix square root goes like X ∼ ∂φ(1 + ∂V /∂φ + (∂V /∂φ) 2 + · · · ) + O(h/M Pl ). 
Substituting these into the action (2.14), the quadratic kinetic terms around this background are schematically given by In our dimensional analysis below, we shall neglect all O(1) factors such as f µν ρσ (∂φ) as well as the Lorentz indices unless needed for the discussion. As shown in the previous sections, one DoF in V µ is not dynamical, so one can always perturbatively make a field redefinition so that W 0 is manifestly an auxiliary variable and the D − 1 components of W i are dynamical. At linear order in W µ , this redefinition should reduce to a linear rotation similar to that of eq. (5.6) but withT i now depending on the generic backgroundφ. As shown JHEP04(2016)188 in the previous section, the kinetic terms after the field redefinition will be schematically given by There is a characteristic scale L −1 coming out of the background every time a derivative is shifted from W µ to the backgroundφ. As W 0 is an auxiliary field, one can integrate it out, which, to leading order in perturbations in W µ , should be We will later include all possible nonlinear terms of W µ for W 0 . Therefore, integrating out W 0 at leading order, we have where W i ⊥ represent the D − 2 transverse modes and W i the longitudinal mode which is absent on the trivial vacuum but not on generic ones. Note that in deriving eq. (6.10) we have neglected the L∂W i term of eq. (6.9). This is because a derivative on W µ is greater than L −1 within x L, so one can symbolically think of L∂ as a large number. The canonical normalizations are then and from these normalizations, it is obvious that the lowest strong coupling scale should come from some pure W µ interactions, i.e., terms without h. Although the model is fixed (up to a few parameters), we now have the freedom to choose the vacuumφ. This choice will then affect the normalization and hence the scale of the interactions. We shall first assume that all a priori conceivable terms exist, and then comment on specific classes of vacua where certain terms happen to cancel. Before canonical normalization and integrating out W 0 , a generic interaction for W µ is given by Next, we integrate out W 0 , which, including all possible nonlinear orders, may be written as where we have used eq. (6.9) for W 0 . (In here N is not to be confused with the dimension of the target space that appeared earlier.) Substituting W 0 into eq. (6.12), a generic interaction term is then given by . (6.14) JHEP04(2016)188 Assuming that M of the W i are the longitudinal mode W i and the rest are the transverse mode W i ⊥ , the canonical normalization gives (6.15) with integers T, N, K, P, Q, M satisfying For operators with QK − P − M ≤ 0, the corresponding operator is either relevant or has a strong coupling scale that is no smaller than Λ 2 (simply noting that T −2+Q(N −K) > 0). Strong coupling scale The operators that enter at the lowest energy scale satisfy QK −P −M > 0, which requires For these operators, the associated energy scale is a geometric mean of Λ 2 and L −1 (the characteristic scale of the background): For the stable perturbative backgrounds we have identified with the local coordinate expansion, the existence of a valid effective field theory requires that L is larger than Λ −1 2 * , which implies L −1 < Λ 2 . It follows that the lowest interaction scale then comes from a geometric mean where L −1 has as many powers as possible. 
That is, the lowest strong coupling scale corresponds to the greatest ratio of In summary, using the relation (6.16) as well as K > 0, it is clear that the greatest ratio corresponds to N = K = 2, P = M = 0, Q = T with T = 3. This ratio comes from cubic terms that go like Since W 0 is an auxiliary field, the ∂ 3 in front of (W 0 ) 3 should only contain spatial derivatives. Thus, if all a priori possible terms exist in the perturbative expansion of W µ on some backgroundφ, then the lowest strong coupling is given by On the other hand, it is conceivable that for certain backgrounds some operators may not exist or cancel out. In addition, some operators may be removable by field redefinitions. Around those backgrounds, Λ 2 * can potentially be raised to Then the greatest ratio of m/n is given by 2 − (P + M − 4)/(T − 2), which tends to 2 when T → ∞ and P, M remain finite. In summary, the precise value of the strong coupling scale depends on the detailed properties of the vacuum and its characteristic scale L, which should be analyzed on a case by case basis. But the range of the dressed scale Λ 2 * is and can be parametrically larger than the standard Λ 3 scale one typically derives in massive gravity. Notice that when L is so large that the resulting scale Λ 2 * becomes comparable or smaller than Λ 3 then the interactions with the gravity can no longer be ignored and the correct strong coupling scale does not actually fall below Λ 3 . U(1) symmetry in 2D The general results of the previous sections apply to dimensions greater than two. In D = 2 dimensions, the massive gravity nonlinear sigma model has an extra gauge DoF, on top of the constraints that eliminate the BD ghost. So there is no physical DoF in the 2D massive gravity nonlinear sigma model, if the internal space is of the same dimension as the spacetime. In this section, we show explicitly the gauge transformation around an arbitrary background. The general massive gravity nonlinear sigma model in 2D is given by For simplicity, we adopt here a Euclidean signature for η µν and η ab , as our goal is mainly to count the number of DoFs in the theory. AssumingĀ µ is a background solution which satisfies the equations of motion, we look for a small perturbation around it The equations of motion forĀ µ are The quadratic Lagrangian for the perturbations V µ on the vacuumĀ µ is captured by where ξ(t, x) is the gauge parameter, once the on-shell conditions are imposed onĀ µ . This implies that the U(1)-symmetry remains about any on-shell background of the theory. Since we have worked at quadratic order about an arbitrary background, our analysis is equivalent to working to all orders about the trivial background. The helicity-0 mode is hence fully absent from the theory which propagates no physical degrees of freedom in D = 2 dimensions. The existence of this symmetry is very specific to D = 2 dimensions and as we have seen does not generalize to higher dimensions where the U(1)-symmetry is broken in the full theory. Discussions In this paper, we have developed the Λ 2 -decoupling limit of Lorentz-invariant massive gravity. This is an approximate description of a large family of solutions of Lorentzinvariant massive gravity, all of which spontaneously break Lorentz invariance. Hence this excludes the usual Lorentz invariant vacuum which lies within the Λ 3 regime. Interestingly the Λ 2 Λ 3 regime is far closer in spirit to the decoupling limit of massive gravity on AdS where the strong coupling scale is also parametrically higher. 
As in the case of massive gravity on AdS, the vDVZ-discontinuity is simply absent already at the linear level, and hence these backgrounds easily comply with existing tests of gravity. Beyond the scheme of massive gravity, we have also shown an interesting connection between ghost-free massive gravity as a generalization of the p-brane Nambu-Goto action. In particular, we have pointed out that the ghost-free graviton potential can be viewed as a non-standard nonlinear sigma model that uniquely evades the compact requirement for the target space. This evasion is different from all the known examples where some auxiliary gauge trick is utilized and the first class constraints associated with the gauge symmetries explicitly project out the would-be ghost, while the massive gravity nonlinear sigma model makes use of second class constraints to project out the would-be ghost. The uniqueness of ghost-free massive gravity, which essentially is due to the uniqueness of the matrix square root and anti-symmetrization scheme of the graviton potential, suggests that Lagrangian (2.11) is a unique generalization of the Nambu-Goto action that eliminates the ghost associated with the negative direction of the target space [15]. Without spoiling the spirit of this uniqueness, a further generalization is to promote the α n parameters to be functions of φ A , which also gives rise to a consistent nonlinear sigma model [15]. On the other hand, letting the target space have more than one negative direction, such as (−−, + · · · +), is necessary problematic [15]. Such a nonlinear sigma model has more than one ghost in the spectrum, but the unique matrix square root and anti-symmetrization scheme can only eliminate one ghost. 7 (In the Nambu-Goto special case, having more than JHEP04(2016)188 one negative direction is possible as there are more than one diffeomorphism invariance, if D = p + 1 > 1.) For most of this manuscript, we have restricted ourselves to an internal space which is at least as large as the spacetime dimension, N ≥ D. The case N < D has its own interest, and was for example applied for the description of realistic condensed matter systems using the AdS/CFT correspondence in [45]. However, the absence of the BD ghost for N < D is more subtle. As shown in [46], in some cases of N < D, all the N DoFs may propagate. We note that this happens whenever the lapse function squared of the reference metric −f 00 + f 0k (f −1 ) kl f l0 vanishes, (here we have extended the target space metric f AB with zeros such that it formally has the same dimension as g µν ), which is when the unitary gauge Hamiltonian proof of the ghost-free-ness of massive gravity with a general reference metric [47] fails. We have studied the massive gravity nonlinear sigma model by performing a nonlinear Hamiltonian analysis/Dirac-Bergmann algorithm, finding an exact solution and examining perturbations on that solution, and examining perturbations on a general perturbative background and determining its stability. Our study of the massive gravity nonlinear sigma model indicates that: • There exists a smooth Λ 2 -decoupling limit where the tensor modes are completely decoupled, and the whole matrix square root and anti-symmetrization structure is kept intact. • There are many non-trivial Λ 2 -backgrounds that are stable, around which all the D − 1 DoFs are propagating. 
These backgrounds need non-vanishing support from the vector modes, and spontaneously break the Lorentz invariance with the strength of the graviton Compton length scale. • There is no linear vDVZ-discontinuity around these Λ 2 backgrounds. Thus these backgrounds trivially pass the local gravity tests such as the solar system tests for a Hubble scale graviton Compton length. In some sense, the Λ 2 backgrounds are the ones with the Vainshtein mechanism already implemented. • Around these Λ 2 backgrounds, the strong coupling scale is raised to Λ 2 * , which is parametrically larger than Λ 3 . It has been shown that homogeneous and isotropic cosmological solutions, as well as static, spherically symmetric black holes, in ghost-free massive gravity are absent/unstable [43,44,48], and it has been argued that the "natural" cosmological solutions in ghost-free massive gravity are inhomogeneous/anisotropic and the "natural" black hole solutions are non-static/spherically symmetric, the deviations from the exact symmetries being typically of O(m 2 ). In the Λ 2 decoupling limit, we are forced to break Lorentz symmetries in order to have stable backgrounds, and indeed we expect that it is the Λ 2 decoupling limit that is the most appropriate description of the generic inhomogenous cosmologies in massive gravity. We remind the reader that this forced inhomgeneity is not in conflict with observations since the scale of the inhomgeneity is set by m −1 which can JHEP04(2016)188 be made arbitrarily large, and is usually taken to be at least of the order of the current Hubble horizon. The existence of the Λ 2 -decoupling corresponds to a description of backgrounds which in unitary gauge will locally take the form They are physically different solutions from the Minkowski metric η µν even if the O(m 2 ) corrections were excluded, and the differences will show up in perturbations in the gravitational sector. If m −1 is taken to be a cosmological scale (of the order of the observable Universe today), all these backgrounds have essentially an approximately FRW geometry below the Hubble horizon, and at scales larger than the current Hubble scale can become inhomogeneous. We thus expect that the Λ 2 solutions describe a typical inhomogeneous cosmology, which may be approximately homogenous out to the scale m −1 . Once again, these Λ 2 backgrounds have the virtue that there is no linear vDVZ-discontinuity, and hence it will be significantly easier to satisfy current tests of gravity, raising the possibility that it is these Λ 2 backgrounds that may have the most direct connection with phenomenology. We have shown that a Λ 2 background that is perturbatively away from the trivial Λ 3 backgroundφ α = x α is sufficient to excite the longitudinal mode. This suggests that one can continuously connect the trivial Λ 3 background with some nontrivial Λ 2 backgrounds. There may be some backgrounds such that in some local region (for instance around a star or black hole) the background is of the Λ 2 type, and asymptotically the background approaches the Λ 3 limit. How a particular background is chosen is determined by the initial and boundary conditions. JHEP04(2016)188 where λ ρσ and λ ρσ are symmetric in exchanging ρ and σ. Since Λ µ ν is quadratic in either of the two Lagrangians, we can easily integrate it out respectively. Up to a global rescaling of λ αβ , we get whereλ ρσ is the inverse of λ ρσ . 
In section 3, we take advantage of Lagrangian (A.4), as this form entitles an ADM-like splitting for λ αβ in the full Hamiltonian analysis. This action also resembles the Polyakov action to some extent. Expressions similar to Lagrangian (A.5), with gravitons activated, have been utilized to re-confirm the absence of the BD ghost in ghost-free massive gravity [50,51]. Further integrating out λ ρσ , we arrive at B Plane-wave Hamiltonian To count the DoFs about the non-trivial plane-wave vacuum configuration (4.1), we work in the Hamiltonian formalism. To provide an explicit derivation, we focus on the D = 3 dimensional case provided in eq. (4.2) and without loss of generality, we consider solely the Lagrangian L 1 . We consider linear fluctuations V α about the vacuum configurationφ α so that the fields φ α take the form φ α =φ α + V α . (B.1) To quadratic order in fluctuations, we then have where F µνρσ are functions of F . Since the BD ghost is absent from this theory (as confirmed by the Hamiltonian analysis of section 3, some combination of the V µ 's must play the role of a Lagrange multiplier. On arbitrary backgrounds the Lagrange multiplier is a linear combination of the V µ 's, and, to make the primary constraint manifest, we can rotate the fluctuations V α in field space in such a way that W 0 becomes an auxiliary field. By requiring ∂L 1 /∂Ẇ 0 not to contaiṅ W µ , we get in D = 3 dimensions where A n are functions of the background configuration F and are n th order in the remaining phase space variables W i , π i , +2(F 2 + 8)(3F 2 + 16) F (∂ 2 π 2 F − 4∂ 2 π 1 + 4∂ 1 π 2 ) − 8(∂ 1 π 1 + ∂ 2 π 2 ) +F 4 (6∂ 2 π 2 + ∂ 2 W 1 F ) + 8π 2 (3F 2 + 8)F 2 F − 64π 1 (F 2 + 4)F F , (B.8) −64∂ 2 W 2 + 4096π 2 1 F 2 + 4 + 256π 2 2 3F 4 + 16F 2 + 64 . (B.9) As soon as A 0 = 0, W 0 enters quadratically and it no longer imposes an additional first-class constraint. Rather one can easily integrate it out giving rise to the following Hamiltonian Since we are looking for the stability of the fluctuations V α , it is sufficient to construct the Lagrangian and Hamiltonian at quadratic order in fluctuations, i.e., to second order in ε. Moreover, we treat the backgroundφ α perturbatively and for the sake of this analysis it JHEP04(2016)188 will be sufficient to work to second order in . To that order in perturbations, the explicit form of Lagrangian (5.4) is then given by Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
16,087.4
2016-04-01T00:00:00.000
[ "Physics" ]
Single-molecule stochastic resonance Stochastic resonance (SR) is a well-known phenomenon in dynamical systems. It consists of the amplification and optimization of the response of a system assisted by stochastic noise. Here we carry out the first experimental study of SR in single DNA hairpins, which exhibit cooperative folding/unfolding transitions under an oscillating mechanical force applied with optical tweezers. By varying the frequency of the force oscillation, we investigated the folding/unfolding kinetics of DNA hairpins in a periodically driven bistable free-energy potential. We measured several SR quantifiers under varied conditions of the experimental setup, such as trap stiffness and length of the molecular handles used for single-molecule manipulation. We find that the signal-to-noise ratio (SNR) of the spectral density of measured fluctuations in molecular extension of the DNA hairpins is a good quantifier of the SR. The frequency dependence of the SNR exhibits a peak at a frequency value given by the resonance matching condition. Finally, we carried out experiments in short hairpins that show how SR might be useful to enhance the detection of conformational molecular transitions of low SNR. I. INTRODUCTION All nonlinear systems that exhibit stochastic noise can undergo stochastic resonance (SR). When SR is triggered, the response of a system to an external forcing is amplified. SR has been studied in a large variety of systems, including climate dynamics [1,2], colloidal particles [3][4][5], biological systems [6][7][8], and quantum systems [9,10]. With the recent advent of single-molecule techniques, it is nowadays possible to measure SR at the level of individual molecules. Biomolecules exhibit rough and complex free energy landscapes that determine folding kinetics and influence the way they fold into their native structures. The use of force spectroscopy techniques has become common practice in studies of molecular biophysics. By applying a mechanical force at both extremities of an individual molecule and by recording the time evolution of the molecular extension (the reaction coordinate in these experiments), information about the folding reaction can be obtained.
The application of force makes it possible to disrupt the weak bonds that hold the native structure together and to reach a stretched, unfolded conformation. In this way thermodynamics (e.g. the free energy of folding) and kinetics (the rates of unfolding and folding) can be determined. Although most SR studies use temperature as a tunable parameter, this is not the best choice to investigate SR effects at the single-molecule level. Biomolecules have a strong sensitivity to temperature variations. Indeed, beyond increasing thermally assisted noise, temperature also modifies the shape of the molecular free energy landscape. Thus, another tunable parameter, such as the oscillation frequency of the applied force, might be more appropriate to study SR in biomolecules. SR appears as a maximum in the response of a biomolecule at a characteristic frequency (the resonance frequency). This occurs when a characteristic timescale of the signal (e.g. its decorrelation or relaxation time) matches half the period of the oscillation (the so-called matching condition). The matching condition must not be taken as a strict equality but as a qualitative relationship between the two timescales [11,12]. This means that different SR quantifiers may not give coincident resonance frequencies, especially for low-quality resonance peaks. It therefore seems important to investigate which SR quantifier is best suited to identify SR behavior. In this work, we use optical tweezers to investigate SR in single DNA hairpins driven by oscillatory mechanical forces. The high chemical stability of DNA makes DNA hairpins excellent models to investigate SR at the single-molecule level. When the force oscillates around the average unfolding force, the thermally activated hopping between the folded (F) and unfolded (U) states synchronizes with the frequency of the external driving force, leading to SR. SR can be measured by recording the oscillations produced in the molecular extension, relative to the magnitude of the noise produced by the thermal forces. Our aim in this work is to perform a systematic study of SR in single molecules exhibiting bistable dynamics, rather than using SR as a tool to determine the kinetic properties of DNA hairpins. In fact, these can be estimated by other, much less time-consuming methods (e.g. by directly analyzing hopping traces). Yet, we also carry out SR studies in short hairpins that show how SR might prove useful to enhance the detection of conformational transitions of low SNR. The paper is organized as follows. In Section II, our experimental setup is explained. Our main SR results in DNA hairpins are presented in Section III, and the influence of the experimental conditions (i.e. dsDNA handle length and trap stiffness) is investigated in Section IV. We compare different SR quantifiers in Section V, and in Section VI we describe the related phenomenon of resonant activation. Finally, in Section VII, we use purposely designed short DNA sequences with increased signal noise to test whether SR can still be used to identify the hopping rate. In the last section, we summarize our conclusions and discuss situations where SR might be a useful technique. II. EXPERIMENTAL SETUP AND HOPPING EXPERIMENTS In Fig. 1a, we show a schematic illustration of our experimental setup (left) and the DNA sequence of hairpin H1 that we investigated (upper right). The DNA hairpin is tethered between two short dsDNA handles (29 bp) that are linked to micron-size beads [13].
One bead is captured in the optical trap whereas the other is immobilized at the tip of a glass pipette [14]. By moving the position of the optical trap relative to the pipette, a force is exerted at the extremities of the hairpin. In a pulling experiment, the optical trap is moved away from the pipette and mechanical force is applied to the ends of the DNA construct (DNA hairpin plus DNA handles) until the value of the force at which the hairpin unfolds is reached. In the reverse process, the trap approaches the pipette and the force is relaxed until the hairpin refolds. In this experiment, the force exerted upon the system, f , is recorded as a function of the relative trap-pipette distance, giving the so-called force-distance curve (Fig. 1a, lower right). Around the coexistence force, f c ≃ 14.5 pN, the hairpin hops between the F and U states for sufficiently low pulling speeds. Hopping experiments can be done in two different modes: constant force mode (CFM) and passive mode (PM) [15,16]. In the CFM, the force applied to the DNA construct is maintained at a preset value by moving the optical trap through force-feedback control (Fig. 1b, upper). The folding and unfolding transitions of the DNA hairpin are followed by recording the trap position, X(t). In contrast to the CFM, the PM is operated by leaving the position of the optical trap stationary without any feedback. The bead passively moves in the trap in response to changes in the extension of the DNA construct (Fig. 1b, lower). When the hairpin unfolds, the trapped bead moves toward the trap center and the force decreases; when the hairpin folds, the trapped bead is pulled away from the trap center and the force increases. The folding and unfolding transitions of the DNA hairpin are followed by recording the force, f (t). In both cases (CFM and PM), the kinetic rates of hopping can be measured from the residence times of the trace (X(t) in the CFM and f (t) in the PM). Fig. 1b shows hopping traces measured in the CFM and PM at the coexistence force, f c ≃ 14.5 pN, where the hairpin hops between the F and U states, populating them with equal probability (i.e. it spends equal time in both states). In this work, we focused on the experiments at controlled force, rather than at fixed trap position. Both the hopping and the oscillation experiments (described below) were carried out using the force-feedback control. The reason is that the controlled-force experiments avoid undesirable drift effects in force that strongly affect the kinetics of the hairpin (see Methods). Therefore we mainly carried out the experiments in the CFM by recording the position of the trap, X(t). This signal exhibits dichotomous motion between the two distinct levels of extension (Fig. 1b, upper left). The difference between the two levels (short extension, folded; long extension, unfolded) reflects the release in extension (≃ 18 nm) of the 44 nucleotides of hairpin H1. From X(t) we can extract the residence time distribution at each state, which shows the exponential form characteristic of first-order decay processes (Fig. 1c). The fit of the time distribution to an exponential function allows us to obtain the average residence time. The force-dependent kinetic rates (equal to the inverse of the mean lifetimes), k FU and k UF , were measured at the coexistence force, f c = 14.5 ± 0.3 pN, giving k c = k c FU = k c UF ≃ 0.66 ± 0.04 s −1 (Table S0 in SI).
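Since the residence times are exponentially distributed, the rate estimate reduces to the inverse mean dwell time. A minimal sketch with synthetic dwell times (the numbers are illustrative, not the measured data):

```python
# Estimate a hopping rate from exponentially distributed residence times,
# as in the dwell-time analysis described above (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
k_true = 0.66                                     # s^-1, of the order of k_c
dwells = rng.exponential(1.0/k_true, size=500)    # synthetic residence times (s)

k_hat = 1.0/dwells.mean()                         # maximum-likelihood estimate
k_err = k_hat/np.sqrt(dwells.size)                # approximate standard error

print(f"k = {k_hat:.2f} +/- {k_err:.2f} s^-1")
```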
III. SR EXPERIMENTS To induce the SR phenomenon, we applied an oscillating force, f (t), to the DNA hairpin using the force-feedback protocol, where f (t) = f c + f os (t). For f os (t) we chose a square-wave signal of amplitude A and frequency ν os = 1/T os , where T os is the oscillation period (Fig. 1d, upper). The four distinct levels of extension observed (Fig. 1d, middle) correspond to the molecular extensions of the hairpin in the F and U states at the two force values, f = f c + A and f = f c − A. The power spectral density, S(ν), is defined as the Fourier transform of the stationary correlation function of the signal X(t), S(ν) = ∫ dt e i2πνt ⟨X(t 0 )X(t 0 + t)⟩, where ⟨·⟩ denotes a time average over the signal. As shown in Fig. 1d (lower), S(ν) can be described as the superposition of a background power spectral density, S N (ν), and a structure of delta spikes centered at ν n = (2n + 1)ν os (n = 0, 1, 2, · · · ). In order to extract the signal from the background noise, we define the output signal (OS, Eq. (2)), the background noise (BN, Eq. (3)) and the signal-to-noise ratio (SNR, Eq. (4)) [11]. The SNR defined in Eq. (4) is equal to the ratio of the spectral power of the signal at the frequency ν os (OS) to the noise-floor spectral density measured in the presence of the oscillation (BN), and has dimensions of Hz. Fig. 1d (lower) illustrates how we measured the OS (red area) and the BN (blue vertical bar) from the spectral density. Other equivalent definitions of the SNR [17] are the dimensionless ratio between the power in the output signal (Eq. (2)) and the total input noise power delivered by the noise (proportional to the integral of the background spectral density S N (ν) over all ν). Because the total input noise power only depends weakly on ν os , we can take the OS, Eq. (2), as another indicator of the SR phenomenon. Indeed, both indicators OS and SNR are equally valid to identify resonant behavior, even though the peak is often more visible in the latter (see below) [18]. For the hairpin H1 at high trap power and trap stiffness κ trap ≃ 70 pN/µm, the resulting OS and BN as a function of ν os are depicted in Fig. 2a (lower), while Fig. 2c shows the SNR. In contrast to the OS, the presence of a peak around ν os = 0.4 ± 0.05 Hz is apparent for the SNR. This value is close to that predicted by the matching condition, ν SR = k c /2, which states that the SNR is maximum when the average hopping time of the hairpin (1/k c = 1.56 s) is equal to half the period of the forcing oscillation (1/(2ν os ) = 1.25 s) [11,[19][20][21]. This shows that SR in single-molecule hopping experiments approximately fulfills the matching condition, as has been observed in other bistable systems. The OS and the SNR can be calculated theoretically as a function of the oscillation frequency for a Brownian particle in a continuous double-well potential [18,19,22]. In this model, the OS and the SNR exhibit a soft and a sharp peak, respectively, only when SR is induced at large enough forcing amplitudes [18]. These large forcing amplitudes correspond to a non-linear regime of the system, in which the shape of the double-well potential is so deformed that the barrier separating the wells vanishes at the maximum elongation of the oscillation. In our experiments, we applied a large oscillation amplitude (A = 0.7 pN). Note that the region of coexistence between the F and U states spans less than 3 pN in Fig. 1a (lower right). Thus an extra force of 0.7 pN strongly alters the barrier and the relative free energy between states F and U.
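For illustration, the OS, BN and SNR of Eqs. (2)-(4) can be estimated from a sampled trace roughly as follows; the trace below is synthetic (a noisy square wave standing in for X(t)), and the choice of neighbouring bins for the background is our simplification, not the paper's exact procedure:

```python
# Estimate OS, BN and SNR from the periodogram of a driven, noisy trace.
import numpy as np
from scipy.signal import periodogram, square

fs, T, nu_os = 200.0, 400.0, 0.4                 # sampling (Hz), duration (s), drive (Hz)
t = np.arange(0.0, T, 1.0/fs)
rng = np.random.default_rng(1)
X = square(2*np.pi*nu_os*t) + rng.normal(0.0, 2.0, t.size)

f, S = periodogram(X, fs=fs)                     # one-sided spectral density
i0 = int(np.argmin(np.abs(f - nu_os)))           # bin containing the drive frequency
bkg = np.r_[S[i0-6:i0-2], S[i0+3:i0+7]]          # nearby bins -> noise floor
BN = bkg.mean()                                  # background density at nu_os
OS = (S[i0] - BN)*f[1]                           # delta-spike power (bin width f[1])
print(f"SNR = {OS/BN:.1f} Hz")                   # SNR has dimensions of Hz (Eq. (4))
```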
Our experimental results agree with the theoretical predictions by Stocks [18] obtained in the non-linear response regime. We performed a numerical simulation of an overdamped particle moving in a double-well potential with parameters that fit the experimentally measured molecular free energy landscape (Sec. IV in SI). Despite its simplicity, the model qualitatively reproduced the experimental results for the OS, BN and SNR (dashed lines in Fig. 2c).
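A minimal sketch of this kind of simulation, assuming a quartic double well and a square-wave tilt (Euler-Maruyama integration; the parameters are illustrative and not the fitted landscape of hairpin H1):

```python
# Overdamped Langevin dynamics in a periodically tilted double-well potential.
import numpy as np

rng = np.random.default_rng(2)
kT, gamma = 1.0, 1.0
a, b = 4.0, 1.0                    # U(x) = -a x^2/2 + b x^4/4, barrier a^2/(4b)
A, nu_os = 1.5, 0.05               # tilt amplitude and drive frequency
dt, n = 1e-3, 500_000

x = np.empty(n); x[0] = -np.sqrt(a/b)                # start in the left well
kick = np.sqrt(2.0*kT*dt/gamma)*rng.normal(size=n-1)
for i in range(n - 1):
    F = A*np.sign(np.sin(2*np.pi*nu_os*i*dt))        # square-wave forcing
    x[i+1] = x[i] + dt*(a*x[i] - b*x[i]**3 + F)/gamma + kick[i]
# x can now be analysed like the measured trace X(t) (periodogram, SNR, ...).
```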
In order to see what happens for lower oscillation amplitudes, we explored the response of hairpin H1 to an oscillating force of lower amplitude, A = 0.2 pN. A very soft peak and a gentle maximum in the OS and the SNR can be seen around 0.4 Hz (Fig. S1 in SI), in agreement with the results previously obtained for the higher amplitude, A = 0.7 pN (Fig. 2). However, the peak for A = 0.2 pN is much less clear than the peak for A = 0.7 pN, showing the importance of using oscillation amplitudes beyond the linear-response regime (AX ‡ /k B T ≪ 1, where X ‡ is the characteristic distance separating the folded or unfolded states from the transition state; see also Sec. III in SI for SR behavior in the linear-response regime). A characteristic feature of SR experiments at the single-molecule level is the large variability observed in the measured response from different molecules. Fig. 3 shows the OS, BN and SNR for 10 different molecules. Larger variability is observed for the OS as compared to the BN. This might be due to non-linear effects which are sensitive to small differences in the experimental setup (e.g. tether misalignment with respect to the pulling direction, variations in the size of the bead and the trap stiffness, etc.). IV. INFLUENCE OF TRAP STIFFNESS AND LENGTH OF THE HANDLES An important issue in single-molecule experiments concerns the influence of transducing effects induced by the experimental setup (e.g. trap stiffness and length of the handles) on the measured kinetics. Recent studies [13,15,16] show that the kinetic rates are only moderately affected (within one order of magnitude) when changing the length of the handles one thousand-fold or the trap stiffness ten-fold. Besides, numerical simulations carried out in Ref. [16] show that kinetic rates for hairpins measured with handles and trap always remain close to the intrinsic rate (i.e. the rate measured without handles and trap) and converge to it in the limit of very compliant linkers. To investigate the influence of the experimental design on the kinetics of hairpin H1, SR was further studied by varying conditions of the experimental setup such as 1) the stiffness of the optical trap and 2) the length of the handles. We observed how both effects changed the intrinsic noise of the system (Figs. 2b, 2c and 4). In the first case, when the trap stiffness, κ trap , was decreased from 70 pN/µm to 24 pN/µm (Fig. 2b), the maximum peak in the SNR was shifted to higher frequencies (from 0.4 Hz to ≃ 0.8 Hz) and became less clear (Fig. 2c, red curve). The effect of the trap stiffness on SR was evaluated by using the numerical simulation (Sec. IV in SI), finding good agreement between experiments and simulations (Figs. 2b and 2c). In the second case, if we increase the length of the handles twenty-fold (528 bp and 874 bp at each flanking side) while keeping the trap stiffness constant, κ trap = 70 pN/µm, we find that the resonance frequency shifts to a larger value for the long handles (Fig. 4). For the long handle construct, the matching condition is verified (ν SR = 2 Hz) with k c ≃ 4 s −1 as obtained from hopping experiments [13]. The dependence of the resonance frequency measured from SR, ν SR , on the trap stiffness and the length of the handles was similar to that reported for the hopping rate measured in the hopping experiments at the coexistence force [13,15,16]. In both cases, the softer trap stiffness or the larger compliance of the long handles contributes to increasing the hopping rate, supporting the conclusions of Ref. [13]. Interestingly enough, the quality of the resonant peak worsens as the trap stiffness decreases but not as the linker becomes softer, showing that the quality of the SR peak only depends on the combined effective stiffness of bead and handles (1/κ eff = 1/κ trap + 1/κ handle ≃ 1/κ trap ), which is approximately equal to the trap stiffness in our experimental conditions. V. OTHER SR QUANTIFIERS Next we investigated other representative SR quantifiers. These are: the fraction P 1 of transitions that occur every half-period of the oscillation [4,23,24]; and the average dissipated work, W [5,25]. To extract P 1 , we measured the residence time distributions, P (τ F ) and P (τ U ), of the F and U states in the presence of the oscillating force. The distributions are shown in Fig. 5a for hairpin H1 in the cases ν os = 0.4 Hz (upper) and ν os = 5 Hz (lower) with A = 0.7 pN. Unlike the distributions shown in Fig. 1c, P (τ F ) (P (τ U )) is not monotonically decreasing with τ F (τ U ) and exhibits spikes corresponding to higher harmonics at τ F = T os (1 + 2n)/2 (τ U = T os (1 + 2n)/2), where n = 0, 1, 2, · · · . A few harmonic frequencies are shown as vertical arrows in Fig. 5a. In particular, when ν os is close to the resonance frequency, the shape of the residence time distribution strongly deviates from an exponential and a broad peak appears around the fundamental mode, τ F = T os /2 (τ U = T os /2) (Fig. 5a, top). In contrast, many peaks appear in the residence time distribution when ν os ≫ ν SR (Fig. 5a, lower). P 1 can be extracted from the area of the residence time distribution around the peak located at the fundamental mode, τ F = T os /2 (τ U = T os /2). Let {τ i ; i = 1, · · · , N } be the series of N residence times measured in the presence of the oscillating force. By counting the number, n, of τ i that satisfy the condition T os /2 − T os /4 ≤ τ i ≤ T os /2 + T os /4, we define P 1 = n/N (Eq. (5)). P 1 takes a large value if the residence time of the hairpin is equal to half the period of the oscillating force. This means that a large fraction of hopping transitions occur when the oscillating force changes sign. Therefore, the value of P 1 has a maximum when SR is induced, because the transitions between the two states are synchronized with the oscillating force (P 1 is a bona fide SR quantifier [23]; see also Sec. III in SI). The results obtained for P 1 in hairpin H1 are shown in Fig. 5b. P 1 exhibits a broad maximum around the resonance value ν SR = k c /2 = 0.4 Hz. The broadness of the peak is in contrast to the narrower peak observed in the SNR (Fig. 2c). These results are consistent with analytical calculations [11,23].
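The P 1 estimator defined above reduces to counting which residence times fall within a quarter period of T os /2; a short sketch with synthetic dwell times:

```python
# P1: fraction of residence times tau with T_os/4 <= tau <= 3 T_os/4 (Eq. (5)).
import numpy as np

def p1(dwells, T_os):
    dwells = np.asarray(dwells)
    in_window = (dwells >= T_os/4.0) & (dwells <= 3.0*T_os/4.0)
    return in_window.mean()

rng = np.random.default_rng(3)
taus = rng.exponential(1.25, size=1000)   # synthetic residence times (s)
print(p1(taus, T_os=2.5))                 # T_os = 1/nu_os near resonance
```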
For the average cyclic work done by the oscillating force, we define W as in Eq. (6) [26], where the brackets stand for statistical averages over traces. Because W takes a large value when the folding/unfolding of the hairpin is synchronized with the oscillating force, it is a useful SR quantifier as well [5,27]. In fact, the larger the synchronization between transitions of the hairpin and oscillations in the force, the larger the work done by the optical trap on the molecule. Results for W are shown in Fig. 5b. In contrast to the SNR but similarly to P 1 , the maximum in W is broad. Finally, we compared our experimental results with the predictions obtained from the numerical simulations in the continuous double-well potential whose parameters are the same as those used in Fig. 2 (Sec. IV in SI). Figs. 5a and 5b show a good agreement between experiments and simulations. Although both P 1 and W show broad maxima as a function of ν os , they are not coincident: the maximum for the work is found at a lower frequency than that for P 1 . As pointed out in the introduction, the precise value of the resonance frequency depends on the quantifier, especially when the quality of the resonant peak is low. VI. RESONANT ACTIVATION In stochastic systems driven by oscillating forces, it is customary to distinguish two effects: stochastic resonance (SR) and resonant activation (RA). SR stands for the optimization of the response of the system (i.e. the output signal) whereas RA stands for the optimization of the kinetics (i.e. maximization of the number of hopping transitions per second). SR and RA are different phenomena related to barrier-crossing dynamics along temporally modulated energy landscapes [4]. RA is induced when the mean residence times of the states of the system are minimized at a particular frequency of the oscillating force, ν RA . The values of ν SR and ν RA are often not the same, the latter being typically larger than the former. Fig. 5c (top) shows the mean residence times, τ F and τ U , for hairpin H1 measured in the range 0.1 Hz ≤ ν os ≤ 5 Hz. Only at higher frequencies (between 1 Hz and 2 Hz) does the graph suggest a very shallow minimum for the residence times. Therefore we are capable of observing both the SR and RA phenomena in the single-molecule experiments. The experimental results also agree with the numerical simulations (Fig. 5c, dashed lines). Similar behavior has been reported in the experiments with a colloidal particle in a double-well potential generated by optical tweezers [4]. VII. SR IN SHORTER HAIRPINS SR might be used to detect transitions in cases where the hopping of a hairpin is hard to discriminate. These correspond to cases in which the hopping signal (extension jumps) is of the same order as the standard deviation of the noise fluctuations. To investigate this problem, we designed two short hairpins (SH10 and SH8) having only 10 and 8 base pairs along the stem, respectively (sequences shown in Figs. 6c and 6d). The molecular free energy landscapes were calculated for the two sequences at the theoretically predicted coexistence forces using the nearest-neighbor model for DNA (Fig. 6a, upper left) [28,29]. As the length of the stem decreases, the landscapes show progressively lower coexistence force values, molecular extensions and kinetic barriers. Measurements for SH10 and SH8 were taken at low trap stiffnesses to decrease the hopping signal (κ trap ≃ 32 pN/µm and 17 pN/µm, respectively). Pulling curves and hopping traces in the CFM are also shown in Fig. 6a (lower left). While the transitions are still visible for SH10, they are hardly discernible for SH8. This is also apparent from the dwell distributions of the trap position, X, shown in Fig. 6a (right).
Measured jumps in the molecular extension upon unfolding/folding are equal to 10.5 ± 0.5 nm and 7.0 ± 0.5 nm for SH10 and SH8, respectively. Fig. 6b shows the power spectra of X(t). Whereas SH10 can be fit reasonably well to a sum of two Lorentzians with two characteristic corner frequencies (0.64 ± 0.02 Hz and 2.4 ± 0.3 kHz), the quality of the fit considerably worsens for SH8 (≃ 9.8 Hz and ≃ 15.6 kHz). The low frequency (in the range of Hz) in the power spectra corresponds to the hopping kinetics of the hairpin, whereas the high frequency (in the range of kHz) corresponds to the random motion of the optical trap caused by the force feedback. Because the noise in the trap position, X, introduced by the force-feedback protocol is not of thermal origin, the power spectra measured in the CFM need not be well described by a sum of two Lorentzians. This is especially acute for SH8, where the feedback loop cannot follow the fast hopping transitions. Once the hopping properties of the hairpins were characterized, we carried out the oscillating experiments for hairpins SH10 and SH8 around the coexistence force. The results we obtained for SH10 are similar to those reported for hairpin H1 at low trap power shown in Fig. 2c. For SH10 the peak in the SNR around ν SR = 0.5 Hz is close to k c /2, where k c was measured to be 0.43 ± 0.07 s −1 from the hopping traces of X(t). More interesting is the case of hairpin SH8, where the coexistence force can still be located but the hopping signal is blurred by the fluctuations. In Fig. 6d, we can see that the OS and the SNR exhibit a maximum around ν SR = 5 ± 1 Hz for SH8, which gives k c ≃ 10 ± 2 s −1 according to the matching condition. This value agrees with the value of ≃ 9.8 Hz obtained from the Lorentzian fit to the power spectrum. As an additional test, we have implemented a Hidden Markov Model (HMM) with the forward-backward algorithm as described in Ref. [30] to extract the kinetic rates of SH8 from the hopping trace, X(t). By applying the HMM to the hopping traces of SH8, we obtained a value of k c = 9.4 ± 0.5 s −1 (7 molecules), which confirms the results obtained with SR and with the Lorentzian fit to the spectral density. Thus, SR offers an alternative method to estimate the hopping rate of SH8. Indeed, the two states (F and U) cannot be easily detected from the hopping trace, and the residence time analysis done for hairpin H1 (Fig. 1c) is difficult to implement. In this case SR confirms the value of the hopping frequency initially obtained from a poor Lorentzian fit of the power spectrum.
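The double-Lorentzian fit mentioned above can be sketched as follows; the spectrum here is synthetic, with corner frequencies chosen near the SH10 values, and the initial guesses would in practice come from inspecting the measured spectrum:

```python
# Fit a sum of two Lorentzians to a power spectrum (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, A1, fc1, A2, fc2):
    # corner frequencies fc1 (hopping, ~Hz) and fc2 (feedback, ~kHz)
    return A1/(1.0 + (f/fc1)**2) + A2/(1.0 + (f/fc2)**2)

f = np.logspace(-2, 4, 300)
rng = np.random.default_rng(4)
S = two_lorentzians(f, 1.0, 0.64, 1e-4, 2400.0)*rng.lognormal(0.0, 0.2, f.size)

popt, _ = curve_fit(two_lorentzians, f, S, p0=(1.0, 1.0, 1e-4, 1e3),
                    sigma=S, maxfev=10000)        # relative-error weighting
print(f"corner frequencies: {popt[1]:.2f} Hz and {popt[3]:.0f} Hz")
```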
VIII. CONCLUSION We carried out SR experiments in single DNA hairpins subject to an oscillatory mechanical force of varying frequency. Our aim was to investigate how a molecule exhibiting bistability (i.e. hopping between the folded and unfolded conformations) responds to an applied oscillating force. In SR the response gets amplified at frequencies close to the characteristic hopping frequency of the hairpin. By measuring the power spectral density of the molecular extension, we carried out a detailed investigation of the frequency dependence of the output signal (OS, Eq. (2)), the background noise (BN, Eq. (3)) and the signal-to-noise ratio (SNR, Eq. (4)) in the 20-bp hairpin H1, which exhibits dichotomous hopping behavior. We then extended our research by exploring how several parameters of the experimental setup such as trap stiffness, length of the handles, oscillating amplitude and size of the hairpin influence the resonance behavior. From the measured traces, we also analyzed a few other SR quantifiers, such as the number of folding and unfolding transitions occurring every half-period of the oscillation (P 1 , Eq. (5)), the average mechanical work per period of the oscillation (W , Eq. (6)) and the mean residence times in the unfolded and folded states (τ U and τ F ). The mean residence times describe a mechanism slightly different from SR that has been termed resonant activation (RA). Overall, we find that the SNR and the other SR quantifiers (such as OS, P 1 , W ) exhibit a peak at a frequency close to that determined by the resonance matching condition. Among all quantifiers, only the SNR and the OS tend to show a modest amplification of the response, the SNR showing a higher-quality peak. Our results, comparing ν SR (in Hz) obtained from the SNR, OS, P 1 and W with k c /2 (in s −1 ), are summarized in Table I. Moreover, our experimental results are well predicted by numerical simulations of an overdamped particle in a double-well potential reproducing the measured molecular free energy landscape of the hairpin (Sec. IV in SI). Finally, our experimental findings also agree with theoretical results [18] that show a modest gain in the response of noisy systems driven by oscillating forces. A unique aspect of our work is the investigation of SR in small systems in conditions of weak thermodynamic stability (folding free energies of a few k B T units) not far from the noise level (k B T ). This has a primary consequence: the proper control parameter in our experiments does not appear to be the noise intensity. In fact, by changing noise intensity (e.g. by tuning temperature or denaturant concentration), we also modify the structural properties of the molecule in a non-controlled way (i.e. by changing its thermodynamic stability or free energy of formation). Our work circumvents this problem by using the frequency of the external driving force as control parameter. Simple as this choice may seem, only a few theoretical and experimental works have addressed it in the past. From this perspective, our study should stimulate further theoretical work in SR of small systems where noise intensity and thermodynamic stability are tightly coupled. Another consequence of the coupling between noise intensity and thermodynamic stability is the strong variability exhibited by single-molecule SR experiments: the measured signal-to-noise ratio versus any control parameter (in our case, oscillation frequency) will tend to show large variations from molecule to molecule. This was apparent in the results for hairpin H1 shown in Fig. 3 and has been observed in the rest of the molecules (see, for instance, the results shown in Fig. 7 for SH8). Such variability is a consequence of the aforementioned weak stability of biomolecular bonds, and of various sources of experimental error (e.g. instrumental drift, attachment misalignment, inaccurate determination of the coexistence force, etc.). It has no counterpart in other SR studies of non-linear macroscopic devices or single degree-of-freedom systems (such as a single colloidal particle in an optical trap, or macroscopic systems in solid-state physics or electronic devices). IX. FUTURE PERSPECTIVES The results of our work suggest that we could extract the kinetic rates of molecular hoppers by measuring the resonance frequency in oscillating experiments. Is this approach useful?
There are several widely accepted and commonly used single-molecule methods that can extract the kinetic parameters of molecular hoppers just by analyzing the hopping traces, without the need to carry out oscillating measurements. It is then clear that single-molecule SR is not worth pursuing if other, simpler methods are available. Yet SR might be of interest for investigating fast molecular transitions where current methods might fail. In Section VII, we investigated SR in an 8-bp short DNA hairpin (SH8) under conditions (low trap stiffness) where hopping rates are hard to measure with standard methods (e.g. the Bell-Evans model). The faster hopping rate and the smaller jumps in extension (due to both the shorter length of SH8 and the decreased trap stiffness) contribute to making the hopping rate measurements difficult. Note that we have been able to extract the value of the hopping rate either by measuring the power spectrum (Fig. 6b) or by implementing a hidden Markov model. Interestingly, whereas applying standard methods to extract kinetic rates becomes steadily more difficult as the hopping signal becomes noisier, the quality of the resonant peak in the SNR remains acceptable (Fig. 6d). This suggests that in experimental conditions where hopping signals become nearly undetectable, SR may find a fertile ground for useful applications. Measuring the kinetics of single bonds might be crucial to dissect the kinetic pathways of many reactions, from nucleic acid translocases indispensable in virtually all tasks of nucleic acid metabolism, to molecular folding of proteins and ligand-receptor binding. Moreover, the detection of single-bond kinetics also provides a direct measurement of the affinity (or free energy of formation) of weak single bonds (e.g. important for an accurate determination of the parameters characterizing the thermodynamics of secondary structure formation in nucleic acids [32]). It is therefore important to explore new approaches capable of shedding light on such questions. The experimental resolution of formation/dissociation kinetics is currently limited to 5 base pairs [28,31]. Overcoming this limit relies not only on increasing the hopping signal relative to the noise but also on slowing down the (expectedly very fast) formation/dissociation kinetics of single bonds. A direct measurement of the fast formation/dissociation kinetics of single molecular bonds, stretchable over sub-nanometer scales and resistant to low (a few piconewtons) forces, remains an experimental challenge. In fact, the route to discriminating hopping kinetics in a small number of base pairs may be plagued with difficulties. The situation might be even worse if the aim is to detect the unraveling kinetics of a single nearest-neighbor base pair (NNBP), which is the minimal unit of DNA bonds (double-stranded helices are stabilized by both hydrogen bonds between complementary bases and stacking between NNBPs). Currently most kinetics measurements are carried out in hopping experiments. However, there is a complication in hopping experiments due to the low signal-to-noise ratio inherent in unraveling a single NNBP, together with the disturbances caused by the multifrequential noise present in the high-frequency range where the kinetic rate of formation/dissociation of a single NNBP is expected to fall. The low signal-to-noise ratio problem can be partially resolved using advanced data analysis tools such as Bayesian methods and HMMs, as used here to unravel the hopping traces of SH8.
However, such methods assume a specific form of the noise (i.e. decorrelated force fluctuations and a Gaussian emission signal) and do not account for the aforementioned multifrequential sources of noise. In this regard, SR might be extremely useful to separate the true formation/dissociation kinetics of a single NNBP from these other artifacts. Finally, our work focused on the SR phenomenon in DNA hairpins, whereas other interesting molecular structures are now available for single-molecule pulling. From this point of view, it would be very interesting to carry out SR measurements in more complex molecular folders (e.g. exhibiting multiple folding pathways, intermediate states or non-cooperative transitions) such as RNAs and proteins. Methods. Synthesis of DNA hairpins. The DNA hairpins with handles are synthesized by the hybridization of three different oligonucleotides (Fig. 1a). One oligonucleotide contains the sequence of the ssDNA left handle plus a part of the sequence of the desired DNA hairpin; the second has the rest of the sequence of the DNA hairpin and the ssDNA right handle. The right and the left handles have the same sequence in order to hybridize them with the third oligonucleotide. The first oligonucleotide has a biotin at its 5' end, and the second oligonucleotide has been modified at its 3' end with a digoxigenin tail (DIG Oligonucleotide Tailing Kit, 2nd generation, Roche Applied Science). Once the first and the second oligonucleotides are hybridized to form the hairpin, the third oligonucleotide is hybridized to the handles to form the dsDNA handles. Streptavidin-coated polystyrene microspheres (1.87 µm; Spherotech, Libertyville, IL) and protein G microspheres (3.0-3.4 µm; G. Kisker GbR, Products for Biotechnology) coated with anti-digoxigenin polyclonal antibodies (Roche Applied Science) were used for specific attachments to the DNA molecular constructions described above. Attachment to the anti-digoxigenin microspheres was achieved first by incubating the beads with the tether DNA. The second attachment was achieved in the fluidics chamber and was accomplished by bringing the trapped anti-digoxigenin and streptavidin microspheres close to each other. The sequences of the short hairpins are: SH10 (5'-GCGGCGCCAGTTTTTTTTCTGGCGCCGC-3'), SH8 (5'-GGCGCCAGTTTTTTTTCTGGCGCC-3'). Experimental setup. The experiments have been carried out using a newly designed, high-stability, miniaturized dual-beam optical tweezers apparatus [32]. It consists of two counter-propagating laser beams of 845 nm wavelength that form a single optical trap where particles can be trapped by gradient forces. The DNA hairpin is tethered between two beads (Fig. 1a). One bead is immobilized at the tip of a micropipette that is glued to the fluidics chamber; the optical trap captures the other bead. The light deflected by the bead is collected by two photodetectors located at opposite sides of the chamber. They directly measure the total change in light momentum, which is equal to the net force acting on the bead. Piezo actuators bend the optical fibers and allow the user to move the optical trap. The force is made to oscillate using a force-feedback system that operates at 4 kHz, minimizing instrumental drift effects as compared to protocols without feedback. Force feedback does not introduce artifacts in our measurements unless
8,904
2012-08-24T00:00:00.000
[ "Physics" ]
ß-Hydroxybutyrate Improves Mitochondrial Function After Transient Ischemia in the Mouse ß-Hydroxybutyrate (BHB) is a ketone body formed in high amounts during lipolysis and fasting. Ketone bodies and the ketogenic diet have been suggested as neuroprotective agents in neurodegenerative disease. In the present work, we induced transient ischemia in mouse brain by unilaterally occluding the middle cerebral artery for 90 min. BHB (30 mg/kg), given immediately after reperfusion, significantly improved the neurological score determined after 24 h. In isolated mitochondria from mouse brain, oxygen consumption by the complexes I, II and IV was reduced immediately after ischemia but recovered slowly over 1 week. The single acute BHB administration after reperfusion improved complex I and II activity after 24 h, while no significant effects were seen at later time points. After 24 h, plasma and brain BHB concentrations were strongly increased while mitochondrial intermediates (citrate, succinate) were unchanged in brain tissue. Our data suggest that a single administration of BHB may improve mitochondrial respiration for 1–2 days but not at later time points. Endogenous BHB formation seems to complement the effects of exogenous BHB administration. Supplementary Information The online version contains supplementary material available at 10.1007/s11064-022-03637-6. Introduction Cerebral ischemia has severe consequences including death and disability [1]. Drug treatment of cerebral ischemia, to this day, is unsatisfactory apart from the use of recombinant tissue plasminogen activator (rtPA). While the healthy brain almost exclusively uses glucose as energy substrate, ketone bodies such as ß-hydroxybutyrate (BHB) can substitute for glucose under certain conditions, e.g. in early life or during prolonged fasting [2]. When ketone bodies reach high (5-10 mM) concentrations, up to half of the brain energy consumption can be supplied by ketone bodies [3,4]. The ability of ketone bodies to energize the brain has led to a range of studies investigating whether ketone bodies have neuroprotective activity [5]. In brain ischemia, hyperglycemia is detrimental, whereas ketone bodies have significant benefits. Animal work has shown that fat-rich and ketogenic diets improve the outcomes of global ischemia and stroke [6]. In focal ischemia, diet-induced ketosis as well as the administration of BHB improved neurological function [7,8]. Exogenous BHB also prevented neuronal death in models of Alzheimer's and Parkinson's disease [9,10]. The mechanism of action of BHB in ischemia remains to be firmly established [11]. BHB has multiple activities in the brain, interacting with ion channels and inhibiting histone deacetylation [12]. BHB also has indirect antioxidative activity and inhibits neuroinflammation [13,14]. Some evidence connects mitochondrial function to BHB's actions. In the brain, BHB can be converted to acetoacetate in mitochondria, producing NADH and, by further cleavage, acetyl-CoA [12]. BHB administration can increase succinate concentrations, stabilize complex II activity and reduce reactive oxygen generation [8,10]. 13C-labeled BHB, given to humans, was metabolized into glutamate and glutamine in the brain, a pathway mediated by the citric acid cycle [15]. In our hands, significant BHB formation was observed after stroke, an effect that was strongly stimulated in mice fed a fat-rich diet [16].
These findings led us to investigate mitochondrial function after transient ischemia and a single administration of BHB. Chemicals Chemicals were purchased from Sigma/Merck (Darmstadt, Germany) unless stated otherwise. Animals Female CD-1 mice (29-32 g, Charles River) were used for the experiments. They were kept in standard cages at 60% humidity and 22 °C, under a 12-h light/dark cycle. Food and water were available ad lib. The study was registered with the local animal committee (Regierungspräsidium Darmstadt). In accordance with GV-Solas guidelines, all procedures were designed to minimize the suffering of the experimental animals. Mice were randomized to study groups using a computer program for random number generation (see the sketch at the end of this section). In total, 173 mice were used for this study. 24 experiments could not be followed through because of surgical problems (insufficient blockade of the middle cerebral artery, continuous bleeding during reperfusion), and the mice had to be sacrificed. In 17 experiments, analytical problems caused a failure to obtain data (lack of perfusion in the microdialysis probe, problems during sample work-up and GC-MS measurements). Thus, the results shown in Figs. 1, 2, 3, 4, 5 and 6 were obtained from 132 successful experiments (with an average of eight experiments per group). Separate experiments were performed for the generation of figures, except for the results in Figs. 6 and 7, which were from the same group of animals. Microdialysis Experiment For surgery, animals were anesthetized with isoflurane (induction dose 5%, maintenance dose 2% v/v) in synthetic air (Air Liquide, Düsseldorf, Germany). Self-constructed, Y-shaped, concentric dialysis probes with a molecular weight cut-off of 10 kDa were stereotaxically implanted into the hypothalamus with the following coordinates (from bregma): AP − 1.5 mm, L + 0.5 mm, DV − 3.8 mm, according to [17]. Glass ionomer eluting cement (PermaCem Smartmix Dual, Dental Milestone, Hamburg, Germany) was used to fix the probe on the skull (for further details, see [18]). Probes were implanted at least 18 h before each experiment to allow recovery and stabilization [19]. Microdialysis was performed on the next day with a perfusion fluid (aCSF) containing 147 mM NaCl, 4 mM KCl, 1.2 mM CaCl 2 and 1.2 mM MgCl 2 . The perfusion rate of the microinjection pump was 2 µL/min. The collection intervals were 15 min. Data are given as absolute levels not adjusted for probe recovery. Glucose and lactate concentrations in microdialysates were determined by a colorimetric method (530 nm) using an ISCUSflex Microdialysis Analyzer (M Dialysis AB, Solna, Sweden). Transient Middle Cerebral Artery Occlusion (t-MCAO) The procedure was performed as previously described [20,21]. Briefly, mice were anesthetized using isoflurane (2% in synthetic air), their body temperature was kept constant using a thermostatic device, and buprenorphine (0.1 mg/kg i.p.) was injected 15 min before performing surgery (this injection was repeated 8 h later). After a paratracheal incision, a silicone-coated suture (Doccol®, Redlands, California; size 6-0) was inserted into the A. carotis communis and advanced to occlude the middle cerebral artery. Cerebral blood flow was monitored with a laser-Doppler monitoring device (Moor Instruments, Devon, UK) to ascertain ischemia (< 15% of blood flow vs. basal). After 90 min, the suture was removed to allow reperfusion. Mice were sacrificed under isoflurane anesthesia either after 60 min or after 1, 3 or 7 days.
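The group randomization mentioned above could look like the following sketch; the group names and size are taken from the figure legends, while the program itself is our illustration, not the software actually used in the study:

```python
# Illustrative randomization of mice to the four study groups.
import numpy as np

rng = np.random.default_rng(2022)
groups = ["Sham Saline", "Sham BHB", "Stroke Saline", "Stroke BHB"]
assignment = np.repeat(groups, 8)      # eight animals per group on average
rng.shuffle(assignment)                # random allocation order

for mouse_id, group in enumerate(assignment, start=1):
    print(mouse_id, group)
```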
Behavioral Experiments Neurological deficits were determined by behavioral testing in the morning before surgery and 24 h after MCAO. The "Chimney test" (modified from [22]) was performed for each mouse three times before and after surgery. A mouse was placed head first at the entry of a tube (200 mm long and 40 mm diameter). When the mouse reached the bottom of the tube, the tube was raised to an angle of 45 degrees. All mice reacted by walking backwards. The time needed to climb out of the tube was measured for a maximum of 120 s. The "Corner test" was used as described [23]. Mice were placed in a corner (30° angle) and the chosen sides to leave the corner were counted. Each mouse was tested for one trial (maximum time 120 s) before and after surgery. The laterality index (LI) was calculated as LI = (left turns − right turns)/(total number of turns) [24]. After 24 h, we also calculated a neurological score from animal behavior. Details of the scoring procedure are given in Suppl. Table 1. Fig. 2 Effects of stroke and ß-hydroxybutyrate (BHB) on neurological outcome. Mice underwent transient cerebral ischemia for 90 min and were given saline or BHB (10-100 mg/kg) by i.p. injection immediately after reperfusion ("Stroke Saline"; "Stroke BHB"). For sham-operated mice ("Sham Saline"; "Sham BHB"), the carotid artery was prepared but not occluded. A battery of neurological tests (see text) was carried out one day later. The maximum score was 15. Data are expressed as means ± SEM of N = 8 experiments. Data were evaluated by one-way ANOVA followed by Tukey's multiple comparison test. **p < 0.01. Fig. 3 Effect of ß-hydroxybutyrate (BHB) on mouse motoric function. Mice underwent transient cerebral ischemia for 90 min and were given saline or BHB (30 mg/kg) by i.p. injection immediately after reperfusion ("NaCl"; "BHB"). A Chimney test after 24 h. This test measures the performance expressed as the time (s) needed to exit a tube backwards (maximum value 120 s). B Chimney test after 72 h. C Corner test after 24 h. This test determines the preferred side to leave a corner. A score of zero represents an equal number of turns to both sides; a score of 10 indicates that the animal always turned contralaterally to the brain lesion. D Corner test after 72 h. Data are scatter box plots (means ± SEM are indicated) of 10-14 independent experiments. **p < 0.01 (t-test). High-Resolution Respirometry in Isolated Mitochondria After decapitation, the brain was immediately dissected from the skull, the cerebellum was removed and the brain divided into hemispheres. From each hemisphere the frontal part of the brain (≈100 mg) was separated and homogenized in 2 mL MiR05. In addition, a protease inhibitor cocktail (PI) was added to the medium (cOmplete Tablets EASYpack, Roche, Mannheim, Germany). The homogenate was centrifuged twice to remove all cell debris (1400×g, 7 min, 4 °C). The purified supernatant was then centrifuged again (10,000×g, 5 min, 4 °C); the resulting pellet containing the mitochondria was resuspended in 1000 µL MiR05 + PI and centrifuged once again (1400×g, 3 min, 4 °C). Finally, the supernatant was centrifuged one more time (10,000×g, 5 min, 4 °C) and the pellet resuspended in 250 µL MiR05 + PI. Mitochondria from ischemic and contralateral hemispheres were put into parallel chambers of the respirometer. Each chamber was filled with 2.4 mL MiR05 medium according to the manufacturer's instructions and kept at 37 °C with constant stirring (750 rpm).
After 30 min of equilibration and subsequent air calibration, 80 µL of the mitochondrial suspensions were injected into the closed chamber. The remaining mitochondria were frozen in liquid nitrogen for protein determination with the Bradford assay. To verify the integrity of the outer mitochondrial membrane, cytochrome c (10 µM) was added; mitochondria whose respiration increased by more than 15% upon cytochrome c addition were discarded. The maximum capacity of the electron transfer system (ETS) was determined by the stepwise titration of the uncoupler FCCP (state E). To observe the isolated CII respiration, the complex I inhibitor rotenone (2.5 µM) was added (CII-linked substrate state, uncoupled). After inhibition of complex III by antimycin A (2.5 µM), the residual oxygen consumption (ROX) remains, which is used to correct the mitochondrial respiration states. Ascorbate (2 mM) and tetramethyl-phenylenediamine (TMPD, 0.5 mM) are artificial electron donors that induce maximum cytochrome c oxidase (complex IV, CIV) respiration by reducing cytochrome c. Ascorbate regenerates TMPD and is injected first. At the end of the experimental run, CIV is inhibited by a high concentration of sodium azide (120 mM); the chemical background as well as ROX remains. To obtain the CIV activity, this value has to be subtracted from the total measured oxygen flux (for further details, see [25]). Fig. 6 Concentrations of A citrate and B succinate in brain homogenate (ischemic hemisphere). Mice underwent transient cerebral ischemia for 90 min and were given saline or ß-hydroxybutyrate (BHB; 30 mg/kg) by i.p. injection immediately after reperfusion ("Stroke Saline"; "Stroke BHB"). For sham-operated mice ("Sham Saline"; "Sham BHB"), the carotid artery was prepared but not occluded. Blood and brain samples were taken 24 h after BHB administration. Brain concentrations were calculated as µM assuming 80% brain water content. Data are given as means ± SD (N = 6-8). Analytical Measurements Blood plasma and brain samples were harvested immediately after decapitation of mice, frozen in liquid nitrogen and stored at −80 °C until metabolites were measured by GC-MS. Brain homogenates were extracted using Folch's procedure, the aqueous supernatant was dried under a stream of nitrogen, and the dry residues were derivatized with N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) and trimethylchlorosilane (TMCS) (99:1). In plasma samples, proteins were precipitated by addition of methanol/water (9:1), centrifuged, and the supernatants were treated as described above. Samples were measured on an HP-6890 Series GC-System (Hewlett Packard®, Palo Alto, California) coupled to an Agilent Mass Selective Detector 5973 (Agilent®, Waldbronn, Germany) and an Agilent® Autosampler 7683. We used a VF-5MS capillary column (30 m × 0.25 mm inner diameter) (Varian Technologies®, Palo Alto, CA) with a silylated precolumn (5 m). After the qualitative analysis of the metabolites (spectra matched to the N.I.S.T. database), we established single-ion monitoring (SIM) parameters and used them for quantification of glucose, BHB, citrate, succinate, fumarate and malate. The calculations were done with internal and external standard methods. Statistical Procedures If not indicated otherwise, data are presented as means ± SEM of N (number of animals). All data were tested for normal distribution by the Kolmogorov-Smirnov test (GraphPad Prism 5.03). Potential outliers (> 2 SD) were identified by the Grubbs test (https://www.graphpad.com/quickcalcs/grubbs). Sample size was calculated by the formula N = 2 × SD² × power index/delta². Based on many years of experience, an SD of 20% was expected for metabolite measurements, and a treatment effect of 25% was defined as the goal of the study. The value for the power index (α = 0.05, two-sided; ß = 0.2, i.e. 80% power) was taken from the book "Intuitive Biostatistics" by Harvey Motulsky (Oxford University Press, 1995).
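As a check of the sample-size formula just quoted: identifying the "power index" with the standard normal-theory quantity (z α/2 + z ß )² — our assumption, consistent with the quoted settings α = 0.05 (two-sided) and 80% power — and inserting SD = 20% and delta = 25% gives roughly ten animals per group:

```python
# Worked example of N = 2 * SD^2 * power_index / delta^2.
from scipy.stats import norm

z_a = norm.ppf(1 - 0.05/2)          # 1.96 for alpha = 0.05, two-sided
z_b = norm.ppf(0.80)                # 0.84 for 80% power (beta = 0.2)
power_index = (z_a + z_b)**2        # ~7.85

SD, delta = 20.0, 25.0              # percent
N = 2 * SD**2 * power_index / delta**2
print(f"N = {N:.1f} animals per group")   # ~10
```

This is of the same order as the roughly eight successful experiments per group reported above.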
Treatment effects on activity changes of mitochondrial respiration (Figs. 4, 5, Suppl. Figures 1-3) were compared using one-way analysis of variance (ANOVA; Prism 5.03; GraphPad Software, La Jolla, CA, USA) with the Newman-Keuls post-test for multiple pair-wise comparisons. To compare means between two groups we used the unpaired Student's t-test (Fig. 3). P-values < 0.05 were considered to be statistically significant. All data were normally distributed, and no outliers were detected. This is an exploratory study using mitochondrial parameters and levels of energy metabolites as major outcome variables. The experimenter was blinded to the animal groups during the measurements of the corner and chimney tests. Apart from that, no blinding was performed in this study. Fig. 7 Concentrations of ß-hydroxybutyrate (BHB) in A blood plasma and B brain homogenate (ischemic hemisphere). Mice underwent transient cerebral ischemia for 90 min and were given saline or BHB (30 mg/kg) by i.p. injection immediately after reperfusion ("Stroke Saline"; "Stroke BHB"). For sham-operated mice ("Sham Saline"; "Sham BHB"), the carotid artery was prepared but not occluded. Blood and brain samples were taken 24 h after BHB administration. Brain concentrations were calculated as µM assuming 80% brain water content. Data are given as means ± SD (N = 6-8). Statistics were calculated by one-way ANOVA and Tukey's multiple comparison test: A F(3,27) = 84.4, p < 0.001; B F(3,27) = 86.3, p < 0.001. **p < 0.01 vs. Sham Saline. Microdialysis Study We first investigated whether exogenously applied ß-hydroxybutyrate (BHB) reaches the brain. The extracellular concentration of BHB in the brain was in the low micromolar range (Fig. 1A). After injecting 30 mg/kg of BHB (a dose that gave optimal results in the behavioral study, see below), the BHB level in the brain approximately doubled within 15-30 min, then returned to baseline. The levels of glucose and lactate (Fig. 1B) were not affected by the BHB injection. Neurological Outcome After Stroke In the following experiments, transient cerebral ischemia was induced by unilaterally occluding the middle cerebral artery for 90 min. After reperfusion, BHB (30 mg/kg) was injected intraperitoneally, and behavioral outcomes were observed 24 h later, when the mice had recovered from surgery, anesthesia and pain medication. As shown in Fig. 2, sham-operated mice had no difficulties fulfilling the tasks (see Methods), but stroke induced a massive worsening of the neurological score. Mice showed motoric impairments and partial paresis; they had difficulties balancing on a round stick or a narrow beam. Importantly, the moderate dose of 30 mg/kg improved the score significantly, whereas a lower (10 mg/kg) and a higher dose (100 mg/kg) had no effects. Therefore, the following experiments used the dose of 30 mg/kg BHB exclusively. Figure 3 contrasts the performance of the stroked mice after 24 and 72 h. In the chimney test, which requires considerable muscle strength, mice did not improve over time (Fig. 3A, B; a limited number of mice also showed poor performance after 7 days; data not shown).
In BHB-treated mice, the time to leave the tube was reduced by approx. 5 s, a statistically significant effect at all time points. In the corner test, which requires less muscular strength, untreated mice performed poorly after 24 h (Fig. 3C) but improved significantly after 72 h (Fig. 3D). BHB-treated mice performed best after 24 h; this was the only time point at which BHB treatment showed a significant beneficial effect. Mitochondrial Activities Based on previous work which identified BHB as a contributor to energy metabolism in the brain, we hypothesized that BHB administration may affect mitochondrial respiration. We first tested if BHB was effective when added to isolated mitochondria. As shown in Suppl. Figure 1, BHB was twice as effective as pyruvate alone (p < 0.05). BHB-induced respiration was further increased after addition of malate (+ 79%), but pyruvate plus malate gave the highest signal, more than three times higher than BHB plus malate. Succinate also stimulated respiration significantly (six times more than pyruvate alone), but addition of malate did not further increase respiration (data not shown). The following results were obtained in isolated mitochondria after induction of stroke. When mitochondria were isolated from mouse hemispheres 60 min after reperfusion, oxygen consumption in mitochondria from the ischemic hemisphere was less than 50% of that measured in the contralateral hemisphere (Suppl. Figure 2). Complexes I, II and IV were affected, and BHB administration was ineffective at this time point (Suppl. Figure 2). Figure 4 summarizes data obtained after 24 h of reperfusion. Here, the activity of the complexes I and II (and, consequently, oxidative phosphorylation) was further reduced in untreated animals and remained low at 72 h of reperfusion (Fig. 5). Importantly, after 24 h of reperfusion, BHB administration normalized complex I activity and increased complex II activity (Fig. 4A, B), whereas complex IV total activity was not affected. At 72 h past reperfusion, the single administration of BHB still had beneficial effects, but the differences between saline-treated and BHB-treated mitochondria were no longer significant (Fig. 5A-C). After one week, complexes I and II seemed to recover, but no effect of BHB could be seen (Suppl. Figure 3). It seems, therefore, that the single administration of BHB had a beneficial but transient effect on mitochondrial energy metabolism. Metabolite Concentrations in Brain Tissue Since BHB effects were strongest at 24 h past administration, we measured energy metabolites in mouse brains 24 h after reperfusion. Brain levels of citrate and succinate are shown in Fig. 6 and were not affected by either ischemia or BHB administration. The same was true for the levels of malate and fumarate (data not shown). Finally, we determined the plasma and brain tissue levels of BHB (Fig. 7). Plasma BHB concentrations were 134 ± 20 µM in saline-treated, sham-operated mice and were increased significantly after BHB administration (193 ± 67 µM; Fig. 7A). Brain levels of BHB were not affected by BHB administration (Fig. 7B). After cerebral ischemia, however, BHB levels were increased in plasma (to 294 ± 73 µM, + 119% vs. sham-operated animals) and in brain (to 460 ± 76 µM, + 364% vs. sham-operated animals). Administration of BHB 24 h earlier caused a remarkable increase of plasma BHB (to 2.15 mM) and of brain BHB (to 423 µM), as illustrated in Fig. 7A and B. While plasma BHB was much higher in stroked mice after BHB administration (Fig. 7A),
7A), the brain tissue contents of BHB were high in stroked mice, irrespective of prior BHB application (Fig. 7B). Discussion The present study dealt with BHB, a ketone body with neuroprotective properties. We first showed by microdialysis that BHB administration causes an increase of the BHB concentration in the extracellular space of the hippocampus. It is noteworthy that this increase was limited, likely due to extensive uptake of BHB by other organs as described previously [15,26]. We then tested BHB's effects when given as a single acute dose after 90 min of transient ischemia. 24 h later, the neurological scores indicated that BHB administration at 30 mg/kg attenuated the consequences of brain ischemia. It should be noted that neither a low nor a very high dose of BHB was able to affect the neurological outcome so that dosage seems to be important for beneficial effects. The reason why the high dose did not work remains unknown, the toxicity of BHB is considered low. In the following experiments, BHB was dosed at 30 mg/kg; at this dose, its beneficial effects at 24 h past ischemia partially disappeared at 72 h indicating that a single dose of BHB may have beneficial but transient effects. Of note, mice remained handicapped after 3 days in the challenging chimney test that requires a higher muscular effort, so that the effect of BHB was still visible after 3 days. In contrast, BHB only worked in the corner test after 24 h. This test requires less muscle strength and mice improved considerably within 3 days so that BHB was no longer effective. Clearly, the effects of repeated administration of BHB after ischemia should be investigated in future studies. The present work focused on mitochondrial effects of BHB application. We first confirmed that BHB can be used for mitochondrial respiration in insolated mitochondria. We then tested mitochondrial effects 60 min past BHB administration. At this early time point, all mitochondrial complexes showed reduced respiration, but BHB administration had no effect. At 24 h past cerebral ischemia, complex IV activity had recovered, but it must be pointed out that in our assay, complex IV activity was measured ex vivo under optimum conditions of substrate supply, so that its activity is very high and does not reflect activity in situ. Complex I and II activities, and their combined activity which is reflected by "oxidative phosphorylation", were significantly reduced after 24 and 72 h. After 24 h, BHB administration significantly improved mitochondrial respiration in these complexes. After 72 h, a minor effect was still visible but did not reach significance any more. It follows that the effects of the single BHB administration were transient, similar to the results found in behavioral assays. In other words, a similar time course was observed for BHB's actions on mitochondrial respiration and on functional outcomes. In a final series of experiments, we measured BHB levels in plasma and brain, and mitochondrial metabolites in brain tissue. 24 h after administration, plasma BHB levels were higher than in controls (by 44%). Brain BHB concentrations were unchanged in non-ischemic mice 24 h after BHB administration. Cerebral ischemia, however, caused ketosis in mice: Plasma levels rose to 292 µM after stroke and, 24 h after BHB administration, to 2.15 mM. Brain tissue levels reached values of 400-500 µM BHB, five times higher than in non-stroked mice. 
This massive increase of ketone bodies was also observed in our previous study [16] in which ketosis after stroke was particularly strong in mice kept on a fat-rich diet. In this earlier study, ketosis was prevented by propranolol, a blocker of adrenergic ß-receptors, and we hypothesized that the well-known activation of the sympathetic nervous system post stroke induced ketone body formation in liver, mediated by adrenaline [27]. In our present study, mice had been kept on standard diet, but nevertheless ketosis was visible in blood and brain. While BHB may also accumulate in the brain due to lack of metabolism in hypoxic conditions, in our hands BHB brain levels were higher after stroke than in plasma, at least in the stroke-saline group, and this finding could be explained by BHB synthesis in the brain. While most ketone bodies are synthesized in the liver, astrocytes have also been shown to produce ketone bodies [28,29], and this pathway may be neuroprotective under conditions of hypoglycemia [30,31]. We speculate that BHB accumulates in the brain due to slow further metabolism, possibly because of lack of oxygen and NAD + in hypoxic conditions. Still, the functional tests show that some BHB is evidently metabolized and attenuates neuronal dysfunction, improving motoric function. In summary, we confirm previous reports [7,32] that ß-hydroxbutyrate has neuroprotective properties in cerebral ischemia. Several mechanisms of action have been proposed for BHB actions: Some work suggested an inhibition of neuroinflammation through HCA2 receptors [33,34], other studies favored epigenetic mechanisms through histone deacetylase inhibition and reduction of reactive oxygen species (antioxidative mechanism) [35,36]. While our study does not exclude these mechamisms, we suggest that BHB's beneficial action is associated with the improvement of mitochondrial function. Earlier studies have also reported mitochondrial effects in animal models [36]. In one study, an increase of succinate was suggested to mediate BHB´s actions [10]. In our hands, total brain succinate levels were stable after BHB, but we cannot exclude dynamic local changes of citric acid cycle metabolites. Nevertheless, we suggest that in our model, BHB likely acted by improving complex I and II activities and therefore, mitochondrial function. Conclusion ß-Hydroxbutyrate, a ketone body, can improve mitochondrial function and behavioral outcomes at 24 h when given immediately after transient cerebral ischemia in mice. The effect is dose-dependent and transient as improvements already disappear after 72 h. The potentially beneficial effects of a prolonged administration of BHB after cerebral ischemia should be investigated.
5,901
2022-06-08T00:00:00.000
[ "Biology" ]
Bifid Uvula in three members of a family Uvula is a key organ in functions like speech, deglutition and mastication. The majority of the world population has a uvula that is conical in shape, hanging upside down. However, there are times when the uvula is split, the condition is called a bifi d or bifurcated uvula. Sometimes it is also called a cleft uvula. Three male blood relatives, father and his two sons reported to outpatient department of Oral Medicine and Radiology at Sharad Pawar Dental College, Wardha for odontogenic complaint. But, their intraoral examination revealed interesting fi nding, that was presence of bifi d uvula and cleft palate in three of them, who were of course fi rst relatives of each other. The presentation of the condition in three members of the same family is a unique feature of this article. Case Report Bifi d Uvula in three members of a family Suwarna Dangore Khasbage* Oral Medicine and Radiology, Sharad Pawar Dental College, Datta Meghe institute of Medical Sciences Wardha, Maharashtra, India Dates: Received: 29 May, 2017; Accepted: 15 July, 2017; Published: 17 July, 2017 *Corresponding author: Suwarna DangoreKhasbage, Oral Medicine and Radiology, Sharad Pawar Dental College, Datta Meghe institute of Medical Sciences Wardha, Maharashtra, India, Email Introduction Bifi d uvula means a cleft in uvula. It is often considered as a marker for sub mucous cleft palate. Compared to the normal one, it has fewer amounts of muscular tissues. It is commonly noticed in infants and is rarely found in adult. A bifi d or bifurcated uvula exists in two percent of the general population. The prevalence of cleft uvula is much higher than that of cleft palate. Cleft uvula is more common in whites (1 in every 80 white individuals) as compared to blacks (1 in every 250 individuals) [1]. It can cause problems in ear. Sometimes it is unable to reach the posterior pharyngeal wall during swallowing, causing regurgitation. It may produce velopharyngeal insuffi ciency and nasal intonation. Sometimes it is associated with major systemic problems like aneurysm in different vascular bed like coronary and abdominal aortic aneurysm. But, it does not cause problems in view of airway management. Case report 1 A 58 years old male had reported to outpatient department of Oral Medicine and Radiology at Sharad Pawar Dental College, Wardha with a complain of toothache in left posterior region of lower jaw. His past medical history and past dental history was not contributory. Clinical examination revealed food lodgment and initial proximal caries with 36. Soft tissue examination revealed presence of approximately 1X 1.5cm, oval palatal perforation suggestive of cleft palate and short bifi d uvula ( Figure 1). He had no local problems like speech diffi culty or nasal regurgitation etc., and not ready for further evaluation regarding cleft palate and bifi d uvula. Routine treatment protocol was advised for the carious tooth. Case report 2 A 28 years old male who was an elder son of the patient described in fi rst case report, had reported to outpatient department of Oral Medicine and Radiology at Sharad Pawar Dental College, Wardha with a complaint of decayed tooth in posterior region of right side of lower jaw. His past medical and dental history was not contributory. Clinical examination revealed deep caries with mandibular fi rst molar. 
Examination of palate showed presence of approximately 1X 1cm, round palatal perforation suggestive of cleft palate and short bifi d uvula as shown in (Figure 2). There was no history of speech diffi culty, nasal twang or nasal regurgitation. He was also not ready for further evaluation regarding cleft palate and bifi d uvula. The carious tooth was treated by conservative approach. Case report 3 A 26 years old male who was a younger son of the patient Various symptoms associated with bifi d uvula and cleft palate may be inability to breastfeed, diffi culty in bottlefeeding, nasal regurgitation and recurrent otitis media, delay in speech development etc. The speech had a characteristic hyper-nasal resonance with nasal air emission and an abnormal speech pattern with compensatory articulations [2]. Bifi d uvula, although looks apparently benign, sometimes may be associated with anomalies leading to catastrophic complications. Cornelia de Lange syndrome is a rare congenital syndrome associated with bifi d uvula and sub mucous cleft palate that causes problems in airway due anatomical distortion [3]. Bifi d uvula may be associated with increased risk of schizophrenia, mild mental retardation, and chromosomal disorder, diagnosed by fl uorescent in situ hybridization technique [4]. Loeys-Dietz syndrome (autosomal dominant) is a genetic syndrome with clinical features overlapped with Marfan syndrome, but etiology due to mutations in the genes encoding transforming growth factor beta receptor 1. Hypertelorism, cleft palate, or bifi d uvula are the major fi ndings. Arterial aneurysms/dissections, arterial tortuosity involving aortic and its branches, carotid, vertebral, extracranial artery, abdominal aorta and its branches, common iliac, and popliteal arteries are reported in this syndrome [5,6,7]. It is stated that bifi d uvula may have been a warning sign of the syndrome with internal anatomical or functional changes without any external manifestation akin to the tip of an iceberg. Although cerebral aneurysm is very rare with bifi d uvula, it may be a part of the above mentioned syndromes [8]. Thus, whenever anesthetists plans to conduct a case with bifi d uvula (even though non-syndromic), they must ask for detailed family and genetical history, clinical examination relevant investigations, and specialty consultation [8]. Adequate preoperative preparation and, accordingly, intra- to be alert to SMCP because SMCP may account for these persistent mild complaints. Therefore, early detecting of SMCP can yield profi ts [10]. The possible treatments for bifi d uvula depend on the severity of problem. In asymptomatic cases, as such no treatment required which was the situation in the patients described in the present article and thus they were not ready for any investigations or management. If the symptom includes speech diffi culties, then a speech therapist could possibly help the patient learn how to talk well. Swallowing and feeding problems may also be addressed through appropriate therapy. Some patients may opt for the removal of the bifurcated uvula but others would opt for the surgical reconstruction of these abnormal tissues. Treatment of SMCP is surgical repair which includes a V-Y palatal pushback and simultaneous transposition of a superiorly based pharyngeal fl ap. Similarly the interrelated problems of chronic otitis media and faulty speech production may be diminished by functional reorientation of the palatal muscles and simultaneous revision of the velopharyngeal portal [11].
1,524.6
2017-07-17T00:00:00.000
[ "Computer Science" ]
Debris of Gaia-Sausage-Enceladus that made a H I hole in the Milky Way ≈ 20 million years ago The Perseus arm is known as one of the two 1–3 or four 4,5 dominant spiral arms of the Milky Way. While there is a large number of Massive Young Stellar Objects in the outer portion of the arm, a lower density of those is found in the inner portion 6–8 . Inner Perseus arm shows a noncircular motion of > 70 km s − 1 at a Galactic longitude of ∼ 50 ◦ , and its origin remains unclear 9 . Here we report an analysis of the kinematics and spatial distribution of neutral hydrogen (H I ) gas, star-forming regions (SFRs) and stars, together with an analysis of the star’s chemical abundances. We discovered that H I gas with ∼ 10 6 solar mass was lacked in the inner Perseus arm, and a similar amount of H I gas was distributed above the Galactic plane. The extended H I gas is well followed by retrograde low-metallicity stars, which are likely fossil stars from Gaia − Sausage − Enceladus 10–13 . Orbit integration shows that the fossil stars crossed the inner Galactic disk about 20 million years ago. The lower star-formation detailed structure of the spiral arms in the disk. compare kinematics of the inner Perseus arm the velocity distribution of 35 SFRs at a Galactocentric specific 8 range less affected by the bulge (Galactic thus studying the effects of spiral arms. Inner Perseus-arm sources show slower velocities V compared other spiral-arm statistical peculiar (noncircular) motion of G049.41 much larger than would be expected given the gravitational potential of the spiral arm the origin of the peculiar (noncircular) motion the marginally significant vertical motion ± the R deed, the faint area (∼10 K; gray area) is more than two times as faint as the surrounding area (> 1 20 K). Physical size of the faint area scales as 1 Galactic longitude, and d is heliocentric distance. The distance of G049.41 is 6.6 +1.1 −0.4 kpc 14 . Fig. 3 2 indicates an existence of H I hole with a size of ∼1 kpc around G049. 41. H I mass in the figure 4 can be estimated with a general procedure 19 between the faint and surrounding areas is >2×10 6 M ⊙ at the distance of G049.41. A similar shape 8 (i.e., black polygon in Fig. 2), but with bright emissions, was discovered toward a high-velocity 9 gas in M101 20 . M101 is the nearly face-on spiral galaxy, and shows holes in H I distribution 21,22 . 10 The high-velocity gas is moving perpendicular to the disk of M101, and its origin is thought to be 11 recent collisions of extragalactic gas clouds with the disk of M101 20 . 12 To reveal the origin of the faint H I emissions, we integrated H I emissions over the velocity 13 range in the black polygon of Fig between the excess emissions above the plane and the faint emissions in the disk will be further 19 discussed below. 20 To estimate the distance of the excess H I emissions, we obtained the 6D phase space infor-1 mation for stars from the early installment of the Gaia's third data release (EDR3) [24][25][26] . Stars that 2 satisfied the LSR velocity range in the black polygon (Fig. 2) and a parallax accuracy of better than 3 20%, were selected (see Methods for details). The final sample was composed of 424,059 stars, 4 of which 47,695 stars had metallicity information (the common logarithm of the iron-to-hydrogen 5 ratio divided by the solar value; [Fe/H]). Stars with [Fe/H] < −1.0 dex (i.e., less than one tenth 6 of the solar metallicity) are defined as "low metallicity stars" in this paper (430 stars identified). 
7 We found that the low metallicity stars were systematically distributed above the Galactic plane 8 with a median Galactic height (z) of 1.8 kpc, whereas stars with [Fe/H] ∼ 0 (i.e., solar metallicity) 9 were distributed more closely to the plane (Extended Data Fig. 1). We examined the kinematics of 10 the low metallicity stars, and found that retrograde low-metallicity stars (i.e., V φ < 0 km s −1 ) are 11 moving away from the Galactic plane with a median vertical velocity (V z ) of 68 km s −1 (Extended 12 Data Figures 2 and 3). The retrograde low-metallicity stars and G049.41 are superimposed on 13 l − b plots of H I emissions (Figures 3a and 3b). Surprisingly, the distribution of the retrograde 14 low-metallicity stars is well matched with those of H I emissions above the plane. Mass of H 15 I emissions scales as where b max − b min is a range of Galactic latitude, and the others were explained previously. The 17 median distance of the retrograde low-metallicity stars is 5.5 kpc. In Fig. 3b tion by low-metallicity thick-disk stars. The low-metallicity thick-disk stars are thought to be born 12 during or after the GSE merger 27 . 13 We checked to determine when retrograde low-metallicity stars with e >0.7 crossed the 14 Galactic disk, by orbit integration (see Extended Data Fig. 5 disk. Raw material for star formation in the inner Perseus arm could have been reduced by the disk 10 crossing, although relationship between the arm and the disk crossing should be further examined. 11 The parental cloud of G049.41 might be perturbed by shock wave induced by the disk crossing. 12 The above interpretation is schematically summarized in Fig Table 2, and those associated with the source are defined in Extended 7 Data Table 3. The parameters and the definitions are applied throughout the paper. Here, we only 8 describe details about the stellar sample because we applied general procedures for H I and VLBI 9 data analyses. We checked to determine each radial velocity as a function of Galactic longitude satisfied the 15 LSR velocity range in the black polygon (Fig. 2). Note that radial velocity in Gaia EDR3 is 16 calculated in the solar barycentric reference frame (∼heliocentric radial velocity V Helio ), and thus 17 we converted each radial velocity to LSR velocity (V LSR ) for the comparison. 18 Also, we added the restriction of a parallax accuracy better than 20% ( π δπ > 5). This is 19 because estimating distance by simply inverting the parallax can result in the Lutz-Kelker bias, 20 which becomes significant when the parallax error is large (e.g., π δπ ≤ 4) 36 stars that satisfy the LSR velocity range in the black polygon (Fig. 2) sion. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy 10 of Sciences. 14 The LAB H I data was analyzed by AIPS, Astronomical Image Processing System 46 . AIPS is produced and 15 maintained by the National Radio Astronomy Observatory, a facility of the National Science Foundation 16 operated under cooperative agreement by Associated Universities, Inc. 17 The orbit integration was performed with the galpy 39 (see HP:http://github.com/jobovy/galpy). 18 The software TOPCAT was used for making figures. TOPCAT was also used for the cross-matching be-19 tween Gaia EDR3 and metalicity data (i.e., APOGEE DR16 and LAMOST DR5). TOPCAT was initially 20 Bewilligungsnummer 05A08VHA), the European Space Agency, and the FP7 project GENIUS. All of this 4 support is gratefully acknowledged. 
5 Author contributions All the authors contributed to the work. N.S. led the project and contributed to all 6 the aspects of the paper (i.e., data reduction; discussion; paper writing). H.N. provided initial idea of the 7 research, disk crossing by a stream. The idea allowed us to discover the stellar and gaseous streams. H.N. 8 and K.K. contributed to the writing and provided stimulated discussions, which improved the quality of the 9 paper. 10 Code Availability There is no custom code or mathematical algorithm that is deemed central to the con-11 clusions in this paper. 12 Competing Interests The authors declare that they have no competing financial interests. 13 Correspondence Correspondence and requests for materials should be addressed to Nobuyuki Sakai (email: 14 nsakai@kasi.re.kr). 15
1,897
2021-06-07T00:00:00.000
[ "Physics" ]
Digital Finance, Environmental Regulation, and Green Technology Innovation: An Empirical Study of 278 Cities in China : Digital finance provides a premises guarantee for green technology innovation, and effective environmental regulation helps to achieve green and sustainable development. This article selects Chinese urban panel data from 2011 to 2019 to explore the impact mechanism of the influence of digital finance and environmental regulation on the innovation capacity of green science and technology. It is found that extensive financing channels and the strong information-matching ability of digital finance have a significant promoting effect on local green science and technology innovation. Moreover, government environmental regulation not only facilitates the development of green technology innovation locally and in nearby regions, but also strengthens the utility of digital finance in driving green science and technology innovation. Further research found that the influence of digital finance and environmental regulation on the ability of green science and technology innovation has regional heterogeneity, and only digital finance in Central China can promote green science and technology innovation in both local and adjacent areas. Therefore, the government should continue to promote the development of digital finance, optimize environmental regulations by increasing environmental protection subsidies and creating a green innovation environment, and further stimulate willingness to innovate green technologies. At the same time, it is also important to note the coordinated development and governance with neighboring regional governments. Introduction Since the reform and opening up, China has created a number of "China miracles", but environmental protection has been neglected in the rush to boost economic growth. After entering the new normal, China's economy has begun to aim for high-quality development. This goal sets higher requirements for economic progress and environmental protection. However, China ranked only 120th out of 180 countries in the 2020 Environmental Performance Index, illustrating the imbalance between high-quality economic development and environmental protection. In order to balance "economic performance" and "environmental performance", since 2012, China has gradually introduced and improved laws and regulations to protect ecological civilization, aiming to promote green transformation through strict environmental regulations. Traditional innovation only considers technological progress and economic development, while green technology innovation also needs to take into account ecological civilization. Therefore, in order to achieve green technology innovation, the main body of innovation will face higher technical standards, capital investment, financing costs, and risks [1]. In this way, in the process of realizing green technology innovation, good financial services are needed as a prerequisite, and at the same time, there should be correct guidance from the environmental regulations by the government [2]. However, it is difficult for traditional financial services to meet the capital needs of many enterprises for green technology innovation because of its high threshold, high cost, and low efficiency [3]. Therefore, digital finance based on the continuous development of digital technology has gradually caused concern in various circles. 
Digital finance can solve the capital problem in the process of green technology innovation by lowering the threshold, improving resource allocation, alleviating information asymmetry, reducing transaction costs, widening inter-regional overflow channels, and using other methods [2,4,5]. So, as a new way of financial services, can digital finance drive green technology innovation to achieve sustainable development? Is digital finance affected by government environmental regulation in the process of influencing green technology innovation? Is there a spatial spillover effect of digital finance on green technology innovation? To solve the above problems, the mechanism studied in this paper covers digital finance, environmental regulation, and green technology innovation. Based on the panel data of 278 cities from 2011 to 2019, the spatial Durbin model was constructed to explore the impact of digital finance and environmental regulation on green technology innovation. The main research contents of this paper are as follows: first, the influence of digital finance and environmental regulation on green technology innovation is discussed. Second, the moderating role of environmental regulation in the process of digital finance affecting regional green technology innovation is explored; third, the marginal effect of digital finance and environmental regulation on green technology innovation is analyzed. Fourth, 278 cities are divided into seven regions: East China, South China, North China, Central China, Southwest China, Northwest China, and Northeast China. Based on the perspective of regional heterogeneity, the influence mechanism of digital finance and environmental regulation on green technology innovation is discussed in depth. The rest of this paper is arranged as follows: Section 2 reviews the relevant literature. Section 3 proposes the research hypothesis. Section 4 introduces the model setting and data description. Section 5 presents an empirical analysis, robust testing, and regionalheterogeneity analysis and discussion. Section 6 puts forward research conclusions and relevant policy suggestions. Section 7 puts forward research limitations and prospects. Literature Review Green technology innovation is innovation activity with high investment, high risk and long cycle. To enhance the abilities of science and technology, good financial service is the key [6,7]. Digital finance is a new financial service model that integrates traditional financial industry with big data, internet, cloud computing and other information technologies [8,9]. The G20 Advanced Principles for Digital Inclusive Finance adopted at the G20 Hangzhou Summit in 2016 advocated the development of inclusive finance relying on digital technologies and included relevant indicators of digital finance into the evaluation system of inclusive finance, which greatly promoted the development of digital finance. At present, there have been abundant discussions on the relationship between digital finance and technological innovation in academic circles. At the micro level, Lin [10] points out that fintech can reduce the financing risk caused by information asymmetry on enterprise technological innovation. Subsequently, Tang et al. [11] took Chinese listed companies as research objects and believed that digital finance could promote the output of technological innovation by broadening financing channels and reducing financing costs. 
At the macro level, scholars have reached a relatively consistent conclusion that digital finance has the advantages of low threshold, wide coverage, low transaction cost and high resource allocation rate, which is of far-reaching significance to realize technological innovation [12,13]. For example, Nie et al. [14] selected the SYS-GMM model and found heterogeneity in the promotion effect of digital finance on regional technological innovation. Combined with the trickle-down effect, Xu [15] found that digital finance can also drive technological innovation in neighboring areas through spatial econometric model. In addition, some scholars also discussed the relationship between digital finance and green technology innovation. For example, Yu et al. [16] pointed out that digital finance can significantly promote green technology innovation on family farms and believed that promoting the development of digital finance is of great significance to the sustainable development of agriculture. Habiba et al. [17] took 12 major countries with carbon emissions as research objects and found that green technological innovation is a key factor in reducing carbon emissions and achieving sustainable development, and digital finance can effectively promote the progress of green technological innovation. When exploring the impact of digital finance on carbon emissions, Lee [18] concluded that green technology innovation plays an intermediary role. Environmental regulations are related environmental laws and regulations formulated by the government for the purpose of protecting the environment, aiming to guide economic subjects to make decisions to improve the environment, reduce pollutant emissions while improving the overall economic benefits, and achieve the goal of the sustainable development of technology and the environment [19]. In the 1960s, neoclassical economic theory, based on the static perspective, pointed out that under the environmental supervision of the government, enterprises need to pay a large amount of environmental protection costs, which are bound to occupy the R&D funds originally used for innovation activities of enterprises, resulting in the innovation crowding-out effect. Therefore, environmental regulation will inhibit economic development [20,21]. Porter put forward a different point of view in 1991. Porter believed that with economic development, production technology and equipment of enterprises are constantly upgrading, and the key to environmental protection has shifted from process to result. Therefore, the environment in which enterprises are located should be regarded as dynamic and the impact of environmental regulation on economic development should be studied from a dynamic perspective. Therefore, based on the dynamic perspective, the Porter hypothesis is proposed. Porter [22], as well as Porter and Vander [23] believe that strict and effective environmental regulations can guide enterprises to voluntarily strengthen their investment in green technology R&D, enhance their competitive advantages, and achieve a win-win balance between economic performance and environmental performance. Since the porter hypothesis was put forward, scholars have continued to discuss the relationship between environmental regulation and green technology innovation. However, the opinion camp is always divided into three parts: the first side mainly believes that environmental regulation can effectively promote the ability of green technology innovation based on the Porter hypothesis [24,25]. 
Li et al. [26] pointed out that the financing availability of large enterprises is relatively high. Therefore, in the face of strict environmental regulations, enterprises will reduce environmental costs and improve resource utilization through green technology innovation, so as to enhance their competitive advantages and achieve sustainable development. Zhang et al. [27] studied 33 countries and concluded that environmental regulation has a significant incentive effect on green patent output. The second side, supported by neoclassical economic theory, holds that environmental supervision inhibits technological innovation ability [28,29]. Lanoie et al. [30] found that the benefits generated by enterprise green technology innovation could not cover the costs generated in the process of environmental compliance. Therefore, compared with green technological innovation with high investment, high risk and long cycles, enterprises are more inclined to pay the environmental penalty. Dechezleprêtre [31] believes that environmental costs caused by environmental regulations occupy the funds originally used for innovation activities of enterprises, thus hindering the development of green technological innovation of enterprises. The third party believes that there are preconditions for the relationship between the two and emphasizes the role of environmental regulation intensity, senior executives' environmental awareness, regional economic development level, financing and other factors [32,33]. At the same time, many scholars also pay attention to the interactive relationship between digital finance and environmental regulation. For example, Shi et al. [34] points out that the synergy between digital finance and environmental regulation can effectively improve the degree of environmental pollution and play an important role in environmental governance. Li et al. [35] showed through the study of urban panel data that the interaction between digital finance and environmental regulation is conducive to the upgrading of urban industrial structure. Wang et al. [36] point out that digital finance cannot do without the regulatory role of government intervention in the process of promoting county economic growth. In addition, Feng et al. [37] took the intensity of regional environmental regulation as the threshold variable when exploring the relationship between digital finance and green technology innovation, and found that digital finance significantly promoted of green technology innovation only in regions with stricter environmental regulation. According to the above literature, scholars have made many achievements in the research on the relationship between digital finance and technological innovation, environmental regulation and green technological innovation. However, if we place digital finance, environmental regulation and green technology innovation in a research framework, we can find that the existing research has three characteristics. First, the literature pays more attention to the influence of digital finance on technological innovation, and less attention is paid to the influence mechanism of digital finance on green technological innovation. Second, scholars have conducted preliminary discussions on the relationship between digital finance and environmental regulation, but the discussions are few and scattered, focusing on economic development and environmental governance. 
Third, existing studies mostly focus on spatial spillover effects of digital finance from the perspective of spatial independence. Compared with the existing research, this study has three main contributions. Firstly, in terms of research perspective, this paper constructs a research framework of digital finance, environmental regulation and green technological innovation, in which environmental regulation is taken as a regulating variable to provide a perspective for the discussion of the significance of digital finance. Secondly, in terms of research methods, considering the flow of financial elements, the migration behavior of enterprises and the spillover effect of technological innovation, this paper chooses the spatial Durbin model to explore the interaction between regions from the perspective of spatial correlation, further enriching the empirical research on digital finance, environmental regulation and green technological innovation. Thirdly, in terms of practical significance, this paper studies the heterogeneous impact of digital finance and environmental regulation on green technology innovation according to geographical location, providing theoretical basis for the sustainable development of each region. Research Hypotheses As a high-risk, high-investment, and long-cycle activity, green technology innovation is prone to being restricted by financing problems during its development [16,38,39]. In order to realize the improvement of green technology innovation ability, a large amount of capital is needed to support it [40]. However, the problems of traditional finance, such as information asymmetry, high threshold, and low service efficiency, all lead to its poor inclusiveness and difficulty in effectively alleviating financing difficulties [41]. Therefore, with the integration of information technology, digital finance with strong universality is gradually becoming known by all circles. Digital finance can increase the possibility of obtaining financing through a variety of ways, promote R&D investment, and strengthen green technology innovation so as to achieve high-quality economic development [42,43]. On the one hand, digital finance absorbs investors that are "large, small and scattered" in the market, that is, the long tail group [44,45], which has more financial resources and can effectively broaden supply channels. Due to technical limitations and high service costs, traditional financial markets cannot effectively absorb these investors [46]. Supported by information technology, digital finance can process massive data at low cost and low risk, lower the service threshold, and promote broader long-tail groups to join the financial market [47,48]. In addition, digital finance provides intelligent investment, supply-chain finance, consumer finance, and third-party payment, which broadens financing channels [49] and further provides the possibility of obtaining funds for green technological innovation. On the other hand, the information matching function of digital finance can alleviate information asymmetry and enhance the allocation efficiency of financial resources [50,51]. Most scholars believe that information asymmetry between the financial market and innovation subject is one of the main reasons for inefficient resource allocation. The cost of information collection reduces investors' willingness to invest, so it is more difficult for the innovation subject to obtain external financing. 
Digital finance can evaluate investor credit through algorithms and big data, provide credit informatization and transparency, alleviate information asymmetry, improve the credit-resource mismatch, overcome external financing constraints, and help innovation subjects to make reasonable and effective green technology innovation decisions [52], so as to comprehensively improve regional green technology innovation. With the continuous improvement in the development level of digital finance, due to the profit-seeking of capital and the liquidity of financial elements, digital finance can continuously radiate to neighboring areas through the "trickle-down effect", resulting in a spatial spillover effect [53]. Especially with the support of digital technology, geographical distance is no longer one of the more difficult problems affecting innovation subjects' access to financial services [54]. Therefore, the spatial spillover effect of digital finance strengthens the financial support and information exchange of neighboring regions, and also promotes the green technology innovation ability of neighboring regions. Based on the above, this paper proposes research Hypothesis 1: Hypothesis 1 (H1). Digital finance can significantly enhance urban green technology innovation. At the same time, digital finance will also help improve the green technology innovation capacity of surrounding cities. Porter hypothesis holds that strict and effective environmental regulation can stimulate enterprises' willingness to innovate green technology and obtain competitive advantage through improving resource utilization rate, enhancing product performance and meeting production emission standards [22,23]. When the government implements environmental regulations, enterprises need to invest a large number of research and development personnel, research and development funds, purchase environmental protection equipment, emission permits, etc., which can be collectively referred to as environmental protection costs. In order to avoid the decline in economic benefits, enterprises will add environmental protection costs back into the product price. However, companies will also lose customers as prices rise, resulting in a loss of profits. At this time, the government can force and guide enterprises to carry out green technology innovation through environmental regulation. With the upgrading of technological structure, enterprises can realize the improvement of the resource utilization rate and the reduction of production costs and administrative penalty costs, thus obtaining a greater profit margin [55]. At the same time, an enterprise's environmental image can attract more green consumers, increase the market share, and obtain competitive advantages. In this process, the innovation income of green technology is greater than the innovation cost, resulting in the "innovation compensation" effect [56,57]. Therefore, the government can further encourage enterprises' green innovation behavior through environmental regulation. Combined with imitative learning between governments, relocation behavior of enterprises and technology spillover effect, this paper proposes research Hypothesis 2: Hypothesis 2 (H2). Environmental regulation can significantly improve urban green technology innovation. At the same time, environmental regulation also helps to improve the green technology innovation capacity of surrounding cities. 
The good financial supply of digital finance provides a financial guarantee for the technological innovation activities of enterprises. However, whether the innovation results can improve the competitiveness of enterprises and also give consideration to environmental protection depends on the environmental regulation of the government [58,59]. Under the constraints of environmental regulations, enterprises need to carry out green technological innovation to achieve environmental compliance. Both front-end green production innovation and back-end governance innovation require a large amount of capital [60]. At this point, if the investment in green innovation exceeds the enterprise's expectation and the financing cost is high, the enterprise will give up green transformation and turn to the negative behavior of reducing or stopping production [61]. Digital finance provides credit support to green transformation enterprises under the guidance of government environmental regulations and facilitates the green technological innovation of enterprises with low-cost and low-threshold financial services [40,62]. Therefore, in the process of providing effective financial services, digital finance should combine the green development orientation of the government to jointly promote the ability of green technology innovation and achieve the goal of high-quality economic development. Based on the above, this study proposes research Hypothesis 3: Hypothesis 3 (H3). Environmental regulation positively moderates the relationship between digital finance and urban green technology innovation capability. Model Construction This study constructs a spatial Durbin model to explore the mechanism of digital finance and environmental regulation on green technology innovation capability. The specific measurement model is as follows: where i represents a city (i = 1, 2, 3, · · · , 278), t represents the year (t = 2011, 2012, 2013, . . . , 2019), gt represents green technology innovation, df represents digital finance, er denotes environmental regulation, X means control variables, ρ stands for the space autoregressive coefficient, W stands for the weight matrix of adjacent space, and v stands for the error term. Explained Variable Green Technology Innovation (lngt): Based on Lu's [63] opinion, this study selects the data of urban invention patents and utility model patent applications and uses the principle of entropy weight method to construct a comprehensive index to measure the level of urban green technology innovation. The specific calculation process is as follows: first, the data indicators are normalized. Second, the entropy weight method is used to calculate the weight of each index. Finally, the comprehensive index of green technology innovation in each city is calculated. Core Explanatory Variable Digital finance (lndf): Guo et al. [64] combined the characteristics of digital finance and data availability, and constructed "Peking University Digital Inclusive Finance Index" through three first-level dimensions, 12 s-level dimensions, and 33 specific indicators by using micro-data. This index scientifically portrays the degree of development of digital inclusive finance in China. Therefore, this paper chooses its comprehensive index as the measurement index of digital inclusive finance. Environmental regulation (lner): Based on the ideas of Ye et al. 
[65], this study selected wastewater, sulfur dioxide, and smoke (powder) dust emissions for a comprehensive evaluation of environmental regulation intensity through the entropy weight method to build the index system. This indicator is a positive indicator; that is, the greater the indicator, the greater the intensity of environmental regulation. Control Variables In order to improve the scientific nature of the empirical results between digital finance, environmental regulation, and green technology innovation, a series of control variables are added. (1) Regional economic development level (lngdp ): measured by gross regional product; (2) urban innovation environment (lnie): measured by the general budget of local finance; (3) degree of opening to the outside world (lnod): measured by the gross industrial output value of foreign-invested enterprises in the region; (4) urban environmental quality (lneq): use harmless treatment rate of household garbage to measure; (5) urban industrial structure (lnis): the proportion of added value of the secondary industry in GDP is selected for measurement. In consideration of data integrity and reliability, panel data of 278 Chinese cities from 2011 to 2019 were selected in this study. The data come from The Research Center for Digital Finance of Peking University and The Statistical Yearbook of Chinese Cities. In this study, all data were logarithmically processed to mitigate the impact of heteroscedasticity, extreme values, and skewness on the estimated results. Statistical results of variable description are shown in Table 1. Spatial Autocorrelation Test Before the empirical analysis, the Moran index was used to analyze the spatial autocorrelation of digital finance and the green technology innovation ability of 278 cities by using the adjacent spatial weight matrix, and the spatial econometric model was investigated. Its calculation formula was as follows: Among them : The Moran index is one of the most commonly used indicators of spatial correlation. The value of Moran index I is generally between [−1,1]. A Moran index I close to 0 indicates that the spatial distribution is random and there is no spatial autocorrelation; greater than 0 indicates positive correlation, and the larger its value, the more obvious the spatial correlation; a value less than 0 indicates negative correlation, indicating greater spatial heterogeneity. As can be seen from Table 2, the Moran index I of digital finance and green technology innovation from 2011 to 2019 is between 0.060 and 0.126, and is significant at the 1% level, indicating a strong spatial correlation between digital finance and green technology innovation. To observe the spatial agglomeration of digital finance, this paper draws local Moran scatter plots of digital finance in 2011 and 2019, as shown in Figure 1. Figure 1 shows that digital finance has a spatial agglomeration effect and strong spatial correlation. Model Selection In this study, the LM test and its robustness test were used to judge the spatial distribution properties of each variable and the choice of spatial econometric model. As can be seen from the LM test results in Table 3, both passed the significance test and significantly rejected the null hypothesis. The panel model with spatial effect should be selected in this paper. 
Secondly, the LR test of the spatial Durbin models (1) and (2) shows that the hypothesis that they degenerate into spatial error model or spatial lag model is significantly rejected, which supports the scientific selection of the spatial Durbin model. Meanwhile, Hausman test strongly rejects the null hypothesis; that is, the Durbin model with fixed effects is more suitable for this study than the Durbin model with random effects. Therefore, this paper should select the spatial Durbin model for spatial econometric analysis. Spatial Model Results This study examined the relationship between digital finance, environmental regulation, and green technology innovation using the spatial Durbin model with time-city dual fixations. Model (1) mainly examines the impact of two explanatory variables on green technology innovation, while Model (2) includes the interaction term of digital finance and environmental regulation, and comprehensively considers the interaction relationship among the three. As can be seen from the results of Model (1) in Table 4, the regression coefficient of digital finance on local green technology innovation is 2.721, which passes the significance test. However, the influence of digital finance on green technology innovation in neighboring areas is not significant. Part of hypothesis 1 is verified. This indicates that digital finance can only promote local green technology innovation. The wide financing channels and strong information-matching ability of digital inclusive finance stimulate the willingness of innovation subjects to green innovation, so it has a significant positive impact on the local green technology innovation ability. However, the influence of digital finance on green technology innovation in neighboring areas is not significant. This indicates that green technology innovation is only affected by the development of digital finance in this region, and is not affected by the development of digital finance in other regions, which is consistent with the research conclusion of Zhang et al. [66]. The possible reason that lies in the difference between the development level of inter-regional digital finance and the degree of government interaction leads to the regional heterogeneity of the spatial spillover effect of digital financial development. Combined with the results of regional heterogeneity analysis in Section 5.5, it can be seen that the significant inhibition effect in southwest China and northwest China may offset the significant promotion effect in central China, resulting in the insignificant total sample estimation coefficient. The regression coefficient of environmental regulation on local green technology innovation was 0.092, which passed the significance test. Environmental regulation promotes local green technology innovation Moreover, environmental regulation also has a positive impact on green technology innovation in neighboring areas. On the one hand, with the increase in government environmental regulation intensity, enterprises will achieve the effect of reducing environmental protection costs and improving resource utilization rate through green technological innovation, aiming to achieve the common progress of economic benefits and environmental benefits through the "innovation compensation" effect. On the other hand, in order to avoid excessive expenditure in environmental costs, some small and medium-sized high-tech enterprises move to the neighboring areas with relatively low environmental regulation intensity. 
Therefore, the flow of capital, information, technology and personnel promotes green technology innovation in neighboring areas. Therefore, hypothesis 2 is supported. The regression coefficient of the interaction term between digital finance and environmental regulation in Model (2) is significantly positive, indicating that environmental regulation plays a positive moderating role in the process of digital finance affecting local green technology innovation. Hypothesis 3 is supported. That is, the government's environmental regulation can play a positive role in the process of digital finance promoting green technology innovation. The empirical study shows that the level of economic development, the degree of urban openness, and the quality of urban environment all have a significant promoting effect on green technology innovation, while the industrial structure has a significant positive effect on green technology innovation. This indicates that the higher the proportion of secondary industry is, the more unfavorable it is to the progress of urban green technology innovation level. The coefficient of the spatial Durbin model passed the significance test at the 1% level, indicating that the level of local green technology innovation contributes to the improvement of the level of green technology innovation in neighboring areas; that is, there is a spatial spillover effect of green technology innovation. Note: *, ** and *** represent significant at the significance levels of 10%, 5%, and 1%, respectively, and t-statistics in parentheses. Considering that digital finance and environmental regulation may have a lag effect on green technology innovation, this study adopts digital finance and environmental regulation with a lag of one stage to conduct re-regression on Model (1) and Model (2). The test results are shown in Model (3) and Model (4) in Table 4. Based on Table 4, it can be seen that, compared with Model (1) and Model (2), the test result of one lag period is basically consistent with that of the current period. Therefore, the following robustness test adopts lagged one-phase variables to further test the model. Spatial Effect Decomposition To further illustrate the marginal effects of digital finance and environmental regulation on green technology innovation, this study performs a spatial effect decomposition and divides the changes into direct, indirect and total effects. The direct effects include the direct impact of explanatory variables on local green technology innovation and the feedback effect of neighboring explained variables on local green technology innovation. Indirect effects reflect the influence of local explanatory variables on green technology innovation in neighboring areas Table 5 shows the decomposition results of spatial effects of digital financial and environmental regulations. According to the direct-effect test results, digital finance has a significant positive influence on local green technology innovation; that is, every 1% increase in the development level of digital finance can improve the local green technology innovation level by 2.725%. Environmental regulation plays an important role in promoting local green technology innovation; that is, when the intensity of environmental regulation increases by 1%, the level of local green innovation will increase by 0.091%. 
Compared with the parameter estimation of the fixed effect of the spatial Durbin model in Table 4, it can be seen that there are some differences between the parameter estimation results of digital finance and environmental regulation. For example, the direct effect of digital finance on local green technology innovation is 2.725, while the regression coefficient estimated by the spatial Durbin model is 2.721. The difference between the two is caused by the feedback effect of digital finance on green technology innovation in nearby areas. Note: ** and *** represent significant at the significance levels of 5%, and 1%, respectively, and t-statistics in parentheses. The estimation results of indirect effects show that the environmental regulation has a significant positive spillover effect, while the spillover effects of digital finance do not pass the significance test. Each 1% increase in the intensity of environmental regulation has a 0.612% promotion effect on the green technology innovation ability of neighboring areas. Robustness Test To maintain the reliability of the regression results, data around 3% of the sample maximum and minimum values were excluded for robustness testing, and the results of each indicator after excluding outliers are analyzed in detail in the columns of Table 6. From the results in Table 6, it is known that the estimated coefficient values of the variables remain significant, the coefficient fluctuation range is not large, and the sign of the positive and negative have not changed. It is not difficult to see that the results are basically consistent with the previous spatial regression results, which further confirms the robustness of the empirical results in this study. Note: *, ** and *** represent significant at the significance level of 10%, 5%, and 1%, respectively, and t-statistics in parentheses. Heterogeneity Analysis To further analyze the regional differences in digital finance and environmental regulation on green innovation, 278 cities were divided into seven parts, namely East China, South China, North China, Central China, Southwest China, Northwest China, and Northeast China, and each region was tested. Specific test results are shown in Table 7. By comparing the total effect of digital finance, we found that smart finance in Northeast, South and Central China has a significant contribution to green technology innovation. Moreover, from the elastic coefficient, digital finance has the best promotion effect on South China. On the contrary, digital finance inhibits green technology innovation in North China and Southwest China. The elasticity coefficient of Southwest China is -0.985, indicating that the level of technology innovation in Southwest China will decrease by 0.985% when digital finance increases by 1%. To be specific, digital finance can promote local green technology innovation except in North China, and digital finance has a spillover effect only in Central China, Northwest China, and Southwest China. In conclusion, there is regional heterogeneity in the impact of digital finance on China's green technology innovation in China, so it is difficult to comprehensively promote green innovation Table 7. Heterogeneity analysis of the impact of regional digital finance and environmental regulation on green technology innovation. Note: *, ** and *** represent significant at the significance level of 10%, 5%, and 1%, respectively, and t-statistics in parentheses. 
According to the results in Table 7, we find that the total effect of environmental regulation passes the significance test only in East China, North China, and Central China. Among them, the total effect of environmental regulation in East China and Central China is positive, indicating that environmental regulation there has a promoting effect on green technology innovation; moreover, the promotion effect in East China is greater than that in Central China. Xu et al. [67] believe that the role of environmental regulation is closely related to the degree of economic development. East China has a high-quality economy, so its environmental regulations are more scientific and complete; under strict and effective supervision, green technology innovation can therefore be further promoted. Central China is at a middle level of economic development, in large part because it undertakes energy-intensive industries transferred from East China, so the effect of its regulation is limited. When environmental regulation is strengthened in Central China, its promotion effect on green technology innovation is therefore smaller than in East China. On the contrary, environmental regulation inhibits the progress of green technology innovation in North China, with negative spillover effects that also hinder the progress of green technology innovation in neighboring areas. That is, environmental regulation in North China not only hinders local innovation and progress but also inhibits neighboring regions. Xin [68] argues that North China is an important political center of China, where a large number of technological enterprises gather. Therefore, when the intensity of environmental regulation increases in North China, some heavily polluting enterprises move to neighboring areas, which ultimately inhibits the level of green technology innovation in both local and surrounding areas.

Discussion of Empirical Results

This paper examines the influence mechanism among digital finance, environmental regulation, and green technology innovation by constructing a spatial Durbin model. The robustness test results in Table 6 show that the model passes the test, indicating that the conclusions are reliable. Meanwhile, according to the empirical results, although digital finance significantly promotes local green technology innovation, its spatial spillover effect on neighboring areas fails to pass the test. This conclusion is consistent with Xie's [69] research on digital finance and regional technological innovation based on provincial panel data: although digital finance has a significant spatial agglomeration effect, it fails to drive green technology innovation in neighboring areas. Secondly, the empirical results show that environmental regulation can not only promote local green technology innovation but also stimulate the improvement of green technology innovation in neighboring areas through a positive spatial spillover effect. This is consistent with the conclusions of Zheng et al. [70], who analyzed the impact of environmental regulation on industrial green innovation, and Zhang et al. [71], who examined environmental regulation and environmental governance. Thirdly, the study also shows that environmental regulation can strengthen the promotion effect of digital finance on green technology innovation; that is, in the process of digital finance affecting green technology innovation, environmental regulation plays a positive moderating role. Shi et al.
[37], Li et al. [35], and Wang et al. [36] reached similar conclusions when exploring the impact of digital finance and environmental regulation on environmental pollution, industrial structure upgrading, and economic growth. Finally, the relationship among digital finance, environmental regulation, and green technology innovation is found to exhibit regional heterogeneity. Green technology innovation simultaneously considers technological progress, economic performance, and environmental performance. Thus, spurred by digital finance and environmental regulation, companies can make more profits from cleaner methods of production, thereby achieving sustainable economic development through green technology innovation. In summary, the research in this paper further enriches the literature on digital finance, environmental regulation, and green technology innovation, and at the same time provides a theoretical basis for the government to adopt relevant mechanisms and thus achieve regional green transformation and upgrading.

Conclusions and Suggestions

This paper selects panel data of 278 cities in China from 2011 to 2019 and builds a spatial Durbin model from a spatial correlation perspective to empirically investigate the relationship among digital finance, environmental regulation, and green technology innovation, together with robustness tests. Then, considering regional heterogeneity, the 278 cities were divided into seven regions according to geographical location, and the relationship among the three variables was discussed for each region. The results are as follows.

• Digital finance plays an important role in promoting local green technology innovation. The low entry threshold, low cost, high efficiency, and informatization of digital finance encourage local enterprises' green technology innovation through channels such as improving financing availability, reducing financing costs and transaction time, and improving the efficiency of resource allocation.

• Government environmental regulation facilitates the development of green technology innovation in local and adjacent areas. For one thing, this shows that the Porter hypothesis is valid in China. For another, environmental governance also reflects the relationship of learning and competition among local governments in China: when local governments force companies to innovate in green technologies by enforcing strict environmental regulations, neighboring governments also strengthen environmental regulations to achieve high-quality development.

• Environmental regulation plays a positive moderating role in the process of digital finance affecting green technology innovation. This shows that in the process of digital finance promoting green technology innovation, government environmental regulation plays an important guiding role.

• There is regional heterogeneity in the relationship among digital finance, environmental regulation, and green technology innovation. Among the regions, environmental regulation in North China inhibits local green technology innovation the most, while digital finance in Central China can promote green technology innovation not only in the region itself but also in neighboring regions through a spillover effect.
• The development of the secondary industry hinders the progress of green industry and further inhibits the level of urban green technology innovation.

In summary, we put forward the following policy recommendations. First, the government should continue to promote the development of digital finance and accelerate the innovative integration of finance and technology, on the basis of improving digital finance infrastructure, promoting the construction of a credit evaluation system, and guiding more practitioners to join. Additionally, it is essential to standardize the financial market service system and strengthen information protection. Second, the government should fully consider regional heterogeneity when formulating environmental regulations, combining regional characteristics to guide enterprises toward green technology innovation through environmental subsidies and policy publicity, so as to coordinate environmental protection and economic progress. Local governments should also break the restrictions of administrative regions and strengthen inter-regional communication and cooperation when formulating and implementing environmental regulations, giving full play to the role of environmental regulation in improving green technological innovation and working together to achieve green upgrading and transformation. Third, the government should vigorously promote the transformation of the secondary industry. To achieve high-quality economic development, the government needs to create a good industrial innovation environment and stimulate the willingness of the secondary industry to innovate.

Research Limitations and Future Research

There are some limitations in this study. First, due to limited data availability, this paper, like many existing studies, constructs the comprehensive evaluation index of green technology innovation using only green-patent data; it does not incorporate data on R&D personnel, R&D funds, or the sales of green products into the evaluation system. In the future, more data will be mined to further improve the comprehensive index of green technological innovation. Second, this paper focuses more on the impact of digital finance on green technology innovation and therefore does not provide a detailed classification of environmental regulation. In future research, environmental regulation should be divided into command-and-control, market-incentive, and voluntary types according to the regulatory tools used, so as to further explore the heterogeneous impacts of environmental regulation.
9,371.6
2022-07-15T00:00:00.000
[ "Environmental Science", "Economics" ]
The Virtual as Affirmative Praxis: A Neo-Materialist Approach

This chapter addresses the resonances between the concept of the virtual and a material philosophy of life, based on heterogeneity, hybridity, and becoming. It outlines the basic tenet of this materialist philosophy and explores its implications, in relation to the notions of difference and becoming. It, also, highlights the importance of an ethics of affirmation, which may balance the creative potential of critical thought with a dose of negative criticism and the oppositional consciousness that such a stance, necessarily, entails. Situating this project in the context of cognitive capitalism, it discusses the question of how to confront the injustice, violence, and exclusions of the times, our times, the better to resist them and engage with them in an affirmative manner.

One Concept: Materialism as the Non-Reductive Property of Living Matter

The virtual is a materialist way of defining the force of matter as embodied, embedded, relational, and affective in a vital, but not reductive, manner. The concept of the virtual instils the temporality of constant becoming at the ontological core of matter, assuming that all entities are variations on the same matter that unfolds, relationally, across multiple axes of encounter. This dynamic property of living matter is what makes it vital, that is to say, a non-essentialised vector of becoming. Contemporary neo-materialism, when compared with earlier philosophical versions, is marked by a more comprehensive understanding of matter itself. This entails a closer relationship between the three cultures of philosophy and the humanities, the social sciences, and the life sciences (Kagan 2009). This transversal approach bridges the gap between the binary oppositions of nature/culture, human/non-human, and technology/matter. It proposes to replace such dualisms with a naturecultural continuum, which is immanent and, hence, embedded and embodied, constitutionally linked to others as well as technologically mediated. That is to say, naturecultural matter is a heterogeneous assembly that connects but does not amalgamate. A neo-materialist approach, thus defined, does not entail the dismissal of the importance of language, signification, or meaning-making. It, rather, points out the limitations of the linguistic turn, as formulated in the American reception of French philosophy in the second half of the 20th century (Cusset 2008; Redfield 2016). Whereas the linguistic turn gives priority to the semiotic theory of language and representation in the process of subject formation, the materialist turn looks towards the vitality of matter itself and its self-organising capacity. When confronted by the thick and painful materiality of the current environmental crisis on the one hand, and the divisive social implications of the new technological advances on the other-a historical condition I referred to as the post-human convergence (Braidotti 2013, 2019)-I think that a new materialism is urgently needed. Materialism is about the complexity of being embodied, embedded, relational, and affective. It is a philosophy of immanence, in that it assumes that matter is vital, intelligent, and self-organising, which, of course, includes a structural relationship to non-human entities.
These non-humans are geological, zoological, ecological, and technological 'others', and they relate to humans not in any linear sequence or succession, but rather in dynamic inter-relations, transpositions, and becomings. What moves them is their shared capacity to affect and be affected by one another. This mutual force of attraction sets in motion flows of relations that inform and transform all participants. Their generative interaction enables the instantiations of novel potentials and capacities-the virtual-and it, thereby, expresses the ontological relationality that defines all living entities. However, the virtual is obviously affected by specific historical conditions and is never outside the social, though it exists as the core capacity of all matter to be activated. To a certain extent, therefore, vital materialism prompts a form of philosophical realism, in assuming that matter cannot be reduced to a social construction, but should be understood to exist, independently, of human representation. This line is in keeping with the physics of matter itself, as combinations of elementary particles that are never stable but, rather, "vibrate and fluctuate constantly between existence and non-existence" (Rovelli 2014, p. 30). The vitality of matter today has been extended to the technological apparatus, which is 'live', smart, and self-correcting. Vital neo-materialism is, therefore, an enlarged and dynamic materialism that cannot be easily accommodated within the binary and polarising oppositions of matter/mind and nature/culture. It is activated by the intrinsic tendency of living matter to be actualised, yet with untapped forces, competences, and relations. This virtual generative force is the heart of the matter. It can also be described as materialism in a differential mode, which moves away from dualistic thinking, while avoiding holistic organicism. It rejects an undifferentiated system-the tendentious 'flat' ontology-that would form alleged equivalences across all species, all technologies, and all organisms, under one common signifier. The transversal character of neo-materialism allows, on the contrary, for materiality to emerge as the differential common denominator across the human, non-human, and dehumanised entities of all species. That common denominator is the relational character of the vital properties of matter itself, that is to say, its constitutive heterogeneity, not any holistic homogeneity. To say that vital relationality is the ontological core of matter means that all material entities are driven by the power to differ from within, in so far as their process of individuation depends upon, requires, and co-exists with all the other entities they encounter. All entities are, therefore, 'dividuals', traversed and co-constructed by the affective impact of others. Affect is the gravitational force that attracts them-the ethical powers of joy and affirmation-keeping in mind Nietzsche's distinction between morality and ethics. The former is the implementation of rules and protocols of acceptable behaviour, while the latter is about relations and intensities. Ethics is, therefore, an ethology of forces, which mobilise power relations as multi-layered and pluri-facetted, mindful that power functions both as a restrictive force (potestas, or entrapment) and as an affirmative one (potentia, or empowerment).
These different modalities of power are not mutually exclusive but, rather, co-exist as multiple facets of the same process, the perennial unfolding of yet-unexplored possibilities. The primacy of the relation re-positions difference as a verb, in a process ontology that is heterogeneous and constitutionally hybrid. Contrary to Massumi's equation of the virtual with pure abstraction (2002), I see it as the capacity to be instantiated, by emerging as the core relational force of all entities and, more specifically, of their capacity to persevere in their relational potency. This is what allows vital neo-materialism to acknowledge the specificity of different bound categories and species, while emphasising cross-species interconnection and mutual dependence. It, accordingly, respects differences in intensities, properties, and locations, and prioritises a relational ethics of mutual affirmation. Methodologically, neo-materialism allows for more precise analyses of contemporary power formations. Exploring both discursive and material practices, it exposes the normative power of the traditional humanist and anthropocentric ideals of 'the Man of reason' (Lloyd 1984). It also calls for adequate analyses of the role these ideals have played in constructing sexualised and racialised hierarchies of dehumanised others, as well as the exclusion of naturalised and non-anthropomorphic others. I have argued, by extension, that a post-human materialist approach focuses on the complex workings of the system of human exceptionalism within neoliberal, biogenetic, and cognitive capitalism. New materialism provides more precise analytical tools to reveal, specifically, how the contemporary market economy capitalises on the genetic propensities and vital potencies of matter and life itself (Braidotti 2006; Rose 2007; Cooper 2008; Protevi 2013). A vital materialist philosophy of becoming stresses trans-species inter-dependence and relational collaboration, not only with the material eco-systems and their non-human entities, but also with technological apparatus and artefacts. Accordingly, matter is rematerialized by becoming embedded and embodied in the physical ravages of environmental depletion, climate change, and global pandemics. At the same time, matter is, also, de-materialized, through advanced computational and bio-genetic technological interventions (Fuller 2005). This is only an apparent dematerialisation, however, which actually involves a material reconfiguration into another kind of matter: codes, numbers, storage, algorithms, etc. This double pull, towards rematerialisation and dematerialisation, is constitutive of a vital neo-materialist ontology and is crucial to the logistics of perception and actualisation of the virtual (Massumi 2002). It is an internal vacillation or swing, that need not be resolved but must be acknowledged and operationalised. Post-human thought embraces the tensions of neo-materialism and repurposes them, by alternatively re- and de-naturalising, strategically, all naturecultural-mediated matter. It, thus, produces a process ontology of cross-species relations that includes the inorganic and the technological apparatus. It foregrounds relationality and difference as the engines for the actualisation of the perennial unfolding of virtual modes of becoming.
Moreover, relationality, as driven by affirmative ethics, turns this intimately collaborative vision of matter into a value, thereby criticising the profit-oriented incursions of contemporary capitalism into life and living matter. Enfleshed subjects are both material and in the process of perpetual becoming: embodied entities are materiality in process, and, also, signify sociality, but, above all, bodies are relational and affective. This means they are capable of incorporating external influences and en/unfolding their own affects outward, in a constant in-between manner. Embodied and embedded subjects in a neo-materialist frame are time machines, as well. They are mobile entities in space and time, enfleshed memories capable of lasting through discontinuous variations of intensity, while remaining faithful to their ontological core. That core is the desire to persevere in one's existence, which forms the basis for an ethics of affirmation. What is affirmed is desire, freedom, and becoming, and what makes the actualisation of these values possible is the force of the virtual as the structural capacity of all entities to differ from themselves, as argued above. This non-unitary structure, however, is not framed by an ontology of negativity, antagonism, and lack (as in the Hegelian and Lacanian paradigm). It is, rather, supported by ontological positivity, a non-binary notion of difference and the idea of desire as plenitude and generative excess (as in the Spinozist-Deleuzian paradigm). Critical Spinozism is of the essence for the case for vital neo-materialism, Spinoza's central idea being that we, humans and non-humans, are all part of a common matter or nature. There is no mind-body dualism, but rather a continuum and also a parallelism between mind and matter as well as nature and society, in that all matter is capable of affecting and being affected. Spinozist philosophy produces a careful renaturalisation of subjectivity, which challenges the reductive reading of scientific reason. It, also, refuses to see the political sphere of the polis as being dualistically opposed to the state of nature (physis). Last but not least, it de-links the ethics and politics of the human animal (bios) from the non-human dimension (zoe). Spinozist-Deleuzian materialism bridges all those divides. Matter and thought are different but equal attributes and expressions of the same substance, linked by productive resonances. This produces an environmentally integrated form of trans-individuality, and a non-unitary vision of the subject as a heterogeneous assemblage. Obviously, our relationship to the natural continuum is affected by the historical social context in which we live. Nature is immersed in history and social structures, and vice-versa, without dualistic oppositions. What gets foregrounded is the process of constitutive trans-individuality of all entities, human beings included, thereby rejecting the transcendental power of consciousness as the organising principle and a distinctive human trait (Deleuze 1988, 1990; Balibar 1994). For neo-Spinozist thinkers (Lloyd 1994, 1996), the immanent, naturalistic worldview demands an adequate understanding of one's life conditions, through a process of gradual clarification of the ethical forces at play in one's relationship to the said conditions and their affective charges. Adequate understanding is rational, in the sense of not being superstitious, fanatical, or caught in the delusions of unchecked passions.
The task of reaching an adequate understanding of the conditions that weigh upon us is collaborative and relational. It is driven by 'common notions' that connect us to kindred spirits and link the force of the imagination to the power of reason. The process, therefore, entails a better knowledge of ethology, the physics of bodies, and the validity of ideas. Spinoza applies these basic notions to a political analysis that opposes despotism, authoritarianism, and mob politics, electing democracy as the only system capable of supporting free subjects' quest for adequate knowledge and joyful passions. Spinoza takes critical distance from liberal philosophers, such as Locke and Hobbes, and contests the contractualist model of the social contract-which is, incidentally, also a sexual contract biased against women and LGBTQ+ people (Pateman 1988)-with a more radical idea of democracy from below. These assumptions allow for a post-human "vital politics" (Olkowski 1999; Braidotti 2002; Bennett 2010; Sharp and Taylor 2016). As a consequence, materialism is not an idealised internalisation of the outside world through grids of cultural representations. There is no such thing as an inert outside-of-the-human-be it body, stone, earthworm, or code-whose existence depends on the activities and perceptions of the human mind, although matter does get filtered by a linguistic grid and internalised by humans as a psychic representation. This relational materialism entails a form of philosophical realism, which asserts the existence of entities in the world, independently of the existence of the human mind (DeLanda 2006, 2016). This is a distributed sense of neural agency, which argues that the human mind and the world it inhabits are, inextricably, entangled in a myriad of ways. Of course, the human mind has the ability to perceive and visualise the world, but the concepts and mental representations of the world we form in our minds do not have the power to change the qualities of the entities thus perceived. To open up to and take on the world, however, means to take in the pain of the world, its negative aspects, and its wounds. In his commentary on Spinoza, Deleuze (1988, p. 22) stresses his extensive use of the term 'poison'-which in Latin is 'virus'-to describe the impact of this encounter. Negativity enters our system as we embrace the world: "all the phenomena that we group under the heading of Evil, illness, and death, are of this type: bad encounters, poisoning, intoxication, relational decomposition". They stand for the negativity that undermines the affirmative ethical life, in a "dreadful concatenation of sad passions; first, sadness itself, then hatred, aversion, mockery, fear, despair, morsus conscientiae, pity, indignation, envy, humility, repentance, self-abasement, shame, regret, anger, vengeance, cruelty" (Deleuze 1988, p. 26). Like many diseases, negativity (poison) goes viral and turns the poisoned into toxic poisoners, who bring out the worst in each other. Unethical behaviour destroys our capacity to deploy our relational power and, thus, our capacity to persevere in living; it betrays trust, legal obligations, moral bonds, and emotional accountability. That is the definition of negativity, or ethical evil. The rejection of these sad passions reasserts the ontological positivity of living matter, as a self-differing force that aims at persevering or enduring. Negativity supports, instead, the cult of humiliation, degradation, and the disparagement of life's generative forces.
The neuroscientist Antonio Damasio describes "Spinoza as mental immunologist developing a vaccine capable of creating antipassion antibodies" (Damasio 2003, p. 275, my emphasis). To be immunised against toxic negativity has become even more of an imperative in our world since the COVID-19 pandemic became almost emblematic of the contradictions of the post-human predicament. Here is a human-induced environmental disaster, causing a public health crisis that is shared unevenly across the globe, with disadvantaged groups bearing a disproportionate share of the costs. In addition, the solution proposed is an increase in technological mediation, both via vaccines and bio-medical intervention as well as through information technologies and digital platforms. Some humans-the sexualised, racialised, and naturalised minorities as well as other marginalised groups-have always had to face up to uncomfortable truths through the hardship of their life circumstances. Having had this kind of intensive training to bear and process the negativity thrown at them, they are epistemologically ahead of the rest. They develop their anti-negativity antibodies stoically, as they go and, hence, they know better. Such a critical and creative counterforce gives the 'wretched of the earth' (Fanon 1961) a head start in the historical process of envisaging alternative worlds as well as more just and sustainable social systems. The multiple axes of oppression and, hence, of hurt, humiliation, and pain also contain within them the creative forces that they can generate as motors of transversal and collective transformation. I shall return to them in a later section. This non-representational apprehension of the world is the core of neo-materialist notions of the virtual. The first step of the argument is that there is no such thing as unmarked or inert environmental matter, awaiting socio-cultural coding by a symbolic system dominated by Man/Anthropos, and that human minds are heterogeneous relational structures embedded in these dynamic and auto-poietic agents, as their multiple ecologies of belonging (Guattari 2000). What, then, follows from this structural inter-dependence on non-human factors and forces, is the primacy of the relation itself. In the beginning, there is always differential and material heterogenesis, that is to say, the relational principle of ontological difference defined as differing within a commonly shared matter. The premises for ethical and political accountability, therefore, are immanence, complexity, and heterogeneity, as well as the positivity of difference as the principle of non-one, or complexity. This critique is helpful in redefining the virtual as an affirmative ethics of becoming. Affirmative ethics, as the establishment of mutually empowering relationships based on cooperation and the combination of the specific powers of each entity, aims at increasing each entity's individual capacity to self-preserve against adverse forces. Entities and individuals grow thanks to a collaborative community. The capacity to resist and fight back emerges from the same relational capacities that can also potentially cause harm and discomfort: all we have is others, and our relationship to others is constitutive of ourselves. What binds us-humans and non-humans-together, over and above contractual interests and transactional protocols, is a common propensity to persevere in our existence and increase our relational capacities.
In the absence of such a shared propensity and its spatiotemporal force, which is the virtual, we would be left in the banality of an undifferentiated flux. The virtual as the very practical, ethical, and political urge to 'become otherwise' is what activates matter to be both embodied and embedded and differential and flowing. An ethics of affirmative collaboration is our binding factor. Given this vital potency of material matter, nothing is ever completely actualised, and nothing is totally lost. What is defeated or excluded is not dialectically cut off from the processes of becoming, by being confined into the limbo of nothingness. The dialectics gets this process wrong, by over-emphasising the negative. What is not actualised is just that: a non-potentiated option, which falls asleep, in an ontological slumber that Leibniz describes so well, as different degrees of being-vegetating, hibernating, and going virtual. Until, that is, it is called out again by a collective assemblage, which demands the freedom to become and desires its actualisation. The emphasis on freedom as non-reactive activity driven by the ethics of joy is the key notion. Affirmation is the force that endures, aggregates, and sustains, whereas negativity brings about reaction, disaggregation, and stasis. Affirmative ethics is the affect that binds together the heterogeneous components of complex subject assemblages. It works through the confrontation with and transformation of negativity, in a rigorous and humble praxis, not as a metamorphic flash or a revolutionary leap. Political consciousness is emancipatory, to the extent that it repairs the violence and pain of structural exclusions and injustices. It revives the minoritarian counter-memory of the oppressed, by filling in the blanks of dominant cultural memory and bringing the specific memories of the minorities within that linear order. In this respect, the battle for partial recognition entails processing the pain of injustice and exclusion in a process of affective healing, which is integral to political projects. However, it does not exhaust that political process. Just as importantly, it also mobilises the virtual forces of becoming, by splintering/deterritorialising the consolidated identities that defined them as excluded minorities to begin with. In other words, political consciousness is transformative, if it is allowed to act as a de/reterritorialising agency that dislodges subjects from their sense of selves. The virtual is a praxis that needs to be enacted, a new location that needs to be constructed by subjects, as heterogeneous and praxis-oriented alliances. There is no immediate revolutionary metamorphosis, but rather a praxis, a collective practice of activating both critique and creation as well as resistance and vision, right here and now. The essentialised vision of all identities, including those of empirical minorities, is challenged by a qualitative transformative process that is essentially ethical, in that it allows affirmative ethics to set the politics. The generative force of this anticipatory politics, and the desire to exit the present world, is not nihilistic or reactive but, rather, affirmative. It expresses a deep and trans-historical aspiration to justice and freedom. This is an irrepressible force that will not be squashed or avoided, though it will be subjected to regular and systematic delays as well as boycotts by the opposition.
The Primacy of the Virtual

If matter is not a stable entity, but instead a process of constant self-differentiation in relation to multiple others, then all entities as individuated organisms are bound instantiations of a matter potentially infinite in its modulations. This means that there is always a residual seed of possibility to be actualised in all instances. In other words, the full potency of becoming is never completely exhausted. Or rather, a potential for regeneration is always subtracted from that which has managed to become actualised. An affirmative kind of passivity is at work at the core of vital matter: a preference, a tendency, and an ontological gravitation towards the inexhaustible, which is a heterogeneous compossibility that is neither dialectical nor voluntaristic, but refers to a variable capacity of matter to act. This capacity can be activated, accelerated, or delayed, in relation with others, but shall never be deleted. There is a temporal side to the primacy of the virtual as well: the present is a complex multi-directional process of flowing from virtual past to future perfects, via the continuous present and everything in between. Deleuze (1988) teaches us that the constant en/infolding by the subjects, with their multiple outsides, affects the sedimented strata of past experiences contained within us and activates them. That means it liberates us from the authoritarian hold of the past (oedipalised, patriarchal, Eurocentric, monumentalised), as the main force shaping the present. It defrosts the authority of the past as the main point of reference for the present and, thus, activates many internal virtual pasts. The heterogeneous memories within are not frozen archives, but also points of regeneration: the pasts await actualisation and realisation in the present. These resonances are what shape new processes of becoming. The force of the present-and the core of its intelligibility-is that it does not coincide, completely, with the here and now. Such synchronisation is never complete, since in a neo-materialist vital system, all human and non-human entities are nomadic subjects-in-process, in perpetual motion, immanent to the vitality of self-ordering matter. Approaching the present, therefore, produces a multi-faceted effect: on the one hand, the sharp awareness of what we are ceasing to be (the exhaustion of the actual), and on the other, the perception-in different degrees of clarity-of what we are in the process of becoming (the activation of the virtual). Both phenomena occur at once, in a non-linear time continuum. That amounts to multiplying the present along these parallel plateaus of actual and virtual (Deleuze and Guattari 1991). In other words, thinking about the present makes us not only confront, but also exceed, the immediate conditions we inhabit. If the present is multi-layered and multi-directional, then we are always dealing with the virtual past, what 'we will have been'. We are always projected/projective futures, always delving in a time continuum. Yet, we need enough meta-stability to hold the frame long enough, to draw a cartography of the very conditions of the present that shape and escape us. By extension, philosophy cannot stop at the critique of the actual (i.e., of what we are ceasing to be), but needs to move onto the creative actualisation of the virtual (i.e., of what we are in the process of becoming).
The interplay between the present as actual and the present as virtual unfolds and sustains the process of subject formation, always in a collective, collaborative frame. The conceptual heart of the virtual is a process ontology, driven by the positivity of desire as endurance and affirmation. In addition, since philosophical thinking is about the creation of new concepts, it is a way of actualising the virtual. Thus, thinking is immanent to the world, embedded in the very conditions it is trying to affect and transform-we humans are part of both the problems and the solutions.

Difference as Non-Binary Complexity

Difference is disengaged from dialectics, as a positive, self-generating force internal to all entities-as sets of modulations of a common matter. This difference is ontological and, therefore, immediate, not dialectically mediated, and not oppositional, in the sense of being generated by binary contradictions, antagonistic alterity, or negation. Difference as ontological is not a matter of "either/or", but of "AND... AND..." (Deleuze and Guattari 1987). This concept of difference is irreducible to external or abstract degrees of difference, and it is vital and material, though it does not refer to a reductive biological notion of life. It, rather, is an immanent philosophy of complexity and of multiple specified and situated lives. It is a process of difference, through internal differentiations carried by the inexhaustible force of these immanent lives. These include many non-human categories-from zoe/geo/techno-mediated lives (Braidotti 2019, 2021) to a general ecology of "chaosmosis" (Guattari 1995)-pointing to the vitality of living matter as both actual and virtual; 'we' are in this neo-materialist vital flow of becoming together, but-as I have repeatedly argued-'we' are not one and the same, but differentially individuated and located. Differential materialism is crucial to the politics of immanence and becoming as well as to its feminist, environmentalist, and anti-racist applications (Braidotti 2021). The political is defined by this affirmative ethics of actualising the virtual, which cannot just repurpose existent realities and social conditions-that is to say, the present as that which has already been actualised and, hence, also, as the record of what we are already ceasing to be. The politics of immanence, rather, states that the conditions for the overturning of negative realities cannot be drawn from the present as the actual, since the possibility for renewal does not emerge, dialectically, from the present conditions. They need to be constructed in the present as the virtual, that is to say, as that which they are capable of becoming, through affirmative ethical encounters. This is not so much accelerationism, but actualisation and active deterritorialisation: we need to borrow transformative energy from the future, in order to articulate a vital materialist philosophy, which combines resistance to real-life historical conditions with visions for alternative futures. The positivity and non-dialectical structure of difference grounds, also, the relation between humans and non-humans. Non-humans are constitutive of the heterogeneous assemblages that compose human subjectivity. As argued above, non-humans are both organic and technological as well as integral to the activity of thinking. Thinking is a relational gateway to the openness of zoe-the non-human life that does not bear a human name, let alone an individual name.
Thinking is the stuff of the world (Alaimo 2014). In addition, by taking place in the world, it is accountable to multiple constituencies, not only the academic community. All the more so today, when knowledge is being produced across a broad range of social, corporate, activist, artistic, and mediated locations, as well as in specialised scientific, technological, and academic settings. Of course, there is a qualitative difference between accepting the structural interdependence among species and actually treating non-humans as knowledge collaborators. However, the point here is that this is precisely what we need to learn to do, since we live in the age of computational networks and synthetic biology on the one hand, and climate change and socio-economic polarisations on the other. Granting equal status to natural and non-human organisms is an explicitly post-anthropocentric move, which entails conceptual and methodological transformations. It also requires defamiliarisation from established habits of thought and anthropocentric mindsets, by offering more adequate concepts to deal with the ecological environment, media-nature-culture continuums, and non-human others. This is a crucial aspect of the post-human ethical and aesthetic sensibility. It extends, also, to keeping the importance of the inhumane aspects of the post-human predicament high on the agenda, notably, the status of devalorised and dehumanised others. This is a feature of the necropolitical governance of life in cognitive capitalism (Braidotti 2019). The political imagination plays a crucial role in actualising movements/defamiliarisations and transformative becomings. Actualising the virtual is a gesture of conceptual creativity that enlists resources other than analytical reason. It includes an intensive, qualitative dimension, which connects to the virtual totality of a block of past experiences and affects and activates them as action in the present, thereby realising their unfulfilled potential. This mode of affirmation is an exercise in temporary and contingent synchronisation, which sustains in the present the activity of actualising the virtual. In other words, this virtual intensity is, simultaneously, after and before us, in a flow of mutation, differentiation, or becoming, which is the vital material core of thinking. There is also a speculative element at play in this reactivation of memories, as collective imaginings (Gatens and Lloyd 1999), which foregrounds the importance of creativity, literature, and the arts, as vehicles of philosophical enquiry. The strategy of disidentification or defamiliarisation is, also, a crucial tool of critique of the in-built power of dominant narratives and entrenched habits of thought. Disidentification can be seen as a creative form of unlearning "unearned privileges", through disengagement from the institutions of power and knowledge (Spivak 1990). The impact of disidentification is that it triggers both critical and creative visions as well as the imagining of becoming a world together. This approach sets a subtle balance between the negative critique of the power (potestas) of the present-as the record of what we have been and, hence, of what we are ceasing to be-and the visionary energy of what we are in the process of becoming. Taking seriously the definition of desire as positive and power as enabling (potentia), actualising the virtual activates empowering creative alternatives to the objectionable present.
It is a political praxis of taking in the pain and damages of the world at present, step by step. This is not to deny the importance of the negative, but, rather, to assign it an analytic, not a substantive, force. We are, ontologically, oriented towards the affirmation of our innermost freedom-the freedom to become all we are capable of, all our bodies can take. This, also, means that binary opposition is secondary and any dialectical model of conceptualising difference obscures, ignores, or denies the positive force of actualisation, which constitutes the relational core of matter. We need to proceed, therefore, by gradual degrees of disengagement, from what is considered as the dominant or, even, the natural or normal state of affairs, events, and values. This is a crucial ethical project, of anticipating better futures through the unfolding of the virtual as affirmative ethics.

To Constitute 'a Missing People'

The point of the virtual is to compose a missing people, a complex subject formation that aims at producing its own lines of actualisation. These actualisations are produced by the transversal assemblages of a missing people, a 'we'-embodied, embedded, relational, and affective, bonded by affirmative ethics as communal praxis. The conceptual distinction between the perception of what we are ceasing to be-the present as the record of the past-and that which we are in the process of becoming-the present as the unfolding of the virtual/the future-offers critical and creative margins of intervention. They join forces in producing the multitude of ways in which the human is currently being recomposed. However, who is this 'we', whose subjectivity is now at stake? What are 'we' capable of becoming as a species, and as a set of technologically inter-linked material cultures? Embodied differently and embedded in diversity, relational and affective, depending on what our bodies can do, 'we' are not a unitary entity, but a materially differential one. Within the post-human predicament, we need to focus our collective efforts upon the projects of defining what 'we' could become as a species and a set of technologically inter-linked material cultures. The aim is to track the multiple, grounded, and, hence, specific and diversified ways, in which 'we' are becoming knowing subjects, as 'otherwise other', rather than the dialectical oppositions and pejorative differences posited by classical Humanist 'Man' and the supremacist assumptions of 'Anthropos'. This position has several consequences: the first is that there is not, nor does there need to be, a panhumanity. The 'human' never was a universal or neutral term to begin with. It is, rather, a normative category that indexes access to privileges and entitlements. Appeals to the 'human' are always discriminatory: they create structural distinctions and inequalities among different categories of humans, let alone between humans and non-humans. Secondly, it is inappropriate to take the post-human predicament either as an apocalyptic or intrinsically subversive category. This way of narrowing our options down, to the binary of extinction versus liberation (of the human), misses the point of this convergence. We need to resist, with equal lucidity, this double fallacy and embrace, instead, multiple accounts of embodied, embedded, relational, and affective processes of posthuman subject-formation. They, in turn, enable subtler and more complex cartographies of power and discourse.
They start by questioning who 'we' might be, whose anxiety may take the form of calling for a new humanity bonded in fear and vulnerability; 'we', who may well be in this predicament together, but are not one and the same. The operational 'we' that I propose begins with the composition of a missing people, who embrace the common cause of resistance to the negativity of the present, by co-constructing affirmative modes of relation and values. This is a collective praxis, not an individual psychological disposition, and one which is sustained by the ethics of affirmation. The political imagination intervenes here as the motor of the virtual. It is the over-flowing anticipatory force that injects much-needed doses of hope for the future, affirmative visions of possible alternatives. They are fuelled, but not saturated, by the negative experiences, in so far as they demonstrate the ability to rework them collectively, as seeds of becoming. The politically transformative gesture consists of empowering creative 'counter-actualisations', or affirmative alternatives. Thus, vital neo-materialism and post-human theory focus, through critical and creative cartographies, on the margins of the expression of yet-unrealised possibilities, by concentrating on the challenge of heterogeneous subject formations. Affirmative ethics as collective praxis guides this politics. The process is driven by the actualisation of the virtual and the constitution of heterogeneous trans-subjectivities, in order to sustain the collective effort of differing affirmatively from the present. I have also defined these formations as trans-individual, trans-cultural, trans-species, trans-sexual, trans-national, and trans-human modes of subjectivity. Affirmative ethics is a praxis that begins with the production of adequate knowledge about the present, in order to critique it and resist it. Adequate understanding of our life conditions is the faculty that grants us freedom from fatalistic determinism, through the force of the understanding. It is capable of providing qualitative differentiations between instances, ideas, and relations. This approach is modelled on Spinoza's ethics of joy, in that it connects adequate understanding to the analysis of our bondage, i.e., of power as potestas. Providing criteria to clarify such distinctions between negative/entrapping modes of relation and the affirmative/empowering ones amounts to mapping different speeds of becoming. It, also, involves the ethical coding of different forms of knowledge and of detecting the possibility of enacting the yet-untapped possibilities of the virtual. To be accountable for the present, to be worthy of it, is neither a passive acceptance of the status quo, nor a flattening out of our differential locations. It is, rather, a multiplication, a complexification of the work of critical thinking, based on the generation of alternative processes of becoming. The source of affirmative ethics is the necessity to extract knowledge from pain, in order to make the actualisation of the virtual into a concrete possibility. To come to terms with failure, in order to "fail again, fail better" (Beckett 1989) and to reconstruct, again and again. This is why Deleuze reads Spinoza so carefully, so lovingly, and so pragmatically, without any romanticism about what is entailed in the process of affirmation (Deleuze 1988, 1990). This is a practical philosophy that aims at transforming the debris and the ruins into workable possible systems: despair into praxis.
Confrontation with negativity and processing the pain are the means by which we achieve adequate knowledge about the condition we wish to overturn or modify. Critique is also clinical, as it is about detoxifying us from the effects of the negative. As I mentioned earlier, entire chapters of Spinoza's Ethics are about poison, sickness, and death, as well as about feeling diminished by the times that you are living in, which decrease our ability to act, to take in and take on the world. Microfascism, according to Deleuze and Guattari (1987), is such a decrease in our desire for freedom-an opaque sadness and impotence that settles into our souls and saps our life energies away. Negativity makes you feel disoriented and diminished. Such a negative affect signals that you are diminished in your power of relating to the very conditions that engender your existence. It points to a deficit in relationality, since your relation to power has been squashed, squeezed, and chopped up by the nastiness, the violence, and the vulgarity of the times. Affirmative ethics labours as a practical exercise, to go beyond that disempowering mode of relation. It then entails the effort to activate, in a stubborn and empowering relation with others, the force of the virtual, which is to say, the awareness that "yes, we are against aspects of the present, but we are already in the process of becoming something else". That "yes" is not a demented beatific acceptance of what is already the case, but rather the joyful counterpoint that leads to implementing yet-unexplored alternatives. It means: I prefer not to comply. The task of critique is to actually create a missing people as a heterogeneous assemblage, gravitating around affirmative ethics.

To Escape the Epistemic Accelerationism of Cognitive Capitalism

The social, environmental, and affective contexts, within which the double accelerations of advanced technologies and climate change are taking place, are anything but abstract. They are rooted in the grounded conditions of advanced capitalism. It is undeniable that contemporary vital materialism and its post-human philosophy resonate with a bio-genetic and technologically mediated economic system, which is threatening the survival of the globe. However, that does not mean that they are just the expression of the schizoid speed and accelerations of this system. They, rather, exceed the conditions that engender them and are not saturated by the present state of affairs. They negotiate with the conditions of the present, as both actual and virtual, in order to repurpose them. Applied to the discussion of the contemporary political, this means that the crucial problem is the different speeds of de/reterritorialisation by cognitive capitalism and the toxic saturation of the present, which act to the detriment of the actualisation of the virtual. The violent erasure, or passive-aggressive blockage, of our collective desire to express and materialise virtual potentials affects both subject formations and knowledge practices in society. They actually disorient and diminish us. Accelerationism is a possible strategy in this regard. It marks a full immersion into the immanence, with the aim to overtake the paths and flows of capital (Noys 2014). Thus, radical accelerationism calls for an inhuman form of rationalism that privileges the computational abilities of technological apparatus-notably its algorithmic logic-in the hope of turning them against profit and exploitation (Williams and Srnicek 2015).
It is one thing, however, to argue that one way to defeat capitalism is by exacerbating and radicalising its contradictions, in the hope of making it implode. It is quite another to advocate the pursuit of annihilation as the only strategy, coupled with the enjoyment of violence (Land 1992, 1993). This position, which Achille Mbembe (2017) has labelled "negative messianism", is a contemporary authoritarian position, populist both on the right and the left of the political spectrum. Such a stance has nothing in common with the project of affirmative ethics, an ethics that critiques power and invites us to cultivate empowerment, as the actualisation of affirmative relations and projects. Feminism, antiracism, radical ecology, and anti-fascism are among the political movements that have clearly stated their commitment to creating alternatives. Affirmative ethics is neither an endorsement of the shallow optimism of advanced capitalism nor an accelerationist strategy, though it is closer to the latter. It, rather, focuses on the construction of subjectivity as a differential, grounded perspective, which must encompass non-human forces and strike its own meta-stable alliances, within the flows of the deterritorialisation of advanced capitalism. As a critical and creative relational field, the virtual as political praxis actualises multiple possibilities, which evade the profit-led accelerations of capital and work within it to go elsewhere. It functions at different speeds, moves on different timelines, and is fuelled by different ethical affects, which do not always coincide with the surplus-value profit motive. It is opposed to the axiom of profit and the maximisation of the capital consumption of living matter, instead designing an alternative horizon of becoming. Since power is a multi-layered and dynamic entity, and since, as embedded, embodied, relational, and affective subjects, we are immanent to the very conditions we are trying to change, we need to make a careful ethical distinction between different speeds of both theoretical production-with the predictable margins of institutional capitalisation-and the construction of alternative knowing subject formations. These heterogeneous missing peoples are transversal subjectivities that interact and negotiate with the techno-social, psychic, and natural environments as well as resist overcoding by the capitalist profit principle and the structural inequalities it entails. Taking 'living matter' as a zoe-/geo-/techno-centred process, transversal subject assemblages activate counter-proposals about what they are capable of becoming, which actualise the unrealised or virtual potential of a 'missing people'. Neo-materialist immanence, therefore, mobilises this transversal collective ability to produce knowledge otherwise, as well as in relation to other species. Zoe-/geo-/techno-centred egalitarianism is the core of a post-human thought that might inspire, work with, or subtend informational and scientific practices as well as resist the full-scale commodification of life by advanced capitalism (Braidotti 2006). The barrier against the negative, entropic frenzy of the capitalist axiomatic is provided by the grounded and transformative politics that ensue from the ethics of affirmation. In this regard, a neo-materialist vitalist position offers a robust rebuttal of a system which is overcoded by the profit-minded axioms of bio-mediated, cognitive capitalism.
Conclusions Actualising the virtual is a way of giving a measure of the possible, which is not a negative injunction of the present, but rather an affirmative gesture towards possible patterns of becoming. In some ways, it is a leap of faith in what heterogeneous assemblages of humans and non-humans may be capable of. What does it mean that you trust and love humans, not only for what they are-and are already ceasing to be-but also for what they are capable of becoming? It means to embrace an ethics of affirmation as the collaborative co-construction of horizons of hope. In my affirmative philosophy, this is a way of expressing the inexhaustible collective energy of those who are tired of the status quo. They instantiate the virtual possibilities of becoming, which are not completely blocked by the negativity of the present conditions. This positive becoming expresses a trust in the future, which allows us-the heterogeneous collective subjects-to 'back cast' paths of becoming from it. This is opposed to a teleological 'forecasting' from present to future, which imposes a programme of linear development onto these processes and, thus, preempts the unexpected consequences they could mobilise. This reveals the true meaning of the notion of amor fati, which is no passive acquiescence, but an active passion for the others of Man, as harbingers of possible futures, as well as the social engineers of alternative patterns of becoming and new imaginaries. What does it mean to be enamoured of the virtual, the eventual, and the ephemeral possibility of alternatives, which seem to be flatly contradicted by everything that is going on in reality right now, in a world that is drowning, burning, cracking, and suffocating? It means to be not only disenchanted with the old patterns of oppression, but also in love with the joyful possibilities of endurance and the overturning of negativity. The affective language is no coincidence: affirmation is a shared collective passion that extracts hope from the ruins of disenchantment, with dogged and slightly irritating conviction. This praxis of forging communal solutions, through the confrontation of uncomfortable truths, is central to the critical edge of the ethics of affirmation. Accepting our shared exposure to ways of living and dying together, amidst human-led environmental and public health disasters, is also the starting point for a process of assessing what binds us together as a community. Beyond solipsistic fantasies, post-human thought as the actualisation of the virtual is a radical democratic project that combines critique with a struggle for community and social justice. Since this is the only world we have, 'we'-a missing people in the constant process of being constituted-have to be worthy of it, embrace it, the better to transform and take care of it. Or so I hope.
The application of dynamic scanning in target detection and imaging based on an NR-PC flat lens In this paper, a finite-difference time-domain method is used to model and analyze the application of dynamic scanning in target detection and imaging using an effective negative-refraction photonic crystal (NR-PC) flat lens. The results show that there is a transmission peak, with a value far greater than unity, resulting from the influence of mini-forbidden bands and the resonance excitation effect at a resonance frequency of 0.3068 (a/λ). Thus, the lightwave emitted from the point source will produce strong backscattered waves after being focused on the target by the NR-PC lens, which greatly improves the refocusing resolution and imaging resolution of the backscattered wave. Furthermore, a comparison with a non-dynamic scanning scheme clearly demonstrates that the dynamic scanning scheme provides improved refocusing resolution. In conclusion, our investigation optimizes the performance of a detection and imaging system, and provides a basis for converting an idealized LHM lens into a physically realizable NR-PC flat lens. Initiated by the recent microwave experimental demonstration [1] by Smith et al. of the unique electromagnetic properties of Veselago's negative-refractive-index materials (NRIM) [2], there has been a progressive increase in experiments to verify these unique properties [3,4]. Because the NRIM have negative permittivity and permeability, and the electric and magnetic field vectors form a left-handed set with the wave vector, such artificial materials are also referred to as left-handed materials (LHM). The realization of the LHM has provoked great interest in many specific applications, such as in the area of near-field target detection and imaging. Among these applications of LHM, focusing with lenses that have flat surfaces has been proposed [5]. Theoretical analysis and numerical simulations [6] indicated that the so-called perfect lens made of lossless LHM may achieve a focus resolution overcoming the optical diffraction limit. Generally, higher focus resolution yields better imaging resolution [7]. Though, in theory, it is quite reasonable to use an LHM flat lens for high-resolution near-field target detection and imaging [7], there is still much uncertainty about whether such materials actually exist in nature. Using different methods, Notomi has demonstrated 2D photonic crystals exhibiting negative-index or negative-refraction effects [8], namely negative-refraction photonic crystals (NR-PC). From these results we know that negative refraction usually occurs in the band gap of the equifrequency surface (EFS) in k space, because the size of the contour of the EFS in k space decreases with increasing frequency. The contour of the EFS up to a certain frequency takes a quasi-circular shape [9], which means that light propagating in the PC behaves like light propagating in an isotropic medium at these frequencies. Hence, an effective negative index of refraction (n_eff) can be used to describe the propagation of light with frequencies falling into a certain frequency range for a given NR-PC, which could then be used as a flat lens in near-field target detection and imaging [10,11].
The two-dimensional photonic crystal structure examined in this paper, as shown in Figure 1(a), is formed by periodically drilling 7 rows (along the Z-axis) of 30 identical air holes (along the X-axis) in a GaAs matrix with a dielectric constant ε = 12.96 (n = 3.6). The air cylinders form a triangular array, and the radius of the air cylinders is 0.4a (a represents the lattice constant). The refractive index of the photonic crystal is calculated for the TM mode and is drawn in Figure 1(b) using the algorithm in reference [8]. It is clear that n_eff changes with the normalized frequency ω = a/λ. From Figure 1(b), we know that when n_eff takes the value of -1, the corresponding normalized frequency ω is about 0.3068. Based on the phenomenon of negative refraction, it is easy to see that the NR-PC flat lens and the LHM flat lens [5] share the same image-forming principles. As shown in Figure 1(a), the lightwave emitted from a point source on one side of the NR-PC lens can be focused at one focal point F_1 inside the NR-PC lens, and then at another focal point F_2 outside the NR-PC lens. When a target is brought to the focal point F_2, the lightwave from the source is focused on the target by the flat NR-PC lens, and the target backscatters the focused lightwave, which is then refocused in the vicinity of the source point by the same flat NR-PC lens. Because the focal point F_2 can easily be controlled by moving the point source parallel to the surface of the lens or by adjusting the distance between them, the target can easily be scanned, and its image in the vicinity of the source point is significantly enhanced. Generally, the complete lightwave recorded at each receiving point is the combination of three parts, i.e. the wave emitted from the source, the wave reflected from the entrance and exit surfaces of the NR-PC lens (for an NR-PC with n_eff ≠ -1), and the refocused backscattered lightwave (the scattering signal). Thus, the scattering signal is acquired by taking the difference between what is recorded by the detector with and without a target at the focal point F_2. This means that target detection and imaging can be realized, with a high resolution ratio and without a complicated imaging algorithm, simply by computing the lightwave distribution of the scattering signal from the target [6]. In this paper, we mainly discuss the target detection and imaging properties of the dynamic scanning system using the NR-PC flat lens. Numerical simulations with the 2D finite-difference time-domain (FDTD) method show that a sharp transmission peak of the lightwave appears at the resonance frequency 0.3068 (a/λ) for the NR-PC flat lens. In addition, the lightwave backscattered from the target is greatly enhanced, which significantly improves the lateral refocusing and imaging resolution and, as a result, optimizes the performance of the target detection and imaging system. The design and calculation models of the NR-PC flat lens are further studied using the dynamic scanning scheme. A detailed comparison with non-dynamic scanning is helpful in evaluating the specific merits of the dynamic scanning scheme for target detection and imaging. Design and calculation models of the NR-PC flat lens Maxwell's equations in the photonic crystal can be written as ∇×E = −µ(r)µ_0 ∂H/∂t and ∇×H = ε(r)ε_0 ∂E/∂t, where E and H are the electric and magnetic field vectors of the electromagnetic wave, the permittivity ε(r) is the relative dielectric constant, and µ(r) = 1 is the relative magnetic permeability.
The FDTD method [12] is well established, offering high accuracy at a comparatively modest computational cost, and it has been widely used to study the characteristics of electromagnetic waves in NR-PC. Its fundamental principle is that Maxwell's equations are first expressed as scalar equations for the electric and magnetic field components in Cartesian coordinates, and the differential quotients are then replaced with difference quotients accurate to second order. In our simulation, a perfectly matched layer (PML) is used in the X and Y directions as the boundary condition [12]. Because these equations are functions of space and time, they can be discretized in the space and time domains by the Yee-cell technique and used to find field solutions numerically; a bare-bones sketch of such an update scheme is given below. It is noteworthy that the mode of light propagation in photonic crystals (PC) is very different from that in LHM. For an LHM with refractive index n = -1, there is no reflection at the air-LHM interface; however, light will experience multiple reflections and refractions at the air-PC interface, even for an NR-PC with n_eff = -1, leading to great losses, i.e. much lower transmissivity, for light propagating through the NR-PC. To optimize the performance of the focus-scanning scheme, an effective way to improve the transmissivity is needed. A corresponding investigation of raising the transmissivity has already been presented in our previous work published in Optik [13]. When the center frequency (ω_p) of the wave source is set at 0.3068 (a/λ), the transmission is enhanced dramatically, with the transmission coefficient reaching up to 4500 at the frequency point 0.3068 (a/λ). The physical mechanism can be explained by the redistribution of optical energy. Incident optical waves of different frequencies experience intense Bragg scattering upon entering and propagating through the NR-PC because of the periodic distribution of the negative-refraction media, which results in mini-forbidden bands and a photonic tunneling effect for a given NR-PC [14,15]. At the same time, the optical energy is highly localized, and high transmissivity appears at the resonance frequency. Refocusing of the backscattered wave in target detection and imaging from an NR-PC flat lens The focusing-refocusing property is a very important performance parameter that supports the use of the NR-PC flat lens for lightwave target detection and imaging. By taking the measured width at 0.707-maximum of the normalized field intensity of the refocused beam profile as the definition of resolution [16], the performance of the target detection and imaging system based on the NR-PC flat lens can be further evaluated. In particular, for the detection and imaging of a small target at an early stage, high sensitivity is very desirable; thus, our investigation may have great significance for imaging systems. In our FDTD simulations, the center frequency of the point source is set to 0.3068 (a/λ) for high transmissivity. Meanwhile, we consider the typical situation where a point source and a detector are set to move together along the scanning line z = −λ with intervals of Δx = 0.2 μm. The 2D flat NR-PC lens with a thickness of d = 2λ is set at 0 ≤ z ≤ 2λ, and a target consisting of a perfect-electric-conductor (PEC) square with a side length of L = 1/3λ is located at the focal point F_2.
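As a concrete illustration of the Yee-cell discretization just described, the following Python/NumPy fragment implements a bare-bones 2D FDTD (TM mode) update with a soft point source. It is a pedagogical sketch only: the grid size, time steps, source position and frequency are placeholder values of our own, the permittivity map is left uniform, and the PML absorbing boundaries used in the actual simulations are omitted for brevity.

import numpy as np

# Minimal 2D FDTD sketch for the TM mode (Ez, Hx, Hy) on a Yee grid.
# Normalized units with c = 1 and spatial step dx = 1.
nx, ny, nsteps = 200, 200, 400
dx = 1.0
dt = dx / np.sqrt(2.0)        # 2D Courant stability limit
eps = np.ones((nx, ny))       # relative permittivity; set 12.96 inside GaAs regions
ez = np.zeros((nx, ny))
hx = np.zeros((nx, ny - 1))
hy = np.zeros((nx - 1, ny))

for n in range(nsteps):
    # H updates from the curl of Ez (mu = 1 everywhere)
    hx -= dt * (ez[:, 1:] - ez[:, :-1]) / dx
    hy += dt * (ez[1:, :] - ez[:-1, :]) / dx
    # Ez update in the interior from the curl of H
    ez[1:-1, 1:-1] += (dt / eps[1:-1, 1:-1]) * (
        (hy[1:, 1:-1] - hy[:-1, 1:-1]) - (hx[1:-1, 1:] - hx[1:-1, :-1])
    ) / dx
    # soft continuous-wave point source at a placeholder frequency
    ez[nx // 2, 20] += np.sin(2 * np.pi * 0.05 * n)

Without absorbing boundaries, waves reflect off the domain edges; a PML layer, as used in the paper's simulations, would suppress those reflections.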
First, the performance of the complete lightwave is investigated. The simulation diagram of the complete lightwave field in the computation area and its corresponding field intensity distribution along the scanning line z = −λ are depicted in Figures 2(a) and (b). From the arrow-headed lines in Figure 2(a), it is clear that the imaging of the NR-PC lens obeys geometrical optics. Furthermore, there is a one-to-one correspondence between the energy distribution of the complete lightwave field (Figure 2(a)) and the distribution curve of its field intensity (Figure 2(b)), and the maximum field intensity on the line z = −λ is obtained in the vicinity of the point source. Figure 2 depicts the properties of the complete lightwave, whereas the main role of the NR-PC flat lens in target detection and imaging is actually embodied in the characteristics of the scattering signal from the target. Therefore, to obtain the scattering wave, we subtract the fields recorded without the target at F_2 from the fields recorded with the target at F_2. Then, with all other parameters kept the same as before, we further contrast the properties of the scattering signal with those obtained when ω_p = 0.2068 (a/λ). Detailed comparisons are shown in Figure 3. By measuring the full width of the beam profile at 0.707-maximum as given in Figure 3, we find that the refocusing resolutions are approximately 0.3718λ and 1.59λ for ω_p = 0.3068 (a/λ) and ω_p = 0.2068 (a/λ), respectively. The lateral refocusing resolution at 0.3068 (a/λ) is thus roughly four times finer than that at 0.2068 (a/λ). This is expected theoretically: the scattering signal of the target is greatly enhanced because the backscattered wave has much higher transmissivity in the narrow band around the frequency 0.3068 (a/λ). This results in a significant enhancement of the lateral refocusing and imaging resolution, and optimizes the performance of the focus-scanning scheme. We now see that the concept of a flat NR-PC lens is physically sound and experimentally feasible. Importance of dynamic scanning in target detection and imaging using the NR-PC flat lens It should be noted that the research outlined above was based on the dynamic scanning scheme. This differs from the non-dynamic scanning scheme, in which the point source stays static and only the detector moves along the scanning line; the dynamic scanning scheme requires the point source and the detector to move together. Further comparisons between the two scanning schemes are helpful for evaluating the superiority of the dynamic scanning scheme in target detection and imaging. In the following simulation, we consider the above-defined flat NR-PC lens of thickness d = 2λ and the point source with ω_p = 0.3068 (a/λ). Moreover, square PEC targets at the focal point F_2 with side lengths of L = 1/6λ, L = 1/10λ and L = 1/30λ are detected separately using the two scanning schemes. The corresponding beam profiles of the normalized field intensity of the scattering signals are presented in Figure 4. As shown in Figure 4, the solid curves indicate the dynamic scanning scheme, and the dashed curves depict the non-dynamic scanning scheme. From Figure 4 and Table 1 we know that the dynamic scanning scheme achieves better refocusing resolution than the non-dynamic scanning scheme in target detection and imaging.
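The resolution figure used throughout, the full width of the normalized beam profile at 0.707 of its maximum, can be extracted from a sampled intensity curve along the scanning line as follows. This is an illustrative helper of our own; the variable names and the linear interpolation between grid points are conveniences, not part of the paper's procedure.

import numpy as np

def width_at_level(x, intensity, level=0.707):
    # Full width of a single-peaked profile at `level` times its maximum,
    # with linear interpolation between the bracketing samples.
    y = np.asarray(intensity, dtype=float)
    y = y / y.max()
    peak = int(np.argmax(y))
    def crossing(indices):
        prev = peak
        for i in indices:
            if y[i] < level:
                t = (level - y[i]) / (y[prev] - y[i])
                return x[i] + t * (x[prev] - x[i])
            prev = i
        return x[prev]          # profile never drops below the level
    left = crossing(range(peak - 1, -1, -1))
    right = crossing(range(peak + 1, len(y)))
    return right - left

# Example: a Gaussian profile, whose 0.707-level width is about 0.83 here
x = np.linspace(-3, 3, 601)
print(width_at_level(x, np.exp(-x**2 / 0.5)))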
When using the dynamic scanning scheme to detect a square target with side lengths of L = 1/6λ, L = 1/10λ and L = 1/30λ, there were approximately 6%, 24% and 25% improvements in the refocusing resolution, respectively, compared to the non-dynamic scanning scheme. In other words, the advantage of the dynamic scanning scheme grows as the target size decreases. In addition, regardless of which scanning scheme is chosen, the refocusing resolution improves with increasing target size, as expected on physical grounds. To further demonstrate the important role of dynamic scanning in target detection and imaging, we changed the shape of the targets, applying cylindrical PEC targets with diameters of D = 1/6λ, D = 1/10λ and D = 1/30λ. The refocusing properties, compared with those of the square targets, are presented in Figure 5 and Table 2. As shown in Figure 5, the solid curves indicate the situation where the square targets were under detection, while the dashed curves depict the situation when the cylindrical targets were used. According to [7], for the LHM lens, the dynamic scanning scheme has a scanning resolution of 0.257λ when detecting a PEC cylinder target with a diameter of D = 1/6λ. For the NR-PC lens, the refocusing resolution was approximately 0.2564λ, which is nearly equal to that of the LHM lens. Therefore, the NR-PC flat lens and the LHM lens have approximately equal imaging accuracy when using the dynamic scanning scheme. From Figure 5 and Table 2 we know that the refocusing resolution obtained by detecting the cylindrical target is far superior to that of the square target. Conclusions On the basis of the 2D FDTD method, we applied a dynamic scanning scheme to study the characteristics of the NR-PC flat lens. It was demonstrated that, because of the influence of the mini-forbidden band and the resonance excitation effect, high transmissivity appears at the resonance frequency of 0.3068 (a/λ) when the lightwave passes through the NR-PC lens. In addition, the focusing characteristics of the NR-PC lens and the exponential amplification of the evanescent wave [5] make the scheme quite efficient in enhancing the backscattered wave, which leads to significantly enhanced refocusing and imaging resolution. The detailed performance analysis demonstrated that the dynamic scanning scheme is superior to the non-dynamic scanning scheme in target detection and imaging. Furthermore, the NR-PC flat lens and the LHM lens have approximately equal imaging accuracy when using the dynamic scanning scheme, and the refocusing resolution obtained by detecting a cylindrical target is greatly superior to that of a square target.
Specification procedures for multivariate stable-Paretian laws for independent and for conditionally heteroskedastic data We consider goodness-of-fit methods for multivariate symmetric and asymmetric stable-Paretian random vectors in arbitrary dimension. The methods are based on the empirical characteristic function and are implemented both in the i.i.d. context as well as for innovations in GARCH models. Asymptotic properties of the proposed procedures are discussed, while the finite-sample properties are illustrated by means of an extensive Monte Carlo study. The procedures are also applied to real data from the financial markets. Introduction Stable-Paretian (SP) distributions are extremely important from the theoretical point of view, as they are closed under convolution and are the only possible limit laws for normalized sums of i.i.d. random variables. In fact, this last feature makes SP laws particularly appealing for financial applications, since stock returns and other financial assets often come in the form of sums of a large number of independent terms. Moreover, the empirical density of such assets is leptokurtic and in many cases skewed, thus making the family of SP laws particularly suited for related applications; see for instance the celebrated stability hypothesis that goes back at least to Mandelbrot (1963) and Fama (1965). The above findings prompted further research on the stochastic properties and inference of SP laws and on their application potential. The reader is referred to Samorodnitsky and Taqqu (1994), Adler et al. (1998), Uchaikin and Zolotarev (1998), Rachev and Mittnik (2000) and Nolan (2020) for an overview of the stochastic theory, statistical inference and applications of SP laws. In the aforementioned works, statistical inference and applications are mostly restricted to univariate SP laws. The respective topics for SP random vectors are much less explored. In this connection, certain distributional aspects of SP random vectors are discussed in Press (1972b), while Press (1972a) defines moment-type estimators of the parameters of a multivariate SP distribution utilizing the characteristic function (CF). Maximum-likelihood estimation is discussed by Ogata (2013) and Nolan (2013), and Bayesian methods by Tsionas (2016); Lombardi and Veredas (2009) use indirect estimation methods, whereas Koutrouvelis (1980), Nolan (2013) and Sathe and Upadhye (2020) consider CF-regression methods. As far as testing is concerned, the only available formal method seems to be the CF-based goodness-of-fit test of Meintanis, Ngatchou-Wandji, and Taufer (2015).
In this article we propose CF-based goodness-of-fit procedures for SP random vectors in the elliptically symmetric case. Moreover we also consider the asymmetric case, for which, to the best of our knowledge, goodness-of-fit tests have not been considered before. The remainder of the article unfolds as follows. In Section 2 we introduce a general goodness-of-fit test for elliptically symmetric SP laws. In Section 3 we particularize this test in terms of computation. Section 4 addresses the problem of estimation of the SP parameters and the study of asymptotics of the proposed test, while in Section 5 we extend the test to a multivariate GARCH model with SP innovations. The results of an extensive simulation study illustrating the finite-sample properties of the method are presented in Section 6. In Section 7 the case of the asymmetric SP law is considered, with known as well as unknown characteristic exponent, by means of a different testing procedure that avoids integration over a complicated CF and computation of the corresponding density. The impact of high dimension on the test statistic is also discussed in this section. Applications are given in Section 8, and we conclude in Section 9 with a discussion. Some technical material is deferred to an online supplement, along with additional Monte Carlo results. 2 Goodness-of-fit tests Let X be a random vector in general dimension p ≥ 1, with CF ϕ(t) = E(e^{it⊤X}), t ∈ R^p, i = √−1. Here we consider goodness-of-fit tests for the elliptically symmetric SP law. In this connection we note that SP random vectors are parameterized by the triplet (α, δ, Q), where α ∈ (0, 2] denotes the characteristic exponent, and δ ∈ R^p and Q ∈ M_p are the location vector and dispersion matrix, respectively, with M_p being the set of all symmetric positive definite (p × p) matrices. On the basis of i.i.d. copies (X_j, j = 1, ..., n) of X we wish to test the null hypothesis
H_0: X ∼ S_α(δ, Q), for some δ ∈ R^p and some Q ∈ M_p,   (1)
where we write X ∼ S_α(δ, Q) to denote that X follows a SP law with the indicated parameters. For subsequent use we mention that if X ∼ S_α(δ, Q), then it admits the stochastic representation
X = δ + √A N,   (2)
where A is a totally skewed to the right SP random variable with characteristic exponent α/2 (α < 2) and N is a zero-mean Gaussian vector with covariance matrix Q, independent of A; see Samorodnitsky and Taqqu (1994, §2.5). Our test will make use of the fact that if X ∼ S_α(δ, Q), then the CF of X is given by ϕ_α(t; δ, Q) = exp(it⊤δ − (t⊤Qt)^{α/2}). The cases α = 2 and α = 1, respectively, correspond to the most prominent members of the SP family, i.e. the Gaussian and Cauchy distributions, while φ_α(t) = exp(−‖t‖^α) will denote the CF of a spherical SP law, i.e. an SP law with location δ = 0 and dispersion matrix Q set equal to the identity matrix. As is already implicit in (1), the parameters (δ, Q) are considered unknown, and hence they will be replaced by estimators (δ̂_n, Q̂_n) and the test procedure will be applied on the standardized data
Y_j = Q̂_n^{−1/2}(X_j − δ̂_n), j = 1, ..., n.   (3)
Specifically, for a non-negative weight function w(·) we propose the test criterion
T_{n,w} = n ∫_{R^p} |φ_n(t) − φ_{α0}(t)|² w(t) dt,   (4)
where φ_n(t) = n^{−1} Σ_{j=1}^n exp(it⊤Y_j) is the empirical CF computed from (Y_j, j = 1, ..., n). Here the characteristic exponent α is considered fixed at the value α = α_0, while the case of unknown α will be considered in Section 7.
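For readers who wish to experiment with the representation (2) and the criterion (4), the following Python sketch (our own illustration, not part of the original analysis) draws sub-Gaussian SP vectors and evaluates T_{n,w} by Monte Carlo integration over a Gaussian weight. Two conventions are our assumptions: the scale constant for the subordinator A assumes scipy's 1-parameterization, and the Gaussian factor is given covariance 2Q so that the CF convention exp(−(t⊤Qt)^{α/2}) is matched; both are worth verifying numerically before serious use.

import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)

def rsp_elliptic(n, alpha, delta, Q):
    # X = delta + sqrt(A) * N, with A a totally skewed-to-the-right stable
    # subordinator of index alpha/2 (valid for alpha < 2) and N Gaussian.
    # Scale (cos(pi*alpha/4))^(2/alpha) and covariance 2Q are chosen so the
    # CF is exp(i t'delta - (t'Qt)^(alpha/2)); an assumption to be checked.
    p = len(delta)
    sigma_a = np.cos(np.pi * alpha / 4.0) ** (2.0 / alpha)
    a = levy_stable.rvs(alpha / 2.0, 1.0, scale=sigma_a, size=n, random_state=rng)
    nvec = rng.multivariate_normal(np.zeros(p), 2.0 * Q, size=n)
    return np.asarray(delta) + np.sqrt(a)[:, None] * nvec

def T_mc(y, r=1.0, alpha0=2.0, n_mc=20000):
    # Monte Carlo evaluation of T_{n,w} with the Gaussian weight
    # w(t) = exp(-r ||t||^2), sampling t from the normalized weight
    # N(0, I/(2r)); (pi/r)^(p/2) is the normalizing constant of w.
    n, p = y.shape
    t = rng.normal(scale=np.sqrt(1.0 / (2.0 * r)), size=(n_mc, p))
    phi_n = np.exp(1j * t @ y.T).mean(axis=1)      # empirical CF
    phi_0 = np.exp(-np.sum(t**2, axis=1) ** (alpha0 / 2.0))
    return n * (np.pi / r) ** (p / 2.0) * np.mean(np.abs(phi_n - phi_0) ** 2)

# Under H0 with true (here: standard) parameters, T_{n,w} stays bounded
# in probability as n grows.
x = rsp_elliptic(500, 1.5, np.zeros(2), np.eye(2))
print(T_mc(x, alpha0=1.5))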
Computational aspects It is already transparent that the test statistic T_{n,w} depends on the weight function w(·), the choice of which we consider in this section. Specifically, from (4) we have by simple algebra
T_{n,w} = (1/n) Σ_{j,k=1}^n I_w(Y_j − Y_k) − 2 Σ_{j=1}^n I_{wφ}(Y_j) + n I_{wφ²}(0),   (5)
where
I_w(x) = ∫_{R^p} cos(t⊤x) w(t) dt,   (6)
and I_{wφ} and I_{wφ²} are defined analogously with w replaced by wφ_{α0} and wφ²_{α0}, respectively. Using the CF of the Kotz-type distribution We now discuss the computation of the test statistic figuring in (5). In doing so we will make use of an appropriate weight function w(·) that facilitates explicit representations of the integrals in (6). Specifically we choose w(t) = (‖t‖²)^ν e^{−(‖t‖²)^{α0/2}} as weight function, which for (r, s) = (1, α_0/2) is proportional to the density of the spherical Kotz-type distribution c(‖x‖²)^ν e^{−r(‖x‖²)^s}, with c a normalizing constant; see Nadarajah (2003). With this weight function the integrals in (6) may be derived as special cases of the integral
I_{ν,r}(x; s) = ∫_{R^p} (‖t‖²)^ν e^{−r(‖t‖²)^s} cos(t⊤x) dt,   (7)
with the cases of interest being 0 < s(= α_0/2) ≤ 1. In turn this integral can be computed by making use of the CF of the Kotz-type distribution in the form of an absolutely convergent series (see Streit, 1991; Iyengar & Tong, 1989; Kotz & Ostrovskii, 1994), which for selected values of s reduces to a finite sum. For more details we refer to Section S1 of the online supplement. Given I_{ν,r}(·; ·), the test statistic figuring in (5) may be written as
T_{n,w} = (1/n) Σ_{j,k=1}^n I_{ν,1}(Y_j − Y_k; α_0/2) − 2 Σ_{j=1}^n I_{ν,2}(Y_j; α_0/2) + n I_{ν,3}(0; α_0/2).   (8)
Using the inversion theorem In this section we will use the inversion theorem for CFs in order to compute the integrals defined by (6). Specifically, for an absolutely integrable CF ϕ(t), the inversion theorem renders the density f(·) corresponding to ϕ(·) as
f(x) = (2π)^{−p} ∫_{R^p} e^{−it⊤x} ϕ(t) dt.   (9)
In this connection, we start from the expression of the test statistic in (5), and adopt the weight function w(t) = e^{−r‖t‖^{α0}}. This choice amounts to taking ν = 0 in the Kotz-type density, which is the same as if we incorporate the CF φ_{α0}(·) of the SP law under test in the weight function. With this weight function, the statistic figuring in (5), say T_{n,r}, becomes
T_{n,r} = (1/n) Σ_{j,k=1}^n Λ_r(Y_j − Y_k; α_0) − 2 Σ_{j=1}^n Λ_{r+1}(Y_j; α_0) + n Λ_{r+2}(0; α_0),   (10)
where, by making use of the inversion theorem in (9),
Λ_r(x; α) = ∫_{R^p} e^{−r‖t‖^α} cos(t⊤x) dt = (2π)^p r^{−p/α} f_α(r^{−1/α} x),   (11)
with f_α(·) being the density of the spherical SP law with CF φ_α(·). 4 On estimation of parameters and limit properties of the test Estimation of parameters The parameters δ and Q in (1) are assumed unknown and need to be estimated. Seeing that reliable procedures for calculation of stable densities are available, we use maximum likelihood estimation. To avoid searching over the space of positive definite matrices, we use the estimators δ̂_n and Q̂_n = L̂_n L̂_n⊤, with
(δ̂_n, L̂_n) = argmax_{(δ,L) ∈ R^p × L_p} Σ_{j=1}^n log[ f_α(L^{−1}(X_j − δ)) / |det L| ],   (12)
where L_p denotes the space of lower triangular p × p matrices, and, as before, f_α(·) denotes the density of a p-dimensional stable distribution with CF φ_{α0}(·). Initial values for the optimization procedure are obtained using projection estimators of δ and Q as outlined in Nolan (2013, Section 2.3). Limit null distribution and consistency We present here the main elements involved in the limit behavior of the test statistic T_{n,w}. In this connection, and despite the fact that, as already emphasized, it is computationally convenient to use a weight function that is proportional to the density of a spherical Kotz-type distribution, our limit results apply under a general weight function satisfying certain assumptions and, under given regularity conditions, with arbitrary estimators of the distributional parameters. Specifically, we assume that the weight function satisfies w(t) > 0 (apart from a set of measure zero), w(t) = w(−t) and ∫_{R^p} w(t) dt < ∞.
We also suppose that the estimators figuring in the standardization defined by (3) admit Bahadur-type asymptotic representations of the form
√n(δ̂_n − δ_0) = n^{−1/2} Σ_{j=1}^n ℓ_1(X_j) + o_P(1),  √n(Q̂_n − Q_0) = n^{−1/2} Σ_{j=1}^n ℓ_2(X_j) + o_P(1),
with zero-mean functions ℓ_1 and ℓ_2. Then we may write from (4)
T_{n,w} = ∫_{R^p} |Ξ_n(t)|² w(t) dt,  with Ξ_n(t) := √n (φ_n(t) − φ_{α0}(t)).
It also follows that ‖Ξ_n − Ξ_{n,0}‖² →_P 0, where the approximating process Ξ_{n,0}(·) admits an i.i.d. representation upon which the central limit theorem applies, and which, together with a subsequent application of the continuous mapping theorem, entails
T_{n,w} →_d T_w := ∫_{R^p} |Ξ_0(t)|² w(t) dt,
where Ξ_0(·) is a zero-mean Gaussian process with covariance kernel, say, K(s, t). In turn the law of T_w is that of Σ_{j=1}^∞ λ_j N_j², where (N_j, j ≥ 1) are i.i.d. standard normal random variables. The covariance kernel K(·, ·) of the limit process Ξ_0(·) enters the distribution of T_w via the eigenvalues λ_j of the integral equation
∫_{R^p} K(s, t) f_j(t) w(t) dt = λ_j f_j(s).
In this connection the maximum likelihood estimators defined by (12) satisfy certain equivariance/invariance properties (refer to Section S2 of the online supplement), and as a consequence the resulting test statistic is affine invariant. In this case we may set (δ_0, Q_0) equal to the zero vector and identity matrix, respectively, thus rendering the limit null distribution free of true parameter values; see Ebner and Henze (2020) and Meintanis et al. (2015). Moreover the standing assumptions imply the strong consistency of the new test under fixed alternatives. Proposition 4.1 Suppose that under the given law of X the estimators of the parameters δ and Q satisfy (δ̂_n, Q̂_n) → (δ_X, Q_X) a.s. as n → ∞. Then
n^{−1} T_{n,w} → T_w := ∫_{R^p} |ϕ_Y(t) − φ_{α0}(t)|² w(t) dt  a.s.,   (13)
where ϕ_Y denotes the CF of Y = Q_X^{−1/2}(X − δ_X). Proof Recall from (4) that
n^{−1} T_{n,w} = ∫_{R^p} |φ_n(t) − φ_{α0}(t)|² w(t) dt.   (14)
Now the strong uniform consistency of the empirical CF in bounded intervals (see Csörgő, 1981) entails φ_n(t) → ϕ_Y(t) a.s.; since the integrand is bounded and the weight function is integrable, an application of Lebesgue's theorem of dominated convergence on (14) yields (13). Since w > 0, we have T_w > 0 unless the CF of X coincides with the CF of a SP law with α = α_0 and (δ, Q) = (δ_X, Q_X), and thus, by the uniqueness of CFs, the test which rejects the null hypothesis H_0 in (1) for large values of T_{n,w} is consistent against each fixed alternative distribution. The above limit results have been developed in a series of papers, both in the current setting as well as in related settings, and with varying conditions on the weight function and the family of distributions under test; see for instance Henze and Wagner (1997), Gupta et al. (2004), Meintanis et al. (2015), Hadjicosta and Richards (2020a), and Ebner and Henze (2020). In this regard, the solution of the above integral equation, and thus the approximation of the limit null distribution of T_{n,w}, is extremely complicated, and in fact constitutes a research problem in itself. This line of research has been followed by a few authors. We refer to Matsui and Takemura (2008), Hadjicosta and Richards (2020b), Meintanis et al. (2023) and Ebner and Henze (2023). In these works, the infinite sum distribution of T_w is approximated by a corresponding finite sum employing numerically computed eigenvalues, and then large-sample critical points for T_{n,w} are found by Monte Carlo. It should be noted that such approximations are specific to several problem parameters: the distribution under test, the type of estimators of the distributional parameters, and the weight function employed, and thus have to be performed on a case-by-case basis. A different, more heuristic, approach is moment-matching between the first few moments of T_w (computed numerically) and a known distribution, like the gamma distribution or one from a Pearson family of distributions; see Henze (1990, 1997) and Pfister et al. (2018), while yet another, Satterthwaite-type, approximation is studied by Lindsay et al. (2008).
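As a toy illustration of the finite-sum approximation just described, the truncated weighted chi-square sum can be simulated directly. The eigenvalues below are hypothetical placeholders of our own, not values computed from an actual covariance kernel K.

import numpy as np

def critical_point(eigenvalues, level=0.10, n_sim=200000, rng=None):
    # Approximate upper critical point of T_w ~ sum_j lambda_j * N_j^2,
    # truncating the infinite series at the supplied eigenvalues.
    if rng is None:
        rng = np.random.default_rng(0)
    lam = np.asarray(eigenvalues)
    z = rng.standard_normal((n_sim, lam.size))
    return np.quantile((z**2) @ lam, 1.0 - level)

# hypothetical, rapidly decaying spectrum
print(critical_point([0.5, 0.2, 0.1, 0.05, 0.02]))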
The validity and usefulness of the above approximation methods notwithstanding, we hereby favor Monte Carlo simulation and bootstrap resampling for the computation of critical points and for test implementation, not only because these are large-sample approximations performed mostly in the univariate setting and thus inappropriate for small sample size n and/or dimension p > 1, but more importantly because, in the case of GARCH-type observations considered in the next section, the finite-sample counterpart of this distribution may even involve true parameter values; see for instance Henze et al. (2019). The case of the stable-GARCH model Assume that observations (X_j, j = 1, ..., n) arise from a multivariate GARCH model defined by
X_j = Q_j^{1/2} ε_j,   (15)
where (ε_j, j = 1, ..., n) is a sequence of i.i.d. p-dimensional random variables with mean zero and identity dispersion matrix, and Q_j := Q(X_j | I_{j−1}), with I_j denoting the information available at time j, is a symmetric positive definite matrix of dimension (p × p). We wish to test the null hypothesis stated in (1) for the innovations ε_j figuring in model (15). Note that, in view of (2), Q_j may be interpreted as the conditional covariance matrix of the corresponding latent Gaussian vector N. We also employ initial values in order to start the estimation process. As an estimator of ϑ we use the equation-by-equation (EbE) estimators proposed by Francq and Zakoïan (2016). The reader is referred to Section S3 of the online supplement for more details on the EbE estimator. Numerical study Results obtained when using the test in (8) with the Kotz-type weight function are reported in Tables S8 and S9 of the online supplement. For comparison, we include results obtained when using the test of Meintanis, Ngatchou-Wandji, and Taufer (2015) with the Gaussian weight function exp(−‖t‖²). The test, denoted by M_a in the tables, depends on a tuning parameter denoted by a, for which we consider the choices a = 4, 6, 10 and a = 15. As pointed out by the authors, the number of operations required to compute their test statistic is of the order n^{2a} (for integer-valued a) and becomes time-consuming for larger values of n and a. We therefore use the Monte Carlo approach suggested by the authors to approximate the value of the test statistic (using 1,000 replications for each approximation); see p. 180 of Meintanis et al. (2015). When testing for multivariate normality (i.e. H_0 with α = 2), we consider the test of Henze, Jiménez-Gamero, and Meintanis (2019), denoted by HJM in the tables, with weight function exp(−1.5‖t‖²), which yielded good results in the original paper. The rejection percentages of the tests are shown in Tables 1-4. All simulation results are based on 1,000 independent Monte Carlo iterations, and a significance level of 10% is used throughout. We first consider the case where we test H_0 with α = 2, i.e. when testing for departures from multivariate normality. Table 1 shows that, when heavier-tailed symmetric Paretian alternatives are considered, the newly proposed tests based on T_r are more powerful than the existing tests M_a of Meintanis et al. (2015), and have power slightly lower than, but comparable to, the test HJM of Henze et al.
(2019). In light of the above-mentioned computational complexity of the existing tests, this gain in power makes the new tests attractive for implementation in practice. Moreover, the favorable power is visible for all the considered choices of the tuning parameter r, the choice of which, as opposed to a in M_a, has no significant impact on computational complexity. Finally, as expected, the power of the new tests increases as the sample size is increased. Similar conclusions can also be made in the case of elliptically symmetric t alternatives (see Table S2 of the online supplement). Considering skew normal alternatives, even more favorable behavior can be observed in the results shown in Table 2. Notice that the tests M_a and HJM seem to have very low power against skew normal alternatives, which is not the case with the newly proposed tests. We now shift our attention to the case of testing H_0 with α = 1.8. Table 3 shows that, compared to the existing tests, the new tests are quite powerful against heavier-tailed alternatives, i.e. alternatives with stability index less than 1.8. Despite the evident dependence of the performance of the new tests on the tuning parameter r, we note that the power is very competitive with the existing tests in most cases, and significantly outperforms the existing tests for most choices of r. For lighter-tailed alternative distributions, the new tests exhibit some under-rejection for certain choices of r. Nevertheless, the problem seems to disappear as the sample size is increased. Another advantageous feature of tests based on T_r is that the high power is also visible in the higher-dimensional setting where p = 6. In fact, in this case the power of the tests seems to increase as the dimension is increased from p = 4 to p = 6. In contrast, the tests M_a exhibit a clear decrease in power as the dimension p is increased. We finally consider testing H_0 with α = 1, i.e. testing for departures from multivariate Cauchy. Overall, in agreement with previous observations, Table 4 shows that the new tests seem to be competitive in terms of power, with the existing tests having some advantage when the true data-generating distribution has a stability index greater than 1. However, this advantage in power disappears when considering the higher-dimensional case p = 6. Similar behavior can be seen in the case of t and skew Cauchy alternatives (see Tables S6 and S7 in the supplement). Monte Carlo results for GARCH data We consider a CCC-GARCH(1, 1) model as defined in (16) with κ_x = κ_q = 1. As parameters, we take A_1 = 0.2 I_p, B_1 = 0.3 I_p, and a correlation matrix R with all off-diagonal entries set to 0.5. To test whether the innovations ε_j were generated by an elliptically symmetric SP law, we use the statistic in (10) applied to the residuals ε̂_j defined in (17). Denote the value of the test statistic by T_r := T_r(ε̂_1, ..., ε̂_n). We use the following bootstrap scheme to determine critical values (a schematic code sketch follows the enumeration):
1. Independently generate innovations ε*_1, ..., ε*_n from the SP law S_{α0}(0, I_p).
2. Construct a bootstrap sample X*_1, ..., X*_n using the recursive relation (16).
3. Re-estimate the model parameters from the bootstrap sample and use (16) to obtain estimates Q̂*_j, j = 1, ..., n, and recover the bootstrap residuals ε̂*_j = (Q̂*_j)^{−1/2} X*_j, j = 1, ..., n.
4. A bootstrap version of the statistic is given by T*_r := T_r(ε̂*_1, ..., ε̂*_n).
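The following Python skeleton mirrors steps 1-4 with the model-specific routines left as user-supplied placeholders; fit_garch, simulate_garch and residuals are hypothetical names of our own, not functions from any package, and the innovation draw reuses the sub-Gaussian construction sketched earlier (an assumption; α_0 = 2 would require a direct Gaussian draw instead).

import numpy as np
from scipy.stats import levy_stable

def bootstrap_critical_value(x, statistic, fit_garch, simulate_garch,
                             residuals, alpha0, B=500, xi=0.10, rng=None):
    # Parametric bootstrap for the GARCH residual test; `fit_garch`,
    # `simulate_garch` and `residuals` are placeholders for the user's
    # estimation, simulation and re-standardization routines.
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = x.shape
    theta = fit_garch(x)
    t_boot = np.empty(B)
    for b in range(B):
        # step 1: innovations from the hypothesized spherical SP law
        a = levy_stable.rvs(alpha0 / 2.0, 1.0,
                            scale=np.cos(np.pi * alpha0 / 4.0) ** (2.0 / alpha0),
                            size=n, random_state=rng)
        eps = np.sqrt(a)[:, None] * rng.multivariate_normal(
            np.zeros(p), 2.0 * np.eye(p), size=n)
        x_star = simulate_garch(theta, eps)        # step 2: bootstrap sample
        theta_star = fit_garch(x_star)             # step 3: re-estimate ...
        eps_star = residuals(theta_star, x_star)   # ... and re-standardize
        t_boot[b] = statistic(eps_star)            # step 4: bootstrap statistic
    return np.quantile(t_boot, 1.0 - xi)           # reject if T_r exceeds this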
Steps 1-4 are repeated many times, say B, to obtain realisations T*_r(1), ..., T*_r(B) of the bootstrap statistic. The null hypothesis is rejected at significance level ξ whenever T_r exceeds the (1 − ξ)-level empirical quantile of the bootstrap realisations {T*_r(b)}_{b=1}^B. In the Monte Carlo simulations that follow, instead of drawing B bootstrap samples, we employ the warp-speed method of Giacomini et al. (2013), which involves drawing only one bootstrap sample for each Monte Carlo iteration. Table 5 (power against symmetric SP laws) exhibits similar favorable power properties of the newly proposed test as was observed in the i.i.d. case. Notice that the tests all have empirical size close to the nominal level of 10% when using critical values obtained by means of the bootstrap scheme given above. Also see Tables S13 and S14 of the online supplement. Table S12 (power against skew normal alternatives) shows that the new tests are especially competitive in terms of power when the true innovation distribution is not elliptically symmetric. Finally we mention that although the tests of Meintanis et al. (2015) are competitive when the innovations have an elliptically symmetric Student t-distribution (see Table S11), the slight advantage in power disappears rapidly as the dimension p increases. Testing asymmetric SP laws We now shift our focus to the more general case of testing whether multivariate observations from X originate from a SP law, which need not necessarily be elliptically symmetric. In this connection note that the general multivariate SP law depends on a location vector δ ∈ R^p and a spectral measure Γ(·) on the unit sphere S^p. Accordingly we wish to test the null hypothesis
H_0: X ∼ S_α(δ, Γ), for some δ ∈ R^p and some spectral measure Γ on S^p,
where we write X ∼ S_α(δ, Γ) when X follows a skew SP law with the indicated parameters. We will be considering the case of H_0 for fixed α = α_0 as well as the case of testing the null hypothesis H_0 with an unspecified α. In this connection note that if X ∼ S_α(δ, Γ), then the CF of X is given by
ϕ_α(t; δ, Γ) = exp( i t⊤δ − ∫_{S^p} ψ_α(t⊤s) Γ(ds) ),   (18)
with
ψ_α(u) = |u|^α (1 − i sign(u) tan(πα/2)),  α ≠ 1;   (19)
see, e.g., Nolan et al. (2001). In testing the null hypothesis H_0, we consider an entirely different idea for the test statistic, first put forward by F. Chen et al. (2022). Specifically, we consider a test statistic formulated as a two-sample test between the original data X_n = (X_j, j = 1, ..., n) and artificial data X_0n = (X_0j, j = 1, ..., n) generated under the null hypothesis H_0. More precisely, we propose the test criterion
T_w(X_n; X_0n) = n ∫_{R^p} |ϕ_n(t) − ϕ_0n(t)|² w(t) dt,   (20)
where ϕ_n(t) = n^{−1} Σ_{j=1}^n e^{it⊤X_j} is the empirical CF of the data at hand, while ϕ_0n(t) = n^{−1} Σ_{j=1}^n e^{it⊤X_0j} is the empirical CF computed from the artificial data set X_0n generated under the null hypothesis H_0, with Γ and δ estimated from the original observations X_n.
By straightforward computations, we obtain
T_w(X_n; X_0n) = (1/n) Σ_{j,k=1}^n [ I_w(X_j − X_k) + I_w(X_0j − X_0k) − 2 I_w(X_j − X_0k) ],
where I_w(x) = ∫_{R^p} cos(t⊤x) w(t) dt, as also defined in (6). Clearly then the numerical approaches of Section 3 are no longer required, and the simplicity of this test lies in the fact that in (20) only the computation of I_w(·) is needed. Specifically, the need for tailor-made weight functions such as those employed in Section 3 is circumvented. Furthermore we no longer need to compute the density of the underlying SP law as in (11). In this connection, suppose that the weight function w figuring in I_w(x) above is chosen as the density of any spherical distribution in R^p. Then the integral I_w(x) gives the CF corresponding to this spherical distribution at the point x. Furthermore it is well known that this CF may be written as Ψ(‖x‖²), where Ψ(·) is called the "kernel" of the specific family of spherical distributions. Thus the test statistic figuring in (20) may be written as
T_Ψ(X_n; X_0n) = (1/n) Σ_{j,k=1}^n [ Ψ(‖X_j − X_k‖²) + Ψ(‖X_0j − X_0k‖²) − 2 Ψ(‖X_j − X_0k‖²) ],   (21)
where the kernel Ψ(·) can be chosen by the practitioner so that the resulting expression in (21) is as simple as possible. In this connection, as already clear from the preceding paragraphs, a simple kernel is the kernel of the spherical SP family of distributions with Ψ(ξ) = e^{−rξ^{α/2}}, r > 0, α ∈ (0, 2]. Implementation of the test however relies on estimation of the spectral measure Γ(·) appearing in (18). Motivated by a result of Byczkowski et al. (1993, Theorem 1), we assume that Γ(·) can be approximated by the discrete spectral measure Γ̂(·) = Σ_{k=1}^K γ_k I_{s_k}(·), with weights γ_k corresponding to mass points s_k ∈ S^p, k = 1, ..., K, and I_{s_k}(·) being the indicator function. So in order to apply the test, we use the stochastic representation of X_0 ∼ S_{α0}(δ, Γ̂) as
X_0 = δ + Σ_{k=1}^K γ_k^{1/α0} A_k s_k,   (22)
where (A_k, k = 1, ..., K) are i.i.d. (univariate) SP variates following a totally skewed to the right SP law with α = α_0. In turn this representation is used in order to generate observations (X_0j, j = 1, ..., n) under the null hypothesis H_0, with δ and (γ_k, k = 1, ..., K) replaced by appropriate estimates δ̂ and (γ̂_k, k = 1, ..., K), respectively. The estimates δ̂ and (γ̂_k, k = 1, ..., K) are obtained as shown in Nolan et al. (2001) and outlined in Section S5 of the online supplement. Monte Carlo results We now turn to a simulation study to demonstrate the performance of the test based on (20), say T_Ψ for simplicity, in the bivariate case. We specifically consider the following alternative distributions: (A1) the asymmetric SP law S_α(δ, Γ) with CF defined in (18), where we take δ = 0 and Γ a discrete spectral measure; (A2) spherically symmetric SP distributions, denoted by S_α. The statistic T_Ψ(X_n; X_0n) in (21) is subject to randomness introduced by the artificial data X_0n. To address this randomness in practical implementation, we follow F. Chen et al. (2022) and base our test on the statistic
T̄_Ψ := m^{−1} Σ_{r=1}^m T_Ψ(X_n; X^r_0n),
where, for each (r = 1, ..., m), the set X^r_0n is a random sample of observations generated from the SP law S_{α0}(δ̂, Γ̂), i.e. a random sample satisfying H_0. (A short code sketch of the generator in (22) and the statistic in (21) follows.)
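The generator (22) and the kernel form of the statistic in (21) are short to code. The sketch below is our own illustration: it assumes α_0 ≠ 1 (the skewed α = 1 case involves an extra shift term), uses scipy's 1-parameterization with unit scale for the totally skewed variates A_k, and employs hypothetical mass points on the unit circle.

import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

def rsp_discrete(n, alpha, delta, gamma, s):
    # X0 = delta + sum_k gamma_k^(1/alpha) * A_k * s_k, with A_k i.i.d.
    # totally skewed-to-the-right standard SP variates (assumes alpha != 1).
    K = len(gamma)
    a = levy_stable.rvs(alpha, 1.0, size=(n, K), random_state=rng)
    w = np.asarray(gamma) ** (1.0 / alpha)      # per-direction scales
    return np.asarray(delta) + (a * w) @ np.asarray(s)   # s has shape (K, p)

def T_psi(x, x0, psi):
    # Two-sample kernel form (21): (1/n) sum_{j,k} [ psi(|Xj-Xk|^2)
    # + psi(|X0j-X0k|^2) - 2 psi(|Xj-X0k|^2) ].
    def gram(u, v):
        d2 = ((u[:, None, :] - v[None, :, :]) ** 2).sum(axis=2)
        return psi(d2).sum()
    n = len(x)
    return (gram(x, x) + gram(x0, x0) - 2.0 * gram(x, x0)) / n

# Hypothetical bivariate example: four equal mass points on the unit circle,
# SP kernel psi(xi) = exp(-r * xi^(alpha0/2)) with r = 1.
alpha0 = 1.5
angles = 2 * np.pi * np.arange(4) / 4
s = np.column_stack([np.cos(angles), np.sin(angles)])
x = rsp_discrete(300, alpha0, np.zeros(2), [0.25] * 4, s)
x0 = rsp_discrete(300, alpha0, np.zeros(2), [0.25] * 4, s)
print(T_psi(x, x0, lambda xi: np.exp(-xi ** (alpha0 / 2))))

Since both samples here satisfy the null, the printed value should be stochastically small; under an alternative for x it would grow with n.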
Critical values of the test can be obtained using the following parametric bootstrap scheme:
1. Generate a bootstrap sample X*_n from the SP law S_{α0}(δ̂, Γ̂).
2. Calculate bootstrap estimates δ̂* and Γ̂* from X*_n and generate m random samples (X*r_{0,n}, r = 1, ..., m) from the SP law S_{α0}(δ̂*, Γ̂*).
3. A bootstrap version of the statistic, T̄*_Ψ, is computed from X*_n and these m artificial samples in the same way as T̄_Ψ.
Table 6 shows the empirical rejection percentages of tests based on T_Ψ with Ψ(ξ) = e^{−rξ^{α0/2}}, where α_0 denotes the hypothesized stability index in H_0. We write T_r, r > 0, for this test. In the case of unknown α in Table 7 we use the same weight function with α_0 = 2. The left-hand side of Table 6 shows the rejection percentages when observations are sampled from an asymmetric SP distribution, whereas the right-hand side shows the results when observations are sampled from a symmetric SP distribution. In all cases, the proposed procedure seems to respect the nominal size of the test, although it is somewhat conservative, especially for smaller sample sizes. Nevertheless, the tests have good power against alternatives, which increases with the extent of violation of the null hypothesis. Corresponding results when testing H_0 with α = 1.8 are given in Table S16. The case of unknown α In practice, the true value of the stability index α will typically be unknown and needs to be estimated from data. Suppose the hypothesis of interest is
H'_0: X ∼ S_α(δ, Γ), for some α ∈ (0, 2], some δ ∈ R^p and some spectral measure Γ on S^p.
To test this hypothesis, we again use the test based on the statistic in (21), but now generate the artificial data X_0n from the SP law S_α̂(δ̂, Γ̂), where α̂, δ̂ and Γ̂ are projection estimates obtained as outlined in Section S5 of the supplement. Note that data can be generated from S_α̂(δ̂, Γ̂) using the stochastic representation in (22) with α_0 replaced by α̂. The bootstrap procedure for obtaining critical values follows similarly to the procedure described in Section 7.1. The empirical rejection percentages (for the bivariate case, p = 2) are shown in Table 7 for several distributions. Note that when observations are sampled from an S_{1.8} distribution (a symmetric SP law with stability index 1.8), the rejection percentages are close to the nominal level, indicating that the test has reasonable empirical size. As alternatives, we consider the skew normal (SN_ν), symmetric Laplace and generalized Gaussian (GG_ν) distributions. An observation X from the GG_ν distribution (refer to Cadirci et al., 2022) is generated according to X = U V^{1/ν}, where U is uniform on S^{p−1} and V ∼ Gamma(p/ν, 2). Table 7 shows that the proposed test procedure has power against non-SP alternatives, and, noting the increase in power associated with increasing sample size, the results suggest that the procedure is consistent against non-SP alternatives. The high-dimensional case It should be clear that so far, despite the fact that the new test applies to any dimension p, the underlying setting is not that of high dimension (p > n). In this connection we point out that the extension of goodness-of-fit methods specifically tailored for the classical "small p, large n" regime to high or infinite dimension is not straightforward, and is therefore beyond the scope of the present article. This is not restricted to our setting alone but applies more generally, and the collections of Goia and Vieu (2016) and Kokoszka et al. (2017) reflect the need for statistical methods specifically tailored for non-classical settings. Restricted to our context, such methods have so far been mostly confined to testing for normality, and the interested reader is referred to Bugni et al. (2009), Nieto-Reyes et al. (2014), Bárcenas et al. (2017), Kellner and Celisse (2019), Yamada and Himeno (2019), Jiang et al. (2019), Górecki et al. (2020), Henze and Jiménez-Gamero (2021), H. Chen and Xia (2023) and W.
Chen and Genton (2023). If however the setting is that of regression, with or without conditional heteroskedasticity, the number of parameters rapidly increases with the dimension p. This is one extra reason that corresponding specification methods in high/infinite dimension need to be treated separately, and in this connection the methods of Cuesta-Albertos et al. (2019) and Rice et al. (2020) appear to be among the few available for testing the (auto)regression function. For an illustration of the special circumstances that arise as the dimension grows, consider the test statistic in (10) for α_0 = 2 and without standardization, i.e. replace Y_j by X_j ∼ S_2(0, I_p), j = 1, ..., n. Then from (11), and by using the density of the p-variate normal distribution with mean zero and covariance matrix 2I_p, we obtain
Λ_r(x; 2) = (π/r)^{p/2} e^{−‖x‖²/(4r)},
and consequently our test statistic contains sums of terms, each of which is of the order e^{−2p} as p → ∞. (For simplicity we suppress the terms 4r and 4(r + 1), which occur as denominators in these sums, as they are anyway irrelevant to our argument.) In order to get a feeling for this result, write ‖X_j‖² = 2 Σ_{k=1}^p (X_{jk}/√2)² =: 2S_p, where, due to Gaussianity and independence, S_p is distributed as chi-squared with p degrees of freedom. Thus the expectation of the quantity e^{−‖X_j‖²} figuring in T_{n,r} coincides with the value of the CF of this chi-squared distribution computed at the point t = 2i. To proceed further, notice that the CF of S_p/p at a point t is given by the CF corresponding to S_p computed at t/p, and recall that the CF of the chi-squared distribution with p degrees of freedom is given by ϕ_{S_p}(t) = (1 − 2it)^{−p/2}. Hence, in obvious notation, and reverting back to the CF of S_p, we get ϕ_{S_p}(t) ≈ e^{ipt}, and thus ϕ_{S_p}(2i) ≈ e^{−2p}, as p → ∞. A similar reasoning applies to ‖X_j − X_k‖², implying that the expectation of e^{−‖X_j−X_k‖²} (also occurring in T_{n,r}) is approximated by e^{−4p}, as p → ∞. Hence, by using these approximations, we conclude that our test statistic degenerates in high dimension, a fact that calls for proper high-dimensional modifications of the test criterion, which is definitely a worthwhile subject for future research. Nevertheless, we have obtained some initial Monte Carlo results that show a reasonable performance for the test criterion in cases where the dimension is much higher than the maximum p = 6 considered so far. These results may be found in Tables S18 and S19 of the Supplement.
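The exponential degeneracy argued above is easy to see empirically. The snippet below (our own check, not from the paper) estimates E[e^{−‖X‖²}] for X ∼ S_2(0, I_p), i.e. a N(0, 2I_p) vector, across dimensions; the exact constant in the exponent is not the point here, only the exponentially fast decay in p.

import numpy as np

rng = np.random.default_rng(3)
for p in [1, 2, 4, 8, 12]:
    x = rng.normal(scale=np.sqrt(2.0), size=(200000, p))
    # terms of this type enter T_{n,r}; they collapse exponentially fast in p
    print(p, np.exp(-np.sum(x**2, axis=1)).mean())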
Application to financial data We consider daily log returns from 4 January 2010 to 30 June 2017 of stocks of two mining companies listed on the London Stock Exchange: Anglo American (AAL) and Rio Tinto (RIO). The complete data set (available from Yahoo! Finance) consists of 1,891 log returns. We model the log returns using the CCC-GARCH(1, 1) model (with intercept) given by
Y_j = ω + Q_j^{1/2} ε_j,   (25)
where Q_j is as in (16) with κ_x = κ_q = 1 and B_1 assumed diagonal. We are interested in determining whether the innovations ε_j have a bivariate stable distribution. Since the innovations are unobserved, we apply our test to the residuals ε̂_j = Q̂_j^{−1/2}(Y_j − ω̂), where the estimates Q̂_j and ω̂ are obtained using EbE estimation as discussed earlier. To obtain critical values of the tests, we apply the bootstrap algorithm of Section 6.2 with B = 1,000. Table 8 shows that, when a stability index of α ∈ {1.75, 1.8} is assumed, the tests based on T_r do not reject the null hypothesis that the CCC-GARCH(1, 1) innovations have a S_α distribution (at a 10% level of significance). On the other hand, the null hypothesis of stable innovations is rejected when a stability index of α ∈ {1.7, 1.85, 1.9, 2} is assumed. The correct choice of the innovation distribution has important implications for value-at-risk (VaR) forecasts. For the considered stable distributions, we fit the model in (25) to the first 1,000 observations and calculate one-step-ahead 5% and 1% portfolio VaR forecasts for long and short positions for the remaining time period (i.e. 891 forecasts for each position). The portfolio is assumed to consist of 50% AAL shares and 50% RIO shares. Table 8 shows the empirical coverage rates of the forecasted VaR bounds, that is, the proportion of times that the value of the portfolio exceeded the bounds. For the cases where our test supports the null hypothesis, i.e. when α ∈ {1.75, 1.8}, the empirical coverage rates of the VaR bounds are quite close to the nominal rates. In addition, the p-values of Christoffersen's (1998) LR_cc test (given in brackets in Table 8) indicate that, if the stability index of the innovation distribution is chosen either too high or too low, the true conditional coverage rates of the forecasted VaR bounds are significantly different from the nominal rates. Conclusion We have studied goodness-of-fit tests for data involving multivariate SP laws. Our tests cover the case of i.i.d. observations as well as that of observations from GARCH models, and both elliptical and skewed distributions. Moreover they refer to hypotheses whereby some parameters are assumed known, as well as to the full composite hypothesis with all parameters estimated from the data at hand. The new procedures are shown to perform well in finite samples and to be competitive against other methods, whenever such methods are available. An application illustrates the usefulness of the new procedure for modeling stock returns and explores the subsequent forecasting implications. S1 Computation of the integrals The integrals I_{ν,r}(x; s) required for the test statistic T_{n,w} can be computed by making use of the CF of the Kotz-type distribution. For x = 0 the integral reduces to a closed-form expression. On the other hand, if x ≠ 0, the value of I_{ν,r}(x; s) may be computed from an absolutely convergent series, which for 1/2 < s < 1 is given by Streit (1991), while for 0 < s < 1/2 the corresponding series is given by Kotz and Ostrovskii (1994). In the special case s = 1 the computation simplifies considerably; see Iyengar and Tong (1989). For s = 1/2 the bivariate case was treated by Nadarajah and Kotz (2001), whose expression simplifies further for ν = 0. S2 Affine invariance A desirable feature of potential estimators δ̂_n and Q̂_n of the location vector δ and the dispersion matrix Q are the following equivariance/invariance properties:
δ̂_n(AX_1 + b, ..., AX_n + b) = A δ̂_n(X_1, ..., X_n) + b,
Q̂_n(AX_1 + b, ..., AX_n + b) = A Q̂_n(X_1, ..., X_n) A⊤,
for each b ∈ R^p and each non-singular (p × p) matrix A.
As a consequence the test statistic T_{n,w} := T_{n,w}(X_1, ..., X_n) satisfies
T_{n,w}(AX_1 + b, ..., AX_n + b) = T_{n,w}(X_1, ..., X_n),
i.e. it is affine invariant. We note that this property is in line with the fact that if X ∼ S_α(δ, Q), then AX + b ∼ S_α(Aδ + b, AQA⊤), meaning that the SP family of distributions is itself invariant with respect to affine transformations X → AX + b. S3 EbE estimator for GARCH parameters We outline the procedure for computing the EbE estimator of ϑ of Francq and Zakoïan (2016). Note in this connection that, under the CCC-GARCH model, the EbE estimator ϑ̂_n is obtained by maximizing, separately for each coordinate, a univariate quasi-likelihood based on f̃_α, where f̃_α denotes the density of a symmetric univariate SP law with stability index equal to α, with location zero and dispersion equal to one. Letting D_j denote the diagonal matrix of the fitted conditional scales q_{k,j}(ϑ̂_n)^{1/2}, k = 1, ..., p, we then obtain the correlated residuals ε'_j = D_j^{−1} X_j, from which the constant correlation matrix R̂ is calculated using maximum likelihood. S4 Calculation of the stable density Implementation of our test procedure relies on the evaluation of the density of a p-variate spherical SP law. Efficient evaluation of the density is also needed for maximum likelihood estimation of the parameters. In our numerical work, we utilize the fact that if X follows a spherical SP law with CF φ_α(·), then the density of X can be expressed as
f_α(x) = [Γ(p/2) / (2π^{p/2})] ‖x‖^{1−p} f_R(‖x‖),
where f_R(·) is the density of ‖X‖, the amplitude of X. This reduces the problem of calculating f_α to calculating the univariate density f_R. Various integral expressions for f_R are given in Nolan (2013), and an implementation that numerically evaluates f_R is the function damplitude in the R package stable (provided by Robust Analysis Inc., 2016). To speed up calculations, we pre-calculate f_R(u) for each u ∈ {k/(N − k), k = 0, ..., N − 1}, for some large value of N (= 10,000 in our simulations). Intermediate points are approximated using cubic spline interpolation and, for u ≥ N, we set f_R(u) = 0. S5 Estimation of the discrete spectral measure Below we outline the projection estimation procedure of Nolan et al. (2001) as implemented in our work. The estimation procedure assumes that the stability index α is known and that the data have been centered. As this is usually not the case, we first estimate the one-dimensional parameters (α_j, β_j, σ_j, δ_j), j = 1, ..., p, for each of the coordinates of the p-dimensional data set and center the data set using the location estimate δ̂ = (δ̂_1, ..., δ̂_p). Furthermore, if α is unknown, we estimate it by α̂ = p^{−1} Σ_{j=1}^p α̂_j. Motivated by Theorem 1 of Byczkowski et al. (1993), we assume that Γ(·) can be approximated by the discrete spectral measure Γ̂(·) = Σ_{k=1}^K γ_k I_{s_k}(·) as defined in the main paper, where we take the mass points s_k to be equally spaced on the unit sphere. For all estimates calculated in our simulations, we took the number of projections as K = 10. For each k = 1, ..., K, the projections X_1⊤s_k, ..., X_n⊤s_k are i.i.d. with a centered, univariate α-stable distribution with skewness and dispersion parameters β_k and σ_k. These parameters are estimated using maximum likelihood, after which the estimates of the K projections are combined using the method described in Section 2.2 of Nolan et al. (2001) to recover an estimate of the spectral measure Γ above.
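Returning to the tabulate-and-interpolate strategy of Section S4, it can be illustrated in a few lines. The sketch below uses the α = 2 case, where the amplitude density is available in closed form (if X ∼ S_2(0, I_p) then X ∼ N(0, 2I_p) and ‖X‖ is √2 times a chi_p variate); the uniform grid and the spline lookup are our own conveniences, not those of the R routine damplitude mentioned above.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import chi

p = 3
grid = np.linspace(0.0, 25.0, 10000)          # a simple uniform grid
# exact amplitude density for alpha = 2: ||X|| = sqrt(2) * chi_p
f_r_exact = chi.pdf(grid / np.sqrt(2.0), df=p) / np.sqrt(2.0)
spline = CubicSpline(grid, f_r_exact)

def f_r_lookup(u):
    # cubic-spline lookup of the tabulated amplitude density,
    # set to zero beyond the tabulated grid
    return 0.0 if u > grid[-1] else float(spline(u))

# interpolated versus exact value at an off-grid point
print(f_r_lookup(1.2345), chi.pdf(1.2345 / np.sqrt(2.0), df=p) / np.sqrt(2.0))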
Table S1. Percentage of rejection of H_0 with α = 2 against stable alternatives. Tests done at the 10% significance level. Due to the computational complexity of the HJM test, it is only included in selected cases to reduce run time.

Table S3. Percentage of rejection of H_0 with α = 2 against skew normal alternatives. Tests done at the 10% significance level. Due to the computational complexity of the HJM test, it is only included in selected cases to reduce run time.

Table 1. Percentage of rejection of H_0 with α = 2 against stable alternatives. Tests done at the 10% significance level. Additional cases are shown in Table S1 of the online supplement.

Table 2. Percentage of rejection of H_0 with α = 2 against skew normal alternatives. Tests done at the 10% significance level. Also see Table S3 of the online supplement.

Table 3. Percentage of rejection of H_0 with α = 1.8 against stable alternatives. Tests done at the 10% significance level. Also see Table S4 of the online supplement.

Table 4. Percentage of rejection of H_0 with α = 1 against stable alternatives. Tests done at the 10% significance level. Also see Table S5 of the supplement.

Table 5. Percentage of rejection of H_0 with α = 2 against stable alternatives in the CCC-GARCH(1, 1) case. Also see Table S10 of the online supplement.

Table 6. Percentage of rejection of H_0 with α = 1.5 against bivariate stable alternatives. Tests done at the 10% significance level. Also see Table S15 of the online supplement.

Table 7. Percentage of rejection of H′_0 using tests based on the statistic in (21) with Ψ(ξ) = e^{−rξ}. Tests done at the 10% significance level. Also see Table S17 of the online supplement. For data sampled from a S_1.8 distribution (symmetric SP law with stability index 1.8), the rejection percentages are close to the nominal level, indicating that the test has reasonable empirical size.

Table 8. p-values of tests that the CCC-GARCH innovations have a S_α distribution for several choices of α, along with empirical coverage rates of forecasted VaR bounds calculated under H_0. p-values of the LR_cc test are given in brackets. All p-values less than 10% are underlined.

Table S2. Percentage of rejection of H_0 with α = 2 against t alternatives. Tests done at the 10% significance level. Due to the computational complexity of the HJM test, it is only included in selected cases to reduce run time.

Table S5. Percentage of rejection of H_0 with α = 1 against stable alternatives. Tests done at the 10% significance level.

Table S6. Percentage of rejection of H_0 with α = 1 against Student t alternatives. Tests done at the 10% significance level.

Table S7. Percentage of rejection of H_0 with α = 1 against skew Cauchy alternatives. Tests done at the 10% significance level.

Table S8. Percentage of rejection of H_0 with α = 2 against stable alternatives when using the test in (8) with the Kotz-type weight function. Tests done at the 10% significance level.

Table S9. Percentage of rejection of H_0 with α = 1 against stable alternatives when using the test in (8) with the Kotz-type weight function. Tests done at the 10% significance level.
Table S10. Percentage of rejection of H_0 with α = 2 against stable alternatives in the CCC-GARCH(1, 1) case. Tests done at the 10% significance level.

Table S11. Percentage of rejection of H_0 with α = 2 against t alternatives (GARCH model errors). Tests done at the 10% significance level.

Table S12. Percentage of rejection of H_0 with α = 2 against skew normal alternatives in the CCC-GARCH(1, 1) case. Tests done at the 10% significance level. Due to the computational complexity of the HJM test, it is only included in selected cases to reduce run time.

Table S14. Percentage of rejection of H_0 with α = 1.5 against stable alternatives (GARCH model errors). Tests done at the 10% significance level.

Table S15. Percentage of rejection of H_0 with α = 1.5 against stable alternatives. Tests done at the 10% significance level.

Table S16. Percentage of rejection of H_0 with α = 1.8 against stable alternatives. Tests done at the 10% significance level.

Table S17. Percentage of rejection of H′_0 using tests based on the statistic in (21) with Ψ(ξ) = e^{−rξ}. Tests done at the 10% significance level.

Table S19. Percentage of rejection of H_0 with α = 1.5 against stable alternatives. For these results, Q and δ were assumed known and not estimated. Tests done at the 10% significance level.
10,619
2023-10-20T00:00:00.000
[ "Economics", "Mathematics" ]
Controlling metal–insulator transitions in reactively sputtered vanadium sesquioxide thin films through structure and stoichiometry

We present a study of V2O3 thin films grown on c-plane Al2O3 substrates by reactive dc-magnetron sputtering. Our results reveal three distinct types of films displaying different metal–insulator transitions dependent on the growth conditions. We observe a clear temperature window, spanning 200 °C, where highly epitaxial films of V2O3 can be obtained wherein the transition can be tuned by controlling the amount of interstitial oxygen in the films through the deposition conditions. Although small structural variations are observed within this window, large differences are observed in the electrical properties of the films, with strong differences in the magnitude and temperature of the metal–insulator transition which we attribute to small changes in the stoichiometry and local strain in the films. Altering the sputtering power, we are able to tune the characteristics of the metal–insulator transition, suppressing and shifting the transition to lower temperatures as the power is reduced. Combined results for all the films fabricated for the study show a preferential increase in the a lattice parameter and reduction in the c lattice parameter with reduced deposition temperature, with the film deviating from a constant volume unit cell to a higher volume.

Vanadium sesquioxide (V2O3) is a transition metal oxide which, like several other such oxides, exhibits a structural phase transition with temperature 1. In bulk form V2O3 undergoes the transition at around 155 K, where its crystallographic structure changes from a rhombohedral phase at high temperatures to a low temperature monoclinic phase. Coupled to the structural phase transition is a change in the resistivity of the material from a metallic state to an insulating state at low temperature 2, as well as a change in the magnetic state from a paramagnetic to an antiferromagnetic state. V2O3 is also a thermochromic material, changing its optical properties during the transition 3,4. As opposed to bulk V2O3, which shows a sharp change in both structure and resistivity at the transition, thin films display transitions affected by the choice of substrate, fabrication method, deposition conditions and thickness [5][6][7][8][9].
Through these choices the scale and magnitude as well as the transition temperature can be controlled via strain in the film 10, induced by the lattice mismatch between V2O3 and the substrate material, and through stoichiometry, as the transition is sensitive to the amount of vanadium and oxygen deficiencies present in the film [11][12][13][14]. The morphology of the films also plays a large role, as a nanotextured phase coexistence 15,16 has been observed for V2O3 in thin film form, both using direct imaging 17 as well as through secondary effects such as the modification of the coercivity of overlying magnetic layers 18. The nanoscale structure has furthermore been observed utilizing nanoscopic contacts to investigate the resistivity of V2O3 films. These results show the metal to insulator transition to occur through avalanches as opposed to a smooth transition, with the size of the observed jumps in resistivity following a power law behaviour 19.

In this article we investigate how the structural and electrical properties of V2O3 thin films grown by reactive dc-magnetron sputtering can be controlled by the fabrication conditions. In order to achieve this we perform a systematic study of the effects of (1) substrate temperature and (2) sputtering power on the overall film properties. We are able to identify how their properties can be tuned and controlled through these parameters and investigate the underlying crystallographic differences in the films and how they affect their properties. We observe a clear correlation between the structural properties and the controllable deposition parameters, enabling tuning of the structural as well as electrical properties of the films. We show that the growth temperature is an important factor for the crystalline properties of the fabricated films and that it strongly affects the MIT of the films. Films grown at different temperatures display distinct MITs which can be classified into three types of transitions, ranging from films showing a large hysteresis to films with a suppressed transition. Within an intermediate deposition temperature range (400–600 °C), classified as type II, we observe a controllable transition wherein the amount of interstitial oxygen in the films can be used to tune the transition. Oxygen is known to occupy interstitial sites in transition metal oxides and results have shown that it affects the metal-insulator transition even for minute quantities 2,20. Within bixbyite, a metastable polymorph of V2O3, oxygen has been confirmed to occupy interstitial sites with minimal changes in the structure 21. Films displaying a transition of type II are highly sensitive to the oxygen stoichiometry, with an increase in oxygen interstitials suppressing and shifting the transition to a lower temperature 11,20. In this type, changes in the crystal structure of the film are limited but large differences can be obtained in the transition behaviour. Our results show that different growth parameters affect the film properties in a coupled manner, with films grown at higher sputtering power and O2 flow settings showing similar structural and electrical properties as films grown at lower sputtering power and O2 flow. The results therefore reveal the possibility for detailed tuning of the structural properties, stoichiometry and metal-insulator transition via the deposition conditions.

Results

Growth temperature dependence.
The primary factor in the crystalline properties of the V2O3 films is the substrate temperature during growth. For this study an initial characterization of several films grown at varying substrate temperatures was therefore performed and has been described elsewhere 11. For V2O3 these studies have already revealed a temperature window where highly epitaxial films are obtained between roughly 400 °C and 600 °C. This temperature window is at substantially lower values than those used for fabricating epitaxial V2O3 thin films using other methods, such as rf sputtering of compound targets and molecular beam epitaxy, which are generally around 700 °C 6,12,18,[22][23][24][25]. Reactive dc magnetron sputtering is therefore a highly viable method for the fabrication of highly crystalline V2O3 films for both research and applications without the need for post-deposition annealing [26][27][28]. All the films were grown at 0.4 Pa pressure with a 20 sccm flow rate for argon. Two different oxygen flow rates were used for this series, 1.4 sccm and 1.6 sccm. The sputtering power was kept at a constant 150 W and the temperature was varied from 350 °C up to 670 °C in 45 °C steps. Although the fabrication conditions are varied substantially during growth, with respect to the chamber oxygen environment, substrate temperature and sputtering power, the main phase of the films was in all cases observed to be V2O3. All films reported in this article revealed a clear peak in X-ray diffraction scans corresponding to the V2O3 [0006] lattice spacing. It should be noted that as the temperature is changing, the reaction rate of the vanadium with the oxygen is also changing and, therefore, the stoichiometry is affected as well.

Reciprocal space mapping. In order to investigate the crystallographic parameters of the films and the role of the deposition temperature, selected samples were scanned by reciprocal space maps. The RSM scans were focused on the (1 0 -1 10) peak of the V2O3 film. This peak was chosen as it is the highest intensity asymmetrical peak of V2O3. Recording the X-ray intensity at an asymmetrical peak allows the determination of both the in-plane and out-of-plane lattice constants, a and c, as well as a determination of the lateral correlation length and mosaicity of the film from the peak structure. Figure 1 shows RSM scans for a series of films grown at different deposition temperatures. The scans reveal peaks corresponding to relaxed V2O3 as well as a fully strained layer at the substrate interface with a lateral reciprocal space vector value corresponding to that of the underlying Al2O3 substrate. The values of the lattice parameters are strongly correlated to the deposition temperature. With increasing deposition temperature both the a and c lattice parameters relax, reaching almost bulk values for the 620 °C film. Figures 2a,b show that the relative change in the a lattice parameter is larger than in the c lattice parameter. In Fig. 2c,d, the lateral correlation length can be seen to increase within the temperature range 440–575 °C, while the mosaicity decreases. This change in the crystallographic parameters indicates that larger and better ordered crystals are obtained in that temperature range. However, it should be noted that, as the crystals increase in size, they might have internal stresses in them which will affect the MIT as proposed by Schuler et al. 10.
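As a minimal illustration of how a and c follow from an RSM peak position, the sketch below inverts the standard hexagonal plane-spacing relation for an asymmetric (h k . l) reflection such as (1 0 -1 10). The q = 2π/d convention and the bulk-like check values are assumptions made for the example, not data from this study.

```python
import numpy as np

def hex_lattice_from_rsm(q_par, q_perp, h=1, k=0, l=10):
    """Lattice constants a and c of a hexagonal cell from the in-plane
    (q_par) and out-of-plane (q_perp) coordinates of an asymmetric
    (h k . l) reflection, defaulting to (1 0 -1 10).
    q is in 1/Angstrom with the q = 2*pi/d convention, from
    1/d^2 = (4/3)(h^2 + h*k + k^2)/a^2 + l^2/c^2."""
    a = 2.0 * np.pi * np.sqrt(4.0 / 3.0 * (h**2 + h * k + k**2)) / q_par
    c = 2.0 * np.pi * l / q_perp
    return a, c

# Consistency check with bulk-like V2O3 values (a ~ 4.95 A, c ~ 14.0 A):
print(hex_lattice_from_rsm(q_par=1.466, q_perp=4.488))  # ~ (4.95, 14.0)
```

Surface morphology.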
Atomic force microscopy scans recorded for films deposited within the high quality epitaxy temperature window display low roughness surfaces exhibiting atomic terracing, see Fig. 3. Outside of this window the films exhibit a granular structure with increasing roughness. For the film grown at 670 °C there is a clear change in morphology and crystal islands of 4–6 nm height can be seen. A similar change in morphology, of lesser extent, is observed to form at 620 °C. In Fig. 4, the root mean square roughness extracted from the images can be seen. Most of the films have similar roughness values of well below 0.5 nm. The films grown at 530 °C and 575 °C show considerably lower values, approaching 0.25 nm. The surface roughness increases dramatically at 670 °C, where it is almost 4 times larger at 1.69 nm.

Electrical characterization. As the structural phase transition and MIT in V2O3 are linked, the easiest way of observing it is with resistance measurements. Figure 5 shows resistance measurements for both decreasing and increasing temperature between 10 and 300 K. The data was recorded for films fabricated at different deposition temperatures. Visible in the figure is the hysteresis associated with the first order phase transition of V2O3. Films grown below 400 °C exhibit high resistance at room temperature with a very narrow hysteresis (type I). As the growth temperature is raised, the room temperature resistance is reduced and the hysteresis increases (type II). For temperatures above 600 °C the transition becomes much sharper and a large hysteresis is seen (type III). However, the transition temperature is shifted to a lower value compared to the bulk value of around 155 K. This shift in transition temperature has been linked to the stoichiometry of the film 8 as well as the local strain in the films 10. The ratio between the vertical and lateral lattice parameters has been discussed as a governing factor in modifying the transition temperature 5, both for overall thin film properties as well as under local stress induced using contact probe pressure 29.

Sputtering power dependence. The properties of V2O3 films are strongly dependent on the stoichiometry. Within this study several growth parameters are varied which directly affect not only the crystallographic structure and quality of the films but also the stoichiometry. The most apparent parameter affecting the stoichiometry is the amount of oxygen present in the chamber, enabling the oxidation of the vanadium atoms sputtered from the vanadium target. The stoichiometry is therefore also directly dependent on the amount of vanadium atoms emanating from the vanadium target, which can be controlled directly by the magnetron sputtering power.

XRD. Figure 6a shows X-ray diffraction scans for a series of V2O3 films grown with different power settings while maintaining other parameters fixed. For this series the substrate temperature was 485 °C and the O2 flow rate was 1.6 sccm. Thickness characterization by X-ray reflectivity revealed the growth rate to increase with the power setting, with values of 55, 86, 121 and 163 pm/s for the 100, 150, 200 and 250 W settings, respectively. In order to maintain a fixed film thickness for the series, the deposition time was controlled to compensate for the changing growth rate.
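As a rough consistency check of this thickness compensation, the deposition time for a fixed film follows directly from the measured growth rates; the short sketch below just carries out that arithmetic, taking the nominal ~60 nm film thickness of the study as the target.

```python
# Growth rates measured by XRR for the power series (from the text).
rates = {100: 55, 150: 86, 200: 121, 250: 163}  # sputtering power (W) -> pm/s
thickness_pm = 60_000                            # nominal ~60 nm film

for power_W, rate in rates.items():
    t_s = thickness_pm / rate
    print(f"{power_W} W: {t_s:.0f} s (~{t_s / 60:.1f} min)")
# 100 W needs ~1091 s (~18 min), while 250 W needs only ~368 s (~6 min).
```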
For power settings in the range 100–200 W the films display a highly epitaxial nature, with a strong V2O3 (0006) peak and Laue fringes extending on both sides revealing the vertical coherence length to be close to that of the film thickness. The peak position varies only slightly for the three films, indicating that the general crystallographic nature is not changed substantially within this confined power range, although it extends over a factor of two. At the highest tested setting of 250 W the crystal quality is reduced, although this film is still highly textured, with a corresponding decrease in peak intensity. This film furthermore displays a higher peak position, indicating an increased compressive strain in the out-of-plane direction compared to the epitaxial films.

Electrical characterization. As has been shown in this paper, the deposition settings strongly affect the structural as well as electrical properties of V2O3 films. This is especially clear for the temperature dependence of the electrical resistance of the films, as slight changes in deposition conditions, although not causing large changes in the structural quality of the films, can strongly affect the scale and magnitude of the MIT of the material 8,11. Figure 6b shows the resistance of V2O3 films deposited at different magnetron sputtering power and O2 settings. Similar to changes in the resistivity for films deposited at different O2 flow settings, the temperature dependent resistivity of the films is directly dependent on the sputtering power.

Figure 4. The root mean square roughness taken from the AFM images as a function of temperature (Fig. 3). The value for the 670 °C film is cut off from the graph as it was an extreme outlier; its value was 1.69 nm.

Figure 5. Resistance measurements as a function of temperature for films deposited under different substrate temperatures. The results reveal three distinct classes of resistance curves which can be classified into three types of films depending on their growth temperature. Type I (< 400 °C) display a high room temperature resistance with a narrow hysteresis, type II films (grown at 400–600 °C) display a lower room temperature resistance and a stronger hysteresis, and type III (> 600 °C) show a sharper transition with a stronger hysteresis shifted to lower temperatures.

Comparing the results of resistance measurements for films deposited at higher power to films deposited at lower power but with a smaller O2 flow setting reveals clear similarities. A higher power at a fixed O2 flow setting effectively increases the metallic portion of the sputtered flux from the magnetron, increasing the V/O ratio in the films. Even though differences in the plasma chemistry are expected during growth with different power settings, such as for the ionization level and stoichiometry, the films show the same structural and transport properties. The metal-insulator transition can therefore be both directly tuned through the O2 flow during deposition as well as through the power setting, while maintaining a highly defined single crystalline structure. The control of the O2 flow or sputtering power allows us to tune exactly the amount of oxygen interstitials (or excess oxygen in the film) at a given deposition temperature, resulting in films with exactly the same transition behaviour. Any other means which can cause the same phenomenon (i.e., control of oxygen incorporation) can result in the same transition performance.
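One simple way to quantify the transitions discussed here is to locate, on each temperature branch of an R(T) sweep, the point of steepest change of log R and take the heating/cooling offset as the hysteresis width. The sketch below is an illustrative recipe under that assumption, not the analysis actually used in this study.

```python
import numpy as np

def transition_temperature(T, R):
    """Estimate the MIT temperature of one R(T) branch as the temperature
    of steepest change in log10(R); T in K (ascending), R in Ohm."""
    slope = np.gradient(np.log10(R), T)
    return T[np.argmax(np.abs(slope))]

def hysteresis_width(T_cool, R_cool, T_heat, R_heat):
    """Width of the thermal hysteresis: difference between the transition
    temperatures of the heating and cooling branches."""
    return (transition_temperature(T_heat, R_heat)
            - transition_temperature(T_cool, R_cool))
```

Applied, for example, to the 10–300 K cooling and heating sweeps recorded for each film, this yields one transition temperature per branch and a single hysteresis width per sample.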
Discussion. In this study we focus our attention on the effect of the growth settings on the structural and electrical properties of V2O3 films grown by reactive dc-magnetron sputtering. The electrical properties of the films show a strong dependence on the deposition conditions, varying in magnitude and transition temperature even when structural variations are observed to be minor from X-ray diffraction measurements. Figure 7a shows the c lattice parameter as a function of the a lattice parameter for all of the films. It can be seen that there is a clear trend in the distribution of the lattice parameters. The unit cell expands preferentially with regard to the a lattice parameter at lower temperatures. From the results presented in the graph, the dominant factor affecting the observed lattice parameters is the growth temperature, while the oxygen flow rate affects the parameters to a lesser extent and preferentially the a lattice parameter. This result is further illustrated in Fig. 7b,c, which show the c/a ratio and unit cell volume as a function of growth temperature. Films grown at a higher temperature show a c/a ratio corresponding closer to the bulk value with increasing growth temperature, while films grown at lower temperatures reveal a lower c/a ratio. For the temperature series, films grown at the higher temperatures had larger a and smaller c lattice parameters than films grown at the lower temperatures, with the highest temperature reaching bulk values in both the a and c lattice parameters. Although the point of origin for these values is shifted, this expansion of the a lattice parameter and compression of the c lattice parameter is in agreement with the thermal expansion of V2O3: with increasing temperature the a lattice parameter has been observed to increase while the c lattice parameter decreases, up to ∼600 °C 31.

Figure 6b shows the resistance as a function of temperature for the series grown at 485 °C with different sputtering power. The graph shows a clear change, with an increase in the resistance and a shift of the transition to higher temperatures with increasing power (i.e., with reduced oxygen content of the films). These results indicate that extra oxygen incorporated in the films sits at interstitial sites, which give rise to an increase in the a lattice parameter while having much less impact on the c lattice parameter (Fig. 7a), and effectively a small change in the c/a ratio as seen in Fig. 7b. Comparatively larger changes are observed in the unit cell volume of the films, as it scales proportionally more strongly with the in-plane lattice parameter. The increased strain in the a lattice parameter arising from the interstitial atoms further stabilizes the metallic state, suppressing the formation of the insulating state 11. These changes in the transition are in agreement with recently published results which show that interstitial oxygen defects, as parts of O Frenkel pairs in V2O3, lower the energy cost of the transition and thereby shift the energy balance in the crystal towards the high temperature metallic phase, reducing the temperature of the transition 20.
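For reference, the c/a ratios and unit-cell volumes discussed here follow from standard hexagonal cell geometry, V = (√3/2)a²c; the sketch below evaluates both, with bulk-like numbers used purely for illustration.

```python
import numpy as np

def hex_cell_metrics(a, c):
    """Return the c/a ratio and the unit-cell volume V = (sqrt(3)/2)*a^2*c
    of a hexagonal cell (a, c in Angstrom; V in Angstrom^3)."""
    return c / a, (np.sqrt(3.0) / 2.0) * a**2 * c

ratio, volume = hex_cell_metrics(4.95, 14.00)   # bulk-like V2O3 values
print(f"c/a = {ratio:.3f}, V = {volume:.1f} A^3")  # c/a ~ 2.828, V ~ 297.1
```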
A similar series where the oxygen flow rate was varied, but at a higher growth temperature of 575 °C, revealed an increased a lattice parameter with the oxygen flow setting but a substantially smaller c lattice parameter compared to the 485 °C series, in accordance with the c lattice parameter being closer to bulk values. With the higher growth temperature the films relax towards bulk values of the c and a lattice parameters (Fig. 2a,b), with a larger relative change in the a lattice parameter and a larger impact on the c/a ratio. Coupled to this reduction in the a value, the probability of accommodating oxygen interstitials is reduced and the suppression of the transition is lessened, as can be seen in Fig. 5, where films grown at a higher temperature reveal a more pronounced MIT.

Changing the deposition temperature, we observe three distinct types of transitions. Films grown at low temperatures, below 400 °C, display a high room temperature resistance with a narrow hysteresis shifted to higher temperatures (type I). Films grown in the temperature range 400–600 °C display a stronger hysteresis and lower room temperature resistance (type II). At the higher temperatures (above 600 °C) the films display a sharper transition with a larger hysteresis which is shifted to lower temperatures (type III). The higher room temperature resistance of the type I films could be due to the lower crystallinity of the films, as can be seen in the reciprocal space maps shown in Fig. 1, which display broader peaks of lower intensity compared to the other films. Type I films should therefore include more domain boundaries which act as scattering points for electron conduction. For type II films the crystallinity of the films is well defined. These films display transitions which can be directly tuned through the growth conditions, i.e., their electronic properties are directly dependent on the stoichiometry via oxygen interstitials. In type III, the films are fully relaxed and act similarly to bulk V2O3, thus showing wider hystereses and more prominent transitions, although they are shifted towards lower temperatures as the growth temperature is increased.

Previous results published in the literature have observed a correlation between the temperature and magnitude of the MIT and the c/a ratio of the lattice parameters 5,32,33. Figure 8 shows a combined plot of the electrical resistance of the films fabricated for this study arranged according to their c/a values. The graph highlights the onset temperature of the MIT and the extent of the hysteresis in the transition with respect to the c/a values. The onset temperature of the transition shows a clear dependence on the c/a ratio, increasing in temperature with decreasing c/a value. The extent of the hysteresis shows, however, limited dependence on the c/a value. This result is especially clear for films with transitions of type II, which have a c/a ratio in the range 2.805–2.815, where the amount of oxygen interstitials affects the transition strongly. These results therefore clearly illustrate that the crystallographic lattice parameters are not necessarily the dominant factors in controlling the MIT of reactively sputtered V2O3 thin films, even though structural analysis reveals the films to be of a highly crystalline nature.
Changes in the MIT can therefore not be directly linked to epitaxial strain in the films, as the role of defects, interstitial oxygen and local strain in the film can not be neglected when investigating the coupling between the crystallographic nature of the films and the MIT.

Conclusions

We have shown that using reactive magnetron sputtering, highly epitaxial thin films of V2O3 can be grown on sapphire substrates. The high level of crystallinity can be reached at temperatures substantially lower than reported previously, within a 200 °C wide temperature range. Within this range the surface roughness of the films is below 1 nm and the films reveal atomically flat terraced structures. We observe the growth temperature to have a direct effect on the nature, scale and magnitude of the MIT. Films grown at temperatures below 400 °C (type I) display a narrow hysteresis shifted to higher temperatures and resistance values. Films grown within the temperature window 400–600 °C display a stronger hysteresis (type II) which can be tuned by the sputtering power and O2 flow during deposition.

Methods

Thin film growth. The V2O3 thin films studied in this work were all fabricated by reactive dc-sputtering using a custom built magnetron sputtering chamber 34. A vanadium target with 99.5% purity was used, and the sputtering power, substrate temperature and O2 flow setting were controlled and maintained fixed for each film deposition. Prior to deposition the base pressure of the system was below 4 × 10−6 Pa. The sputtering gas was 99.999% pure argon gas with a fixed flow setting of qAr = 20 sccm throughout the deposition. Oxygen gas of purity 99.999% was used at flow rate settings ranging from qO2 = 1.4 sccm up to 2.0 sccm. The chamber pressure during sputtering was maintained at 0.4 Pa using a throttle valve in front of the turbomolecular pump. For this study we use 1 × 1 cm2 single crystalline sapphire substrates with c-plane [0001] surface orientation. During growth the substrate temperature was controlled in the range of 300–700 °C using a 3.8 cm diameter circular plate heater located 4 mm below the substrate holder. The deposition temperature corresponds to the temperature on the sample holder as determined by calibration of the temperature as a function of the heater power settings. The sputtering power was varied between 100 and 250 W. Prior to insertion into the vacuum chamber, the substrates were cleaned, sequentially, in ultrasound with acetone, methanol and isopropanol, for 5 min each. Following the chemical cleaning, the samples were rinsed with deionized water, dried with N2 and subsequently put into the load lock of the vacuum chamber. Under vacuum the substrates were annealed at 620 °C for 20 minutes and then allowed to reach their respective deposition temperature for 15 min. Before deposition the target was pre-sputtered in pure argon for 8 min, followed by another 7 min in the intended argon/oxygen mixture before opening the shutter. The chamber pressure was monitored with two different gauges. Firstly, a full range combined Pirani and cold cathode gauge, for high and low pressure measurement, capable of measuring from atmosphere down to 5 × 10−7 Pa. Secondly, a capacitance manometer, for high accuracy measurement of the absolute pressure during growth, operating in the range between 10 Pa down to 0.001 Pa. The thickness of the films was maintained at ∼60 nm through timing of a shutter located in front of the sputtering magnetron.
After finishing the deposition, the power to the heater was turned off and the films allowed to cool to room temperature before being retrieved for ex-situ characterization.

Structural characterization. The structural properties of the films, which comprise the focus of this study, were investigated by X-ray diffraction (XRD), X-ray reflectivity (XRR) and reciprocal space mapping (RSM) measurements using a Panalytical X'Pert Pro diffractometer at room temperature. For the X-ray measurements there were two different optics setups. Firstly, for the 2θ/ω scans a two-bounce hybrid monochromator with a 1/8° slit was used on the incident side, while the diffracted side had a parallel plate collimator (0.27°) with a 0.1 mm slit. Secondly, for the ω rocking curves and RSMs, the incident side had the hybrid monochromator while on the diffracted side there was a triple axis analyzer crystal with a 12′′ acceptance angle. From the XRR measurements the thickness and density of the films were determined by fitting using the X'Pert Reflectivity program. For all the films the density was close to the bulk value, being in the range 4.95–5.02 g/cm3, whereas the bulk value is 5.02 g/cm3 30. The thickness of the films investigated in this study was between 55 and 65 nm. X-ray diffraction measurements and reciprocal space mapping were utilized to determine the internal crystallographic parameters, such as the in-plane and out-of-plane lattice parameters, lateral correlation length and mosaicity of the films. The surface morphology of the films was investigated by atomic force microscopy using a Park XE-100 instrument in both contact as well as non-contact mode.

Electrical characterization. The resistance of the films was recorded using a mini cryogen-free system from Cryogenic UK Ltd. Measurements were performed with a two-point setup using a Keithley 2400 sourcemeter connected to a Keithley 7001 switch system with a 7012-S 4 × 10 matrix card. With this setup it was possible to measure two samples simultaneously. The sourcemeter and the switch card have 10 GΩ and 1 GΩ input resistances, respectively, limiting the maximum resistance measurable to approximately 5 GΩ. To connect leads to the sample, contacts were e-beam evaporated onto the edges of the films using a shadow mask. These contacts consisted of 5 nm of chromium followed by 100 nm of gold. Wires from the system were attached to the contacts using conductive silver paint. Each sample was scanned decreasing the temperature from 300 to 10 K, followed by a second scan increasing the temperature from 10 K up to 300 K. Both temperature scans were performed at a scan rate of 0.8 K/min.
6,708.8
2021-03-18T00:00:00.000
[ "Materials Science", "Physics" ]
The illusion confusion

In Batty (2010b), I argue that there are no olfactory illusions. Central to the traditional notions of illusion and hallucination is a notion of object-failure—the failure of an experience to represent particular objects. Because there are no presented objects in the case of olfactory experience, I argue that the traditional ways of categorizing non-veridical experience do not apply to the olfactory case. In their place, I propose a novel notion of non-veridical experience for the olfactory case. In his (2011), Stevenson responds to my claim that there are no olfactory illusions. Although he agrees that it is natural—or at least commonplace—to think there are no olfactory illusions, he argues that there are and provides examples of them, many of which he suggests have analogs in the visual and auditory domains. In this paper, I examine the nature of the disagreement between us. I argue that Stevenson fails to argue against my conclusion that there are no olfactory illusions.

INTRODUCTION

AGAINST OLFACTORY ILLUSIONS

Let me begin with an overview of my previous arguments 1. In Batty (2010a), I argue for a view according to which olfactory experience has representational content—that is, there is a way that the world appears to a subject when she has an olfactory experience. I set this discussion against suggestions previously in the literature (albeit brief) that olfactory experience may have no representational content—that is, that there is no way that the world appears to a subject when she has an olfactory experience 2. These are views according to which olfactory experiences are "mere sensations," or "raw feels." I argue that driving these suggestions are differences between visual and olfactory phenomenology—that is, differences in what these two kinds of experiences are like for the subject. Visual experience is incredibly rich, seemingly offering up an array of three-dimensional objects. For this reason, the view that visual experience is world-directed—indeed directed at the objects in our environment—comes naturally to us, with the most common version of such a view being the representational, or content, view. The case of olfactory experience is different.

1 It must be noted that all of my previous arguments concern human olfaction. I will have something to say about the olfaction of other creatures at the end of the paper.

2 For example, both Peacocke (1983) and Lycan (1996, 2000) suggest that the phenomenology of olfactory experience does not uphold a representational view. In the opening chapter of his Sense and Content (1983), Peacocke suggests that "a sensation of [smell] may have no representational content of any sort, though of course the sensation will be of a distinctive kind" (3). This is all he has to say, however. Still, his remarks suggest a sensational view of olfactory experience. Echoing Peacocke, Lycan claims that "phenomenologically speaking, a smell is just a modification of our consciousness, a qualitative condition or event in us" (2000, 281), "lingering uselessly in the mind without representing anything" (1996, 245). Lycan does go on to argue that olfactory experience is representational; but it is clear from these remarks that he thinks that we cannot uphold such a view on the basis of the phenomenology of olfactory experience. He, in turn, proposes that the appropriate notion of content for olfactory experience is a teleological one (1996).
Although we might think that it presents a wealth of apparent properties, it does so with much less structure than its visual counterpart. As I have put it elsewhere, compared to visual experience, olfactory experience is "just plain smudgy." Despite this, I argue that there is a representational view of olfactory experience available and, as it turns out, we are able to draw that view from a certain debate about visual content. In the visual domain, there is significant disagreement about how visual experience represents that objects are thus and so. One view is that visual content is abstract and that your visual experience of a ripe tomato, for example, represents that there is "something or other" at a given location that is red, round, and so on. This view is contrasted with the view that visual content is object-involving. On this view, the tomato itself (that very thing, there, before you) is a constituent of the content of your experience. That is, your experience represents that the particular tomato is at a given location and it is red, round, and so on. Unlike what the abstract view claims, your experience does not represent merely that "something or other" has those properties. Drawing on several examples, I argue that olfactory experience does not represent particular objects in the way that some have argued vision does and, as a result, an object-involving view of olfactory experience is not available 3. These examples all draw on what we might call day-to-day, or typical, olfactory experiences—namely, those that we have out in the world and not those that we might have in a controlled laboratory environment 4. As most of us will never find ourselves in the laboratory environment, there remains an interesting question regarding the content of our typical olfactory experiences. Examining these typical cases of olfactory experience, I demonstrate that everyday olfactory experiences do not possess the robust spatial representation present in the visual case and, as a result, do not allow us to single out particular objects in our environment 5. That is to say, unlike visual experience, olfactory experience does not reveal the particular objects that, in the case of veridical experience at least, bear the olfactory properties that it presents. This claim, I argue, is just the claim that olfactory experience does not achieve figure-ground segregation. Still, as I argue, an abstract view is a remarkably good fit for the olfactory case, and suggestions that olfactory experience is merely sensational incorrectly cast an object-involving view as the only option for olfactory experience. The right view about the representational content of olfactory experience, I conclude, is one according to which it has a weak form of abstract content. In any circumstance, a given olfactory experience represents that there is something or other "here," or "at" the perceiver, that has certain olfactory properties. I call this the abstract view of olfactory content.

In Batty (2010b), I turn to issues of misrepresentation with respect to the typical olfactory experience. In particular, I argue that the abstract view of olfactory content explains some of our intuitions about how olfactory experience can misrepresent the world. I point out that the notion of an olfactory hallucination is something that comes naturally to us while the notion of an olfactory illusion does not. This is reflected in the scientific literature on olfaction, in which reference to hallucination is common, but illusion rare.
It has also been reflected in the philosophical domain—albeit in personal conversation and not in print—with a hesitancy in answering the question "Are there olfactory illusions?" As we know, the answer to the visual analog is quick and easy: yes, there are visual illusions, and there are many examples at the ready. In my experience, the olfactory question is met with a sense of cautiousness, even confusion, over just what the question itself is asking. Whether there are olfactory hallucinations, however, is met with immediate assurances that there are. Taking this discrepancy as a datum, I argue that the abstract view of olfactory content can explain the discomfort we have with the notion of an olfactory illusion as well as the apparent comfort we have with its counterpart—the olfactory hallucination. What the abstract view shows us is that, in the case of olfactory experience, the traditional distinction between illusory and hallucinatory experience does not apply. In turn, it directs our attention to a novel notion of non-veridicality—one that has been absent from philosophical discussions of illusion and hallucination.

Traditionally, philosophers have thought that a perceptual experience can misrepresent, or be non-veridical, in one of two ways: the experience can be illusory or it can be hallucinatory. To take a common example, a navy blue sock can look black to you. What you suffer in this case is an illusion with respect to the sock's color. The sock is there, but your visual experience "gets its color wrong"; the experience attributes a property to the sock that the sock does not have. In the case of a hallucination, there is no object there and your experience is not accurate even in that sense. Macbeth famously suffers in just this way; there is no dagger before him and when it appears as though there is, he undergoes a hallucination. Central to the traditional notions of illusion and hallucination, then, is a notion of object-failure; in each, an experience fails in representing a particular object. This much illusion and hallucination have in common. But the nature of that object-failure falls into two kinds. In the case of illusion, a visual experience misattributes a property to an existent object. In the case of hallucination, experience reports that there is an object there, when there is no such object. This difference in the kind of object-failure committed marks what I call the "traditional distinction" between illusion and hallucination.

In order to see why the traditional distinction does not apply to the olfactory case, consider for a moment the visual case. In the case of the typical visual experience, we can ask two separate questions of the object of experience, o:

For any property F that o appears to have, does o really have F? (V-Attribution)

Is o there at all? (V-Existence)

If the answer to either is "no," then visual experience fails to present an object accurately. As I put it above, it commits object-failure. But, as we know, they commit object-failure in different ways. If the answer to V-Attribution is "no," my experience misattributes a property to an existent object. And if the answer to V-Existence is "no," my experience reports that an object is present when it is not. This difference in the kind of object-failure committed—the difference between visual illusion and visual hallucination—is marked by the different content of V-Attribution and V-Existence, in what we ask of a given object of experience. Now consider the olfactory case.
If there were olfactory analogs of V-Attribution and V-Existence, we could ask of an object of olfactory experience, x:

For any olfactory property F that x appears to have, does x really have F? (O-Attribution)

Is x there at all? (O-Existence)

But, as I have argued previously, olfactory experience only ever reports that there is something or other at a perceiver that is F. This is unlike the visual case, where a perceiver's experience typically represents particular objects in one's environment. That is to say, unlike visual experience, olfactory experience is disengaged from any particular object. This is why an object-involving account of its content is unsuitable. In what follows, I will refer to this point as the claim that there are no "presented objects" in olfactory experience 6.

This explains why we are uncomfortable with the notion of an olfactory illusion. The idea that a smell is misattributed to an object does not grip us, and this is because the content of olfactory experience does not support this kind of claim. That is, in olfactory experience, there is no particular thing of which we can ask, as in V-Attribution, "it appears to be F, but is it really as it appears?" For this reason, I conclude that there are no olfactory illusions 7.

But, now we are faced with a puzzle. This is because, for the same reasons, there are also no olfactory hallucinations. There is no particular thing of which we can ask, as in V-Existence, "yes, it appears to be there, but is it?" But, as I have argued, the notion of an olfactory hallucination is a notion that we are comfortable with. If what I say about the illusion case is right, however, it ought not to be. The abstract view of olfactory content can solve the puzzle. As we have seen, the abstract view draws attention to the kinds of questions that we are unable to ask of olfactory experience—namely, questions that refer to particular objects. But, as any account of content will, it also draws attention to the kinds of questions that we are able to ask in evaluating an olfactory experience. And, considering these questions, I argue, is the key to solving the puzzle.

What questions are we able to ask, then? Given the content of olfactory experience, we can ask of a given olfactory experience and an apparent property F: is there something or other at the perceiver that is (or has) F? In asking this question, we do not pick out any particular object (as olfactory experience does not allow for this). Rather, we ask whether there is anything at all around that is F. And, due to its content, a question of this type is the only one we can ask when evaluating an olfactory experience for veridicality. Notice, however, that this question bears similarities in form to O-Existence—the question that is meant to capture a traditional notion of hallucination for olfactory experience. O-Existence asks whether a particular object that appears to be F is around; the present question asks whether there is anything around that is F. We do not ask whether F has been misattributed to an object—as we would in O-Attribution—but whether F-ness is instantiated at all. The only difference between the present question and O-Existence is that it is not a particular object after which we ask.

6 I use "presented objects" to denote circumstances in which olfactory experience presents particular objects, as an object-involving view of its content would have it.
7 Note that it will not help here to argue that sometimes physical objects ("source objects," as we might call them) seem to have properties that they do not in fact have. My claim is that, given the nature of the phenomenology of olfactory experience, we are never in a position to know what particular object has, or is the source of, the properties that we perceive. That is to say, while olfactory experience predicates properties of "something or other," it is otherwise silent on the nature of that object—whether it be, in fact, an odorous effluvium or a "source object." Interrogating olfactory experience further will not tell us what olfactory objects are. So, although we do attribute—and at times incorrectly—properties to source objects, we do not do this on the basis of olfactory experience alone. Arguably, when we do, we do so on the basis of a network of background beliefs about source objects gained from past experience and/or the exercise of other modalities in discovering those sources. Again, those source objects are not revealed to us in olfactory experience itself and, as a result, any mistaken attribution to them we make does not provide a counterexample to my conclusion.

Instead, we ask after a certain property. In each case, however, we ask whether it exists or, better yet, is there. Because of these similarities, I argue that it is understandable that the notion of an olfactory hallucination resonates with us. To be sure, as it turns out, it is not the traditional notion of hallucination that does. But it is a notion of hallucination nonetheless—and a novel one at that. As we have seen, when olfactory experience is non-veridical, it incorrectly reports that something or other at the perceiver has a certain property. But this is just to say that when olfactory experience is non-veridical, it incorrectly reports that a certain property is present in the perceiver's environment. As a result, I conclude that the notion of non-veridicality that is suited to olfaction is one of property hallucination. It is a notion of misrepresentation, or non-veridicality; but it is one that is disengaged from any particular object. This novel notion of non-veridicality explains two features of the olfactory case. First, it provides the key to understanding why we are comfortable with the notion of an olfactory hallucination, but not comfortable with that of an olfactory illusion. Secondly, in providing a new way of thinking of non-veridicality for the olfactory domain, it also solves the puzzle brought about by the conclusion that there are no olfactory illusions. In particular, it draws attention to reasons for thinking that there are olfactory hallucinations other than those provided by the traditional distinction between illusion and hallucination 8.

IN SUPPORT OF OLFACTORY ILLUSIONS: STEVENSON'S VIEW

In what follows, I will take the premises of my argument for granted—in particular, the claim that, in the typical olfactory case, olfactory experience does not achieve figure-ground segregation and, in turn, object-involving status. Recently, Richard Stevenson has responded to my argument that, based on these considerations, there are no olfactory illusions 9. As we will see, although they embody conclusions of empirical study, Stevenson's own examples of illusion comprise contextual and constancy effects that could, or do, occur in day-to-day olfactory interactions with the world.

8 One might worry that my claim that non-veridical olfactory experiences are best characterized as property hallucinations blurs certain intuitive distinctions that we make.
For example, consider the following two cases: (1) a case in which there is no odorant at all in the room, and yet you smell coffee, and (2) a case in which there are only dry flowers in the room but in which you misrepresent their smell as coffee. On my view, the experiences of each would both count as property hallucinations. They are each cases in which, on the abstract view, the content of their respective experiences will be the same. And, in turn, in evaluating the veridicality of each, all we can ask is "is the coffee smell instantiated?" Still, just because the content of olfactory experience does not distinguish between a case in which we have an odorant, or odorant source, and one in which we do not, this is not to say that we cannot maintain the intuitive difference between these two cases. It remains open to explain that difference as a result of inference from past experience, background beliefs as well as the contribution of other sense modalities—the latter, in particular, for the case of (2). See also fn. 7 for a related point.

9 Stevenson does not directly address my notion of property hallucination. Given that my arguments for property hallucination in the olfactory case turn on my arguments against the existence of olfactory illusions, we can interpret his failure to do so as resulting from his denial of my conclusion regarding olfactory illusion. If there are olfactory illusions as tradition would have them, then there is no need to posit a novel notion of non-veridicality for the olfactory case. I will, however, return to the benefits of this novel notion later in the paper.

The empirical studies he cites simply make it clearer that there are such effects. As the point of the present paper is to examine whether Stevenson's cases succeed in overturning my arguments against olfactory illusions in these typical olfactory cases, my and Stevenson's question is the same: are standard cases of non-veridicality for olfactory experience rightly characterized as olfactory illusions? Stevenson's argument proceeds in two roughly consecutive stages. First, Stevenson argues that there are olfactory illusions by drawing attention to those cases in which we find them. Secondly, Stevenson examines why the notion of an olfactory illusion has not resonated with us. In this way, his approach is like mine. It is true, according to Stevenson, that we are (or have been) uncomfortable with the notion of an olfactory illusion. Like me, he believes that this is in need of explanation. Stevenson begins by spending some time discussing the term "illusion" and the kinds of phenomena that it denotes. He tells us that the term "illusion" derives from the Latin "illusio" which, as he cites, has the following meaning: "deceit, to mock or make sport with, the saying of the opposite of what is meant" (1888) 10. Stevenson takes this definition to involve both an objective and a subjective component. On the objective side, a subject is presented with what is not the case—the "opposite" of what is the case, as the definition states. In this way, the subject is deceived, mocked, or made sport with. But, given that the subject is deceived, she does not notice that there is a disparity between the way the world is and what is being presented to her as the case.
Still, she is capable of noticing, Stevenson suggests, given the right kind of circumstances or instruction. This is what Stevenson means by the subjective component of the definition. I take it that it is the term "deception" which "suggests a potential for subjective awareness of [the] disparity" (1888); "illusion," defined in terms of "deception," also carries with it that suggestion. As Stevenson notes, these two aspects of the meaning of "illusion" are not always apparent in the empirical literature on olfaction. Rather, it is the objective component of the term that has currency of use. Although there are subtle differences in the use of "illusion" in the empirical literature, he tells us that, in general, it is used to refer to "a disparity between some objective state of the world and ones [sic] perception of it" (1888). This forms what I will call his working definition of "illusion." This definition, he claims, captures those phenomena that psychologists accept as cases of visual, auditory and somatosensory illusions. Although Stevenson claims that this definition proves enough to pinpoint cases of olfactory illusion, he recognizes that it leaves out any reference to an awareness of the misrepresentation. As he claims, this omission is of little consequence for the cases of visual, auditory and somatosensory illusions. But, as he argues, it has invited the view that there are no olfactory illusions. As evidence of our resistance to the notion of an olfactory illusion, he observes, like me, that the indices of many popular perception textbooks, as well as those of recent specialist books on olfaction, lack any mention of olfactory illusion.

10 All references to Stevenson will be to Stevenson (2011).

As a way of drawing out the difference between us, then, Stevenson argues that we could take this evidence as indicating one of two things: either (1) that there are no olfactory illusions or (2) that those illusions escape notice. As I outlined above, I argue for (1), and this itself explains our discomfort. As we know, my arguments turn on the traditional distinction between illusion and hallucination together with observations about the phenomenology of olfactory experience. Because olfactory experience is not object-involving, the notion of an olfactory illusion not only has no resonance with us, but also has no application to the olfactory case. Unlike me, Stevenson opts for (2). After arguing that there are cases in which olfactory illusions occur, Stevenson claims that we are typically unaware of having experienced an olfactory illusion, and this accounts for why we might think that there are none. He states this point in terms of verification. We are not only typically unaware that we are undergoing (or have undergone) an olfactory illusion; even if we suspected that we were, we are unable in most cases to verify whether we are (or were) in fact suffering one. Still, as he claims, we would be mistaken to move from this epistemological point to the conclusion that there are no olfactory illusions. Instead, we ought to see our tendency to make this move as the result of a failure to appropriately consider the subjective aspect of the meaning of "illusion" and realize that, unlike their visual, auditory and somatosensory counterparts, olfactory illusions are not the kinds of things of which we are typically aware. In arguing for (2), however, Stevenson first provides evidence against (1). It is his argument against (1) that I am primarily concerned with in this paper.
I will, however, turn to his argument for (2) in my conclusion. At present, I turn to (1).

AGAINST (1): EMPIRICAL EVIDENCE OF OLFACTORY ILLUSIONS

My discussion of (1) proceeds in two stages, in line with what I take to be the two arguments that Stevenson gives for the existence of olfactory illusions. His first argument forms the bulk of his discussion and involves setting out examples of olfactory misrepresentation that fit his working definition of "illusion." The second of his arguments occurs in the discussion section of his paper and requires substantial reconstruction. In reconstructing it, we see that Stevenson employs a further notion of illusion-one that, I argue, is the same as the traditional notion that I adopt. Given this, we see that there are two notions of illusion at work in his paper. I will argue that Stevenson is not successful in showing that, in accordance with either of these two notions, there are olfactory illusions. Let us turn, then, to the first stage of Stevenson's argument. According to Stevenson, what are the cases that we can rightly describe as those of olfactory illusions? Given his working definition of "illusion," each involves a "disparity" (1888), as he puts it, between the way the world is and one's experience of it. In turn, his arguments assume that there is indeed an objective way that the world is with respect to olfactory phenomena (e.g., quality, intensity, hedonic value), and one that could in principle be accurately represented in olfactory experience. As he puts it: "[a] misperception assumes that there is a veridical state, in which the mind accurately reflects some objective state of the environment" (1893). According to Stevenson, cases meeting his working definition fall into two categories, each defined by the type of disparity that exists between the external stimulus and a subject's experience.11 There are the cases in which the same stimulus is experienced differently by a given subject at different times. And there are the cases in which different stimuli are experienced by a subject as the same. According to Stevenson, both of these types of disparity parallel accepted cases of illusion in other modalities.12 Let us consider cases of same stimulus-different percept first. According to Stevenson, this category contains a set of cases in which context is thought to affect olfactory experience-in particular, contextual effects on perceived quality, intensity, and hedonic value. In what follows, I will set out several examples of these contextual effects. Stevenson does provide more cases for each category. He also provides examples of variation in the perceived location of a chemical stimulus, as well as an example of an olfactory analog of binocular rivalry. I will set aside these latter two cases. For my purposes, it is enough to consider the perceptual phenomena that fall under the category of "contextual effects."13 In the qualitative category, Stevenson tells us that experiments have shown that the compound dihydromyrcenal is perceived to be more "woody" when smelled in the context of citrus-smelling odors, and more "citrusy" when smelled in the context of "woody"-smelling odors. In each case, the stimulus remains the same; how a subject perceives that stimulus to be-i.e., the odorant's apparent properties-changes given what other odors it is perceived alongside.
11 In discussing Stevenson's examples, I adopt his use of "disparity" to refer to that difference between the way things appear and the way that they are. It is a term that is rarely used in the philosophical literature, with philosophers often adopting characterizations in terms of the inaccuracy of a representation.

12 I will avoid going into the details of these illusions in other modalities. For present purposes, it is enough to note that he thinks that there is this parallel.

13 I set aside cases of perceived location and binaral rivalry for reasons other than brevity. To give Stevenson's discussion of olfactory localization full treatment would involve dealing with difficult questions regarding the status of the retronasal as truly olfactory. Given that my claims regarding olfactory illusion center on orthonasal olfaction, I consider only the orthonasal. I set aside his consideration of binaral rivalry because it isn't clear that it constitutes an illusion, even in his working sense. In the case of binaral presentation, one's olfactory experience switches back and forth from the presentation of an odor located discretely at one nostril to an odor located discretely at the other. In each case, the odorant is indeed at the nostril at which one's experience represents it as being. What one's experience does not represent is that there is another odorant present at the other nostril. (Assume that experience gets the quality and intensity "right." He does not claim that there is any disparity other than that of localization.) But surely in each case (switching from one nostril to the other) one's experience "accurately reflects some objective state of the world" (1888)-namely, that a certain odorant is located at a certain nostril. What it does not report is that there is an additional odorant located at the other. But this is just a failure to perceive something in one's environment. By Stevenson's own lights, the experience hasn't conveyed any information that is false; it has simply failed to convey all of the information about the perceiver's environment. Accurately representing some objective state of the environment does not involve representing every feature of that environment. That is too strict a constraint on veridicality-arguably one that we would never meet. What matters for determining whether an experience is veridical is whether what experience does represent is represented correctly-i.e., veridically.

If we recall that Stevenson's working definition of an illusion is "a disparity between some objective state of the world and ones [sic] perception of it" (1888), then it would seem that such a case meets this definition. Given that, in each case, the target odorant appears to be "more F," for some apparent property F, the implication is that there is some way that the target odorant is, irrespective of context.14 On Stevenson's definition, then, both the "more citrusy" and "more woody" contextual effects constitute illusions with respect to perceived quality. Stevenson claims that similar effects are reported for perceived intensity and hedonic value. For example, in the case of intensity ratings, experiments have shown that intensity ratings of a range of odor concentrations are affected by intermediate exposure to the same stimulus at weaker, or stronger, concentrations.
So, for example, if, after having initially rated the intensity levels of a range of odor concentrations, subjects are then exposed to a stronger concentration of the same odorant as a biasing task, those subjects later judge the initial concentration range to be less intense. And, as Stevenson tells us, the opposite effect results from intermediate exposure to a weaker concentration. According to Stevenson, this is a case in which there is a disparity between the objective state of the stimulus, as he would put it, and a subject's perception of it. As in the case of perceived quality above, the stimulus remains unchanged throughout the experiment; however, how that stimulus appears to be-that is, its perceived intensity-changes given the context of perception, in this case one created by the biasing task. The suggestion is that, prior to the biasing task, there is no disparity between the intensity properties of the stimulus and the subject's perception of them. It is only after the biasing task that the subject suffers an illusion with respect to the intensity of that stimulus. Finally, in the category of hedonic judgment, Stevenson cites a series of experiments in which labels reflecting positive and negative contexts have been shown to affect judgment of the pleasantness of an odorant stimulus. As he tells us, in a particular experiment, previous exposure with the label "toilet cleaner" (i.e., a negative context) affects the judgment of a pine odor's pleasantness in later contexts labeled "Christmas tree" (i.e., a positive context). Similarly, initial exposure to the same odorant with the label "Christmas tree" affects judgment of its pleasantness in later contexts labeled "toilet cleaner." In the first case, perceivers judged the shift in pleasantness to be less than they did in the second case, when the labels were reversed. This is despite the odorant stimulus remaining constant throughout. Verbal labels, then, can affect judgments of pleasantness. Although Stevenson does not state this explicitly, these are, for him, cases of illusion because of the relation that experience bears to our hedonic judgments. In particular, the case suggests that those judgments are made on the basis of experience, such that a difference in judgment indicates a difference in the associated olfactory experience. It is only if this is true that differences in hedonic judgment could tell us anything about the existence of illusions in the olfactory case. For illusions are cases of perceptual misrepresentation, as Stevenson claims earlier; they cannot be merely matters of inaccuracy of judgment-although, if we take our illusory experiences at face value, our judgments will be inaccurate as well.

14 In line with Stevenson's characterization of illusion, I take it that this is a feature of the odorant that could in principle be represented veridically in olfactory experience. In what follows, I will leave out reference to these counterfactual circumstances. But it should remain understood that, according to Stevenson, they could obtain.

With this in mind, it is clear that, for Stevenson, cases of variation in hedonic judgment involve a disparity between some objective state of the stimulus and a subject's perception of it. The stimulus remains the same, after all. To be sure, in the experiment he cites, this disparity might underlie each of the subject's initial judgments, given that in both cases the odorant is perceived with verbal labels.
It might be that "the veridical state, in which the mind accurately reflects some objective state of the environment" (1893) is one had in the absence of any verbal label. (And, prima facie, this seems plausible). Despite this, even double disparity in this case shows that, on Stevenson's working definition, there are cases of olfactory illusion. That is, if both labeling cases are ones of disparity, then so much the better for his argument that there are olfactory illusions 15 . Now to cases of different stimulus-same percept. In this category, Stevenson cites two instances of perceptual stability, or constancy phenomena. The first example involves intensity. According to Stevenson, research has found that variations in the flow and, in turn, concentration of an odorant over the olfactory epithelium is registered by neural responses of the olfactory nerve. Despite this, such variation does not arise at the level of experience. Rather, despite variation in the concentration of an odorant passing over the olfactory epithelium, subjects perceive odor stimuli as relatively stable with respect to intensity. Stevenson suggests that these results show that the epithelium is not only sensitive to the stimulus itself, but to the rate of airflow over it. Due to this added sensitivity, the olfactory system adjusts for variations in concentration relative to changes in airflow. The result is constancy with respect to the perceived intensity of the stimulus. Given Stevenson's working definition of "illusion," we have a case where there is disparity between the objective state of the stimulus and the nature of the experience resulting from it. In this case, we have a difference in odorant concentration that fails to show up at the level of experience. This subdued sensitivity to differences in an odorant stimulus amounts to an illusion, Stevenson suggests, because a veridical experience of it would represent its actual concentration (presumably in the form of what we call intensity of olfactory quality). Because that actual concentration is not represented at the level experience, Stevenson indicates that atleast some of our representation of concentration is illusory 16 . 15 Stevenson cites similar experiments in which a target stimulus is judged to be more pleasant if presented with odorants that are typically judged to be less pleasant, and less pleasant if presented with odorants that are typically judged to be more pleasant. Again, it must be that, for Stevenson, underlying cases of variation in hedonic judgment is a disparity between some objective state of the stimulus and a subject's experience of it. If this is true, these cases also constitute illusions on his working definition of "illusion." 16 Given that Stevenson presents these as relatively common instances of perceptual constancy, it might turn out that much of our representation of concentration is illusory. It is unclear whether this is something that Stevenson would be happy to accept. One way to avoid that result would be to claim Stevenson's second example involves constancy in perceived quality despite differences in, or changes to, the chemical constitution of an odorant stimulus. Drawing on work he presents in Stevenson (2006, 2007), Stevenson tells us that degraded input, or varying formulations of a stimulus at the receptor site, can be completed at the level of experience. 
Because of the complexity of the olfactory environment, one might not receive information about all of the components of a certain odor stimulus, for example coffee, and yet still be able to smell that that coffee is present. What accounts for this ability are prior encodings of odorant stimuli in the form of stored templates of patterns of receptor excitation in the olfactory cortex. As Stevenson claims, a "perfect fit" (1892) between input and template is not required; rather the olfactory system is able to recognize certain sub-patterns of receptor activation against existing templates of activation. The result is, however, not a "partial" experience of coffee; it is an experience of coffee. Without these templates, Stevenson (2006, 2007) claim, it is unclear how such constancy might be achieved. Like constancy of intensity, then, it would seem we have a case where there is disparity between the objective state of the stimulus and the nature of the experience resulting from it. In this case, we have a difference in chemical constitution that fails to show up at the level of experience. In sum, Stevenson alleges that all of the cases of same stimulus-different experience and different stimulus-same experience involve misrepresentation and, in particular, illusion. He argues that each case involves a circumstance in which there is a disparity between some objective state of the world and a subject's experience of that state. In accordance with his working definition of "illusion," then, these are all cases of illusion. OLFACTORY ILLUSIONS? In what follows, I will take for granted that each of these cases is one that we can assess for veridicality. I will also take for granted that there is some objective state of the world that our olfactory experience is capable of misrepresenting and does so in each of these cases. Given these assumptions, I want to now consider whether, or how, Stevenson's arguments affect my own. As a way of making headway on these questions, it is important to first note that my notion of non-veridicality could handle these cases of alleged illusion 17 . Recall that my notion of nonveridicality involves the consideration of whether, for a certain olfactory feature F, there is anything at all at the perceiver that is F. So, to take the case of dihydromyrcenal as an example, evaluating the "more woody" case for veridicality involves asking whether there is anything at all at the perceiver that has, objectively, that degree of woodiness. Or, as I have also put it, it involves simply that olfactory experience represents concentration relative to air flow over the epithelium. In this case, our judgments of intensity would be more eligible for accuracy at the level of experience. I leave this proposal, however, for another time. The important point is that it is not a proposal that Stevenson wishes to entertain, opting instead for claims of illusion in these cases. 17 In what follows, I will simply refer to my notion of non-veridicality for the olfactory case, as opposed to my notion of property hallucination for it. Given that I argue that the latter is the only way that (human) olfactory experience can be non-veridical, there is no room for confusion here. asking whether, in those perceptual circumstances, that degree of woodiness is instantiated. If the answer is "no," then the experience is non-veridical. As I am assuming with Stevenson, that degree of woodiness is not instantiated at the perceiver-there is nothing at all that is "more woody" at the perceiver. 
In this case, then, the answer to my question is "no," and one's experience in this circumstance counts as non-veridical. Notice, however, that my notion of non-veridicality for olfactory experience is no different from Stevenson's notion of illusion. Remember that, according to Stevenson, an illusory experience involves "a disparity between some objective state of the world and ones [sic] perception of it" (1888). But this is just what, on my notion of non-veridicality for olfactory experience, a non-veridical experience involves. To consider whether F-ness is instantiated at a perceiver is to consider whether the perceiver's experience "accurately reflects some state of [her] environment" (1893). If it does not, then there is a disparity between that state of the environment and a perceiver's experience of it. To return to the case of one's experience of the woodiness of dihydromyrcenal, Stevenson's notion of illusion requires that we ask whether that degree of woodiness is instantiated by some state of the environment, where "environment" presumably denotes the space around the perceiver eligible for inhalation.18 But my notion of non-veridicality asks the same-that is, whether that degree of woodiness is instantiated at the perceiver. Given what Stevenson has told us, then, "Does S's experience of F-ness accurately reflect some state of the environment?" amounts to asking "Given that S has an experience of F-ness, is F-ness instantiated at the perceiver?" Just like Stevenson's notion of illusion, my notion of non-veridicality does not ask after any particular thing that appears to be F. Rather, in asking whether anything at all instantiates F-ness, it asks whether, to use Stevenson's terms, there is a state of the environment in which F-ness is instantiated. As it stands, then, Stevenson's working notion of illusion fails to address my arguments against olfactory illusions. Both of us provide the same analysis of his cases. But if we truly disagree, then we ought to provide different analyses of them. At this point, then, any purported disagreement between us amounts to a mere difference in terminology. He calls his cases of disparity illusions, while I do not. But, other than that label, our characterizations of them amount to the same. Because of this, if Stevenson is to refute my arguments, he must do more to address them directly. I hinted at what else is required above when I claimed that, because my notion of non-veridicality does not ask after any particular thing that appears to be F, it amounts to the question of whether there is a state of the environment in which F-ness is instantiated. My conclusion that there are no olfactory illusions hinges on the observation that olfactory experience is not object-involving, that there are no presented objects in olfactory experience. Recall that, on that traditional way of categorizing non-veridical experience, both illusion and hallucination involve what I call object-failure-that is, a failure to represent a particular object accurately. If there are no presented objects, then that categorization fails. And, as I argue, there are no such objects. This is because the very nature of olfactory experience-its "smudginess," as I have put it-doesn't allow for a distinction between figure and ground. These considerations of phenomenology constitute my reasons for denying that there are olfactory illusions.
What is required for Stevenson to address my arguments, then, is an argument for the conclusion that, in the cases of alleged illusion he cites, there is a presented object that appears to be other than it is. Stevenson appears to argue for just this in his later discussion section-although he does not turn back directly to his example cases. Before moving on to these arguments, it is important to note some potentially misleading claims that Stevenson makes when introducing this discussion. After presenting his alleged cases of olfactory illusion, Stevenson claims that "the apparent actuality of olfactory illusions would seem to call into question Batty's (2010b) claim that olfactory experience lacks object status" (1895). As it stands, this claim is far too quick. It carries with it the implication that Stevenson has discussed his cases of olfactory illusion in terms of presented objects. But he does not make any claim of the sort, focusing instead on states of the environment. But, as we have seen, casting these alleged cases of illusion in terms of mere states of the environment is not enough to address my arguments. As it stands, then, "the apparent actuality of olfactory illusions" does not "call into question Batty's (2010b) claim that olfactory experience lacks object status" (1895).19 As I claimed above, more needs to be said to establish this claim. Stevenson then seems to recognize this when he goes on to claim that olfactory experiences do in fact achieve "object status" (1895). Although he cites other authors who have claimed that olfactory experience achieves object status, it is most helpful to consider what Stevenson himself has argued with respect to this claim. Wilson and Stevenson (2006, 2007) argue for an object-based model of theorizing about olfaction, a model they call the Object Recognition Model (from hereon, ORM). In particular, they argue that olfactory experiences represent "olfactory objects." Given that they also refer to these objects as "odor objects," it is safe to assume that, on the ORM, the objects represented in olfactory experience correspond to odors-or, collections of volatile molecules in a perceiver's environment. One of their common examples is the "coffee object." Returning to a type of view about content that I discussed in section one, we will see that the ORM suggests that olfactory experience is object-involving-that is, that it represents that a particular object is present in your environment, as opposed to some object or other, as my abstract view maintains. In turn, this suggests that Stevenson's notion of illusion at this point of his paper is in fact the more robust, traditional notion rather than the "working definition" that he relies on previously.

19 Strictly speaking, I do not deny that olfactory experiences lack object status. I argue that olfactory experiences represent objects, just not particular objects, and not in a way that allows for olfactory illusion. That is, I argue that olfactory experience is not object-involving. Given this, I will assume that by "lacks object status" Stevenson means "is not object-involving."

If olfactory experience is object-involving, then it is eligible for misrepresentation in both of the traditional ways. In particular, to return to a previous question, we can ask of an object of olfactory experience, o:

For any property F that o appears to have, does o really have F? (O-Attribution)
That is, there is some particular thing of which we can ask, as in O-Attribution, "it appears to be F, but is it really as it appears?" But O-Attribution is the question that captures the traditional notion of illusion. If the ORM is true, then, my claim that there are no olfactory illusions is shown false. What are we to make of the ORM? If the ORM is to constitute a successful response to my argument against olfactory illusions, then olfactory experience must single out objects in the requisite way-that is, it must be object-involving. As a way of understanding why Wilson and Stevenson think it does, it is important to look briefly at the traditional model of theorizing about olfaction that their ORM aims to replace-and why it does so. They call this model the Stimulus Response Model (from hereon, SRM). Given the history of scientific theorizing about olfaction, we can extract two core claims of the SRM. First, the SRM assumes that olfactory experience is analytic-that is, those features of a chemical stimulus that trigger receptor excitation will map onto features of the resulting experience. In other words, the SRM claims that, in some important sense, olfactory experience can be "broken down" into those initial features of the stimulus and/or receptor types sensitive to those features. Secondly, and relatedly, the SRM assumes that a characterization of olfactory experiences is exhausted by an account of how the particular features of the stimulus and/or receptor site are presented in experience. On the SRM, no appeal to objects is necessary to provide that characterization. According to Wilson and Stevenson, the SRM proves unsatisfactory because olfactory experience doesn't live up to the standards that the SRM sets for it. This is because olfactory experience is, as they tell us, largely synthetic. That is to say, rather than producing an experience of an array of discriminable properties, the various properties of the stimulus produce a largely irreducible experience-a "wholistic unitary percept" (2007, 1821), as they put it. One particularly telling way that they deliver this point is by asking us to consider the complexity of the average odorant stimulus. Much of what we encounter with our noses are chemical mixtures. The coffee odor, for instance, consists of over 600 volatile compounds that together give rise to what we might call the "coffee experience." It is a distinctive experience-one that gets us up in the morning. But it is not an experience in which we are able to discriminate anything close to the number of causally efficacious components of the stimulus responsible for it. As has been noted in the empirical literature, it is now commonly accepted that even the experts are only ever able to distinguish two or three of the major components that constitute a given odor. So, while the coffee stimulus has a remarkable complexity, it does not have a perceived complexity.20 Compared to the complexity of the stimulus itself, the coffee experience is simple. It's just of coffee. But this is not the way that our experience of the coffee odor should be if the SRM is true. Although, as Wilson and Stevenson concede, olfactory experience can fail to be wholly synthetic, if it were analytic, our experience of the coffee odor would be different than it in fact is. We might think that, if the SRM were true, there would be no such thing as the coffee smell per se-just an array of apparent properties. But there is.
Given this, the SRM fails to capture the phenomenological facts of our experience. Wilson and Stevenson therefore conclude that it is a misguided model and must be rejected. In place of the SRM, Wilson and Stevenson propose the ORM. We already know that such a view is object-based, that olfactory experience represents "olfactory objects," or "odor objects." We also know that it is safe to assume, given their name, that these objects correspond to odors in our environment. But, what are these perceptual objects? Or, to put it another way, in what sense do odors in the environment show up at the level of experience? Their criticism of the SRM provides the answer to this question. According to Stevenson, odors show up as those "wholistic unitary percepts" (2007, 1821), as the synthetic percepts that the SRM fails to predict. The "coffee object," then, is that largely synthetic percept that results from sniffing the coffee odor. Now, it is not simply because olfactory experience is largely synthetic that Wilson and Stevenson claim it is object-involving. It is rather what it can achieve as a result of its being synthetic that they claim secures the view. According to Wilson and Stevenson, the "defining feature for [perceptual] objecthood" (2007, 1823) is figure-ground segregation, and they argue that olfactory experience can achieve just that.21 Their reasons for thinking so draw on considerations similar to those of Stevenson's case of constancy of perceived quality.22 In order to draw attention to how olfactory experience achieves figure-ground segregation, Wilson and Stevenson ask us to consider the complexity of our olfactory environment. At any given moment, we are barraged with volatile molecules given off by the various things in our environment. Insofar as almost everything in our environment gives off these molecules, we can say that everything smells. And a remarkable number of those molecules make their way to the olfactory epithelia with every intake of breath. Despite this, our olfactory system is able to achieve the most impressive of discriminatory feats. In the midst of the "confusion" of our olfactory environment, as they put it, we are able to smell coffee. The "wholistic unitary percept" (2007, 1821) coffee is an apparent figure, one that stands out in the midst of a complex, and noisy, background. This "experiential prominence" in the midst of that noisy background is what Wilson and Stevenson refer to as figure-ground segregation. It must be noted, however, that, unlike in the visual case, Wilson and Stevenson claim that figure-ground segregation is achieved aspatially. According to Wilson and Stevenson, olfactory experience is, in and of itself, aspatial. To return to our previous example, the coffee object is an apparent object-just not one that is presented in space. Still, according to Wilson and Stevenson, given experiential prominence and, in turn, the achievement of figure-ground segregation, it is an apparent object nonetheless. After all, figure-ground segregation is, for them, the defining feature of perceptual objecthood and, if correctly characterized as such and achieved, constitutes the presentation of an object. Wilson and Stevenson agree with me, then, in an important respect-namely, that spatial figure-ground segregation is not something that applies to olfactory experience. Beyond my and Stevenson's common focus on standard olfactory experiences, then, this is an additional point of agreement between us.
But is this enough to show that, in such cases, olfactory experience presents objects and, in turn, is eligible to be illusory? As a way of answering this question, and in order to compare our respective views, we need to say something more about the ORM. According to Wilson and Stevenson, underlying experiential prominence is the template mechanism that I referred to earlier, in my discussion of Stevenson's case of constancy of perceived quality.23 Wilson and Stevenson argue that, over time, the olfactory system builds up a store of templates in the olfactory cortex of patterns of receptor input. Once stored, these templates allow the system to recognize those patterns against variable arrays of receptor input. In turn, this kind of processing endows us with important discriminatory abilities such as the ability to smell coffee although there are other smelly things about. Contributing to these achievements, then, are learning and memory. In short, the growing store of templates constitutes learning; drawing on those templates in processing olfactory information amounts to the execution of memory.24

23 Again, see page 6 of this paper.

24 Wilson and Stevenson say much more about the physical mechanisms underlying what I have referred to as "template mechanisms." For my purposes, it is enough to provide a model of their view.

If experiential prominence is rightly characterized as figure-ground segregation, then Wilson and Stevenson's view is one according to which olfactory experience is object-involving. This is because the very nature of figure-ground segregation is such that it allows a perceiver to single out a particular object in her environment. We must now consider whether experiential prominence demonstrates that olfactory experience is object-involving and, in turn, secures the claim that it achieves figure-ground segregation. It is not clear that experiential prominence establishes this. The problem lies in the fact that my view is consistent with all of the phenomenological data that Wilson and Stevenson cite. In order to see that this is so, let's return to the coffee example and look at what my view of olfactory representation is able to say about this case. On my view, when we smell the coffee, there is a distinctive property, or set of properties, presented to us in olfactory experience. I will also grant that, in certain circumstances, that property, or set of properties, stands out from other properties instantiated in a perceiver's environment-namely, in those circumstances in which we smell coffee. Given the complexity of the olfactory environment, and the way that olfactory experience is given those facts, it would be foolish to deny this experiential prominence. Moreover, I can also grant Wilson and Stevenson's claim that, in olfactory experience, such prominence is achieved in virtue of a relative match between stored templates in the olfactory cortex and patterns of receptor excitation. Where my view will differ from Wilson and Stevenson's is in what the result of that template matching is-that is, in what that experiential prominence amounts to. On my view, it amounts to the presentation of a property, or a small set of properties presented together as a result of that template matching.25 This much is in keeping with Wilson and Stevenson. But, contrary to what Wilson and Stevenson claim, that those properties "show up" at the level of experience indicates the presence of some object-just not any object in particular.
Notice that, at this point, I have granted all of the perceptual data that Wilson and Stevenson cite in favor of figure-ground segregation. In doing so, I stop short of positing that the presentation of those properties, as distinct in a complex environment, amounts to the presentation of a particular object. But, again, stopping short in this way comes at the expense of none of the perceptual data that Wilson and Stevenson cite in favor of their view. In particular, and most importantly, the data that they take to be indicative of figure-ground segregation are accounted for without taking that step. What this shows is that it isn't clear that experiential prominence is best characterized as figure-ground segregation. This is because, as a comparison with my view has demonstrated, Wilson and Stevenson haven't shown that it is an apparent figure that shows up at the level of experience. But demonstrating that there is such a figure-or object-is exactly what is required in order to establish that the more robust notion of illusion is one that can occur in olfactory experience. To return to our previous question, Stevenson must establish that O-Attribution is a question that we can ask of olfactory experience. But his own "object-based" view of olfactory experience does not establish this. Given this, he fails to demonstrate that my claim that there are no olfactory illusions is false. It is important to note that responding to present worries about ruling out my abstract view requires more than simply drawing attention to the fact that there exist patterns of excitation at the receptoral level, or to the fact that such patterns are stored in long-term memory to expedite later olfactory discrimination. What is at issue is whether these patterns and combinations show up, at the level of experience, as perceptual objects. The question is whether the experiential output of template matching-Wilson and Stevenson's "wholistic unitary percepts" or "synthetic odor objects"-ought to be characterized in object-involving terms. And it isn't clear that there are the materials with which to adjudicate between that kind of view and mine-at least if we are relying on observations of experiential prominence to decide it.

25 Here I am not claiming that olfactory experience achieves the perceptual grouping required to solve the Many Properties Problem. I am simply, for the sake of comparison, adopting Wilson and Stevenson's claim that, in some cases, we are able to distinguish two or three components of an odorant stimulus. While they claim that, even in these cases, we are presented with olfactory objects, I here claim that a view that denies that there are such objects can accommodate the data they cite. It is important to note that the data they cite do not include the claim that olfactory experience can report on different arrangements of those properties along some dimension-e.g., the spatial dimension. But it is this kind of achievement that underlies the ability of a sensory system to solve the Many Properties Problem.

Are we now left at an impasse, with each of us able to account for the relevant data and nothing left to adjudicate the issue? I think that we are not. I grant that figure-ground segregation allows us to single out a particular object in our environment. That is, I grant that figure-ground segregation forms the basis of object-involving content. Wilson and Stevenson agree. But they also assume something stronger than I do: that if the distinction is to apply in the realm of olfaction, it must apply non-spatially.
But not only has this revision of the concept proven problematic, it also deprives us of the ordinary spatial notion of figure-ground segregation, a notion which we do need-just not for humans. To see that this is so, compare our olfactory experiences to those of other animals. The hammerhead shark, for example, enjoys a sense of smell that is directional. Given its extremely wide head, a stimulus coming from the extreme left of the hammerhead's head will arrive at the left nasal cavity before it arrives at the right. If the stimulus is blood, the hammerhead's response is instantaneous-it turns in the direction of its source. I take it that we are quite happy to admit that the hammerhead represents the location of a food source, much in the same way that we are able to represent, via audition, the location of a "bang" outside. In the latter case, we are happy to admit that auditory experience achieves figure-ground segregation-and does so spatially. Given this, it is plausible to conclude that the hammerhead also achieves the same in its olfactory experience. That is to say, the hammerhead shark is a creature that enjoys spatial figure-ground representation and thus object-involving olfactory content. Clearly we are not like the hammerhead, as Wilson and Stevenson admit. But it would be strange to conclude that the hammerhead's olfactory experiences are to be evaluated according to one notion of figure-ground segregation, while ours are evaluated according to another. If we are to account for the difference between us and the hammerhead, then, we require the spatial notion of figure-ground segregation. What this shows is that the spatial notion of figure-ground segregation remains useful in the olfactory case. We can make distinctions with it that we need to make-for example, we can explain the difference between us and the hammerheads. What's more, it allows for a unified notion of figure-ground segregation across the sense modalities. In those types of experience in which we think of figure-ground segregation as achieved-vision, audition and touch, for example-we do so on the basis of the richness of their spatial representation. In those types of experience in which we worry whether, or wonder if, figure-ground segregation is achieved-arguably olfaction and taste-I take it that we do so on the basis of the observation that those types of experiences are not as spatially rich as those where we happily grant that there is figure-ground segregation. What this suggests is that figure-ground segregation forms a kind, one defined by the type of spatial representation achieved by an experience. If, as I have argued above, we ought to evaluate olfactory experience in accordance with this notion of figure-ground segregation, then we ought to accept my abstract view. And, if we accept that view, then we are committed to accepting three further things. First, we are committed to accepting my analysis of experiential prominence over Wilson and Stevenson's, driven as mine is by the abstract view of olfactory content. Second, and relatedly, we ought to accept my conclusion that there are no olfactory illusions. Finally, given the accuracy conditions set forth by the content of olfactory experience, we ought to accept that the appropriate notion of non-veridicality for the olfactory case is one of property hallucination. Now, Stevenson says little about the notion of property hallucination per se, focusing instead on the negative stage of my 2010b argument that there are no olfactory illusions.
Still, let me say something further about the benefits of adopting a notion of property hallucination and of a non-object-based notion of non-veridicality. Scientists and philosophers alike have long been interested in non-veridicality, or perceptual misrepresentation. But it has also been assumed that non-veridicality falls into one of two categories-illusion and hallucination. As I noted in section 1, these ordinary notions each involve the misrepresentation of objects, or "object-failure," as I have called it. It is true that, with property hallucination, I am also talking about non-veridicality. But what is interesting about property hallucination is that it is a form of non-veridicality that current accounts of non-veridicality do not allow for, focused as they are on the representation of particular objects. Drawing attention to property hallucination, then, identifies a new category of non-veridicality. Given that both scientists and philosophers have been interested in the information putatively conveyed in olfactory experience, and the nature of the ways in which experience may misinform a subject, the introduction of property hallucination presents a novel way of thinking about, and categorizing, olfactory misrepresentation. But the interest of property hallucination is not restricted only to the olfactory case. It is also helpful in driving further thinking about perceptual experience in general. That is, its introduction forces us to re-think the nature of veridicality and non-veridicality more generally across all of perceptual experience. For example, the notion of property hallucination opens up the possibility that there are cases in other modalities that are best characterized as those in which we do not perceive particular objects but only certain properties, and that this novel notion of non-veridicality best accounts for those cases. One case that I have discussed previously is the visual experience of looking at a uniformly colored expanse.26 To be sure, this is not a typical visual experience, as I argue the analog case for olfaction is; but it is one that, if in fact a misrepresentation of color, is plausibly categorized as a case of property hallucination. A third category of non-veridicality, then, is incredibly interesting because it allows us to look deeper at the experiences of other modalities, comparing and contrasting the ways in which experiences in those can mislead. Finally, adopting my third category of non-veridicality directs our attention to the possibility that there might be even further categories of non-veridicality-whether these other, previously unconsidered notions turn out to be categories in their own right, or sub-species of those we already adopt. Not only, then, does my notion of property hallucination introduce a new category that we previously lacked in describing perceptual misrepresentation; it also directs attention to the possibility that our account of non-veridicality might be lacking in further, equally interesting, ways. And this further result, I take it, would be interesting for philosophers and scientists alike.

CONCLUSION

Earlier I promised to say something further about what I labeled Stevenson's (2), namely his claim that olfactory illusions typically escape notice. Obviously I disagree that they do. I argue that there are no olfactory illusions and so there is nothing in this case to escape our notice.
Still, my abstract view of content can explain why we might think, as Stevenson claims, that the difference between olfaction and other modalities "relates to issues of verification (i.e., ones [sic] capacity to independently confirm what one is smelling)" (1888). To take the case of vision as an example, it is easy to see how we are able to verify what we seem to see. In the case of visual experience, because we are able to discriminate individual objects, we are able to ask, and in principle capable of verifying, whether that object is in fact in the scene before our eyes. Given that it is presented as such, we are also in principle capable of verifying whether the properties it appears to have are those that the object in fact has. In each case, we go out and explore the environment; we go to that object that we appear to see and "interrogate" it further. These two capacities for verification are implied by our previous two questions about misperception, V-Existence and V-Attribution. But, as I have argued, the olfactory analogs of each-O-Existence and O-Attribution-do not in fact apply to olfactory experience. This is because there are no presented objects in olfactory experience; olfactory experience is not object-involving. It is unclear, then, how we are able to verify what we smell. Like the visual case, we may very well explore our environment further; but it is not the case that we are able to pinpoint that object we appear to smell and "interrogate" it further. The most we are able to do is locate those properties we appear to smell, to determine if they are in fact what we thought they were, or if they appear to be elsewhere around us. But notice that this is just to ask whether a property, or set of properties, is instantiated in the environment. It is not to ask after any particular object. It is no wonder, then, that we feel suspicious about our abilities to verify our olfactory experiences. We simply are unable to do so in the same way as we are in the visual case. But, contrary to what Stevenson claims, this difference is a result of the fact that there are no presented objects. In fact, if we take Wilson and Stevenson at their word, then it would seem that we would be able to verify what we smell in the much stronger sense of "verification" present in the visual case. That is, we ought to be able to pinpoint a particular object in our environment and ask after it. But we cannot. Not only, then, is my abstract view vindicated with respect to its claims about olfactory illusions; it is also able to explain those considerations about verification that, as it turns out, Stevenson himself is unable to accommodate.

ACKNOWLEDGMENTS

I would like to thank Fiona Macpherson and Tim Sundell for helpful discussions when writing this paper. Their input was invaluable in producing the final product. I am also grateful to two anonymous referees for their comments and feedback on the penultimate draft. Their comments and advice helped me to improve the paper greatly.
GeoSPM: Geostatistical parametric mapping for medicine

Summary

The characteristics and determinants of health and disease are often organized in space, reflecting our spatially extended nature. Understanding the influence of such factors requires models capable of capturing spatial relations. Drawing on statistical parametric mapping, a framework for topological inference well established in the realm of neuroimaging, we propose and validate an approach to the spatial analysis of diverse clinical data-GeoSPM-based on differential geometry and random field theory. We evaluate GeoSPM across an extensive array of synthetic simulations encompassing diverse spatial relationships, sampling, and corruption by noise, and demonstrate its application on large-scale data from UK Biobank. GeoSPM is readily interpretable, can be implemented with ease by non-specialists, enables flexible modeling of complex spatial relations, exhibits robustness to noise and under-sampling, offers principled criteria of statistical significance, and is through computational efficiency readily scalable to large datasets. We provide a complete, open-source software implementation.

In brief

We present GeoSPM, an approach to the spatial analysis of diverse clinical data that extends a framework for topological inference, well established in neuroimaging, based on differential geometry and random field theory. We evaluate GeoSPM with extensive synthetic simulations, and apply it to large-scale data from UK Biobank. Our approach is readily interpretable, easy to implement, enables flexible modeling of complex spatial relations, exhibits robustness to noise and under-sampling, offers principled criteria of statistical significance, and is scalable to large datasets.

THE BIGGER PICTURE

Many aspects of health and disease are distributed in space, requiring models of topological organization. The complexity of the task, however, makes spatial analysis comparatively rare in medicine. Here, we introduce GeoSPM, a platform for topological inference from clinical data based on a mature mathematical framework-statistical parametric mapping-validated by decades of use in neuroimaging. We provide comprehensive synthetic evaluation of the approach, and illustrate its application on large-scale data from UK Biobank. The interpretability, flexibility, scalability, ease of implementation, robustness to noise and under-sampling, computational efficiency, and provision of principled criteria of statistical significance provided by our open-source platform should catalyze wider use of spatial analysis across medicine.

INTRODUCTION

Human beings vary along a rich multiplicity of social and biological dimensions, whose complex interactions across health and disease present a challenge for medical science and systems biology in general. The combination of large-scale data with machine learning promises to cast brighter light on this complexity than conventional inferential techniques, illuminating distributed, long-range dependencies hitherto obscured. Our interventions are increasingly grounded in an understanding of the factors that shape disease trajectories and determine individual responses to treatment. One comparatively neglected dimension is the literal dimension of space: each of us inhabits a particular location that may reflect or modify our individual biological characteristics and the influence of (and on) other spatially distributed variables. Spatial factors may be static or vary over time, arising at multiple scales, ranging from the domestic to the inter-continental. Their reference frames may be set by internal communities, by external geographies, or by a complex blend of the two. Their spatial organization may be linear or consistently distorted by individual or environmental movement within these frames of reference. Spatial factors may disclose or alter characteristics of biology directly, or render them more or less clinically accessible or actionable. Space arises not only in epidemiology, environmental medicine, healthcare policy, and public health, but in the fundamental organization of biology itself. Yet outside a few specialist areas spatial analysis is comparatively rare in medicine. An indicative survey of published paper titles and abstracts in Microsoft Academic Graph, spanning 30 years of medical research, reveals only 1,897 journal papers at the intersection of geospatial analysis and medicine, with an annual citation distribution for those cited more than once nonetheless substantially higher than a matched biomedical sample (mean 2.75 versus 2.13, Mann-Whitney U test, p < 0.001, Figure S1, see supplemental note). The comparative scarcity is arguably in part explained by the difficulty of the task. The spatial factors arising in a medical context are often entangled, their sampling is sparse and frequently corrupted by noise, and the underlying signals tend to be weak. But spatial analysis is hard even where the data regime is benign, for the problem is essentially multidimensional and is rarely, if ever, open to analytic solutions. The fundamental challenge is reflected in the wide array of techniques in current use. A survey of 397 papers published since January 1, 2017, in the joint domains of health and spatial modeling identifies local indicators of spatial association, 1 spatial scan statistics, 2 inverse distance weighting, 3 kernel density estimation, 4,5 spatial regression in terms of spatial lag and spatial error models, 6 geographically weighted regression (GWR), 7 land-use regression, 8 kriging, 9,10 generalized linear mixed models, 11 generalized (geo-)additive models, 12,13 hierarchical Bayesian spatial analysis, 14,15 and model-based geostatistics, 16,17 among others. This methodological diversity reflects differing demands on the spatial aspects of the model and the breadth of specific questions that arise in a spatial setting. With the question may vary the modeling objective, and the theoretical assumptions that underpin it. Common objectives include spatial prediction, the analysis and regression of spatially varying or spatially confounded associations, and the investigation of spatial point patterns. Arguably the most general and taxing research questions involve inference-whether explicit or not-to a topological organization, for example, identifying the location and extent of a spatially organized signal buried in noise. Such questions typically-if not always-require methods that treat space as a continuity, produce spatially continuous estimates, and provide principled measures of spatial uncertainty. Dominant in this category are methods that adopt a nonlinear multivariate approach, taking advantage of the flexibility and expressivity it offers.
Although potentially powerful, they require joint expertise in the method and the domain of its application, depend on prior specification of model parameters, and tend to demand substantial computational resource even for data of moderate scale. Furthermore, in the generalized linear framework, space commonly enters the model as a latent random effect-usually derived from a suitable Gaussian process. This approach adjusts for spatially correlated variance within an otherwise non-spatial framework, with the fixed effects remaining constant across the spatial field. 18 These obstacles motivate the pursuit of alternatives outside the multivariate paradigm for the task of topological inference. The direct counterpoint is a mass-univariate approach, where a complex multivariate model is replaced by a spatially indexed ensemble of simpler models. GWR modifies the predictors in a regression model through a spatially localized weight matrix, so that a variation of the model is estimated at each location and the resulting estimates exhibit spatial smoothness. Although GWR estimates can be derived from a prespecified grid, in practice only sampled locations or grids of modest size tend to be evaluated, owing to the difficulty of correcting for multiple comparisons in a topologically informed manner. 19 Spatial inference with GWR is commonly limited to regression coefficient or coefficient of determination maps that simply indicate the local goodness of fit, 20 without employing formal tests of significance. 21,22 Finally, these are regression models relating a response to a set of spatially organized predictors, not models of the spatial variation of a set of variables within a topological framework of uncertainty: our primary concern. Here, we propose, implement, and validate an approach to the spatial analysis of diverse clinical or public health data that draws upon differential geometry and random field theory, with the topological objective of identifying connected neighborhoods and peaks of spatial significance. In particular, we leverage the procedures used in statistical parametric mapping (SPM): a framework for making topological inferences about spatially structured effects, with well-behaved spatial dependencies. 23 This approach has been established for decades in the realm of (structural and functional) volumetric neuroimaging. The core idea is to transform sparse spatial signals into a form suited to mass-univariate statistical testing on a chosen point grid: for example, testing that the spatial or regional expression of a particular variable is greater than would be expected under the null hypothesis of no regional effect. The probability of observing topological features in the observed map, such as peaks or clusters (i.e., level sets above some threshold), can then be evaluated with classical inference based on random field theory, and used to ascribe a p value to spatially organized effects. This principled approach radically simplifies one important domain of spatial analysis, rendering it potentially more sensitive and robust to noise, and places it on a formal inferential footing, yielding a general-purpose geostatistical tool readily deployable across a multitude of medical fields where the modeling objective requires inference to the topological organization of a set of signals of interest.
For example, we may use the approach to infer the location and extent of regional expression of spatially organized variables-taken alone or in conjunction-such as disease prevalence in a community, while accounting for multiple potentially interacting confounding factors, and without relying on any a priori parcellation of the space. In what follows, we (1) offer a detailed rationale for our approach; (2) proceed to evaluate it across an extensive array of synthetic simulations where the nature of the spatial relationships, sampling, and corruption by noise are prespecified; and (3) demonstrate its application on large-scale data from UK Biobank (https://www.ukbiobank.ac.uk/). 24 The numerical analyses serve to establish face validity; the empirical analysis to demonstrate predictive validity. We provide a complete, open-source software implementation of our framework (https://github.com/high-dimensional/geospm), released as an extension to SPM; namely, geospatial SPM or "GeoSPM." Supplemental note S4 and Figure S56 provide an overview of GeoSPM's class structure as implemented in MATLAB. EXPERIMENTAL PROCEDURES Resource availability Lead contact Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Holger Engleitner (h.engleitner@ucl.ac.uk). Materials availability This study did not generate new unique reagents and did not use any additional materials aside from the data and code cited below. Data and code availability The data analyzed in this study are available on application to UK Biobank (https://www.ukbiobank.ac.uk). The open-source software implementation of GeoSPM presented in this study is available on GitHub: https://github.com/high-dimensional/geospm (https://doi.org/10.5281/zenodo.7258971). Overview Our approach builds on the well-established regression analysis framework implemented in SPM12 (http://www.fil.ion.ucl.ac.uk/spm/), the most widely used platform for spatial inference in brain imaging. Within this framework, a set of explanatory variables is associated with a multivariate, spatially structured response, whose components represent measurements taken at regular locations in a spatial domain. The association between explanatory variables and response is estimated at each location separately, using the same general linear model (GLM). This yields a collection of univariate multiple regression models that share the same model architecture and design matrix but differ in the response variable and the estimated parameter values. Crucially, random fluctuations (variations in the response variable that are not explained by the GLM) are treated as realizations of a random (spatial) field with certain contiguity or smoothness properties. This is mass-univariate inference from a spatial perspective. A distinguishing feature of SPM is the manner of correcting for multiple comparisons when testing mass-univariate model parameters (i.e., regression coefficients) for significance. The large number of tests, performed simultaneously, gives rise to a proportionally large number of false positives by chance alone. Conversely, the strong spatial correlations among the components of the response violate assumptions of mutual independence, and render simple Bonferroni correction inappropriately strict. SPM applies a more suitable correction by modeling the residuals as a random Gaussian field, so that p values are meaningful in terms of identifying significant peaks and clusters in a discretized spatial domain.
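To make the mass-univariate step concrete, the following is a minimal sketch, in Python, of fitting the same design matrix at every grid location and forming a t-statistic map for one contrast. All names and shapes are our own illustrative assumptions; GeoSPM itself is implemented in MATLAB on top of SPM12 and additionally applies the random field theory corrections discussed here.

```python
import numpy as np

# Minimal sketch of mass-univariate GLM estimation: one shared design
# matrix, one ordinary-least-squares fit per grid location, one t value
# per location for a chosen contrast. Illustrative only (see lead-in).
def mass_univariate_t_map(X, Y, contrast):
    """X: (N, K) design matrix shared across locations.
    Y: (N, L) responses, one column per grid location.
    contrast: length-K vector, e.g. [0, 1, 0] to test one regressor."""
    N, K = X.shape
    beta = np.linalg.pinv(X) @ Y                 # (K, L) coefficients per location
    resid = Y - X @ beta                         # (N, L) residuals
    sigma2 = (resid ** 2).sum(axis=0) / (N - K)  # residual variance per location
    c = np.asarray(contrast, dtype=float)
    var_c = c @ np.linalg.inv(X.T @ X) @ c       # c' (X'X)^-1 c
    t = (c @ beta) / np.sqrt(sigma2 * var_c)     # (L,) t statistics
    return t, beta
```

A t map produced this way is what the random field correction is then applied to; the uncorrected per-location p values would otherwise suffer exactly the multiple-comparison problem described above.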
Heuristically, topological inference of this kind (based on random field theory) automatically accounts for spatial dependencies, in the sense that smooth random fluctuations will produce a smaller number of maxima than rough random fields with less spatial dependence (even though the total area above some threshold could be the same). It can be shown that the smoothness of the residual fields is a suitable approximation to the smoothness of a t statistic map derived from the model, which in turn reflects the spatial dependence of the covariates. 25,26 The kind of data we are concerned with comprises variables of interest observed at locations in a continuous spatial domain $D$. $D$ is usually a subset of $\mathbb{R}^2$ representing coordinates of a geographic space. More precisely, every element in a spatially referenced dataset associates a vector $y_i$ of $P$ variable observations $(y_{i1}, \ldots, y_{iP})^T \in \mathbb{R}^P$ with a location $x_i \in D$: $\{(y_i, x_i) : i = 1, \ldots, N\}$. SPM typically requires data sampled at regular locations across a grid, spanning the spatial domain. However, we wish to analyze data that are irregularly and sometimes sparsely sampled. This can be resolved by distributing each data point locally-over regular grid locations-using a spatial Gaussian kernel of suitable and fixed variance. From a data-centric point of view, we can interpret this spatial transformation as estimating the contribution of an individual observation to regular sample points, where the contribution has a maximum value at the observation location and then diminishes with increasing distance. In this way, the dependent variable in the univariate regression at any location of space is essentially a weighting of individual observations according to their proximity to that location: the higher the local response, the closer the observation. We can do this with impunity because we are interested in the explainable differences in these contributions at prespecified (grid point) locations. These explainable differences are assessed with normalized effect sizes (i.e., classical statistics), which are not affected by the total contribution or variance. 23,27 The chosen variance of the Gaussian kernel is a parameter-hereafter called the smoothing parameter-deliberately left open to the analyst to specify the appropriate degree of spatial coarse graining (i.e., spatial smoothness of the data features in question). Since SPM naturally handles volumetric data, we are free to use the third dimension to model multiple smoothing values on a continuous positive scale, rendering them as different spatial "scales" or "features" of a response variable. 28 Here, two coordinates represent the location in space (i.e., location space), and the third coordinate tracks spatial spread (i.e., scale space), allowing the regression analysis to operate at different scales simultaneously. It is appropriate to permit inference under varied assumptions of uncertainty, allowing the analyst to draw conclusions from the similarities and differences obtained across the range of plausible spatial scales. The analyst is also free to implement mechanisms that select an optimal parameter under some criterion: here we suggest one pragmatic method of doing this. Note that this scale-space implementation of topological inference automatically accounts for dependencies in moving from one scale to another and enables topological inference in terms of maxima or clusters in both location and scale space (i.e., a particular effect can be declared significant at this location and this spatial scale).
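As an illustration of the spatial transformation just described, the sketch below distributes irregular point observations over a regular grid with a fixed-variance Gaussian kernel; stacking the output for several kernel widths along a third axis yields the scale-space volume discussed above. This is a minimal reconstruction under our own naming assumptions, not GeoSPM's MATLAB implementation.

```python
import numpy as np

# Distribute each observation over regular grid locations with a Gaussian
# kernel of fixed variance, so that the per-cell response is a proximity
# weighting of individual observations (names and shapes are assumptions).
def rasterize(points, values, grid_x, grid_y, sigma):
    gx, gy = np.meshgrid(grid_x, grid_y)                  # (H, W) cell centres
    response = np.zeros(gx.shape)
    for (px, py), v in zip(points, values):
        d2 = (gx - px) ** 2 + (gy - py) ** 2              # squared distances
        response += v * np.exp(-d2 / (2.0 * sigma ** 2))  # kernel contribution
    return response

# A scale-space stack over several smoothing parameters might then be:
# volume = np.stack([rasterize(pts, vals, gx, gy, s) for s in sigmas])
```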
For simplicity, we will focus on topological inference at a given spatial scale. Downstream of the above spatial transformation of data features, the statistical approach is formally identical to a standard SPM analysis. The output comprises a series of volumes representing regression coefficients, statistical contrasts derived from these model parameters, the statistical parametric maps-of classical statistics based on these contrasts-and, finally, thresholded binary maps that indicate whether the voxels in the corresponding statistical map are significant at the chosen (suitably corrected) p value. Synthetic data and generative models The statistical validity of the proposed approach is underwritten by the assumptions on which SPM rests. Nonetheless, it is helpful to examine its construct validity, in comparison with alternative methods (e.g., kriging), and face validity, in terms of its ability to recover known effects in different situations. Such validation is best performed with a known (spatial) ground truth, under manipulations of sampling and noise traversing the plausible space of possibility as far as is practicable. Note, however, that no aspect of the modeling approach-as opposed to its validation-may be allowed to rely on a ground truth, for in topological inference-as opposed to prediction-no ground truth is generally available. We cannot, for example, use a ground truth to tune a hyperparameter without excluding precisely the inferential context we are interested in. For maximum flexibility and control over the evaluation process, here we use synthetic data drawn from a generative model with a spatially varying distribution of one or two joint binary variables. The spatial variability of the distribution is determined by the locale and extent of shapes with a fractal boundary. Fractals characteristically exhibit detail across an infinite range of spatial scales, which makes them ideal candidates for a spatially structured ground truth with sensitivity to the widest possible range of spatial scales. The use of binary variables to generate two distinct signal levels for the response allows us to focus on data that are generated in a spatially structured way; namely, in a regionally specific fashion under various levels of noise or stochasticity. A full description of the process is provided in supplemental note S2. Demonstration with UK Biobank data To demonstrate the application of GeoSPM to real data, we chose to explore the potential association between a common disease-type 2 diabetes-and a small number of demographic variables in UK Biobank drawn from the area of Greater Birmingham. It should be stressed that the sole purpose of this analysis was to illustrate the application of the method, not to make inferences about the data itself, which would require more detailed investigation than our foundational focus here permits. The objective instead is to illustrate how spatial variation of a variable of interest may be examined, with specific attention to two important contexts: where the effect of the variable must be isolated from a set of known potential confounders, and where the joint effects of two or more variables are of interest. A detailed description of the variable selection and preprocessing is given in supplemental note S2.2. Numerical experiments Kriging We evaluate GeoSPM in comparison with the well-established multivariate geostatistical method of kriging, described in detail in supplemental note S2.3. 
All kriging computations were done in R using the gstat package, 29 which is available at https://cran.r-project.org/web/packages/gstat/index.html. For each variable of interest, kriging produced an image of the predicted mean and an image of the corresponding prediction variance, which is derived solely from the arrangement of positions in the data, i.e., the prediction variance does not depend on the values of the observations, only on their locations. Synthetic experiments: Noise parameterization The numerical face validation experiments are based on three univariate models (snowflake, anti-snowflake, snowflake field) and two bivariate models (snowflake, anti-snowflake) as depicted in Figures S2 and S3. For all models, we ran experiments at different sampling levels, $N_{\text{univariate}} \in \{600, 1200, 1800\}$ and $N_{\text{bivariate}} \in \{1600, 3200\}$, and increased the noise parameter $\gamma$ from 0.0 to 0.35 in 0.01 increments (Figure 1). For each triplet (model, $N$, $\gamma$), 10 independent datasets were randomly generated. Each generated dataset was processed by GeoSPM as well as gstat. For GeoSPM, the spread of the spatial response at locations $x_i$, i.e., the spatial distribution of the response following smoothing, was modeled at $\ell = 11$ increasing smoothing parameter values, given as the 95% iso-density diameters of the bivariate normal distribution, $\sigma = (10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60)^T$. This measure of spread is the diameter of a circle that contains 95% of the probability mass of a two-dimensional Gaussian distribution at its center. The largest value of the smoothing parameter, 60, was chosen to be half the height of the grid for the univariate models. The regression coefficients estimated by GeoSPM were tested using a one-tailed t test at p < 0.05 FWE (voxel-level, family-wise correction), producing a stack of $\ell$ binary maps of significant areas for every variable of interest. To derive corresponding maps-one per variable-for kriging, we compared a standardized form of the kriging prediction $\hat{y}_{\text{std}}(j, k)$ with the critical value of the upper tail probability p < 0.05 of the normal distribution. We standardized $\hat{y}(j, k)$ at each grid cell $(j, k)$ using its estimated (positional) standard deviation $\hat{\sigma}(j, k)$ and assuming a null mean of 0.5 to produce $\hat{y}_{\text{std}}(j, k)$: $$\hat{y}_{\text{std}}(j, k) = \frac{\hat{y}(j, k) - \mu_{\text{null}}}{\hat{\sigma}(j, k)}, \quad \text{where } (j, k) \in D_0,\ \mu_{\text{null}} = 0.5.$$ For a fair comparison with kriging, one of the $\ell$ smoothing values and its associated maps produced in a run of GeoSPM had to be chosen. We based this choice on maximizing the spatial coverage by the significant areas at each spatial scale (see Figure 2), while minimizing the spatial overlap between them. A spatial condition in the context of the observed variables $Y \in \mathbb{R}^P$ in our models is obtained by applying a threshold of 0.5 to all observations, recording 1 if an observed variable value exceeds the threshold or 0 if it does not. Each observation of a univariate model can thus be assigned one of two spatial conditions, or one of four conditions in the case of a bivariate model. We obtain the significant areas for each spatial condition by running a separate analysis in GeoSPM on a set of data that represents the spatial condition of each observation as a one-hot encoding, i.e., with each category represented as a set of binary dummy variables.
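For concreteness, the standardization and thresholding rule used to derive binary significance maps from the kriging output can be paraphrased in Python as follows (our own sketch; the study's actual computations were done in R with gstat):

```python
import numpy as np
from scipy.stats import norm

# Standardize kriging predictions against the null mean of 0.5 and keep
# cells exceeding the upper-tail critical value at p < 0.05.
def kriging_significance_map(y_hat, sigma_hat, mu_null=0.5, alpha=0.05):
    """y_hat: (H, W) kriging means; sigma_hat: (H, W) prediction standard
    deviations derived from the kriging prediction variance."""
    z = (y_hat - mu_null) / sigma_hat   # standardized prediction per cell
    z_crit = norm.ppf(1.0 - alpha)      # one-sided critical value (about 1.645)
    return z > z_crit                   # binary map of significant cells
```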
This spatial-condition encoding enabled us to derive a score for each of the $\ell$ smoothing values, which simply comprised the total number of significant grid cells that appeared for exactly one of the spatial conditions, thereby ignoring any overlap. The smoothing value with the highest score was selected, together with the binary maps of significant areas computed from it. Ties were broken by choosing the smallest scale. The binary maps for each variable were assessed relative to their respective target maps, which were derived by thresholding the corresponding marginal distribution of the model, adding grid cells with a probability greater than 0.5 to the target. We applied a number of representative image segmentation metrics to the recovered binary maps. [Figure 3: Lines denote the mean score across 10 random model realizations, shaded areas its SD to either side of the mean. Areas of overlapping performance are identified by additive shading. GeoSPM degrades more slowly and gracefully as noise increases compared with kriging. Comparable results for model term $Z_2$ are shown in Figure S10.] The interaction term is formed in the usual manner, by multiplying the observed values for both variables, yielding augmented observations: $y' = (y_1, y_2, y_1 \cdot y_2)^T$. The regional arrangement of the model is the same as the one employed for the bivariate snowflake model shown on the left of Figure S2. A single sampling level $N_{\text{interaction}} = 15000$ was used and the interaction parameter $c_3$ was increased from 0.25 to 0.5 in steps of 0.05. For each level of $c_3$, $R = 10$ independent datasets were randomly generated. We set a single value for the smoothing parameter, $\sigma = 60$, which was the highest value used in the noise experiments. As before, a one-tailed t test at p < 0.05 FWE (voxel-level family-wise correction) determined areas of significance, and the same set of image segmentation metrics was computed for the binary maps. UK Biobank experiments Results for the UK Biobank data were obtained by a single invocation of GeoSPM for each of the four models listed in Table S6. We chose a smoothing value of 7 km, specified as the diameter of a patch enclosing 95% of the density of the bivariate normal distribution with equal variances. This represents 20% of the width and height of our Birmingham analysis area, and seemed appropriate for identifying local variation sensitive to the plausible spatial scale of distinct geographically defined communities. This time, a two-tailed t test at p < 0.05 FWE (voxel-level family-wise correction) was used for thresholding the statistic maps. Analysis is restricted to areas where the combined smoothing density of all observations is at least 10 times the kernel peak value. Ethical approval UK Biobank has Research Tissue Bank (RTB) approval from the North West Multi-centre Research Ethics Committee. This approval means that researchers do not require separate ethical clearance and can operate under the RTB approval. RESULTS Our numerical experiments with a known generative model enabled us to measure performance against a known ground truth under circumstances varying in density of sampling and contamination with noise, enclosing the range likely to obtain in real-world scenarios. They also permit robust evaluation of graded interaction effects. In total, 2,160 independent simulations with synthetic data were performed for the univariate models, 1,440 for the bivariate models, and 60 for the interaction model.
Summarizing scores within the three sets of simulations, we derive performance curves for GeoSPM and kriging solutions in each case. We then proceed to illustrate the application of GeoSPM to real-world data from UK Biobank. Synthetic models Displayed in the following figures are sets of independent simulations comparing the performance of GeoSPM (in yellow) versus kriging (in green) as a function of contaminating noise, measured by five different indices of retrieval fidelity, using the snowflake (Figure 3) or anti-snowflake (Figure 4) bivariate ground truths, and low or high data sampling regimes (similar results for the univariate ground truths are reported in Figures S7-S9 in supplemental note S3.1, as are the results for the second term in the bivariate models in Figures S10 and S11 of supplemental note S3.2). [Figure 4: Lines denote the mean score across 10 random model realizations, shaded areas its SD to either side of the mean. Areas of overlapping performance are identified by additive shading. As is the case with the snowflake models, GeoSPM degrades more slowly and gracefully as noise increases compared with kriging. Comparable results for model term $Z_2$ are shown in Figure S11.] A visual summary of the recovered binary maps underlying these performance curves-for the bivariate snowflake model and the high sampling regime-affords a further qualitative comparison between the two methods (Figure 5). It is evident that GeoSPM offers superior efficiency across most of the noise range in all models and on all metrics. [Figure 6: Grid cells that lie in the target region are shown in white, those outside in gray. The number of significant tests out of 10 repetitions is superimposed in color for each grid cell: dark blue indicates at least one significant test and dark red indicates the maximum number of 10, while cells with no significant test did not receive any color. Starting with a low value for the interaction effect $c_3$ on the left, recovery of the interaction term $Z_1 \times Z_2$ in region $R_3$ is weak, while recovery for variable $Z_1$ in the same region is stronger. This correlates with the fact that observations $(1, 1)$ occur with only a slightly elevated probability $p_3 = 0.6$ compared with their null probability of 0.525 when $c_3$ equals 0 in the same setting. As $c_3$ increases toward the right, recovery in the same region for term $Z_1 \times Z_2$ increases ($p_3 = 0.725$ at the right), while recovery for variable $Z_1$ decreases (probability $p_1 = 0.125$ at the right for observing $(1, 0)$, which is half of what it would be if there were no interaction effect). GeoSPM used t tests with a family-wise error corrected p value of 0.05.] GeoSPM models generally remain stable at higher levels of noise than kriging. Both GeoSPM and kriging exhibit sensitivity to the sampling regime, both in terms of variability and stability, but the effects are dwarfed by the difference between the two approaches. The type of ground truth has negligible impact. In addition, neither changing the (cross-) covariance function used for kriging from a Matérn function to a Gaussian nor applying a different regime for dealing with coincident observations-such as averaging-yields a discernible improvement to the performance of kriging in this context (see Figure S12 in supplemental note S3.3).
Additional results based on an extended selection of covariance models for kriging show comparable outcomes and are similar to those presented in Figures 3, 4, and 5, as documented in Figures S13-S20 in supplemental note S3.4 and Figures S21-S25 in supplemental note S3.5. For a more in-depth view of the behavior of kriging parameters and variograms, refer to supplemental notes S3.6 and S3.7. The recoveries obtained from simulations of the interaction model clearly show GeoSPM's ability to detect an interaction between two spatially distributed factors, even toward the lower end of the approximate interaction effect size range (Figure 6). Plots of the same five indices above demonstrate successful retrieval for these interaction simulations quantitatively (Figure 7). As we increase the size of the approximate interaction effect $c_3$, retrieval results for the interaction term $Z_1 \times Z_2$ approach those of the previous, noise-free bivariate snowflake model (setting aside the different sampling regimes). At the same time, recovery for variable $Z_1$ decreases in the interaction region $R_3$ (but not elsewhere), as the interaction term explains more variance. Once the recovery for variable $Z_1$ in region $R_3$ has vanished, the corresponding retrieval scores are about half of those for the same term in the noise-free model, which agrees with our expectation, because only one of the two snowflake shapes in the target is still retrieved at that stage. UK Biobank models In real-world scenarios there is usually no explicit ground truth against which an inference can be tested: the conclusion rests on the integrity of the underlying statistical assumptions. Our illustrative analysis of UK Biobank data 24 therefore does not seek to quantify GeoSPM's fidelity but to demonstrate its potential utility in the medical realm. We focus on two aspects: the derivation of marginalized spatial maps that disentangle a factor of interest from a set of (interacting) confounders, and the use of conjunctions of multiple maps to identify regions where variables are jointly significant. [Figure 7: Lines denote the mean score across 10 random model realizations, shaded areas its SD to either side of the mean. We increase the approximate interaction effect in region $R_3$ of the grid from left to right, so that the probability of observing $(1, 1)$ grows while the probability of observing $(1, 0)$ or $(0, 1)$ shrinks (the probability of observing $(0, 0)$ stays the same). As a result, scores increase for the interaction term $Z_1 \times Z_2$ as it captures more of the overall variance, whereas scores for variable $Z_1$ decrease, until the only significant recovery occurs in region $R_1$, which represents half of the target for $Z_1$ and explains why the overall decrease saturates.] The propensity to develop type 2 diabetes is related to age, sex, BMI, and household income, among other factors: a known pattern clearly replicated in UK Biobank. A map of diabetes may therefore reflect not just the propensity to develop the disease but also the spatial structure of associated factors, both causal and incidental. If we are pursuing a previously unknown spatial factor-pollution, for example 32-34 -we would wish to void our diabetes map of known confounders, yielding a spatial distribution of fully marginalized propensity. We demonstrate GeoSPM on individual-level UK Biobank data drawn from Birmingham. Figure 8 presents the regression coefficient maps and significant t test areas for four separate models of diabetes with incrementally greater numbers of covariates.
The first, univariate, model of diabetes (model 1) reveals an extensive concentric organization, positive in the center and negative in the periphery, especially in the north and south. The map becomes more tightly circumscribed with the addition of sex, age, and BMI in model 2: the two negative areas in the north and south are no longer significant, and a stronger negative region emerges west of the center. With the addition of further covariates and their interactions, the spatial structure of diabetes that remains unexplained converges on a set of focal, central regions, displayed in detail in comparison with the univariate model in Figure 9. Here the regional expression of diabetes is not explained by the modeled covariates, suggesting the presence of other factors at play, to be subsequently investigated. In general, the ensemble of significant areas for each model indicates the spatial structure that remains unexplained for the corresponding set of covariates, while the intensity and sign of each regression coefficient map represent the degree of spatial association of its covariate in the ensemble. With this in mind, the individual maps for diabetes represent a spatial distribution of propensity marginalized against the other covariates, but not an absolute rate of disease. We can now also examine the conjunctions of multiple maps, not necessarily derived from the same model, within a second-level analysis. Conjunctions are here simply the intersections of two or more thresholded t maps, identifying areas where the regression coefficients and their associated variables are jointly significant. [Figure 9: Model 1 is a univariate model of diabetes; model 4 adds sex, age, BMI, household income, and an interaction term BMI × household income. Outlines show significant areas in the corresponding two-tailed t test at p < 0.05 FWE (voxel-level family-wise correction). The smoothing parameter value is 7,000 m. The color map scale is the same as in Figure 8.] Applied to the outputs of our most complex model above, the approach and resulting conjunctions are shown in Figure 10. Pairwise conjunctions show a single region where diabetes and male sex are colocalized; a distinct region where diabetes and age are inversely associated; a very narrow region with an inverse relation between diabetes and BMI; and a single region where diabetes is inversely related to household income. Finally, a three-way conjunction identifies a region where diabetes is spatially associated with younger age, male sex, and lower income (Figure 11). Such conjunction maps identify regions where two or more variables of interest are significantly expressed together, representing subpopulations whose intersectionally characteristic features may inform responsive action or further investigation. This concludes our illustration of GeoSPM. Note that the fact that GeoSPM was able to identify significant regionally specific effects provides a provisional form of predictive validity, under the assumption that these effects were present in the population and could therefore be used to predict response variables. DISCUSSION We propose, implement, and validate an approach to drawing spatial inferences from sparse clinical data, extending to geostatistics a mature, principled framework for topological inference-SPM-that is well established in the realm of brain imaging.
Compared with kriging, GeoSPM combines similar fidelity under optimal conditions with substantially less sensitivity to noise and under-sampling, greater robustness to failure, faster computation, graceful handling of multiple scales of spatial variation, and formal inferential support. Its simplicity and accessibility facilitate widespread application of the comprehensive software implementation we have provided, built on the validated SPM open-source codebase, across a wide range of applications in medicine and beyond. Here, we consider six points concerning the application, extension, and limitations of our approach. First, GeoSPM is applicable to problems of topological spatial inference whose formulation conforms to the minimal assumptions of the underlying statistical framework. The types of data, the choice of model evaluated at each point, and the size and density of the evaluated grid are not under any strong constraint. Eliminating the spatial dimension allows each point-wise model to be more flexible than the data or computational resources could otherwise sustain. The model could even be complicated spatially, extending to encompass a local patch within otherwise the same framework. This is a key strength in medical applications, where a spatial effect typically needs to be disentangled from a wide array of others. [Figure 10: A binary conjunction is formed of the significant areas of a two-tailed t test at p < 0.05 FWE (voxel-level family-wise correction) between type 2 diabetes and, in turn, sex, age, BMI, household income, and BMI × household income. Purple outlines show significant areas in the two-tailed t test of each variable; green outlines show significant areas of conjunction: significant areas of conjunction arise in diabetes combined with each of sex (male), age (younger than 56.6 years), BMI (below 27.9 kg/m²), and household income (below £35,015). No significant areas of conjunction exist for diabetes and BMI × household income. Locations shown in darker gray tone are not significant for any of the variables. The smoothing parameter value is 7,000 m.] Second, although here prototyped on temporally stationary data, GeoSPM can be configured with time instead of the spatial scale in the third dimension, enabling graceful modeling of both spatial and temporal correlations. This has been used, for example, in the context of electrophysiology, 35 where extra dimensions can include peristimulus time or, indeed, fast oscillatory frequencies. The effects of manipulating noise and spatial dependencies can then be evaluated across individual time series. Equally, the third dimension could be used for multimodal data projected within the same grid, informing the inference by multiple sampling modalities. Third, the smoothing parameter may be constrained by prior knowledge or independent estimation from the data, even if evaluating a set of models over a plausible range is arguably the most robust approach. One may alternatively rely on the properties of the inferred maps, as suggested in our validation analyses. All competing spatial modeling frameworks rely on chosen parameters to some degree; ours is reduced to a single readily interpretable one. Fourth, no model could perfectly remedy defects in the data itself, such as inadequate or biased coverage.
The former can be mitigated by confining inference to spatial locations exhibiting sufficient sampling density; the latter, analogously to structured missingness, is not easily remediable within this or any other inferential framework, and presents no more or less of a problem. Fifth, GeoSPM, like SPM itself, is a platform for standard frequentist statistical inference, revealing the organization of spatially structured variables without causal implications of any kind. [Figure 11: Example of a multiple conjunction (here quaternary) of geographic regression significance maps for a single run of UK Biobank model 4. A binary conjunction is formed of the significant areas of a two-tailed t test at p < 0.05 FWE (voxel-level family-wise correction) between type 2 diabetes and, in turn, sex, age, BMI, household income, and BMI × household income. Purple outlines show significant areas in the two-tailed t test of each variable; green outlines show significant areas of conjunction: we can identify a significant area where younger males of lower income are associated with having type 2 diabetes in Birmingham. The smoothing parameter value is 7,000 m.] But, also like SPM, it is open both to Bayesian extensions, and to causal modeling upstream or downstream of the core framework. There are many ways of querying data, both with classical mass-univariate and Bayesian analyses of this kind. Although not illustrated here, model comparison using the F-statistic is a common application that could be enabled by GeoSPM. For example, one could ask whether household income has an effect on the regional prevalence of diabetes, having accounted for other demographic variables, by comparing (general linear) models that do and do not include household income as an explanatory variable. Finally, the SPM approach, in any formulation, is designed for topological inference, not discrimination between distributed spatial patterns, which may also arise in healthcare and requires explicit modeling of spatial interactions that only a multivariate model could conceivably deliver. Indeed, such use would violate the underlying assumption of benign regional dependence, as do analogous attempts in the domain of lesion-deficit mapping of the brain. 36 GeoSPM maps may nonetheless be used to select features where the fragility of the multivariate model, or the applicable data regime, compels it. ACKNOWLEDGMENTS This work is aligned with a project on "Novel methods to explore the value of cognitive health in a place" supported by the Health Foundation, an independent charity committed to bringing about better health and health care for people in the UK.
9,296
2022-04-05T00:00:00.000
[ "Computer Science" ]
DESIGN AND ANALYSIS OF HYDRAULIC POWERPACK AND PUMPS. This paper is about the design and development of a hydraulic system for clamping a workpiece on a Vertical Machining Center (VMC). The clamping system plays a vital role in any manufacturing system: it secures the workpiece and also increases production efficiency. We therefore made a device that overcomes the drawbacks of manual clamping and provides a safe clamping system, operating on the hydraulic principle of Pascal's law. Using this principle, we built a semi-automatic hydraulic clamping system, which increases productivity and reduces the product cycle time. We also analyzed the component in ANSYS 16 software under various clamping loads. A further objective is to minimize the operation time per product. Problem Definition: When a job is clamped manually on a VMC (vertical machining center), there are several drawbacks: 1. Less accuracy (clamping force): the force a human applies to the clamping device varies from time to time and from person to person, so the clamping accuracy of the job/workpiece decreases; the human clamping force is not constant. 2. No feedback loop: when the workpiece is clamped on the VMC, neither the machine nor the worker receives a clamping indication (i.e., whether the job is clamped or not). 3. Time consuming: manual clamping and de-clamping take a lot of time, as does checking whether the clamp is fixed. 4. Surface finish: with manual clamping we do not obtain the required surface finish. 5. Others: if the workpiece is not clamped perfectly on the VMC machine, the accuracy of the other processes performed on the machine is reduced; manual clamping also always requires qualified/skilled labor. Because of these problems, maintenance as well as operating cost increases. To address all the problems related to manual clamping, we design an automatic clamping system, moving toward a hydraulic system, which is more accurate and widely used in industry. Our work is thus to design a hydraulic power pack unit for clamping and de-clamping of the workpiece on a VMC machine. O.J Bakker et al. [2]: This paper analyzes the latest studies in the field of fixture design and its relationship with flexible clamping and reconfigurable fixture systems. It reveals that performance and flexibility are the drivers behind the different fixturing concepts that have been proposed, which help to improve accessibility. Sunny N Shahane et al. [3]: This fixture was designed and built to hold, support, and locate a fire tube boiler plate to ensure that it is drilled with accuracy, which helps improve productivity and save time. The automation reduced human effort, and the design enabled vibration-free operation; it increased productivity and reduced the 1-hour cycle time by a factor of about 15. Sridharkeshava K B et al. [4]: This paper gives a brief introduction to the general and classic principles of jig and fixture design for clamping operations. Workpiece location, and clamping stability under dynamic machining and frictional conditions at the interface between the jig and fixture elements and the workpiece, are taken into account, covering: 1. manufacturing considerations; 2. clamping location, tool guiding, and workpiece mounting.
K.M Viramgama et al. [5]: This paper gives a brief overview of the 3-2-1 locating principle for designing fixtures for complex parts, along with other clamping principles. From the study we can conclude that the geometric 3-2-1 principle is very useful for designing fixtures for complex components. Srinivas R et al. [6]: A hydraulic system is a group of hydraulic elements arranged in an order such that power is transmitted through a confined liquid, i.e., oil. Hydraulic power units are the drive systems for hydraulic machines. Component Details: The component is a Bajaj Ape differential casing. A differential is a device used for obtaining two different speeds at the rear wheels of a vehicle while turning. This split-type construction has two casing parts attached by means of bolting. Fig. 1 shows one part of the differential casing. The component is made of grey cast iron (ASTM A48) by the sand casting process. The differential is also known as the secondary gearbox of a vehicle. The operations to be performed on the differential casing are drilling four holes and tapping these holes on the VMC machine. For that, we developed a hydraulic clamping system to increase production accuracy and reduce cycle time. Hydraulic Circuit: A hydraulic circuit is a system of interconnections between many hydraulic parts through which the hydraulic liquid flows and generates power; this power is used to achieve a specific function, resulting in work being performed. Before the hydraulic circuit can be designed, the following must be defined: 1. the type and number of each type of hydraulic actuator to be used on the fixture; 2. the operating pressures required; 3. the sequence of operation; 4. the type of control required. The circuit comprises the following components: 1. active components: the hydraulic power pack; 2. transmission lines: hydraulic pipes; 3. passive components: hydraulic cylinders. A hydraulic pump with an output of 1.395 LPM is used. A hydraulic pump is a device used to impart motion and pressure to the fluid in a hydraulic circuit. The pump is driven by a 3-phase, 1 hp electric motor to create flow; the flow created by the pump pushes against the piston of a hydraulic cylinder. The directional control valve is one of the most fundamental parts of a hydraulic circuit, allowing fluid to flow along different paths from one or more sources. Here a double-solenoid, spring-centered, center-open type D.C. valve is used; the valve is controlled by an electric current through a solenoid. In the center position the pressure line is connected to the tank line and the motor is unloaded, so quick-disconnect couplings can be connected or disconnected in the center position. Cylinders are linear actuation devices that are typically used to keep a workpiece stationary or move a workpiece into position. They provide an axial clamping force proportional to the hydraulic pressure applied. The hydraulic clamp used provides a force of 2.22 kN at 15 bar, with a piston diameter of 45 mm and a stroke of 58 mm. It is important when designing a circuit that all devices, including fittings, hoses, valves, and tubing, have a working pressure compatible with the circuit pressure; never exceed the maximum operating pressure of any device. If the system flow requirement for the clamp time is established within the restrictions of the largest device, the addition of a flow control will be required to prevent over-driving the smaller devices.
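The quoted clamp rating can be sanity-checked directly from Pascal's law, force = pressure × area. A minimal sketch in Python using only the figures given above (the shortfall of the rated 2.22 kN against the theoretical piston force would be consistent with seal friction or rod-side losses, which the paper does not specify):

```python
import math

# Check the clamp cylinder rating quoted in the text with Pascal's law.
pressure_pa = 15.0 * 1e5                       # 15 bar
area_m2 = math.pi / 4 * 0.045 ** 2             # 45 mm piston
force_kn = pressure_pa * area_m2 / 1000
print(f"Theoretical piston force: {force_kn:.2f} kN")       # ~2.39 kN vs rated 2.22 kN

swept_volume_l = area_m2 * 0.058 * 1000        # 58 mm stroke, litres per stroke
print(f"Swept volume per stroke: {swept_volume_l:.3f} L")   # ~0.092 L
```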
The hydraulic circuit is shown in Fig. 2 (hydraulic circuit diagram with the different components). Design: Every mechanical system follows a design procedure; we identify the important parameters that must be designed for the system, and the design gives satisfactory performance according to the calculations. The hydraulic system consists of many parts, of which the important parameters to calculate are: 1. the hydraulic cylinder; 2. the flow rate/discharge of the system; 3. the tank capacity (a rough sizing sketch based on these figures follows the FEA discussion below). The design of our system requires some basic information, given above; the detailed calculations give HP = 0.04375 BTU/hr. Finite Element Analysis (FEA) of Component: Finite element analysis (FEA) is a computerized method for predicting how a product reacts to real-world forces, vibrations, and other physical effects. It shows whether a product will break, wear out, or work the way it was designed. The component is divided into small pieces known as elements, and the collection of elements on the model forms a mesh. The computer analyzes the elements and shows the collective result, solving by the computational method provided. The FEA was performed in ANSYS 16.0. The results show whether the deflection is within the permitted limit, and whether the static and dynamic forces acting on the component remain within allowable bounds, so that the component retains some flexibility without being damaged; the software simulation thus confirms whether the designed component is safe and whether further improvement is needed. Meshing: For the analysis, the whole component is divided into a number of elements connected at points called nodes; the process of generating and joining these is called meshing. We chose the auto-mesh mode. The meshing is shown in Fig. 3. In the same way, we applied different forces to the casing and observed their effects on the differential casing. From the resulting values of stress and deformation we chose a clamping force of 225 kg, because it is recommended by industry, although according to our analysis a force of 250 kg could also be used. The values of the various deformations and stresses are given in Table 5.
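As a rough illustration of the sizing logic in the design list above, the following sketch estimates the clamping time and a rule-of-thumb reservoir size from the pump and cylinder figures quoted in the text. The three-times-flow tank rule and the neglect of valve and line losses are our own simplifying assumptions, not values from the paper:

```python
import math

# Rough sizing from the quoted figures: 1.395 LPM pump, 45 mm piston,
# 58 mm stroke. Losses neglected; tank sized at ~3x pump flow per minute
# (a common industrial rule of thumb, assumed here).
pump_flow_lpm = 1.395
swept_volume_l = (math.pi / 4 * 0.045 ** 2) * 0.058 * 1000

clamp_time_s = swept_volume_l / pump_flow_lpm * 60
print(f"Full-stroke clamping time: {clamp_time_s:.1f} s")       # ~4 s

tank_capacity_l = 3 * pump_flow_lpm
print(f"Rule-of-thumb tank capacity: {tank_capacity_l:.1f} L")  # ~4.2 L
```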
2,056.6
2019-07-31T00:00:00.000
[ "Engineering" ]
Estimation of the canopy height model from multispectral satellite imagery with convolutional neural networks. The canopy height model (CHM) is a representation of the height of the top of vegetation above the surrounding ground level. It is crucial for the extraction of various forest characteristics, for instance, timber stock estimation and forest growth measurement. There are different ways of obtaining the vegetation height, such as through ground-based observations or the interpretation of remote sensing images. The severe downside of field measurement is its cost and acquisition difficulty. Therefore, utilizing remote sensing data is, in many cases, preferable. The enormous advances in computer vision during the previous decades have provided various methods of satellite imagery analysis. In this work, we developed a canopy height evaluation workflow using only the RGB and NIR (near-infrared) bands at a very high spatial resolution (investigated on WorldView-2 satellite bands). Leveraging typical data from airplane-based LiDAR (Light Detection and Ranging), we trained a deep neural network to predict the vegetation height. The provided approach is less expensive than the commonly used drone measurements, and the predictions have a higher spatial resolution (less than 5 m) than the vast majority of studies using satellite data (usually more than 30 m). The experiments, which were conducted in Russian boreal forests, demonstrated a strong correlation between the predictions and the LiDAR-derived measurements. Moreover, we tested the generated CHM as a supplementary feature in the species classification task. Among different input data combinations and training approaches, we achieved a mean absolute error of 2.4 m using U-Net with an Inception-ResNet-v2 encoder, a high-resolution RGB image, the near-infrared band, and ArcticDEM. The obtained results show promising opportunities for advanced forestry analysis and management. We also developed an easy-to-use open-access solution for solving these tasks based on the approaches discussed in the study. Vegetation height data can come from three main sources: 1) field-based observations; 2) Unmanned Aerial Vehicle (UAV)-based approaches; and 3) satellite remote sensing data. All the aforementioned approaches have advantages and limitations connected with acquisition time and cost (Fig 1). The first data source is forest inventory documents, usually treated as field-based observations. They are available for some regions and useful in addressing the needs of forest owners, governmental bodies, and independent organizations [14]. However, these data do not cover all regions of practical interest [15]. Furthermore, keeping such data up to date is time-consuming and cost-intensive in difficult-to-access areas. An alternative approach is to use remote sensing data. The remote sensing approach draws on both active and passive sensing technologies. In active sensing, such as Light Detection and Ranging (LiDAR) measurements, the sensor measures the time between light emission and its return to estimate the distance to an object (a surface). This technology allows digital elevation models to be produced. Passive remote sensing measures radiation that is emitted or reflected by the object in different spectral wavelengths. Spectral bands obtained this way can be used for further analysis and for calculating the height value in landcover extraction. A common approach builds on UAV assessment. A UAV with LiDAR sensors is a powerful tool for forest height estimation.
It obtains canopy height data with minor errors, meeting the precision requirements of almost all forestry tasks. However, such equipment is more expensive than a spectral aerial camera system, so the challenge remains to obtain the same information using low-cost methods [16]. Many works use LiDAR data as a reference and aim to find a cheaper source of height data. A detailed review of the alternative approaches to LiDAR sensing is presented in [17], [18]. Thus, most of the current studies in the sphere of canopy height estimation use UAVs with optical aerial systems [19]-[24]. Despite the advantages of this approach over field-based observations, the labor involved in working with vast and remote areas is problematic when large regions have to be processed. Satellite data address this issue, providing a cheaper option for forest monitoring [17]. Point cloud data useful for estimating the canopy height can also be derived from satellite imagery using a photogrammetric approach. A comparison of such a photogrammetric approach with high-density LiDAR measurements is presented in [25], where the authors showed that the photogrammetric method is slightly less accurate (a difference in $R^2$ of about 0.07) than the LiDAR method for height measurements of a forest region in New Zealand. An important benefit of the photogrammetric method is that it can provide information at a larger scale than the LiDAR method; however, it requires special high-resolution imagery, which is not always available for a particular region. The other limitation of the photogrammetric method is that it can characterize only the upper canopy and cannot perform a vertical characterization of the forest such as can be done by laser scanning. A comparison of photogrammetry obtained by unmanned aerial systems with aerial laser scanning for forest inventory in Oregon was presented in [26], where the authors stated that photogrammetry is slightly less accurate than laser scanning (a difference in $R^2$ for height estimation of about 0.15), although it is easier to integrate into existing forest monitoring methodologies. Our work is focused on using satellite images for CHM estimation, as they are a preferable data source to LiDAR-derived measurements in terms of cost and spatial coverage. Neural networks allow us to conduct image processing automatically. We set up the hypothesis that neural networks can extract significant spatial features from very high-resolution (1 m) RGB images to improve the performance of CHM estimation. It was expected that developing a satellite-based solution compatible with a high-resolution UAV approach would further enable the prediction of advanced forest characteristics. Thus, this study's objectives and contributions were: 1) to develop a method for vegetation height estimation utilizing deep neural networks and different configurations of input data, varying the spectral composition (reducing to Blue, Green, and Red), the spatial resolution, and the addition of topography features; 2) to assess the generated height map, conducting a further investigation into the classification of dominant forest species (conifer and deciduous), for which multispectral imagery was incorporated with height data; 3) to create the software toolchain to train a neural network to predict the CHM using single satellite non-stereo imagery; and 4) to develop an easy-to-use open-access solution for the community, which is now available at the following resource [27].
The underlying code will be shared: https://github.com/LanaLana/forest_height. II. RELATED WORK For canopy height estimation studies, spectral satellite imagery can be distinguished by the following characteristics: spatial resolution, spectral range, and availability. The majority of works tackle the canopy height evaluation problem at a spatial resolution coarser than 20 m. This approach is justified for particular tasks when large-scale maps are produced. In [28], they conducted a 30 m spatial resolution canopy height evaluation with Landsat imagery and showed the dynamics over 29 years in the Darwin region. In [29], they employed Landsat 7 and 8 time-series data (30 m spatial resolution) to estimate tree heights in Africa. GLAS (Geoscience Laser Altimeter System) height measurements from the ICESat satellite were used as reference data (60-70 m spatial resolution). The same height data source was mentioned in [30]. In [31], they used Sentinel-2 images that were resampled to a 20 m pixel size to predict Mangrove forest canopy height. Other studies involving Sentinel-2 data are reported in [32]-[34]. In [35], they assessed SAR images from ALOS PALSAR, and upsampled them from 30 to 5 m to match a LiDAR elevation model. [Figure 1: Cost comparison of different forest height measurement approaches (diagram is not to scale).] The cases of very high-resolution (3.7 m) images from the Planet Dove implementation are presented in [36]. However, the target height map resolution for that study was 1 hectare. Very high-resolution (2 m) WorldView-2 satellite imagery was used in [37], but the working resolution was adjusted to 5 m. Another important data characteristic is the spectral range and the number of channels. A wider wavelength range is available for satellites with low spatial resolutions (Landsat, Sentinel) than for some very high-resolution satellites. For instance, the Planet (3-5 m resolution) and GeoEye (2 m resolution) satellites have Blue, Green, Red, and NIR bands; RapidEye (6 m resolution) has a Red Edge band. The GeoEye panchromatic channel has a 0.4 m resolution and allows the RGB bands to be enhanced. WorldView-2 provides eight spectral bands with a resolution of 2 m. An additional source of very high-resolution remote sensing data is basemaps with RGB bands, such as the one provided by Maxar [38]. Nevertheless, the majority of works focus on using only a wide multispectral range (more than eight bands), sacrificing the spatial resolution. Among the aforementioned satellite-based studies, the minimal number of spectral bands (Blue, Green, Red, NIR) was only considered in [36]. However, the goal of that work was the creation of a large-scale, country-wide map, so the spatial resolution of the analysis was 1 hectare. Therefore, the issue of minimizing the number of required satellite bands for forest height estimation has not yet been well studied. Satellite data are frequently accompanied by data from other sensing techniques. In [39], they combined four Kompsat-3 multispectral bands and PALSAR-1 radar images resampled to 2.8 m to train a neural network. Few studies have restricted themselves to self-contained spectral satellite data [33], [40]-[42]. However, the spatial resolution of the Sentinel and Landsat images (10 m or coarser) considered in these studies is not high enough to extract small details on the surface. Thus, a satellite spatial resolution of 1 m per pixel is still beyond the scope of the majority of studies. Data availability is also a significant aspect of implementation in practice.
An image's properties affect its cost. Sentinel and Landsat images are open source, while WorldView, Planet, and RapidEye are commercial and contain a greater amount of the spatial information required in applied tasks. After data acquisition, the obvious question of data processing arises. Computer vision algorithms enable high-quality automatic satellite imagery analysis. Such methods are usually based on extracting key features from the input spectral bands to describe some object, which can be a pixel or a set of pixels. Then, the algorithm aims to ascribe a label (for classification tasks) or a value (for regression tasks) to the object. The processing methods for expansive forestry areas using satellite images are classical machine learning models, such as Random Forest [43] or Support Vector Machine [44]. Their main advantages are simplicity and, in the case of linear models, straightforward interpretation. Generally, spatial characteristics are not taken into consideration, and an algorithm relies on spectral values or precalculated vegetation indices. In [28], a combination of 14 vegetation indices and spectral bands was used in a Random Forest model to predict the canopy height using Landsat images. Moreover, the strong correlation between the normalized difference vegetation index (NDVI) and canopy height has been well emphasized in aerial photography [16], [35]. Despite the importance of spectral data, other vital features can also be processed. For instance, there is a strong correlation between forest height and canopy width, as discussed in [32], in which the canopy volume was estimated using only the crown projected area and the crown diameter combined in a particular regression equation. The deep neural network-based approach is more capable than classical machine learning methods for the following reasons: the texture and spatial features extracted by neural networks include sufficient information about landcover, and such models handle not only spectral values but also the aforementioned spatial characteristics of an object, available, for instance, in UAV-based tasks [45]. Tree height is correlated with tree diameter for each forest species [46]. In [47], tree height was estimated from an exponential equation including the diameter at breast height. The crown form depends on the tree species; accompanied by the crown diameter, it can provide important features for a neural network. Tree height can also be derived from spectral information only, as it depicts meaningful vegetation characteristics such as chlorophyll content [48]. A. STUDY AREA The study area is located in the Arkhangelsk region of northern European Russia, with coordinates between 45°16′ and 45°89′ longitude and between 61°31′ and 61°57′ latitude (Fig 2). The investigated territory belongs to the middle boreal zone. The region's climate is humid, with the warmest month being July, when the temperature rises to 17°C. The topography is flat, with heights ranging between 170 and 215 m above sea level [49]. The main species present in the region are pine, spruce, aspen, and birch. B. REFERENCE DATA We used forest inventory and LiDAR-derived data covering an area of about 50 thousand hectares. LiDAR measurements were conducted at the end of August 2017 and 2018 with a Leica ALS 80 HP scanner. The Canopy Height Model (CHM) with a 1 m spatial resolution was then generated from the LiDAR-derived point clouds.
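As background to how such a reference CHM is typically produced from classified returns, the following is a minimal sketch of gridding a LiDAR point cloud into a canopy height raster (CHM = surface model minus terrain model). The gridding scheme and all names are our own simplifications; the paper does not detail its point-cloud processing.

```python
import numpy as np

# Grid classified LiDAR returns into a CHM: the highest return per cell gives
# the surface model, the highest ground-classified return approximates the
# terrain model (real pipelines interpolate the ground surface instead).
def lidar_chm(x, y, z, is_ground, cell=1.0):
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    shape = (iy.max() + 1, ix.max() + 1)
    dsm = np.full(shape, -np.inf)
    dtm = np.full(shape, -np.inf)
    np.maximum.at(dsm, (iy, ix), z)
    np.maximum.at(dtm, (iy[is_ground], ix[is_ground]), z[is_ground])
    chm = dsm - dtm                     # vegetation height above local ground
    chm[~np.isfinite(chm)] = np.nan     # cells lacking canopy or ground returns
    return chm
```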
The inventory data were collected in accordance with the official Russian inventory regulation in 2018 and 2019 [50]. They included such characteristics as canopy height, species percentage distribution, and age. These data were organized as a set of individual stand coordinates with the appropriate characteristics, based on the assumption that the stand was homogeneous. A species class markup, presented as a raster map of dominant conifer and deciduous classes, was used in additional experiments. The statistics of these data are shown in Table 3. However, the shift in geo-referencing between the satellite data and the LiDAR-derived measurements makes a target at 1 m spatial resolution less useful. As the typical shift lies between 2 and 3 m, the high-resolution CHM will show an erroneous value for a particular point in the satellite image. This forced us to downsample the height map to 5 m, so that the target value for each point represents the mean value of the area including the true location. The distribution of height over the study region is shown in Figure 4. Although height is usually represented as a continuous value, height categories are essential for practical use in power line services. Height classes are often required instead of continuous values for decision-making within protected areas [51], because different categories (of dangerous vegetation overgrowth) have different importance, and estimation in particular categories has to be more precise to reduce accidents in power line corridors. C. THE TEST REGION SELECTION The training and test areas were taken from the same satellite images, but without overlap. The test region was manually chosen to include a diversity of height classes. The total test area was equal to 13% of the initial dataset. Its spatial location is presented in the corresponding figure. D. SATELLITE DATA We used Sentinel-2 and WorldView-2 satellite imagery to examine high and very high spatial resolution data sources. The boreal location of the study area resulted in a lack of cloudless images. All images were from the boreal growing season (from May to August). Image IDs and dates are presented in Tables 1 and 2. WorldView imagery was downloaded from GBDX [52]. For the height estimation task, we used the Red, Green, Blue, and Near-Infrared bands, while for the species classification problem all eight bands were considered. The resolution of the WorldView images was 1, 2, or 5 m, depending on the experiment. For CNN-based tasks, image values in the range from 0 to 1 are usually used [53], [54]. Therefore, pixel values were brought into a range between 0 and 1 using Equation 3, where mean and std are the mean and standard deviation of the image; in Equations 1 and 2 we calculate $m$ and $M$, the minimum and maximum of the preserved dynamic range. The standardization of the imagery according to whole-dataset statistics proves more profitable for neural network training than a simple scaling of the entire value range [55]. For the spatial resolution adjustment, a pansharpening procedure was implemented using a panchromatic band obtained in the imagery bundle with the multispectral data from the data vendor. We did not consider any predefined cloud mask for WorldView. However, during training, pixels with particular properties were eliminated from consideration (see subsection III-G), which allowed us to clean the dataset of erroneous labels.
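Because the bodies of Equations 1-3 did not survive extraction here, the sketch below is a hedged reconstruction of a common variant of this normalization, assuming the preserved dynamic range is mean ± 2·std; only the roles of m, M, mean, and std are stated in the text, so the window width is our assumption.

```python
import numpy as np

# Hedged reconstruction of the [0, 1] normalization (cf. Equations 1-3):
# m and M bound the preserved dynamic range around the image statistics.
# The +/- 2*std window is our assumption, not the paper's stated value.
def normalize_band(band: np.ndarray, k: float = 2.0) -> np.ndarray:
    mean, std = band.mean(), band.std()
    m = mean - k * std                               # assumed form of Equation 1
    M = mean + k * std                               # assumed form of Equation 2
    return np.clip((band - m) / (M - m), 0.0, 1.0)   # assumed form of Equation 3
```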
For the additional analysis, freely available Sentinel data were downloaded in L1C format from EarthExplorer USGS [56] and preprocessed with Sen2Cor [57] to the L2A format. Pixel values were brought into a range between 0 and 1 using Equation 3. We used the B02, B03, B04, B05, B06, B07, B08, B11, B12, and B8A bands, which were adjusted to a 10 m resolution. The 60 m bands were discarded, as they are more affected by the atmosphere than by the land surface. The 20 m bands were upsampled to 10 m with the nearest-neighbour method to avoid corrupting the initial data (they can be unambiguously downsampled back to exactly the initial 20 m data). Both for Sentinel and WorldView, each image covered the entire study area, and images were considered separately without any spatial averaging (the same as in [58]). As supplementary features, we used a freely available high-resolution digital elevation model (DEM), ArcticDEM [59], covering boreal regions (Fig 5). It provides a resolution of 2 m. For some experiments, the resolution was upsampled to 1 m by interpolation (see Section III-E). Both the satellite and LiDAR data were co-registered through geo-referencing, the same as in [37]. We used a cloud-free composite orthophotomap provided by mapbox [60] via a tile-based map service as an example of a freely available high-resolution RGB data source. This image covered the same test region and was used only for assessing the developed model. We chose this data source because a model that does not demand expensive input data is crucial for an open-access platform that can handle more widely available images. The spatial resolution was 1 m per pixel, and the preprocessing was the same as for the WorldView data.

E. FEATURE SELECTION FOR DEEP NEURAL NETWORK
Convolutional neural networks take a tensor as an input, so the selection of features to create this tensor is fundamental. To find the best input data representation for the CHM estimation problem, we established a range of experiments. Firstly, we conducted a study with the WorldView bands. The workflow of our research is shown in Fig 6. For each experiment, the RGB bands were always used. The variable part concerned the resolution and the supplementary features (NIR and ArcticDEM), which were combined with the RGB channels in a single input tensor for the neural network model. We studied the original (2 m), pansharpened (1 m), and downsampled (5 m) images. For the experiments at 1 m resolution, bands were upsampled to the target resolution by bilinear interpolation. We used bilinear interpolation for image resampling to avoid the aliasing that emerges with nearest-neighbour resampling and the halo artifacts inherent to higher-order interpolation methods, both of which are more problematic for neural networks than bilinear smoothing. A reference CHM was used during the training procedure to estimate the model's error. To minimize data mismatches, the reference and predicted height maps were intersected with the forest cover mask before the loss function calculation stage. We therefore conducted the following experiments for the WorldView images:
1) RGB, original resolution 2 m;
2) RGB, pansharpened to 1 m;
3) RGB pansharpened to 1 m + ArcticDEM upsampled to 2 m;
4) RGB + NIR, original resolution 2 m;
5) RGB + NIR, original resolution 2 m + ArcticDEM upsampled to 2 m;
6) RGB pansharpened to 1 m + NIR upsampled to 1 m;
7) RGB pansharpened to 1 m + NIR upsampled to 1 m + ArcticDEM upsampled to 1 m;
8) RGB downsampled to 5 m resolution.
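As a small illustration of the Sentinel band adjustment described above, nearest-neighbour upsampling from 20 m to 10 m simply replicates each source pixel into a 2×2 block, so the operation is exactly reversible. The helper below is a sketch, not code from the paper's repository.

```python
import numpy as np

def upsample_nearest(band_20m, factor=2):
    """Nearest-neighbour upsampling of a 20 m Sentinel-2 band to 10 m.
    Each pixel becomes a factor x factor block of identical values."""
    return np.kron(band_20m, np.ones((factor, factor), dtype=band_20m.dtype))

# The initial 20 m data can be recovered exactly by block subsampling:
# assert np.array_equal(band_20m, upsample_nearest(band_20m)[::2, ::2])
```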
To assess the importance and the limits of spatial resolution, we also checked the model's performance for the WorldView RGB bands downsampled to 5 m. To compare the model's performance on high-resolution RGB images with that on the less detailed but spectrally richer Sentinel data (10 bands, upsampled to 10 m), we conducted two further experiments:
1) multispectral bands;
2) multispectral bands + ArcticDEM downsampled to 10 m.

F. STRATEGIES FOR HEIGHT PREDICTION AND EVALUATION METRICS
Regression naturally leads to richer (continuous) estimations for practical implementations than rigid class-based output maps. We therefore considered both regression and classification tasks for a comparative analysis. In the regression problem statement, each pixel is ascribed a particular value corresponding to the height parameter. The loss can then be estimated as an error between the real height value (the CHM value) and the predicted value. The considered metrics are the root mean square error (RMSE), mean absolute error (MAE), and mean bias error (MBE):

$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2}$, $\quad\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}|\hat{y}_i - y_i|$, $\quad\mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)$,

where $\bar{y}$ is the mean target value among all pixels (the mean CHM value), $\hat{y}_i$ is the predicted value of the i-th pixel, $y_i$ is the target value of the i-th pixel (the CHM value), and n is the number of pixels. Test-region results were computed for all images in the WorldView or Sentinel datasets.

Using the same reference data, we can also solve a classification task. When we formalized the problem as classification, we divided the continuous height values into classes. The choice of such a division often depends on an applied task's demands. For our study, we chose the intervals 0−4, 4−10, 10−20, and > 20 m. We relied on the number of classes and the height intervals described in [61], slightly shifting the interval boundaries according to the suggestion of the inventory data provider from the Arkhangelsk region. After splitting the continuous dataset into the aforementioned classes, we can compute the portion of wrongly estimated pixel classes and use the F1-score [62] to evaluate the trained classification models:

$F1 = \frac{2\,TP}{2\,TP + FP + FN}$,

where TP denotes true positives, FP denotes false positives, and FN denotes false negatives. The above formulas were applied on a per-class basis. To compute the results, test regions from all images were used. Classification refers to an area-level assessment, while in regression we strove to optimize each pixel value; therefore, the two approaches can lead to different local optima. For example, if we split heights between 0 and 30 m into the buckets 0−4, 4−10, 10−20, and 20−30, then it does not matter that some pixels are assigned not the exact value but some value from the correct bucket. It is then clear that regression predictions can also be represented in terms of classification. For the classification task, a multiclass weighted cross-entropy loss function was used to make the predictions more balanced even for classes with fewer representatives. The same approach was implemented for the regression task. We compared the simple RMSE loss (Equation 10) and the weighted RMSE loss (Equation 11). For heights with fewer representatives, the penalty for a wrong prediction was increased by predefined weights. The weights were inversely proportional to the height distribution. There was also a height threshold beyond which the weight was equal to 1 (no extra penalty).
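A minimal sketch of the per-pixel metrics and the height-class discretization just defined (the weighted, masked loss itself is sketched after subsection III-G below); numpy and scikit-learn are assumed, and the bin edges follow the study's 0−4, 4−10, 10−20, > 20 m classes.

```python
import numpy as np
from sklearn.metrics import f1_score

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and MBE over the evaluated pixels."""
    err = y_pred - y_true
    return {"RMSE": float(np.sqrt(np.mean(err ** 2))),
            "MAE": float(np.mean(np.abs(err))),
            "MBE": float(np.mean(err))}

def to_height_classes(heights):
    """Discretize continuous heights (m) into the four study classes."""
    return np.digitize(heights, bins=[4.0, 10.0, 20.0])

# Per-class F1 on the discretized maps:
# f1_per_class = f1_score(to_height_classes(y_true).ravel(),
#                         to_height_classes(y_pred).ravel(), average=None)
```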
The range of weights and the threshold were chosen empirically, as shown in Figure 7. The weighted RMSE loss (Equation 11) takes the form

$L = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \mathrm{weights}(y_i)\,(\hat{y}_i - y_i)^2}$,

where $\hat{y}_i$ is the predicted value of the i-th pixel, $y_i$ is the target value of the i-th pixel, N is the number of relevant (non-masked) pixels, and weights($y_i$) is the extra penalty depending on the target value of the i-th pixel.

We needed to manage the temporal mismatch (such as logging) between the LiDAR scanning and the satellite imagery. To do so, we used two heuristics. The first was that pixels labeled as forest by the forest cover model but with a height of less than 1 m were considered to be forest logging. The forest cover model classifies pixels covered with clouds as non-forested; therefore, the second heuristic was that pixels not labeled as forest but with CHM > 5 m were considered clouds. Reference and predicted height values for these pixels were not used in the loss function calculation during the training procedure (they were treated as masked). Thus, the mask of relevant pixels was defined by the following equations:

logging = (forest == 1) * (CHM < 1) (12)
cloud = (forest == 0) * (CHM > 5) (13)
height_mask = (logging == 0) * (cloud == 0) (14)

where the forest mask was obtained from a neural network model trained to predict forest cover with high accuracy, especially in terms of small details, using RGB bands. This model was implemented in the GeoAlert service [63].

G. EXPERIMENTAL SETTINGS
For all the neural network models, training was performed on the Skoltech supercomputer Zhores [64], using Keras [65] with a TensorFlow [66] backend. The source code containing the implementation details is available in the aforementioned repository. Both for the regression and the classification task, a U-Net [67] with an Inception-ResNet-v2 [68] encoder was used (Figure 8). U-Net is a popular CNN architecture in the remote sensing domain that has shown high performance in various problems [69], [70]. The upsampling layers follow the U-Net's downsampling layers. Skip connections between layers allow the convolutional neural network to manipulate vital information at large spatial scales while avoiding the loss of local information. Skip connections also facilitate gradient flow during the training procedure, as highlighted in [71]. We substituted the original VGG-like encoder with a ResNet-based one, as it has shown strong results in various works [72]. Residual connections in the Inception-ResNet-v2 encoder provide shortcuts that lead to better prediction quality [73] and enable a substantial simplification of the Inception blocks. We used the original U-Net decoder, where every step consists of an upsampling of the feature map followed by a 2×2 convolution that halves the number of feature channels. The expansive path also includes concatenation with the cropped feature map from the contracting path and two 3×3 convolutions, each followed by a ReLU. The total number of parameters in the neural network is 62M, of which the encoder accounts for 54M. The decoder has 5 blocks, while the encoder consists of 8 blocks. The models' implementation was based on an open-source library [74]. Each model was trained for 25 epochs with 200 training and 100 validation steps per epoch, a learning rate decreasing from 0.001, the RMSprop [75] optimizer, and early stopping with a patience of 5 epochs. For the classification task, the softmax function was chosen as the activation of the last layer; for the regression model, a linear activation was used. For all models, geometric augmentation was implemented, involving random rotations and vertical and horizontal flips.
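The sketch below ties Equations 10-14 and the training settings together in Keras/TensorFlow. It makes three assumptions not confirmed by the paper: that the open-source library [74] is the segmentation_models package, that y_true carries the CHM target and the binary forest mask as two channels so the loss can rebuild the relevance mask on the fly, and that weights_fn is a hypothetical stand-in for the empirically chosen weights of Figure 7.

```python
import tensorflow as tf
import segmentation_models as sm  # assumed to be the open-source library [74]

def weights_fn(y):
    # Hypothetical weighting: extra penalty below an assumed threshold; the
    # paper's actual weights are inversely proportional to the height
    # distribution and were chosen empirically (Figure 7).
    return tf.where(y < 20.0, 2.0, 1.0)

def masked_weighted_rmse(y_true, y_pred):
    """Weighted RMSE over relevant pixels (Eqs. 10-14). y_true[..., 0] is
    the CHM target; y_true[..., 1] is the binary forest mask."""
    chm, forest = y_true[..., 0], y_true[..., 1]
    pred = y_pred[..., 0]
    logging = forest * tf.cast(chm < 1.0, tf.float32)        # Eq. 12
    cloud = (1.0 - forest) * tf.cast(chm > 5.0, tf.float32)  # Eq. 13
    mask = (1.0 - logging) * (1.0 - cloud)                   # Eq. 14
    sq = weights_fn(chm) * tf.square(pred - chm) * mask
    n = tf.maximum(tf.reduce_sum(mask), 1.0)                 # relevant pixels N
    return tf.sqrt(tf.reduce_sum(sq) / n)

# 5-channel input (e.g., RGB + NIR + ArcticDEM); no pretrained encoder
# weights, since those expect 3-channel inputs.
model = sm.Unet("inceptionresnetv2", input_shape=(None, None, 5),
                classes=1, activation="linear", encoder_weights=None)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss=masked_weighted_rmse)
# model.fit(train_gen, validation_data=val_gen, epochs=25,
#           steps_per_epoch=200, validation_steps=100,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
```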
For models using the RGB channels only, we also implemented color transformations; for this, the albumentations library [76] was used.

H. CLASSICAL MACHINE LEARNING METHODS
We also conducted experiments with classical machine learning methods to compare different approaches to canopy height estimation. Two approaches were considered: Random Forest (RF) [43] and Gradient Boosting (GB) [77]. These approaches are widely used in the remote sensing domain due to their relatively high performance in various tasks. For the RF method, we used 300 decision trees with a maximum depth of 8, as these parameters showed the best quality. We also compared tree counts of 100, 200, 400, 500, and 600 and maximum depth values of 4, 5, 6, 7, 8, 9, and 10. For the GB method, the parameters were 200 estimators, a learning rate of 0.1, and a maximum depth of 7, also set empirically (the same grid as in the RF case was considered for the number of trees and the maximum depth). For both methods, the scikit-learn [78] implementation was used. A proper feature space is essential for machine learning algorithms, especially classical ones. The features were selected according to the study described in [79] as the most relevant for estimating vegetation properties from Sentinel images. Therefore, the following vegetation indices were computed and combined with the initial multispectral bands, resulting in the Sentinel-derived features: the Normalized Difference Vegetation Index (NDVI), the Simple Ratio Index (SRI), the red-edge Normalized Difference Vegetation Index (RENDVI), and the Anthocyanin Reflectance Index 1 (ARI1). Thus, each pixel was considered as an input feature vector (see the sketch below).

I. FOREST-TYPE CLASSIFICATION MODEL
To estimate the quality of the developed models, we considered a forest-type classification problem. To train the neural network model to predict two species classes (conifer and deciduous), we leveraged both WorldView and Sentinel imagery. The problem was defined as a per-pixel semantic segmentation task. Forest inventory characteristics were used as reference data. The eight WorldView bands were intersected with the forest mask. Both for the Sentinel and the WorldView imagery, a height map or an age map was used as an additional channel. This was done to make the model more robust in terms of the species diversity resulting from different forest ages. The neural network input was therefore formed of 10 bands. As mentioned above, there are two familiar sources of height values: LiDAR-derived data and forest inventory characteristics. The difference is in the data representation. Forest inventory characteristics establish the height of each individual stand (a small region delineated according to similar feature values such as tree species, age, and density). Although the real height within a stand can differ from pixel to pixel, all pixels corresponding to a particular stand share the same height value. Thus, for this experiment we used both inventory- and LiDAR-derived height data. We compared model predictions under the following data-leveraging strategies:
1) multispectral data only;
2) multispectral data and CHM data;
3) multispectral data and inventory height data;
4) multispectral data and inventory age data;
5) multispectral data and a CHM artificially generated by the best height model.
For these experiments, we trained a smaller U-Net model with a ResNet-34 encoder [80]. Individual stands from the dataset were randomly split into training and testing sets, as shown in Table 3.
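As a concrete illustration of the classical baselines of Section H above, the sketch below computes the four indices from Sentinel bands and instantiates the best-performing RF and GB configurations reported there. The index formulas are the standard definitions (the paper follows [79] but does not restate them), and the band-naming scheme is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

def sentinel_features(bands):
    """Per-pixel features: the 10 multispectral bands plus NDVI, SRI,
    RENDVI, and ARI1. `bands` maps band names (e.g., 'B04') to 2-D arrays;
    standard index definitions are assumed (B03 = Green, B04 = Red,
    B05 = red edge, B08 = NIR)."""
    eps = 1e-6  # guard against division by zero
    ndvi = (bands["B08"] - bands["B04"]) / (bands["B08"] + bands["B04"] + eps)
    sri = bands["B08"] / (bands["B04"] + eps)
    rendvi = (bands["B08"] - bands["B05"]) / (bands["B08"] + bands["B05"] + eps)
    ari1 = 1.0 / (bands["B03"] + eps) - 1.0 / (bands["B05"] + eps)
    stack = list(bands.values()) + [ndvi, sri, rendvi, ari1]
    return np.stack(stack, axis=-1).reshape(-1, len(stack))  # pixels x features

# Best-performing configurations from the grid search described above:
rf = RandomForestRegressor(n_estimators=300, max_depth=8)
gb = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1, max_depth=7)
# rf.fit(sentinel_features(train_bands), train_chm.ravel())
```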
During training, the cross-entropy loss function was computed in a per-pixel manner. For testing, the F1-score was estimated for each individual stand. The predicted class for an individual stand was defined as the dominant class among all pixels within the stand. Each forest classification model was trained for 25 epochs with 200 training and 100 validation steps per epoch, a learning rate decreasing from 0.001, the RMSprop [75] optimizer, and early stopping with a patience of 5 epochs. The activation function for the last layer was softmax.

IV. RESULTS
The achieved metrics for the regression models are shown in Table 4. The best-quality predictions using WorldView imagery, with an MAE of 2.47 m (Exp. 9), were achieved with a combination of the pansharpened Red, Green, and Blue bands, the NIR band, and the supplementary ArcticDEM raster upsampled to 1 m resolution (Fig 9). A smaller region is presented in Fig 10. For the Sentinel imagery, only two experimental modes were considered: with and without ArcticDEM. For both the Sentinel and the WorldView data, using ArcticDEM improved the prediction results (for Sentinel, the MAE improved from 4.1 to 3.9 m; for WorldView, from 2.9 to 2.58 m). The pansharpening procedure also contributed to the final result, decreasing the error from 3.3 to 3.1 m (Exp. 1 and Exp. 2) for the WorldView RGB model. Using the NIR band reduced the error from 2.9 to 2.58 m (Exp. 3 and Exp. 7). This effect is linked to vegetation condition, which is reflected at the NIR wavelength. Additional weights during the loss computation reduced the MAE from 2.58 to 2.47 m (Exp. 7 and Exp. 9).

In Table 5, we can see a comparison between the regression model and the classification model (Fig 12). Both approaches perform worst for heights between 4−10 m. This is mainly caused by the specific spatial distribution of this class: it often occurs in small regions between crowns and thus depends dramatically on the satellite and LiDAR geo-referencing. For this study, we used LiDAR data downsampled to 5 m, while the WorldView imagery resolution was 1 or 2 m. This allowed us to preserve high-resolution spatial surface characteristics. To assess the importance of texture information, we experimented with the RGB bands downsampled to 5 m. The MAE for this case was 4.4 m. This result is worse than that of the Sentinel images (4.1 m) and confirms that when the spectral information is reduced, the demands on spatial resolution become stricter.

We checked the generated height in the forestry task of species classification. The results are presented in Table 6. The first objective of the experiment was to show how supplementary features can enhance the quality of applied tasks. Both the LiDAR and the inventory data helped to improve classification in comparison with multispectral data alone. The second goal was to show that the generated height is of sufficient quality to beat the base model using just satellite data. We did not intend to conduct a comparison between the WorldView and Sentinel sources; for this reason, the observation dates of the data used in the two experiments were not equal. The superior results for the Sentinel imagery, as compared with the WorldView data, were partially due to the larger dataset. We also evaluated the regression model trained on the RGB WorldView image (pansharpened to 1 m resolution) on a cloud-free composite orthophotomap provided by mapbox [60] and covering the same test area. For this experiment, the MAE was equal to 3.5 m, and the RMSE was 4.6 m.
A prediction example is shown in Figure 11. This promising result enables cheaper CHM estimation for large areas using only high-resolution, freely available satellite RGB data. We conducted experiments with classical machine learning algorithms using the Sentinel-derived features to compare this approach to the proposed CNN-based approach with high-resolution data. The best results were achieved for the GB algorithm with the combination of Sentinel-derived features and ArcticDEM, where the MAE was equal to 4 m and the RMSE to 5.4 m.

V. DISCUSSION
It is challenging to perform a fair comparison between the majority of studies related to height estimation, for various reasons. The main reason is the difference in height distributions. For example, in [37], the predicted height was limited to 30 m, the spatial resolution was 5 m, and the final RMSE was 2.2 m. However, according to the presented plots, the mean value was less than 10 m, while in our study it was about 15 m. In [28], the validation pixel range was defined as being from 0 to 25 m, with a mean value of 7 m. The model's spatial resolution was 30 m. For this height distribution, an RMSE from 2.3 to 4.1 m was achieved. In [31], the ranges between 0 and 18 m and between 3 and 15 m were studied by leveraging satellite (both spectral and radar) data with a 20 m resolution. In contrast to our work, field-based observations sampling the 10 largest trees per inventory plot were used as reference material. Therefore, the achieved result (an RMSE of 1.48 m) cannot be directly compared with our model's performance. Other obstacles impeding a fair comparison are species diversity and regional conditions.

It is worth mentioning that although ArcticDEM provides a stable improvement in canopy height estimation (see Table 4, Exp. 6 and Exp. 7), it does not cover central or southern regions. For these areas, more powerful base models need to be implemented, leveraging just satellite imagery. We showed that high-resolution WorldView 3-band images provided more significant features than low-resolution Sentinel data with 10 spectral bands (see Table 4, Exp. 2 and Exp. 10). However, adjusting the resolution from 2 m to 5 m for the same WorldView dataset leads to a loss of important information, in particular texture information (see Table 4, Exp. 2 and Exp. 8). The aforementioned experiments, which were performed on the same dataset and with the same NNs, with the adjusted spatial resolution as the only difference, showed that neural networks can extract additional spatial features from very high-resolution optical images of 1 m. Thus, we experimentally confirmed the initial hypothesis that using high-resolution data makes CHM estimation more accurate. Creating a model with only high-resolution RGB channels allows it to be applied to more widely available satellite images, such as RGB mosaic basemaps (google, yandex, and mapbox). This offers an opportunity to replace WorldView data with satellite images derived from other sources, making the provided model more universal. We made a prediction for the cloud-free composite orthophotomap provided by mapbox [60] using the CNN model trained on the RGB 1 m bands. The achieved quality (MAE = 3.5 m) confirms the opportunity for further application of the model to basemap analysis.

There are several directions for future research. The first involves improving the co-registration between the LiDAR and satellite data.
The developed RGB-based model currently shows the ability to reconstruct the main patterns of the CHM (Fig 10); large individual trees and spots within the forest are detected successfully. However, the satellite data have a slight shift in comparison with the LiDAR data. Improving the co-registration would allow the model's performance to be assessed more accurately at resolutions of 1 m or finer, and could probably also improve the poor performance for the 4-10 m class. The ability of the model to be transferred to new regions is another essential question. As we did not have data from other regions, it is impossible to judge the model's robustness in new areas. Moreover, for some regions the ArcticDEM layer is not available; therefore, additional training for new areas might improve prediction quality. However, the neural network approach has proven to be powerful enough to extract the necessary spatial information and adapt to changing natural conditions. Augmentation and image diversity are often applied to overcome this weakness in real-life applications. Another possible objective for future research is canopy height estimation for areas with complex topography. Neural network models rely on the land cover's spectral and texture characteristics, making the initial approach promising even when the topography is not flat. However, shadows on slopes pose additional challenges to multispectral satellite image analysis. Additional preprocessing of LiDAR data should also be considered for study areas with complex topography [81]. In this study, we used all available images both for training and testing (splitting them into training and testing regions), as is a common choice in the remote sensing domain [82]. However, in future work, image-based cross-validation techniques can be used, and robustness to new environmental conditions can be considered [83].

VI. CONCLUSIONS
Overall, in this study we confirm the hypothesis that neural networks can extract significant spatial features from very high-resolution RGB images, which can be used for more precise canopy height estimation. We also checked whether satellite-based solutions can achieve a canopy height estimation accuracy compatible with measurements obtained by the UAV approach. To check our assumptions, we analysed the potential of very high-resolution images with limited spectral information in the task of canopy height model estimation. We created a software toolchain based on a state-of-the-art neural network architecture that enables us to extract spatial features from very high-resolution images. The proposed approach led to a reduction of the mean absolute error to 2.4 m while leveraging just four spectral bands and the supplementary features from ArcticDEM. In southern regions where ArcticDEM is not available and no other sufficiently accurate DEM exists, the model achieved an MAE of 2.9 m. We also examined how the generated height can be successfully used in the forest classification task. Our canopy height model estimation results using RGB bands indicate the prospect of replacing expensive LiDAR sensing data with easily attainable satellite data. Depending on the region of study, our technique allows a customer to promptly collect all the necessary forestry inventory information without ground-based observations. We also developed and shared an easy-to-use open-source solution, which gives the community new possibilities for solving similar tasks.
In future work, we plan to include texture data, indices, and other attributes that can be obtained from ArcticDEM in the modeling procedure.

[81] …, "… index profile in mountainous forests," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 132, pp. 77-87, 2017.
[82] E. Saralioglu and O. Gungor, "Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network," Geocarto International, pp. …
What Is the Function of Confirmation Bias?

Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the 'reality-matching account'. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don't become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.

In recent years, confirmation bias (or 'myside bias'),1 that is, people's tendency to search for information that supports their beliefs and ignore or distort data contradicting them (Nickerson 1998; Myers and DeWall 2015: 357), has frequently been discussed in the media, the sciences, and philosophy. The bias has, for example, been mentioned in debates on the spread of "fake news" (Stibel 2018), on the "replication crisis" in the sciences (Ball 2017; Lilienfeld 2017), the impact of cognitive diversity in philosophy (Peters 2019a; Peters et al. forthcoming; Draper and Nichols 2013; De Cruz and De Smedt 2016), and the role of values in inquiry (Steel 2018; Peters …).

1 Mercier and Sperber (2017) and others prefer the term 'myside bias' to 'confirmation bias' because people don't have a general tendency to confirm any hypothesis that comes to their mind but only ones that are on 'their side' of a debate. I shall here use the term 'confirmation bias' because it is more common and in any case typically understood in the way just mentioned.

Confirmation bias is typically viewed as an epistemically pernicious tendency. For instance, Mercier and Sperber (2017: 215) maintain that the bias impedes the formation of well-founded beliefs, reduces people's ability to correct their mistaken views, and makes them, when they reason on their own, "become overconfident" (Mercier 2016: 110). In the same vein, Steel (2018) holds that the bias involves an "epistemic distortion [that] consists of unjustifiably favoring supporting evidence for [one's] belief, which can result in the belief becoming unreasonably confident or extreme" (897). Similarly, Peters (2018) writes that confirmation bias "leads to partial, and therewith for the individual less reliable, information processing" (15). The bias is not only taken to be epistemically problematic, but also thought to be a "ubiquitous" (Nickerson 1998: 208), "built-in feature of the mind" (Haidt 2012: 105), found in both everyday and abstract reasoning tasks (Evans 1996), independently of subjects' intelligence, cognitive ability, or motivation to avoid it (Stanovich et al. 2013; Lord et al. 1984).
Given its seemingly dysfunctional character, the apparent pervasiveness of confirmation bias raises a puzzle: If the bias is indeed epistemically problematic, why is it still with us today? By definition, dysfunctional traits should be more prone to extinction than functional ones (Nickerson 1998). Might confirmation bias be or have been adaptive? Some philosophers are optimistic, arguing that the bias has in fact significant advantages for the individual, groups, or both (Mercier and Sperber 2017; Norman 2016; Smart 2018; Peters 2018). Others are pessimistic. For instance, Dutilh Novaes (2018) maintains that confirmation bias makes subjects less able to anticipate other people's viewpoints, and so, "given the importance of being able to appreciate one's interlocutor's perspective for social interaction", is "best not seen as an adaptation" (520). In the following, I discuss three recent proposals of the adaptationist kind, mention reservations about them, and develop a novel account of the evolution of confirmation bias that challenges a key assumption underlying current research on the bias, namely that the bias thwarts reliable belief formation and truth tracking. The account holds that while searching for information supporting one's pre-existing beliefs and ignoring contradictory data is disadvantageous when what one takes to be reality is and stays different from what one believes it to be, it is beneficial when, as the result of one's processing information in that way, that reality is changed so that it matches one's beliefs. I call this process reality matching and contend that it frequently occurs when the beliefs at issue are about people and social structures (i.e., relationships between individuals, groups, and socio-political institutions). In these situations, confirmation bias is highly effective for us to be confident about our beliefs even when there is insufficient evidence or subjective motivation available to us to support them. This helps us influence and 'mould' people and social structures so that they fit our beliefs, which is an adaptive property of confirmation bias. It can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don't become epistemically disconnected from social reality but can navigate it more easily. I shall not argue that the adaptive function of confirmation bias that this reality-matching account highlights is the only evolved function of the bias. Rather, I propose that it is one important function that has so far been neglected in the theorizing on the bias. In Sects. 1 and 2, I distinguish confirmation bias from related cognitions before briefly introducing some recent empirical evidence supporting the existence of the bias. In Sect. 3, I motivate the search for an evolutionary explanation of confirmation bias and critically discuss three recent proposals. In Sects. 4 and 5, I then develop and support the reality-matching account as an alternative.

Confirmation Bias and Friends

The term 'confirmation bias' has been used to refer to various distinct ways in which beliefs and expectations can influence the selection, retention, and evaluation of evidence (Klayman 1995; Nickerson 1998).
Hahn and Harris (2014) offer a list of them including four types of cognitions: (1) hypothesis-determined information seeking and interpretation, (2) failures to pursue a falsificationist strategy in contexts of conditional reasoning, (3) a resistance to change a belief or opinion once formed, and (4) overconfidence or an illusion of validity of one's own view. Hahn and Harris note that while all of these cognitions have been labeled 'confirmation bias', (1)-(4) are also sometimes viewed as components of 'motivated reasoning' (or 'wishful thinking') (ibid: 45), i.e., information processing that leads people to arrive at the conclusions they favor (Kunda 1990). In fact, as Nickerson (1998: 176) notes, confirmation bias comes in two different flavors: "motivated" and "unmotivated" confirmation bias. And the operation of the former can be understood as motivated reasoning itself, because it too involves partial information processing to buttress a view that one wants to be true (ibid). Unmotivated confirmation bias, however, operates when people process data in one-sided, partial ways that support their predetermined views no matter whether they favor them. So confirmation bias is also importantly different from motivated reasoning, as it can take effect in the absence of a preferred view and might lead one to support even beliefs that one wants to be false (e.g., when one believes the catastrophic effects of climate change are unavoidable; Steel 2018). Despite overlapping with motivated reasoning, confirmation bias can thus plausibly be (and typically is) construed as a distinctive cognition. It is thought to be a subject's largely automatic and unconscious tendency to (i) seek support for her pre-existing, favored or not favored beliefs and (ii) ignore or distort information compromising them (Klayman 1995: 406; Nickerson 1998: 175; Myers and DeWall 2015: 357; Palminteri et al. 2017: 14). I here endorse this standard, functional concept of confirmation bias.

Is Confirmation Bias Real?

Many psychologists hold that the bias is a "pervasive" (Nickerson 1998: 175; Palminteri et al. 2017: 14), "ineradicable" feature of human reasoning (Haidt 2012: 105). Such strong claims are problematic, however. For there is evidence that, for instance, disrupting the fluency in information processing (Hernandez and Preston 2013) or priming subjects for distrust (Mayo et al. 2014) reduces the bias. Moreover, some researchers have recently re-examined the relevant studies and found that confirmation bias is in fact less common and the evidence of it less robust than often assumed (Mercier 2016; Whittlestone 2017). These researchers grant, however, the weaker claim that the bias is real and often, in some domains more than in others, operative in human cognition (Mercier 2016: 100, 108; Whittlestone 2017: 199, 207). I shall only rely on this modest view here. To motivate it a bit more, consider the following two studies. Hall et al. (2012) gave their participants (N = 160) a questionnaire, asking them about their opinion on moral principles such as 'Even if an action might harm the innocent, it can still be morally permissible to perform it'. After the subjects had indicated their view using a scale ranging from 'completely disagree' to 'completely agree', the experimenter performed a sleight of hand, inverting the meaning of some of the statements so that the question then read, for instance, 'If an action might harm the innocent, then it is not morally permissible to perform it'.
The answer scales, however, were not altered. So if a subject had agreed with the first claim, she then agreed with the opposite one. Surprisingly, 69% of the study participants failed to detect at least one of the changes. Moreover, they subsequently tended to justify positions they thought they held despite just having chosen the opposite. Presumably, subjects accepted that they favored a particular position, didn't know the reasons, and so were now looking for support that would justify their position. They displayed a confirmation bias.3

Using a similar experimental set-up, Trouche et al. (2016) found that subjects also tend to exhibit a selective 'laziness' in their critical thinking: they are more likely to avoid raising objections to their own positions than to other people's. Trouche et al. first asked their test participants to produce arguments in response to a set of simple reasoning problems. Directly afterwards, they had them assess other subjects' arguments concerning the same problems. About half of the participants didn't notice that, by the experimenter's intervention, in some trials they were in fact presented with their own arguments again; the arguments appeared to these participants as if they were someone else's. Furthermore, more than half of the subjects who believed they were assessing someone else's arguments now rejected those that were in fact their own, and were more likely to do so for invalid than for valid ones. This suggests that subjects are less critical of their own arguments than of other people's, indicating that confirmation bias is real and perhaps often operative when we are considering our own claims and arguments.

3 It might be proposed that when participants in the experiment seek reasons for their judgments, perhaps they take themselves already to have formed the judgements for good reasons and then wonder what these reasons might have been. Why would they seek reasons against a view that they have formed (by their own lights) for good reasons? However, we might equally well ask why they would take themselves to have formed a judgment for good reasons in the first place even though they don't know any of them. If it is a general default tendency to assume that any view that one holds rests on good reasons, then that would again suggest the presence of a confirmation bias. For a general tendency to think that one's views rest on good reasons even when one doesn't know them is a tendency to favor and confirm these views while resisting balanced scrutiny of their basis.

Evolutionary Accounts of the Bias

Confirmation bias is typically taken to be epistemically problematic, as it leads to partial and therewith for the individual less reliable information processing and contributes to failures in, for instance, perspective-taking, with clear costs for social and other types of cognition (Mercier and Sperber 2017: 215; Steel 2018; Peters 2018; Dutilh Novaes 2018). Prima facie, the bias thus seems maladaptive. But then why does it still exist? Granted, even if the bias isn't an adaptation, we might still be able to explain why it is with us today. We might, for instance, argue that it is a "spandrel", a by-product of the evolution of another trait that is an adaptation (Gould and Lewontin 1979). Or we may abandon the evolutionary approach to the bias altogether and hold that it emerged by chance. However, evolutionary explanations of psychological traits are often fruitful.
They can create new perspectives on these traits that may allow developing means to reduce the traits' potential negative effects (Roberts et al. 2012; Johnson et al. 2013). Evolutionary explanations might also stimulate novel, testable predictions that researchers who aren't evolutionarily minded would overlook (Ketelaar and Ellis 2000; De Bruine 2009). Moreover, they typically involve integrating diverse data from different disciplines (e.g., psychology, biology, anthropology, etc.), and thereby contribute to the development of a more complete understanding of the traits at play and human cognition in general (Tooby and Cosmides 2015). These points equally apply when it comes to considering the origin of confirmation bias. They provide good reasons for searching for an evolutionary account of the bias. Different proposals can be discerned in the literature. I will discuss three recent ones, what I shall call (1) the argumentative-function account, (2) the group-cognition account, and (3) the intention-alignment account. I won't offer conclusive arguments against them here. The aim is just to introduce some reservations about these proposals to motivate the exploration of an alternative.

The Argumentative-Function Account

Mercier and Sperber (2011, 2017) hold that human reasoning didn't evolve for truth tracking but for making us better at convincing other people and evaluating their arguments so as to be convinced only when their points are compelling. In this context, when persuasion is paramount, the tendency to look for material supporting our preconceptions and to discount contradictory data allows us to accumulate argumentative ammunition, which strengthens our argumentative skill, Mercier and Sperber maintain. They suggest that confirmation bias thus evolved to "serve the goal of convincing others" (2011: 63). Mercier and Sperber acknowledge that the bias also hinders us in anticipating objections, which should make it more difficult for us to develop strong, objection-resistant arguments (2017: 225f). But they add that it is much less cognitively demanding to react to objections than to anticipate them, because objections might depend on particular features of one's opponents' preferences or on information that only they have access to. It is thus more efficient to be 'lazy' in anticipating criticism and let the audience make the moves, Mercier and Sperber claim. There is reason to be sceptical about their proposal, however. For instance, an anticipated objection is likely to be answered more convincingly than an immediate response from one's audience. After all, "forewarned is forearmed"; it gives a tactical advantage (e.g., more time to develop a reply) (Sterelny 2018: 4). And even if it is granted that objections depend on private information, they also often derive from obvious interests and public knowledge, making an anticipation of them easy (ibid). Moreover, as Dutilh Novaes (2018: 519) notes, there is a risk of "looking daft" when producing poor arguments, say, due to laziness in scrutinizing one's thoughts. Since individuals within their social groups depend on their reputation so as to find collaborators, anticipating one's audience's responses should be and have been more adaptive than having a confirmation bias (ibid). If human reasoning emerged for argumentative purposes, the existence of the bias remains puzzling.

The Group-Cognition Account

Even if confirmation bias is maladaptive for individuals, it might still be adaptive for groups.
For instance, Smart (2018) and Peters (2018) hold that in groups with a sufficient degree of cognitive diversity at the outset of solving a particular problem, each individual's confirmation bias might help the group as a whole conduct a more in-depth analysis of the problem space than otherwise. When each subject is biased towards a different particular proposal on how to solve the problem, the bias will push them to invest greater effort in defending their favored proposals and might, in the light of counterevidence, motivate them to consider rejecting auxiliary assumptions rather than the proposals themselves. This contributes to a thorough exploration of the proposals that is less likely with less committed thinkers. Additionally, since individuals appear to have a particular strength in detecting flaws in others' arguments (Trouche et al. 2016), open social criticism within the group should ensure that the group's conclusions remain reliable even if some, or at times most, of its members are led astray by their confirmation bias (Smart 2018: 4190; Peters 2018: 20). Mercier and Sperber (2011: 65) themselves already float the idea of such a social "division of cognitive labor". They don't yet take its group-level benefits to explain why confirmation bias evolved, however (Dutilh Novaes 2018: 518f). Smart (2018) and Peters (2018) also don't introduce their views as accounts of the evolved function of the bias. But Dutilh Novaes (2018: 519) and Levy (2019: 317) gesture toward, and Smith and Wald (2019) make the case for, an evolutionary proposal along these lines, arguing that the bias was selected for making a group's inquiry more thorough, effective, and reliable. While I have sympathies with this proposal, several researchers have noted that the concept of 'group selection' is problematic (West et al. 2007; Pinker 2012). One of the issues is that since individuals reproduce faster than groups, a trait T that is an adaptation that is good for groups but bad for an individual's fitness won't spread, because the rate of proliferation of groups is undermined by the evolutionary disadvantage of T within groups (Pinker 2012). The point equally applies to the proposal that confirmation bias was selected for its group-level benefits. Moreover, a group arguably only benefits from each individual's confirmation bias if there is a diversity of viewpoints in the group and members express their views, as otherwise "group polarization" is likely to arise (Myers and Lamm 1976): arguments for shared positions will accumulate without being criticized, making the group's average opinion more extreme and less reliable, which is maladaptive. Crucially, ancestral 'hunter-gatherer' groups are perhaps unlikely to have displayed a diversity of viewpoints. After all, their members traveled less, interacted less with strangers, and were less economically dependent on other groups (Simpson and Beckes 2010: 37). This should have homogenized them with respect to race, culture, and background (Schuck 2001: 1915). Even today groups often display such homogeneity, as calls for diversity in academia, companies, etc. indicate. These points provide reasons to doubt that ancestral groups provided the kind of conditions in which confirmation bias could have produced the benefits that the group-cognition account highlights rather than maladaptive effects tied to group polarization.
The Intention-Alignment Account

Turning to a third and here final extant proposal on the evolution of confirmation bias, Norman (2016) argues that human reasoning evolved for facilitating an "intention alignment" between individuals: in social interactions, reasons typically 'overwrite' non-aligned mental states (e.g., people's divergent intentions or beliefs) with aligned ones by showing the need for changing them. Norman holds that human reasoning was selected for this purpose because it makes cooperation easier. He adds that, in this context, "confirmation bias would have facilitated intention alignment, for a tribe of hunter-gatherers prone to [the bias] would more easily form and maintain the kind of shared outlook needed for mutualistic collaboration. The mythologies and ideologies taught to the young would accrue confirming evidence and tend to stick, thereby cementing group solidarity" (2016: 700). Norman takes his view to be supported by the "fact that confirmation bias is especially pronounced when a group's ideological preconceptions are at stake" (ibid). However, the proposal seems at odds with the finding that the bias inclines subjects to ignore or misconstrue their opponents' objections. In fueling one-sided information processing to support one's own view, the bias makes people less able to anticipate and adequately respond to their interlocutor's point of view (Dutilh Novaes 2018: 520). Due to that effect, the bias arguably makes an intention alignment with others (especially with one's opponents) harder, not easier. Moreover, since our ancestral groups are (as noted above) likely to have been largely viewpoint homogeneous, in supporting intention alignment in these social environments, confirmation bias would have again facilitated group polarization, which is prima facie evolutionarily disadvantageous. All three proposals of the adaptive role of confirmation bias considered so far thus raise questions. While the points mentioned aren't meant to be fatal for the proposals and might be answerable within their frameworks, they do provide a motivation to explore an alternative.

Towards an Alternative

The key idea that I want to develop is the following. Confirmation bias is typically taken to work against an individual's truth tracking (Mercier and Sperber 2017: 215; Peters 2018: 15), and indeed searching for information supporting one's beliefs and ignoring contradictory data is epistemically disadvantageous when what one takes to be reality is and stays different from what one believes it to be. However, reality doesn't always remain unchanged when we form beliefs about it. Consider social beliefs, that is, beliefs about people (oneself, others, and groups) and social structures (i.e., relationships between individuals, groups, and socio-political institutions). I shall contend that a confirmation bias pertaining to social beliefs reinforces our confidence in these beliefs, therewith strengthening our tendency to behave in ways that cause changes in reality so that it corresponds to the beliefs, turning them (when they are initially inaccurate) into self-fulfilling prophecies (SFPs) (Merton 1948; Biggs 2009). Due to its role in helping us make social reality match our beliefs, confirmation bias is adaptive, or so I will argue. I first introduce examples of SFPs of social beliefs. Then I explore the relevance of these beliefs in our species, before making explicit the adaptive role of confirmation bias in facilitating SFPs.
Social Beliefs and SFPs

Social beliefs often lead to SFPs with beneficial outcomes. Here are four examples.
1. S falsely believes he is highly intelligent. His self-view motivates him to engage with intellectuals, read books, attend academic talks, etc. This makes him increasingly more intelligent, gradually confirming his initially inaccurate self-concept (for relevant empirical data, see Swann 2012).
2. Without a communicative intention, a baby boy looking at a kitten produces a certain noise: 'ma-ma'. His mother is thrilled, believing (falsely) that he is beginning to talk and wants to call her. She responds accordingly, rushing to him, attending to him, and indicating excitement. This leads the boy to associate 'ma-ma' with the arrival and attention of his mother. And so he gradually begins using the sounds to call her, confirming her initially false belief about his communicative intention (for relevant empirical data, see Mameli 2001).
3. A father believes his adolescent daughter doesn't regularly drink alcohol, but she does. He acts in line with his belief and expresses it in communication with other people. His daughter notices and likes his positive view of her, which motivates her to increasingly resist drinks, gradually fulfilling her father's optimistic belief about her (for relevant empirical data, see Willard et al. 2008).
4. A teacher (falsely) believes that a student's current academic performance is above average. She thus gives him challenging material, encourages him, and communicates high expectations. This leads the student to increase his efforts, which gradually results in above-average academic performance (for relevant evidence, see Madon et al. 1997).

SFPs of initially false positive trait ascriptions emerge in many other situations too. They also occurred, for instance, when adults ascribed to children traits such as being tidy (Miller et al. 1975), charitable (Jensen and Moore 1977), or cooperative (Grusec et al. 1978). Similarly, in adults, attributions of, for example, kindness (Murray et al. 1996), eco-friendliness (Cornelissen et al. 2007), military competence (Davidson and Eden 2000), athletic ability (Solomon 2016), and even physiological changes (Turnwald et al. 2018) have all had self-fulfilling effects. Moreover, these effects don't necessarily take much time to unfold but can happen swiftly in a single interaction (e.g., in interview settings; Word et al. 1974) right after the ascription (Turnwald et al. 2018: 49). SFPs are, however, neither pervasive nor all-powerful (Jussim 2012), and there are various conditions for them to occur (Snyder and Klein 2007). For instance, they tend to occur only when targets are able to change in accordance with the trait ascriptions, when the latter are believable rather than unrealistic (Alfano 2013: 91f), and when the ascriber holds more power than the ascribee (Copeland 1994: 264f). But comprehensive literature reviews confirm that SFPs are "real, reliable, and occasionally quite powerful" (Jussim 2017: 8; Willard and Madon 2016).

The Distribution of Social Beliefs and the Role of Prosociality in Humans

Importantly, SFPs can be pernicious when the beliefs at the center of them capture negative social conceptions, for instance, stereotypes, anxious expectations, fear, or hostility (Darley and Gross 1983; Downey et al. 1998; Madon et al. 2018). In these cases, SFPs would be maladaptive.
Given this, what do we know about the distribution of social beliefs, in general, and positive ones, in particular, in ancestral human groups? Many researchers hold that our evolutionary success as a species relies on our being "ultra-social" and "ultra-cooperative" animals (e.g., Tomasello 2014: 187; Henrich 2016). Human sociality is "spectacularly elaborate, and of profound biological importance" because "our social groups are characterized by extensive cooperation and division of labour" (Sterelny 2007: 720). Since we live in an almost continuous flow of interactions with conspecifics, "solving problems of coordination with our fellows is [one of] our most pressing ecological tasks" (Zawidzki 2008: 198). A significant amount of our beliefs are thus likely to be social ones (Tomasello 2014: 190f). Moreover, when it comes to oneself, to group or "tribe" members, and to collaborators, these beliefs often capture positive to overly optimistic ascriptions of traits (e.g., communicativeness, skills, etc.; Simpson and Beckes 2010). This is well established when it comes to one's beliefs about oneself (about 70% of the general population has a positive self-conception; Talaifar and Swann 2017: 4) and one's family members (Wenger and Fowers 2008). The assumption that the point also holds for 'tribe' members and collaborators, more generally, receives support from the "tribal-instincts hypothesis" (Richerson and Boyd 2001), which holds that humans tend to harbor "ethnocentric attitudes in favor of [their] own tribe along with its members, customs, values and norms", as this facilitates social predictability and cooperation (Kelly 2013: 507). For instance, in the past as much as today, humans "talk differently about their in-groups than their out-groups, such that they describe the in-group and its members [but not out-groups] as having broadly positive traits" (Stangor 2011: 568). In subjects with such 'tribal instincts', judgments about out-group members might easily be negative. But within the groups of these subjects, among in-group members, overly optimistic, cooperation-enhancing conceptions of others should be and have been more dominant, particularly in "intergroup conflict, [which] is undeniably pervasive across human societies" (McDonald et al. 2012: 670). Indeed, such conflicts are known to fuel in-group "glorification" (Leidner et al. 2010; Golec De Zavala 2011). Given these points, in 'ultra-cooperative' social environments in which 'tribe' members held predominantly positive social conceptions and expectations about in-group subjects, positive SFPs should have been overall more frequent and stronger than negative ones. Indeed, there is evidence that even today, positive SFPs in individual, dyadic interactions are more likely and pronounced than negative ones. For instance, focusing on mothers' beliefs about their sons' alcohol consumption, Willard et al. (2008) found that children "were more susceptible to their mothers' positive than negative self-fulfilling effects" (499): "mothers' false beliefs buffered their adolescents against increased alcohol use rather than putting them at greater risk" (Willard and Madon 2016: 133). Similarly, studies found that "teachers' false beliefs raised students' achievement more than they lowered it" (Willard and Madon 2016: 118): teacher overestimates "increase[d] achievement more than teacher underestimates tended to decrease achievement among students" (Madon et al. 1997: 806).
Experiments with stigmatized subjects corroborate these results further (ibid), leading Jussim (2017) in his literature review to conclude that high teacher expectations help students "more than low expectations harm achievement" (8). One common explanation of this asymmetry is that SFPs typically depend on whether the targets of the trait ascriptions involved accept the expectations imposed on them via the ascriptions (Snyder and Klein 2007). And since subjects tend to strive to think well of themselves (Talaifar and Swann 2017), they respond more to positive than negative expectations (Madon et al. 1997: 792). If we combine these considerations with the assumption that in ancestral groups of heavily interdependent subjects, positive social beliefs about in-group members (in-group favoritism) are likely to have been more prevalent than negative ones, then there is reason to hold that the SFPs of the social conceptions in the groups at issue were more often than not adaptive. With these points in mind, it is time to return to confirmation bias.

From SFPs to Confirmation Bias

Notice that SFPs depend on trait or mental-state ascriptions that are 'ahead' of their own truth: they are formed when an objective assessment of the available evidence doesn't yet support their truth. Assuming direct doxastic voluntarism is false (Matheson and Vitz 2014), how can they nonetheless be formed and confidently maintained? I suggest that confirmation bias plays an important role: it allows subjects to become and remain convinced about their social beliefs (e.g., trait ascriptions) when the available evidence doesn't yet support their truth. This makes SFPs of these beliefs more likely than if the ascriber merely verbally attributed the traits without committing to the truth of the ascriptions, or believed in them but readily revised the beliefs. I shall argue that this is in fact adaptive not only when it comes to positive trait ascriptions, but also to negative ones. I will illustrate the point first with respect to positive trait ascriptions.

Motivated Confirmation Bias and Positive Trait Ascriptions

Suppose that you ascribe a positive property T to a subject A, who is your ward, but (unbeknownst to you) the available evidence doesn't yet fully support that ascription. The more convinced you are about your view of A even in the light of counterevidence, the better you are at conveying your conviction to A because, generally, "people are more influenced [by others] when [these] others express judgments with high confidence than low confidence" (Kappes et al. 2020: 1; von Hippel and Trivers 2011). Additionally, the better you are at conveying to A your conviction that he has T, the more confident he himself will be that he has that trait (assuming he trusts you) (Sniezek and Van Swol 2001). Crucially, if A too is confident that he has T, he will be more likely to conform to the corresponding expectations than if he doesn't believe the ascription, say, because he notices that you only say but don't believe that he has T. Relatedly, the more convinced you are about your trait ascription to A, the clearer your signaling of the corresponding expectations to A in your behavior (Tormala 2016) and the higher the normative impetus on him, as a cooperative subject, to conform so as to avoid disrupting interactions with you.
Returning to confirmation bias: given what we know about the cognitive effect of the bias, the more affected you are by it, the stronger your belief in your trait ascriptions to A (Rabin and Schrag 1999), and so the lower the likelihood that you will reveal in your behavior a lack of conviction that could undermine SFPs. Thus, the more affected you are by the bias, the higher the likelihood of SFPs of the ascriptions, because conviction about the ascriptions plays a key facilitative role for SFPs. This is also experimentally supported. For several studies found that SFPs of trait ascriptions occurred only when ascribers were certain of the ascriptions, not when they were less confident (Swann and Ely 1984; Pelham and Swann 1994; Swann 2012: 30). If we add to these points that, in developmental and educational contexts in ancestral tribal groups, SFPs of trait ascriptions were more often beneficial for the targets than not, then there is a basis for holding that confirmation bias might in fact have been selected for sustaining SFPs.

Notice that the argument so far applies equally to motivated reasoning. This is to be expected because, as mentioned above, motivated confirmation bias is an instance of motivated reasoning (Nickerson 1998). To pertain specifically to confirmation bias, however, the evolutionary proposal that the bias was selected for facilitating SFPs of social conceptions also has to hold for unmotivated confirmation bias. Is this the case?

Unmotivated Confirmation Bias and Negative Trait Ascriptions

Notice that when we automatically reinforce any of our views, no matter whether we favor them, our preferences are neither required for the reinforcement process and the SFPs it promotes nor able to undermine them. This means that such a general tendency, i.e., a confirmation bias, can fulfill the function of facilitating SFPs more frequently than motivated cognitions can, namely whenever the subject has acquired a social conception (e.g., as the result of upbringing, learning, or testimony). This is adaptive for at least three reasons.

First, suppose that as a parent, caretaker, or teacher you (unknowingly) wishfully believe that A, who is your ward, has a positive trait T. You tell another subject (B) that A has T, and, on your testimony, B subsequently believes this too. But suppose that, unlike you, B has no preference as to whether A has T. Yet, as it happens, she still has a confirmation bias toward her beliefs. Just like you, B will now process information so that it strengthens her view about A. This increases her conviction in, and so the probability of an SFP of, the trait ascription to A, because now both you and B are more likely to act toward A in ways indicating ascription-related expectations. As a general tendency to support any of one's beliefs rather than only favored ones, the bias thus enables a social 'ripple' effect in the process of making trait ascriptions match reality. Since this process is, in ultra-social and ultra-cooperative groups, more often adaptive than not (e.g., boosting the development of a positive trait in A), confirmation bias, in facilitating a social extension of it, is adaptive too.

Second, in ancestral groups, many of the social conceptions (e.g., beliefs about social roles, gender norms, stereotypes, etc.) that subjects unreflectively acquired during their upbringing and socialization will have been geared toward preserving the group's function and status quo and aligning individuals with them (Sterelny 2006: 148).
Since it can operate independently of a subject's preferences, a confirmation bias in each member of the group would have helped the group enlist each of its members for reproducing social identities, social structures, traits, and roles in the image of the group's conceptions, even when these individuals disfavored them. In sustaining SFPs of these conceptions, which might have included various stereotypes or ethnocentric, prejudicial attitudes that we today consider offensive negative trait ascriptions (e.g., gender or racist stereotypes) (Whitaker et al. 2018), confirmation bias would have been adaptive in the past. For, as Richerson and Boyd (2005: 121f) note too, in ancestral groups selection pressure favored social conformity, predictability, and stability. That confirmation bias might have evolved for facilitating SFPs that serve the 'tribal' collective, possibly even against the preference, autonomy, and better judgment of the individual, is in line with recent research suggesting that many uniquely human features of cognition evolved through pressures selecting for the ability to conform to other people and to facilitate social projects (Henrich 2016). These features may work against common ideals associated with self-reliance or "achieving basic personal autonomy, because the main purpose of [them] is to allow us to fluidly mesh with others, making us effective nodes in larger networks" (Kelly and Hoburg 2017: 10). I suggest that confirmation bias too was selected for making us effective 'nodes' in social networks, by inclining us to create social reality that corresponds to these networks' conceptions even when we dislike them or they are harmful to others (e.g., out-group members).

Third, in helping us make social affairs match our beliefs about them even when we don't favor those beliefs, confirmation bias also provides us with significant epistemic benefits in social cognition. Consider Jack and Jill. Both have just seen an agent A act ambiguously, and both have formed a first impression of A according to which A is acting the way he is because he has trait T. Suppose neither Jack nor Jill has any preference as to whether A has that trait, but they subsequently process information in two different ways. Jack does not have a confirmation bias: he impartially assesses the evidence and swiftly revises his beliefs when encountering contradictory data. As it happens, A's behavior soon does provide him with just such evidence, leading him to abandon his first impression of A and reopen the search for an explanation of A's action. In contrast, Jill does have a confirmation bias with respect to her beliefs and interprets the available evidence so that it supports them. Jill too sees A act in a way that contradicts her first impression of him. But unlike Jack, she doesn't abandon her view. Rather, she reinterprets A's action so that it bolsters her view.

Whose information processing might be more adaptive? For Jack, encountering data challenging his view removes certainty and initiates a new cycle of computations about A, which requires him to postpone a possible collaboration with A. For Jill, however, the new evidence strengthens her view, leading her to keep the issue of explaining A's action settled and to be ready to collaborate with him. Jack's approach might still seem better for attaining an accurate view of A and predicting what he'll do next. But suppose Jill confidently signals to A her view of him in her behavior.
Since people have a general inclination to fulfill others' expectations (especially positive ones) out of an interest in coordinating and getting along with them (Dardenne and Leyens 1995; Bacharach et al. 2007), when A notices Jill's conviction that he displays T, he too is likely to conform, which provides Jill with a correct view of what he will do next. Jill's biased processing is thus more adaptive than Jack's approach: a confirmation bias provides her with certainty and simpler information processing that simultaneously facilitates accurate predictions (via contributing to SFPs). Generalizing from Jill, in everyday social interactions we all form swift first impressions of others without having any particular preference with respect to these impressions either way. Assuming that confirmation bias operates on them nonetheless, the bias will frequently be adaptive in the ways just mentioned.
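To make the contrast between Jack's and Jill's information processing concrete, here is a minimal simulation sketch in Python. It assumes a simple signal-misreading model of confirmatory bias in the spirit of Rabin and Schrag (1999), cited above; the function names and all parameter values (the 0.6 prior, 0.6 signal accuracy, 20 signals, and the 0.8 bias strength) are illustrative assumptions of mine, not values from any of the studies discussed.

import random

def update(prior, signal, accuracy):
    # Bayesian update of P(A has trait T) on one binary signal.
    like_t = accuracy if signal else 1 - accuracy        # P(signal | T)
    like_not_t = 1 - accuracy if signal else accuracy    # P(signal | not-T)
    return prior * like_t / (prior * like_t + (1 - prior) * like_not_t)

def final_belief(bias_q, n_signals=20, accuracy=0.6, seed=1):
    # The agent starts with a first impression that A has T (prior 0.6).
    # The world sends genuinely ambiguous evidence (50/50 signals); with
    # probability bias_q a disconfirming signal is misread as confirming,
    # as in Rabin and Schrag's model. bias_q = 0 is unbiased "Jack";
    # a high bias_q is "Jill".
    random.seed(seed)
    belief = 0.6
    for _ in range(n_signals):
        confirming = random.random() < 0.5
        if not confirming and random.random() < bias_q:
            confirming = True  # biased misreading of counterevidence
        belief = update(belief, confirming, accuracy)
    return belief

print(f"Jack (bias_q = 0.0): final confidence {final_belief(0.0):.2f}")
print(f"Jill (bias_q = 0.8): final confidence {final_belief(0.8):.2f}")

On mixed evidence, Jack's confidence hovers near his prior while Jill's climbs toward certainty, which is the belief-strengthening effect the argument relies on. The further step, that Jill's confidently signaled expectation raises the probability that A conforms (the SFP itself), is deliberately left out of the sketch.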
Summing Up: The Reality-Matching Account

By helping subjects make social reality match their beliefs about it, no matter whether they favor these beliefs or whether the beliefs are sufficiently supported by evidence, confirmation bias is adaptive: when the bias targets positive social beliefs and trait ascriptions, it serves both the subject and the group by producing effects that (1) assist them in their development (to become, e.g., more communicative, cooperative, or knowledgeable) and (2) make social cognition more tractable (by increasing social conformity and predictability). To be sure, when it targets negative trait ascriptions (pernicious stereotypes, etc.), the bias can have ethically problematic SFP effects. But, as noted, especially in ancestral 'tribal' groups it would perhaps still have contributed to social conformity, predictability, and sustaining the status quo, which would have been adaptive in these groups (Richerson and Boyd 2005), inter alia by facilitating social cognition. Taken together, these considerations provide a basis for holding that confirmation bias was selected for promoting SFPs. I shall call the proposal introduced in this section the reality-matching (RM) account of the function of confirmation bias.

Supporting the RM Account

Before offering empirical support for the RM account and highlighting its explanatory benefits, it is useful to disarm an objection: if confirmation bias was selected for its SFP-related effects, then people should not also display the bias with respect to beliefs that can't produce SFPs (e.g., beliefs about physics, climate change, religion, etc.). But they do (Nickerson 1998).

From Social to Non-social Beliefs

In response to the objection just mentioned, two points should be noted. First, the RM account is compatible with the view that confirmation bias was also selected for adaptive effects related to non-social beliefs. It only claims that facilitating the alignment of social reality with social beliefs (i.e., reality matching) is one of the important adaptive features for which the bias was selected, one that has so far been neglected. Second, it doesn't follow from the fact that confirmation bias also affects beliefs that can't initiate SFPs that it could not have been selected for affecting beliefs that can and do initiate them. The literature offers many examples of biological features or cognitive traits that were selected for fulfilling a certain function despite rarely doing so or even having maladaptive effects (Millikan 1984; Haselton and Nettle 2006). Consider the "baby-face overgeneralization" bias (Zebrowitz and Montepare 2008). Studies suggest that people have a strong readiness to respond favorably to babies' distinctive facial features. And this tendency is overgeneralized: even adults are viewed more favorably and treated as more likeable (but also as physically weak and naïve) when they display babyface features. While this overgeneralization often leads to errors, the tendency is thought to have evolved because failures to respond favorably to babies (i.e., false negatives) are evolutionarily more costly than overgeneralizing (i.e., false positives) (ibid.).

Might our domain-general tendency to confirm our own beliefs be similarly less evolutionarily costly than not having such a general tendency? It is not implausible to assume so because, as noted, we are ultra-social and ultra-cooperative, and our beliefs about people's social standing, knowledge, intentions, abilities, etc. are critical for our flourishing (Sterelny 2007: 720; Tomasello 2014: 190f; Henrich 2016). Importantly, these beliefs, unlike beliefs about the non-social world, are able to and frequently do initiate SFPs contributing to the outlined evolutionary benefits. This matters because if social beliefs are pervasive and SFPs of them significant for our flourishing, then a domain-general tendency to confirm any of our beliefs ensures that we don't miss opportunities to align social reality with our conceptions and to reap the related developmental and epistemic benefits. Granted, this tendency overgeneralizes, which creates clear costs. But given the special role of social beliefs in our species and our dependence on social learning and social cognition, which are facilitated by SFPs, it is worth taking seriously the possibility that the benefits can often outweigh these costs.

While this thought doesn't yet show that the RM account is correct, it does help disarm the above objection. For it explains why the fact that confirmation bias also affects beliefs that cannot initiate SFPs doesn't disprove the view that the bias was selected for reality matching: the special role of social beliefs in our species (compared to other species) lends plausibility to the assumption that the costs of the bias' overgeneralizing might be lower than the costs of its failing to generalize. I now turn to the positive support for the RM account.

Empirical Data

If, as the RM account proposes, confirmation bias was selected for facilitating the process of making reality match our beliefs, then the bias should be common and pronounced (1) when it comes to social beliefs, that is, beliefs (a) about oneself, (b) about other people, and (c) about social structures that the subject can determine, and (2) when social conditions are conducive to reality matching. While there are no systematic comparative studies on whether the bias is more frequent or stronger with respect to some beliefs rather than others (e.g., social vs. non-social beliefs), there is related empirical research that provides some support for these predictions.

(a) Self-related Beliefs

In a number of studies, Swann and colleagues (Swann 1983; Swann et al. 1992; for an overview, see Swann 2012) found that the selective information processing characteristic of confirmation bias is "especially pronounced with regards to self-concepts" and so self-related beliefs (Müller-Pinzler et al. 2019: 9).
Interestingly, and counterintuitively, the data show that "just as people with positive self-views preferentially seek positive evaluations, those with negative self-views preferentially seek negative evaluations" (Talaifar and Swann 2017: 3). For instance, those "who see themselves as likable seek out and embrace others who evaluate them positively, whereas those who see themselves as dislikeable seek out and embrace others who evaluate them negatively" (ibid.). Much in line with the RM account, Swann (2012) notes that this confirmatory tendency "would have been advantageous" in "hunter-gatherer groups": once "people used input from the social environment to form self-views, self-verification strivings would have stabilized their identities and behavior, which in turn would make each individual more predictable to other group members" (26).

Similarly, in a study in which subjects received feedback about aspects of their self that can be relatively easily changed (e.g., their ability to estimate the weights of animals), Müller-Pinzler et al. (2019) found that "prior beliefs about the self modulate self-related belief-formation" in that subjects updated their performance estimates "in line with a confirmation bias": individuals with prior negative self-related beliefs (e.g., low self-esteem) showed increased biases toward factoring in negative (vs. positive) feedback, and, interestingly, this tendency was "modulated by the social context and only present when participants were exposed to a potentially judging audience" (ibid.: 9-10). This coheres with the view that confirmation bias might serve the 'collective' in bringing subjects into accordance with its social conceptions (positive or negative).

(b) Other-Related Beliefs

If confirmation bias was selected for sustaining social beliefs for the sake of reality matching, then the bias should also be particularly pronounced when it comes to beliefs about other people, especially in situations conducive to reality matching. For instance, powerful individuals have been found to be more likely than relatively powerless subjects to prompt subordinates to behaviorally confirm their social conceptions (Copeland 1994; Leyens et al. 1999). That is, interactions between powerful and powerless individuals are conducive to reality matching of the powerful individuals' social beliefs. According to the RM account, powerful individuals should therefore display a stronger confirmation bias with respect to the relevant social beliefs. Goodwin et al. (2000) found just that: powerful people, in particular, tend to fail to take into account data that may contradict their social beliefs (capturing, e.g., stereotypes) about subordinates and attend more closely to information that supports their expectations. Relative to the powerless, powerful people displayed a stronger confirmation bias in their thinking about subordinates (ibid.: 239f). Similarly, if confirmation bias serves to facilitate social interaction by contributing to a match between beliefs and social reality, then the bias should be stronger with respect to trait attributions to other people in subjects who care about social interactions than in other subjects.
Dardenne and Leyens (1995) reasoned that when testing a hypothesis about the personality of another individual (e.g., that they are introverted or extroverted), a preference for questions that match the hypothesis (e.g., that the subject is introverted) indicates social skill, as it conveys a feeling of being understood to the individual and contributes to a smooth conversation. Socially skilled people ('high self-monitors') should thus prefer 'matching' questions. In an interview setting, for instance, when testing the introvert hypothesis, an interviewer could ask questions that a typical introvert would answer with 'yes' (e.g., 'Do you like to stay alone?'), confirming the presence of the hypothesized trait (ibid.). Dardenne and Leyens did find that matching questions pertaining to an introvert or an extrovert hypothesis were selected most often by high self-monitors: socially skilled subjects displayed a stronger confirmatory tendency than less socially skilled subjects (ibid.).

Finally, there is also evidence that confirmation bias is more pronounced with respect to social beliefs than non-social beliefs. For instance, Marsh and Hanlon (2007) gave one group of behavioral ecologists a specific set of expectations with respect to sex differences in salamander behavior, while a second group was given the opposite set of expectations. In one experiment, subjects collected data on variable sets of live salamanders; in the other, observers collected data from identical videotaped trials. Across experiments and observed behaviors, the expectations of the observers biased their observations "only to a small or moderate degree", Marsh and Hanlon note, concluding that these "results are largely optimistic with respect to confirmation bias in behavioral ecology" (2007: 1089). This weak confirmation bias with respect to beliefs about non-social matters contrasts with findings of a significant confirmation bias with respect to beliefs about people (Talaifar and Swann 2017; Goodwin et al. 2000; Marks and Fraley 2006; Darley and Gross 1983) and, as I shall argue now, about social affairs whose reality the subject can determine.

(c) Non-personal, Social Beliefs

One important kind of social belief is the political belief, which concerns social states of affairs pertaining to politics. Political beliefs are especially interesting in the context of the RM account because they are very closely related to reality matching. This is not only because subjects can often directly influence political affairs via voting, running as a candidate, campaigning, etc. It is also because subjects who are highly confident about their political beliefs are more likely to convince other people of them too (Kappes et al. 2020). And the more widespread a political conviction in a population, the higher the probability that the population will adopt political structures that shape reality in line with it (Jost et al. 2003; Ordabayeva and Fernandes 2018). If, as the RM account proposes, confirmation bias was selected for sustaining social beliefs for the sake of reality matching, then the bias should be particularly strong when it comes to beliefs about political states of affairs. And indeed, Taber and Lodge (2006) found that "motivated [confirmation] biases come to the fore in the processing of political arguments", in particular, and, crucially, that subjects "with weak […] [political] attitudes show less [confirmation] bias in processing political arguments" (767).
In fact, in psychology, attitude strength, especially in politically relevant domains of thinking, has long been and still is widely accepted to increase the kind of selective exposure constitutive of confirmation bias (Knobloch-Westerwick et al. 2015: 173). For instance, Brannon et al. (2007) found that stronger, more extreme political attitudes are correlated with higher ratings of interest in attitude-consistent versus attitude-discrepant political articles. Similarly, Knobloch-Westerwick et al. (2015) found that people online who attach high importance to particular political topics spent more time on attitude-consistent messages than users who attached low importance to the topics, and "[a]ttitude-consistent messages […] were preferred", reinforcing the attitudes further (171). While this can contribute to political group polarization, such polarization also boosts the group-wide reality-matching endeavor and can thus itself be adaptive (Johnson and Fowler 2011: 317).

In short, then, while there are currently no systematic comparative studies on whether confirmation bias is more frequent or stronger with respect to social beliefs, related empirical studies do suggest that when it comes to (positive or negative) social beliefs about oneself, other people, and social states of affairs that the subject can determine (e.g., political beliefs), confirmation bias is both particularly common and particularly pronounced. Empirical data thus corroborate some of the predictions of the RM account.

Explanatory Benefits

The theoretical and empirical considerations from the preceding sections offer support for the RM account. Before concluding, it is worth mentioning three further reasons for taking the account seriously. First, it has greater explanatory power than the three alternative views outlined above. Second, it is consistent with, and provides new contributions to, different areas of evolutionary theorizing on human cognition. Third, it casts new light on the epistemic character of confirmation bias. I'll now support these three points.

For instance, the argumentative-function account holds that confirmation bias is adaptive in making us better arguers. This was problematic because the bias hinders us in anticipating people's objections, which weakens our argumentative skill and increases the risk of our appearing incompetent in argumentative exchanges. The RM account avoids these problems: if confirmation bias was selected for reinforcing our preconceptions about people to promote SFPs, then, since in one's own reasoning one only needs to justify one's beliefs to oneself, the first point one finds acceptable will suffice. To convince others, one would perhaps need to anticipate objections. But if the bias functions primarily to boost one's own conviction about particular beliefs so as to facilitate SFPs, then 'laziness' in critical thinking about one's own positions (Trouche et al. 2016) shouldn't be surprising.

Turning to the group-cognition account, the proposal was that confirmation bias is adaptive in, and was selected for, making group-level inquiries more thorough, reliable, and efficient. In response, I noted that the concept of 'group selection' is problematic when it comes to traits threatening an individual's fitness (West et al. 2007; Pinker 2012), and that confirmation bias would arguably only lead to the group-level benefits at issue in groups with viewpoint diversity. Yet it is doubtful that ancestral groups met this condition.
The RM account is preferable to the group-cognition view because it doesn't rely on a notion of group selection but concerns primarily individual-level benefits, and it doesn't tie the adaptive effects of the bias to conditions of viewpoint diversity. It proposes instead that the adaptive SFP-related effects of the bias increase individuals' fitness (e.g., by facilitating their navigation of the social world, aligning them and others with their group's conceptions, etc.) and can emerge whenever people hold beliefs about each other, interact, and fulfill social expectations. This condition is satisfied even in groups with viewpoint homogeneity.

The RM account also differs from the intention-alignment view, which holds that confirmation bias evolved for allowing us to synchronize intentions with others. One problem with this view was that the bias seems to hinder an intention alignment of individuals by weakening their perspective-taking capacity and inclining them to ignore or distort people's objections. The RM account avoids this problem because it suggests that by disregarding objections or counterevidence to one's beliefs, one can remain convinced of them, which helps align social reality (not only, e.g., people's intentions) with them, producing the adaptive outcomes outlined above. The account can also explain why confirmation bias is particularly strong in groups in which shared ideologies are at stake (Taber and Lodge 2006; Gerken 2019). For subjects have a keen interest in reality corresponding to their ideological conceptions. Since the latter shape social reality via their impact on behavior, and are more effective in doing so the more convinced people are of them (Kappes et al. 2020), it is to be expected that when it comes to ideological propositions in like-minded groups, confirmation bias is more pronounced. And, as noted, the resulting group polarization can itself be adaptive in strengthening the reality-matching process.

Moving beyond extant work on the evolution of confirmation bias, the RM account also contributes to, and raises new questions for, other areas of research in different disciplines. It yields, for instance, predictions that psychologists can experimentally explore in comparative studies, such as the prediction that confirmation bias is more common and stronger when targeting social rather than non-social beliefs, or when conditions are conducive to reality matching as opposed to when they are not. The account also adds a new perspective to research on SFPs and on how social conceptions interact with their targets (Hacking 1995; Snyder and Klein 2007; Jussim 2017).

Relatedly, the RM account contributes to recent philosophical work on folk psychology, i.e., our ability to ascribe mental states to agents to make sense of their behavior. In that work, some philosophers argue that folk psychology serves "mindshaping", that is, the moulding of people's behavior and minds so that they fit our conceptions, making people more predictable and cooperation with them easier (Mameli 2001; Zawidzki 2013; Peters 2019b). There are clear connections between the mindshaping view of folk psychology and the RM account, but also important differences. For instance, the RM account pertains to the function of confirmation bias, not folk psychology. Moreover, advocates of the mindshaping view have so far left unexplored the conditions for effective mindshaping via folk-psychological ascriptions and the possible role of confirmation bias in it.
The RM account begins to fill this gap in the research and in doing so adds to work on the question of how epistemic (or 'mindreading') and non-epistemic (or 'mindshaping', e.g., motivational) processes are related in folk psychology (Peters 2019b: 545f; Westra 2020; Fernández-Castro and Martínez-Manrique 2020).

In addition to offering contributions to a range of different areas of research, the RM account casts new light on the epistemic character of confirmation bias. Capturing the currently common view on the matter, Mercier (2016) writes that "piling up reasons that support our preconceived views is not the best way to correct them. […] [It] stop[s] people from fixing mistaken beliefs" (110). The RM account offers a different perspective, suggesting that when it is directed at beliefs about social affairs, confirmation bias often does help subjects correct their mistaken conceptions, to the extent that it contributes to SFPs of them. Similarly, Dutilh Novaes (2018) holds that the bias involves or contributes to a failure of perspective taking and so, "given the importance of being able to appreciate one's interlocutor's perspective for social interaction", is "best not seen as an adaptation" (520). The RM account, on the other hand, proposes that the bias often facilitates social understanding: in making us less sensitive to our interlocutor's opposing perspective, it helps us remain confident about our social beliefs, which increases the probability of SFPs that in turn make people more predictable and mindreadable.

Conclusion

After outlining limitations of three recent proposals on the evolution of confirmation bias, I developed and supported a novel alternative, the reality-matching (RM) account, which holds that one of the adaptive features for which the bias evolved is that it helps us bring social reality into alignment with our beliefs. When the bias targets positive social beliefs, this serves both the subject and the group, assisting them in their development (to become, e.g., more communicative or knowledgeable) while also making their social cognition more effective and tractable. When it targets negative social beliefs, in promoting reality matching the bias might contribute to ethically problematic outcomes, but it can then still support social conformity and predictability, which were adaptive perhaps especially in ancestral tribal groups. While the socially constructive aspect of confirmation bias highlighted here may not be the main or only feature that led to the bias's evolution, it is one that has so far been overlooked in evolutionary theorizing on confirmation bias. If we attend to it, an account of the function of confirmation bias becomes available that coheres with data from across the psychological sciences, avoids many of the shortcomings of competitor views, and has explanatory benefits that help advance research on the function, nature, and epistemic character of the bias.