CVtreeMLE: Efficient Estimation of Mixed Exposures using Data Adaptive Decision Trees and Cross-Validated Targeted Maximum Likelihood Estimation in R
Summary
Statistical causal inference for mixed exposures has been limited by reliance on parametric models and, until recently, by researchers considering only one exposure at a time, usually estimated as a beta coefficient in a generalized linear model (GLM). This independent assessment of exposures poorly estimates the joint impact of a collection of exposures in a realistic exposure setting. Marginal methods for mixture variable selection, such as ridge/lasso regression, are biased by linear assumptions, and the interactions modeled are chosen by the user. Clustering methods such as principal component regression lose both interpretability and valid inference. Newer mixture methods such as quantile g-computation (Keil et al., 2020) are biased by linear/additive assumptions. More flexible methods such as Bayesian kernel machine regression (BKMR) (Bobb et al., 2014) are sensitive to the choice of tuning parameters, are computationally taxing, and lack an interpretable and robust summary statistic of dose-response relationships. No method currently exists that finds the best flexible model to adjust for covariates while applying a non-parametric model that targets interactions in a mixture and delivers valid inference for a target parameter. Non-parametric methods such as decision trees are a useful tool for evaluating combined exposures by finding partitions in the joint-exposure (mixture) space that best explain the variance in an outcome. However, current methods using decision trees to assess statistical inference for interactions are biased and prone to overfitting because they use the full data both to identify nodes in the tree and to make statistical inference given those nodes. Other methods have used an independent test set to derive inference, which does not use the full data.
The CVtreeMLE R package provides researchers in (bio)statistics, epidemiology, and environmental health sciences with access to state-of-the-art statistical methodology for evaluating the causal effects of a data-adaptively determined mixed exposure using decision trees. Our target audience is analysts who would otherwise use a potentially biased GLM-based model for a mixed exposure. Instead, we hope to provide users with a non-parametric statistical machine: users simply specify the exposures, covariates, and outcome; CVtreeMLE then determines whether a best-fitting decision tree exists and delivers interpretable results.
CVtreeMLE uses V-fold cross-validation and partitions the full data into parameter-generating samples and estimation samples. For example, when V = 10, integers 1-10 are randomly assigned to each observation with equal probability. In fold 1, observations assigned the label 1 are used in the estimation sample and all other observations are used in the parameter-generating sample. This process rotates through the data until all folds are complete. In the parameter-generating sample, decision trees are applied to the mixed exposure to obtain rules, and estimators are created for our statistical target parameter. The rules from the decision trees are then applied to the estimation sample, where the statistical target parameter is estimated. CVtreeMLE makes possible the non-parametric estimation of the causal effects of a mixed exposure, producing results that are both interpretable and guaranteed (under assumptions) to converge to the truth at a particular rate as sample size increases. Additionally, CVtreeMLE allows for discovery of important mixtures of exposure and provides robust statistical inference for the impact of these mixtures.
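The fold assignment described above can be sketched as follows. This is a minimal illustration in Python, not the package's actual implementation (CVtreeMLE is written in R); the function name `vfold_splits` is ours.

```python
# Sketch of V-fold sample splitting: each observation gets a random fold
# label 1..V with equal probability; in each fold, the held-out observations
# form the estimation sample and all others form the parameter-generating sample.
import random

def vfold_splits(n_obs, v=10, seed=1):
    rng = random.Random(seed)
    labels = [rng.randint(1, v) for _ in range(n_obs)]
    folds = []
    for fold in range(1, v + 1):
        estimation = [i for i in range(n_obs) if labels[i] == fold]
        parameter_generating = [i for i in range(n_obs) if labels[i] != fold]
        folds.append((parameter_generating, estimation))
    return folds
```

Rotating through all V folds in this way ensures every observation appears in exactly one estimation sample, so the full data contributes to estimation.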
Statement of Need
In many disciplines there is a demonstrable need to ascertain the causal effects of a mixed exposure. Advancement in the area of mixed exposures is challenged by real-world joint exposure scenarios where complex agonistic or antagonistic relationships between mixture components can occur. More flexible methods which can fit these interactions may be less biased, but their results are typically difficult to interpret, which has led researchers to favor more biased methods based on GLMs. Current software tools for mixtures rarely report performance tests using data that reflect the complexities of real-world exposures (Carlin et al., 2013; Keil et al., 2020; Yu et al., 2022). In many instances, new methods are not tested against a ground-truth target parameter under various mixture conditions. New areas of statistical research, rooted in non/semi-parametric efficiency theory for statistical functionals, allow for robust estimation of data-adaptive parameters. That is, it is possible to use the data to both define and estimate a target parameter. This is important for mixtures, where the most important set of variables, and the relevant levels within those variables, are almost always unknown. Thus, the development of asymptotically linear estimators for data-adaptive parameters is critical for the field of mixed exposure statistics. However, the development of open-source software which translates semi-parametric statistical theory into well-documented, functional software is a formidable challenge. Such implementation requires understanding of causal inference, semi-parametric statistical theory, machine learning, and the intersection of these disciplines. The CVtreeMLE R package provides researchers with an open-source tool for evaluating the causal effects of a mixed exposure by treating decision trees as a data-adaptive target parameter to define exposure.
The CVtreeMLE package is well documented and includes a vignette detailing semi-parametric theory for data-adaptive parameters, examples of output, results with interpretations under various real-life mixture scenarios, and comparison to existing methods.
Background
In many research scenarios, the analyst is interested in causal inference for an a priori specified treatment or exposure. This is because, when a single exposure/treatment is measured, the analyst wants to understand how this exposure/treatment impacts an outcome, controlling for covariates. However, in the evaluation of a mixed exposure, such as air pollution or pesticides, it is not possible to estimate the expected outcome given every combination of exposures, because the conditional outcome given every combination of exposures is not measured. Furthermore, it is likely that only certain exposures within a mixture have marginal or interacting effects on an outcome. In such a setting, new methods are needed for statistical learning from data that go beyond the usual requirement that the estimand is a priori defined, while still allowing for proper statistical inference (Hubbard et al., 2016). In the case of mixtures, it is necessary to map a set of continuous mixture components into a lower-dimensional representation of exposure using a pre-determined algorithm, and then estimate a target parameter on this more interpretable exposure. Decision trees provide a useful solution by mapping a set of exposures into a rule which can be represented as a binary vector. This binary vector indicates whether an individual has been exposed to a particular rule estimated by the decision tree. Our target parameter is then defined as the mean difference in counterfactual outcomes for those exposed to the mixture subspace (delineated by the rule) compared to those unexposed, or the average treatment effect (ATE) for the mixed exposure. Decision trees have been used as a data-adaptive parameter to explore and estimate heterogeneous treatment effects of a binary treatment (Athey & Imbens, 2016). Using a so-called "honest" approach, this method estimates the treatment effect in subpopulations based on covariates in a left-out sample.
This approach is limited in that it does not make use of the full data and does not data-adaptively select the best decision tree. Advancements in using decision trees as a data-adaptive parameter that solve both of these issues and guarantee nominal confidence interval coverage under certain assumptions are needed. Under the usual assumptions of conditional independence (the counterfactual outcomes are independent of A given W) and positivity (enough experimentation in the data), identifiability of the ATE causal parameter is obtained from the observed data via the statistical functional for a data-adaptively determined exposure. This is because (1) by using Super Learner as our estimator, we are asymptotically guaranteed to select the correct functional form for the underlying joint distribution, thereby removing bias due to model misspecification, and (2) by using TMLE we debias our initial counterfactual estimates to target the ATE parameter of interest. Any remaining potential bias is therefore due to aggregated data and not the statistical method.
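Under those two identification assumptions, the ATE reduces to the familiar g-computation functional E_W[E(Y|A=1,W) − E(Y|A=0,W)]. A simplified plug-in sketch (an assumption of ours, not the package's code — CVtreeMLE additionally debiases this initial estimate with TMLE) looks like:

```python
# Plug-in (g-computation) sketch of the ATE for a binary exposure rule A.
# `outcome_model(a, w)` stands in for a fitted regression of Y on (A, W);
# it is a hypothetical argument, not part of CVtreeMLE's API.

def plugin_ate(outcome_model, W):
    # Average the predicted contrast E(Y|A=1,W=w) - E(Y|A=0,W=w) over W
    return sum(outcome_model(1, w) - outcome_model(0, w) for w in W) / len(W)
```

For example, if the fitted outcome model were exactly linear with a treatment coefficient of 2, `plugin_ate` would recover an ATE of 2 regardless of the covariate distribution.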
CVtreeMLE's Scope
Building on prior work related to data-adaptive parameters (Hubbard et al., 2016) and CV-TMLE (van der Laan & Rose, 2011, chapter 27), CVtreeMLE is a novel approach for estimating the joint impact of a mixed exposure using cross-validated targeted minimum loss-based estimation, which guarantees consistency, efficiency, and multiple robustness despite using highly flexible learners to estimate a data-adaptive parameter. CVtreeMLE summarizes the effect of a joint exposure on the outcome of interest by first performing an iterative backfitting procedure, similar to that used in generalized additive models, to fit f(A), a Super Learner of decision trees, and h(W), an unrestricted Super Learner, in the semi-parametric model E(Y | A, W) = f(A) + h(W), where A is a vector of exposures and W is a vector of covariates. In this way, we can data-adaptively find the best-fitting decision tree model, i.e. the one with the lowest cross-validated model error, while flexibly adjusting for covariates. This procedure is done to find rules for the mixture modeled collectively and for each mixture component individually. There are two types of results: (1) an ATE comparing those who fall within a subspace of the joint exposure versus those in the complement of that space, and (2) the ATE for each data-adaptively identified threshold of an individual mixture component when compared to the lowest identified exposure level. The CVtreeMLE software package, for R (R Core Team, 2020), implements this methodology for deriving causal inference from ensemble decision trees.
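The iterative backfitting of E(Y | A, W) = f(A) + h(W) can be sketched as alternating partial-residual fits. This is an illustrative simplification, not the package's R code: `fit_tree` and `fit_flexible` are hypothetical stand-ins for the decision-tree Super Learner and the unrestricted covariate Super Learner.

```python
# Backfitting sketch: alternately fit f(A) to the residual Y - h(W) and
# h(W) to the residual Y - f(A), as in generalized additive models.
# fit_tree / fit_flexible take (features, residuals) and return a predictor.

def backfit(Y, A, W, fit_tree, fit_flexible, n_iter=20):
    n = len(Y)
    f_hat = [0.0] * n  # current predictions of f(A)
    h_hat = [0.0] * n  # current predictions of h(W)
    for _ in range(n_iter):
        # Fit the decision-tree learner to the partial residual for A
        f_model = fit_tree(A, [Y[i] - h_hat[i] for i in range(n)])
        f_hat = [f_model(a) for a in A]
        # Fit the flexible covariate learner to the partial residual for W
        h_model = fit_flexible(W, [Y[i] - f_hat[i] for i in range(n)])
        h_hat = [h_model(w) for w in W]
    return f_hat, h_hat
```

Because each learner only ever sees the other component's residual, the tree fit for f(A) is forced to explain exposure-driven structure while h(W) absorbs covariate-driven structure.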
CVtreeMLE is designed to provide analysts with both V-fold specific and pooled results for ATE causal effects of a joint exposure determined by decision trees. It integrates with the sl3 package (Coyle et al., 2020) to allow for ensemble machine learning to be leveraged in the estimation of nuisance parameters.
Availability
The CVtreeMLE package has been made publicly available via GitHub. Use of the CVtreeMLE package has been extensively documented in the package's README and a vignette.
Automorphic products that are singular modulo primes
We use Rankin--Cohen brackets on O(n, 2) to prove that the Fourier coefficients of reflective Borcherds products often satisfy congruences modulo certain primes.
Introduction
This note is inspired by the paper [10], in which it was observed that most of the Fourier coefficients of the (suitably normalized) Siegel cusp form Φ_35 of degree two and weight 35 are divisible by the prime p = 23. More precisely, if one writes Φ_35(Z) = Σ_T a(T) e^{2πi Tr(TZ)}, Z ∈ H_2, the sum extending over positive-definite half-integral (2 × 2)-matrices T, then the main result of [10] is that
(1.1) a(T) ≢ 0 (mod 23) ⇒ det(T) ≡ 0 (mod 23).
This has already been generalized in several ways. In [16], similar congruences are derived for Siegel cusp forms of higher weights. The papers [9, 15] prove analogous results for Hermitian modular forms of degree two over the Gaussian and Eisenstein integers. The paper [13] considers quaternionic modular forms of degree two, while [1, 12, 14] consider Siegel modular forms of general degree. We call modular forms satisfying congruences of type (1.1) singular modulo p.
In this note, we start with the fact that the cusp form Φ_35 is a reflective Borcherds product [2, 3, 6, 7], which in this situation means that it vanishes only on Humbert surfaces in the Siegel upper half-space that are fixed by transformations in the Siegel modular group. A natural generalization is to consider reflective Borcherds products on general orthogonal groups O(n, 2), with Siegel modular forms appearing through the exceptional isogeny from Sp_4(R) to O(3, 2).
It turns out that reflective Borcherds products on O(n, 2) with simple zeros and of weight k are very often singular modulo primes p dividing n/2 − 1 − k. In this note, we give a general argument to prove singularity modulo p that takes a set of two or more reflective Borcherds products and proves that some of them are singular modulo specific primes, using an identity based on the Rankin-Cohen brackets on O(n, 2). This argument requires almost no computation: the presence of congruences such as (1.1) for Φ_35 can be deduced from the location of its zeros. We also give similar arguments that can be used to prove that a single reflective product is singular modulo certain primes.
This note is organized as follows. In §2 we review reflective modular forms and define what it means for a modular form to be singular modulo a prime p. In §3 we introduce the Rankin-Cohen bracket on O(n, 2) and explain how to use it to derive modular forms that are singular modulo primes. In the last two sections we work out over 50 reflective Borcherds products that are singular modulo primes. In particular, for every prime p < 60, we construct at least one mod p singular modular form.
Reflective modular forms and singular modular forms modulo primes
Let L be an even integral lattice of signature (n, 2) with n ≥ 3, and let L_R = L ⊗ R and L_C = L ⊗ C. The Z-valued quadratic form on L is denoted by Q and the associated even bilinear form by ⟨x, y⟩ = Q(x + y) − Q(x) − Q(y). Attached to the orthogonal group O(L_C) is the Hermitian symmetric domain D, the Grassmannian of oriented negative-definite planes in L_R. This is naturally identified with one of the two connected components of {[Z] ∈ P^1(L_C) : ⟨Z, Z⟩ = 0, ⟨Z, Z̄⟩ < 0} by identifying [X + iY] ∈ P^1(L_C) with the plane through X and Y. We denote by O^+(L) the orthogonal subgroup that fixes both D and L.
Let Γ ≤ O^+(L) be a finite-index subgroup and χ : Γ → C^× a character. A modular form of integral weight k, level Γ and character χ is a holomorphic function F on the cone over D that satisfies the functional equations F(tZ) = t^{−k} F(Z) for all t ∈ C^× and F(γZ) = χ(γ) F(Z) for all γ ∈ Γ. We call F reflective if it vanishes only on hyperplanes r^⊥ whose associated reflections σ_r lie in Γ. Reflective modular forms were introduced by Borcherds [2] and Gritsenko-Nikulin [7] in 1998 and they have applications to generalized Kac-Moody algebras, hyperbolic reflection groups and birational geometry. The above definition is somewhat stronger than that of [7], where F is called reflective if the reflections corresponding to zeros of F lie in the larger group O^+(L). Bruinier's converse theorem [4] shows that, in many cases, all reflective modular forms can be constructed through the multiplicative Borcherds lift [2, 3]. In this case, the Fourier series of a reflective form has a natural infinite product expansion in which the exponents are the Fourier coefficients of a modular form (or Jacobi form) for SL_2, and we refer to it as a reflective Borcherds product.
To define modular forms that are singular modulo a prime p we have to work in the neighborhood of a fixed cusp. Suppose c ∈ L is a primitive vector of norm 0 and c′ ∈ L′ is an element of the dual lattice with ⟨c, c′⟩ = 1. Let L_{c,c′} be the orthogonal complement of c and c′, i.e. L_{c,c′} = L ∩ c^⊥ ∩ c′^⊥.
Attached to the pair (c, c′) we have the tube domain H_{c,c′}, which is one of the two connected components of the set {Z ∈ L_{c,c′} ⊗ C : Q(Im Z) < 0}. On H_{c,c′}, any modular form F can be written as a Fourier series F(Z) = Σ_λ a_F(λ) e^{2πi⟨λ,Z⟩}, in which the actual values of λ range over a discrete group depending on Γ and the character χ. To be more precise: there exists a sublattice K of L_{c,c′} such that λ lies in the dual K′ of K whenever a_F(λ) ≠ 0. By definition, the level of K is the smallest positive integer D_F such that D_F · Q(λ) ∈ Z for all λ ∈ K′. A non-constant modular form F is called singular (with respect to the pair (c, c′)) if its Fourier series on H_{c,c′} is supported on vectors λ of norm zero. By analogy, we define singular modular forms modulo a prime p as follows:

Definition 2.2. Let F be a non-constant modular form and p a prime not dividing D_F. The form F is called singular modulo p (at the cusp determined by (c, c′)) if its Fourier coefficients are all integers and if a_F(λ) ≡ 0 (mod p) for all vectors λ for which Q(λ) is nonzero modulo p.
Remark 2.3. Using the Fourier-Jacobi expansion, it is not difficult to show that a modular form is singular if and only if its weight is k = n/2 − 1. In this case, it is singular at every cusp.
The notion of mod p singular modular forms also appears to be independent of the choice of cusps, and the weight appears to satisfy the similar constraint k ≡ (n/2 − 1) (mod p).
Unfortunately we do not have a proof of this. The converse is false: most modular forms of weight k ≡ (n/2 − 1) (mod p) fail to be singular modulo p.
Singularity with respect to (c, c′) is closely related to the holomorphic Laplace operator. If e_1, ..., e_n is any basis of L_{c,c′} with Gram matrix S, and z_1, ..., z_n are the associated coordinates on L_{c,c′} ⊗ C, then define ∆ = (1/8π²) Σ_{i,j} s_{ij} ∂²/(∂z_i ∂z_j), where s_{ij} are the entries of S^{−1}. Note that ∆ is independent of the basis e_i.
Applying the identity ∆ e^{2πi⟨λ,Z⟩} = −Q(λ) e^{2πi⟨λ,Z⟩} to the Fourier series termwise shows that the form F is annihilated by ∆ if and only if it is singular at (c, c′). Similarly, if F has integral coefficients then F is singular modulo p if and only if D_F · ∆(F) ≡ 0 (mod p); here we recall that D_F · ∆(F) also has integral Fourier coefficients at the cusp (c, c′) and that p does not divide D_F by definition. The setting of [10], i.e. Siegel modular forms of degree two, corresponds to the case of the lattice L = 2U ⊕ A_1, i.e. Z^5 with Gram matrix

( 0 0 0 0 1 )
( 0 0 0 1 0 )
( 0 0 2 0 0 )
( 0 1 0 0 0 )
( 1 0 0 0 0 )

If we work with c = (1, 0, 0, 0, 0) and c′ = (0, 0, 0, 0, 1) then vectors (0, z_1, z_2, z_3, 0) of H_{c,c′} correspond exactly to matrices ( z_1 z_2 ; z_2 −z_3 ) in the Siegel upper half-space in a way that is compatible with the actions of O(3, 2) and Sp_4(R), and the Laplace operator at (c, c′) becomes (up to a scalar multiple) the theta operator Σ a(T) e^{2πi Tr(TZ)} ↦ Σ det(T) a(T) e^{2πi Tr(TZ)}. See also Section 4.1 below.
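The compatibility between Q on 2U ⊕ A_1 and the determinant of Siegel matrices can be checked numerically. The following sketch (ours, not from the paper) uses the Gram matrix printed above and the correspondence λ = (0, −n, r/2, m, 0) ↔ T = ((m, r/2), (r/2, n)), under which ⟨λ, Z⟩ = m z_1 + r z_2 − n z_3 = Tr(TZ); the check verifies Q(λ) = −det(T), so −Q(λ) a_F(λ) matches det(T) a(T) as the theta operator requires.

```python
# Verify Q(lambda) = -det(T) for the Gram matrix of L = 2U + A_1.
from fractions import Fraction

S = [[0, 0, 0, 0, 1],
     [0, 0, 0, 1, 0],
     [0, 0, 2, 0, 0],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0]]

def Q(vec):
    # Q(x) = (1/2) <x, x> with <x, y> = x^T S y
    return Fraction(1, 2) * sum(vec[i] * S[i][j] * vec[j]
                                for i in range(5) for j in range(5))

def check(m, n, r):
    # T = [[m, r/2], [r/2, n]] corresponds to lambda = (0, -n, r/2, m, 0)
    lam = [0, -n, Fraction(r, 2), m, 0]
    det_T = m * n - Fraction(r * r, 4)
    return Q(lam) == -det_T
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity for half-integral entries r/2.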
The construction of singular automorphic products modulo primes
Let L be an even lattice of signature (n, 2) with n ≥ 3 that contains a primitive vector c of norm zero and a vector c′ ∈ L′ with ⟨c, c′⟩ = 1. The Laplace operator attached to the pair (c, c′) is simply denoted ∆. Let Γ ≤ O^+(L) be a modular group. Note that Γ satisfies Koecher's principle: the Baily-Borel compactification of D/Γ contains no cusps in codimension one.

Lemma 3.1. For modular forms F of weight k and G of weight ℓ for Γ, the bracket [F, G] is a cusp form of weight k + ℓ + 2 for Γ.

Proof. Up to a scalar multiple, this is the first Rankin-Cohen bracket of F and G as defined by Choie and Kim [5]. The assumption of [5] that the lattice L splits two hyperbolic planes is unnecessary. This lemma can also be proved directly by analyzing how ∆(F) transforms under the modular group, using [21, Lemma 2.4].

Since F is singular modulo p if and only if all Fourier coefficients of ∆(F) vanish modulo p, we obtain the corollary:

Corollary 3.2. Let p be a prime that divides the numerator of n/2 − 1 − k. Suppose G is a modular form of weight ℓ that is not identically zero modulo p.
Suppose p does not divide ℓ and that p does not divide the numerator of n/2 − 1 − ℓ. The following are equivalent: (1) F is singular modulo p; (2) the cusp form [F, G] vanishes identically modulo p.

Now suppose that F is a reflective modular form for Γ ≤ O^+(L) with only simple zeros, and that G is a modular form for Γ that is non-vanishing on every zero r^⊥ of F. Since the associated reflection σ_r is an involution and is contained in Γ, it follows that [F, G] vanishes along r^⊥, so the quotient [F, G]/F is holomorphic. If G also happens to be a reflective modular form for Γ, with only simple zeros that are distinct from those of F, then the above argument shows that [F, G] is divisible by both F and G and therefore the quotient [F, G]/(F G) is a holomorphic modular form of weight two without character. Many groups Γ do not admit holomorphic modular forms of weight two. (For example, this is always true if n > 6, and it is usually true for Γ = O^+(L) if the discriminant of L is reasonably small.) In these cases, we obtain [F, G] = 0 and therefore an integral relation among ∆(F G), ∆(F) G and F ∆(G). This is summarized below:

Proposition 3.3. Let L be an even lattice of signature (n, 2) with n ≥ 3.
Suppose F and G are reflective modular forms for Γ ≤ O^+(L) of weights k and ℓ with simple and disjoint zeros, and that Γ admits no modular forms of weight two with trivial character. Then we have the identity [F, G] = 0; equivalently, there is an integral relation among ∆(F G), ∆(F) G and F ∆(G). In particular, (1) F is singular modulo every prime dividing the numerator of n/2 − 1 − k but dividing neither k nor ℓ. More generally, under these assumptions, F is singular modulo any prime p that divides n/2 − 1 − k to a greater power than either of n/2 − 1 − ℓ and n/2 − 1 − k − ℓ, and similarly for G and F G.

Remark 3.4. The bracket [−, −] can be generalized to any number of modular forms. Let F_1, ..., F_N be modular forms for Γ ≤ O^+(L) of weights k_1, ..., k_N. Then [F_1, ..., F_N] defines a cusp form of weight 2 + Σ_{i=1}^N k_i for Γ. This is also a special case of the Rankin-Cohen brackets defined in [5]. The identity in Proposition 3.3 generalizes to an identity involving any number of reflective products; however, this does not appear to give any information not already obtained from considering the products in pairs. It was proved in [20] that every holomorphic Borcherds product of singular weight on L can be viewed as a reflective modular form, possibly after passing to a distinct lattice in L ⊗ Q. It is amusing that the notion of reflective modular forms plays a similar role for congruences.
Examples
In this section we use Proposition 3.3 to produce a number of examples of reflective Borcherds products on orthogonal groups of root lattices or related lattices that are singular modulo certain primes. The non-existence of modular forms of weight two in the nontrivial case of n ≤ 6 can be derived from [17, 18, 19], where the entire graded rings of modular forms were determined.
We denote by U the hyperbolic plane, i.e. the lattice Z² with Gram matrix ( 0 1 ; 1 0 ). Let A_n, D_n, E_6, E_7 and E_8 be the usual root lattices. For a lattice L and d ∈ N, we write L(d) to mean L with its quadratic form multiplied by the factor d.
4.1. Siegel modular forms of degree two. When L is the lattice 2U ⊕ A_1 with n = 3, modular forms for O^+(L) are the same as Siegel modular forms of degree two and even weight for the level one modular group Sp_4(Z). Through this identification, rational quadratic divisors become the classical Humbert surfaces defined by singular relations. There are two equivalence classes of reflective divisors: (i) the Humbert surface of invariant one, which is represented by the set of diagonal matrices ( τ 0 ; 0 w ) in H_2; (ii) the Humbert surface of invariant four, which is represented by the set of matrices ( τ z ; z τ ) with equal diagonal entries. Both reflective Humbert surfaces occur as the zero locus of a Borcherds product for O^+(L): (a) the form Ψ_5 of weight k = 5, a square root of the Igusa cusp form of weight 10, vanishes with simple zeros on the Humbert surface of invariant one; (b) the quotient Φ_30 = Φ_35/Ψ_5 of weight ℓ = 30, where Φ_35 is the cusp form of weight 35, vanishes with simple zeros on the Humbert surface of invariant four. We calculate n/2 − 1 − k = −9/2, n/2 − 1 − ℓ = −59/2 and n/2 − 1 − k − ℓ = −69/2 = −(3 · 23)/2. Proposition 3.3 and the non-existence of Siegel modular forms of weight two yield: (1) Ψ_5 is singular modulo p = 3; (2) Φ_30 is singular modulo p = 59; (3) Φ_35 = Ψ_5 Φ_30 is singular modulo p = 23.
4.2. Siegel paramodular forms of degree two and level 2 and 3. Section 4.1 gives the simplest example of a number of realizations of arithmetic subgroups of Sp_4(Q) as orthogonal groups of lattices. When L = 2U ⊕ A_1(t), modular forms for O^+(L) are the same as Siegel paramodular forms of degree two and level t that are invariant under certain additional involutions. We will work out the congruences implied by Proposition 3.3 when t = 2 or t = 3.
Remark. This implies that Φ_120 M_7 is singular modulo p = 31 and also that Φ_120 is singular modulo p = 13. Note that neither M_7 nor Φ_120 M_7 is a Borcherds product. We conclude with an example of a mod p singular Borcherds product that is not reflective and also has non-simple zeros.
Let L = 2U ⊕ D_11 and consider the following two Borcherds products for O^+(L): (i) Ψ_1, a meromorphic modular form of weight 1 which vanishes precisely with multiplicity 1 on hyperplanes r^⊥ with r ∈ L′ and Q(r) = 1/2 and whose only singularities are simple poles along hyperplanes s^⊥ with s ∈ L′ and Q(s) = 3/8; (ii) Φ_142, a cusp form of weight 142 which vanishes precisely with multiplicity 1 on hyperplanes λ^⊥ with λ ∈ L and Q(λ) = 1, and with multiplicity 26 on hyperplanes s^⊥ with s ∈ L′ and Q(s) = 3/8.
The form Φ_142 is the Jacobi determinant of the generators of a free algebra of meromorphic modular forms constructed in [18]. The divisors r^⊥ and λ^⊥ are reflective, i.e. the associated reflections lie in O^+(L). However, the divisors s^⊥ are not reflective. By analyzing its Taylor series along the divisor s^⊥, we find that the quotient [Φ_142, Ψ_1]/(Φ_142 Ψ_1) is a meromorphic modular form of weight 2 with trivial character for O^+(L) whose only singularities are poles of multiplicity two along the hyperplanes s^⊥. By the structure theorem of [18, Theorem 1.2], there is a constant c such that [Φ_142, Ψ_1] = c · Φ_142 Ψ_1³. By comparing the residues, or leading terms in the Laurent series of both sides along s^⊥, we find that c = 1950. Therefore:
Theorem 5.3. [Φ_142, Ψ_1] = 1950 · Φ_142 Ψ_1³.
What is The Role of Land Value in The Urban Corridor?
High movement causes traffic congestion and indicates high movement intensity along the corridor. The higher the attraction of the land use, the higher the attraction of movement and economic value in the location. This attraction is also affected by the high mobility in the corridor, supported by the available transport infrastructure. Thus, land values increase significantly. Land use along the corridor can be seen as commercial in function because this activity is able to survive in a premium location. The purpose of this research is to identify the effect of land use change on land values in the commercial corridor. This research used a positivistic method with descriptive analysis. The results show that land value change for commercial use in the corridor follows a different pattern from the land use change pattern, according to physical conditions and land uses that generate high economic attraction. New commercial land is influenced by the distance to the city centre or CBD (Central Business District). Land uses and public facilities with local and city-scope services do not have a significant impact on land value change.
Introduction
The high number of movements in the corridor has caused traffic congestion, which indicates the high intensity of movements on the land along the corridor. The higher the land use attraction, the higher the trip generation and economic attraction in those locations. This attractiveness is also supported by transportation infrastructure that can enhance accessibility and mobility in the corridor.
The land use along the corridor can be seen as commercial in function because this activity is able to survive in a premium location. In urban areas, increases in land value are closely related to strategic sites (the location factor), which are associated with ease of access to the transportation system and proximity to other land uses in the urban configuration [1]. Thus, land value change becomes one of the factors of land use change, and the two have strong linkages. Closer or easier access to the corridor will certainly raise land values. The corridor space that connects the city centre to surrounding small towns has apparently brought about more problems. These problems are caused by the feedback relationship (interrelationship) between the two, which results in flows of human migration, goods, and services between them. The major issues that occur are transportation problems and the rapid land use change happening in the corridor space. The purpose of this study is to identify the effect of land use change on land value in the commercial corridor. In turn, this land value will drive the land market mechanism toward functional and physical land change in the corridor. On the other hand, the intensity of development in the corridor should be carefully managed in order to avoid bottlenecks [2] that will cause many problems for the main city and surrounding areas and can eventually decrease urban economic value.
Urban Commercial Corridor
The corridor space usually develops into a region with a high frequency of movements due to the availability of the main transportation network, which makes the corridor a commercial area [3]. A corridor, according to Duany and Plater-Zyberk [3] in The New Urbanism, is the liaison and also the separation between residential neighborhoods and districts; it should not just be a leftover space but an urban element characterized by the appearance of continuity. Duany and Plater-Zyberk [3] noted the role of the corridor in relation to desa-kota (village-city) relations, where the two ends play the role of magnets in urban dynamics. The activity between these two elements enables high movement in both. These high activities encourage the corridor to become a dynamically inclined space following the growth of both magnetic elements [4]. A corridor is an element that accommodates the relationship between two mutually supporting elements. Krier [5] argued that a corridor linking several new towns and providing access to a main city will take on an increased role beyond just circulatory connections. Along with this development, there are commercial corridors or commercial strips serving as urban and suburban connections that accommodate high-speed vehicles. In relation to this function, buildings on commercial corridors are built with sufficient setback for the vehicle parking lot. According to Manning [6], this phenomenon was also found in commercial corridors that connect new towns, where temporary buildings were seen to fill the space between previously established permanent functions, such as industrial functions.
The length of trips leads to high movement in the corridor as a hub connecting the city center with the urban periphery. The infrastructure constraints in this corridor area are called the corridor bottleneck issue [7]. At the same time, many people enter the corridor because the times for commuting to and from work are almost the same. On the other hand, the provision of public transport facilities and infrastructure services is no different from regular hours. These differences create a prominent disparity between transport demand and available services during peak traffic [2]. In Minner and Shi's research [8], commercial corridors are a common element of a city, known as linear commercial areas, in which the corridor consists of commercial property built on land close to and oriented toward the main road, arterial road, or other highways. A commercial corridor is also called a habitat for local businesses and a region targeted for redevelopment. Using spatial analysis, Minner and Shi [8] found that local businesses tend to thrive in regions along commercial corridors that are located close to the city center.
Land Use Change
Land use is every form of human intervention on land to accommodate the various needs of life, both material and spiritual [9]. Most experts agree that land change is a consequence of economic and urban growth, high urbanization, population change, and increased activity in urban areas. The main factors causing land use change [10] are population, land value, and the transportation system; changes in these three factors alter the activity system and are spatially embodied in land use patterns. Wu and Silva argued that land change is driven by combined spatial and non-spatial factors, whose interaction spurs the dynamics of land change. At the same time, the process of urbanization and the resulting changes in the physical, social, and ecosystem aspects of urban growth trigger land change.
Regarding land use change in corridors, Arnott, de Palma, and Lindsey [7] stated that urban corridors grew out of the development of decentralized metropolitan areas: residential growth shifted to the suburbs or urban periphery, producing daily commuting movements between the suburbs and the city center (the corridor bottleneck). These movements cause severe congestion during rush hour, which results in urban economic inefficiency. Hence, it is necessary to adapt and expand efforts to handle congestion through supportive measures such as telecommunication facilities, computers, the electricity network, and public facilities. Eventually the corridor grows into a commercial area along the urban commuting line.
Figure 1. Land Use Cycle and Transportation
A land use change leads to increased trip generation. This rise increases the level of accessibility, which in turn raises land value in the area. Increasing land value eventually leads to the emergence of activities compatible with the conditions of the region, which can trigger the development of high-rise building intensity on the land. When transportation access to an activity space (land parcel) is improved, the activity space becomes more attractive and usually develops further. As activities develop, the need for transportation also increases, causing a transport overload that must be addressed.
Land Value
The advantage of a good connection in a highly accessible area leads to increased land value [13]. Commercial properties or commercial buildings have the most expensive land, followed by single-family residences, multi-family housing, and condominiums. For commercial purposes, properties close to a station reap a high premium and positive land value. This is consistent with the economic theory that commercial property generally attracts demand because of accessibility benefits and proximity to major transportation facilities. This is supported by Bocarejo, who showed that the introduction of the BRT (Transmilenio) transport system in Bogotá, Colombia had a positive influence on the value of commercial properties: the farther a commercial area is from the Transmilenio corridor or a BRT station, the lower its land value [14]. Cervero [13] added that there was no evidence that rail investment itself caused land value change, but hedonic price results show a strong relationship between proximity to the transit system and land value. The strength of this relationship reflects substantial differences in land use, modes, and corridors, revealing that government policies and decisions play an important role in shaping land use and land value. In contrast, Balchin and Pierre [18] explained that the demand for land reflects the benefits or needs arising from the use of land by communities as potential users. The greater the benefits derived from land use in a location for various purposes, the higher the land price or land rent. Thus, the bid-rent curve of a commercial area is sharp because such areas have the highest degree of accessibility, whereas the bid-rent curve of settlement areas slopes gently. Hence, it can be concluded that changes in infrastructure provision have a significant effect on land use change.
Methodology
This research used a positivistic approach with quantitative techniques and descriptive statistical analysis. Descriptive analysis was used to examine the effect of land use change on the spatial configuration that influences land value along the corridor, by examining land value changes as an indication of urban corridor development. Land use data were derived from Landsat imagery for the years 1993, 2004, 2011, and 2015. To illustrate our approach, two corridors in Semarang were selected as case studies. The chosen places represent typical urban and suburban environments: the Corridor of Semarang-Ungaran and the Corridor of Semarang-Mranggen. The data used include type of land use; transportation network provision and service in the corridor; land value; physical change (non-built area to built environment); functional change (residential to non-residential); spatial change (small to large area); socio-economic change (non-commercial to commercial sectors such as industry, trade, and services); and demographic change (low to high population density).
The Effect of Urban Land Configuration to Land Value
The spatial configuration of the Corridor of Semarang-Ungaran and the Corridor of Semarang-Mranggen shows the presence of public facilities of regional and national scope, whose existence strongly influences land value in those corridors. The Corridor of Semarang-Ungaran is influenced by Diponegoro University, a national university that has driven the development of residential and service centers in the Tembalang and Banyumanik regions, including education, health, and transportation services. The Corridor of Semarang-Mranggen is influenced by the urban settlement areas in the Pedurungan and Pucang Gading regions. In addition, proximity to the CBD is the major driver of comprehensive public facility provision in this corridor.
Distance to City Centre
Simpang Lima is one of the Central Business Districts in Semarang, hosting various activities ranging from trading and offices to entertainment. Simpang Lima is a movement attractor for communities inside and outside the city. Distance to the Semarang city center will certainly affect land value in the corridor. Accordingly, land values were sampled and their relationship with distance to the city center was analyzed. The graph shows that the land value of properties closest to the city center is higher than that of properties farthest from downtown.
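The distance-value relationship described above can be checked with a minimal Python sketch. The distance and land value figures below are hypothetical placeholders for illustration only, not the study's survey data:

```python
import numpy as np

# Hypothetical samples: distance to the city center (km) and land value (Rp per m^2).
# These values are illustrative, not taken from the Semarang survey.
distance = np.array([0.5, 1.0, 2.5, 4.0, 6.0, 8.5, 11.0, 14.0])
value = np.array([9.5e6, 8.8e6, 7.0e6, 5.6e6, 4.1e6, 3.0e6, 2.2e6, 1.5e6])

r = np.corrcoef(distance, value)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(distance, value, 1)  # simple linear trend
print(f"r = {r:.2f}, slope = {slope:,.0f} Rp/m^2 per km")
```

With real survey samples, a strongly negative `r` and slope would support the observation that land value declines with distance from the city center.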
Distance to Traditional Market
Besides the CBD or city center, marketplace activity can influence land value. Markets are divided into traditional and modern markets. Gayamsari traditional market is located in the west of the Corridor of Semarang-Mranggen near the toll gate, while Jatingaleh traditional market is located in the northern Corridor of Semarang-Ungaran near the toll gate. Land values appear to be similar and stable along the Corridor of Semarang-Ungaran; however, viewed in detail, land value drops near the traditional market, particularly in Jatingaleh. Traffic congestion, slums, and the uncomfortable conditions around the market make this area less attractive.
Distance to Modern Market
The modern market located in the corridors is Superindo. One Superindo was built on a narrow street in Semarang-Ungaran, while another is well developed on the main road of Semarang-Mranggen. Both corridors show high land values for housing/vacant land located near the modern market. The trendline decreases continuously with distance from the modern market, indicating that property land value is strongly affected by distance to the modern market. The chart also shows that the value of shops/offices is influenced by distance to the modern market; in the Corridor of Semarang-Mranggen the graph likewise shows a downward tendency. Sampled shop/office buildings are found around the modern market, and their land values differ only slightly even though the distances are not equal.
Distance to Shopping Mall
Many shopping malls have been built in both corridors. ADA Supermarket Majapahit and Giant Central City are located in the west of Semarang-Mranggen, while ADA Banyumanik and Carrefour Srondol are located in Semarang-Ungaran. Each shopping center has a different carrying capacity and service level. The two graphs differ markedly. The graph for Semarang-Ungaran represents an inverse relationship between land value and distance to the shopping mall: for housing/vacant land, the land value declines moderately away from shopping malls, while the value of shops/offices is tied to proximity to the shopping mall, collapsing when they are located far from one.
However, housing/vacant land in the Corridor of Semarang-Mranggen tends to rise swiftly when located far from the police office, so it can be concluded that the police office has no influence on land value in either corridor. In conclusion, land value follows different patterns for housing/vacant land and for shops/offices: the value of housing/vacant land is tied to surrounding public facilities, while the value of shops/offices is generated by their economic attraction. The land use of Diponegoro University in Semarang-Ungaran is the most predominant factor affecting land price. Furthermore, distance to the CBD or city center generally affects land price, even though shopping centers and modern markets also play an important role.
Distance to Toll Gate
The toll gate of the Corridor of Semarang-Mranggen is located near Gayamsari traditional market, while that of the Corridor of Semarang-Ungaran is located near the ADA supermarket. Housing/vacant land has very low prices, while shops/offices have high prices because they are built in the commercial area. The entrance toll in the Corridor of Semarang-Mranggen is located in front of Gayamsari. The toll provides access to connect and reach many places, whereas terminals, bus stops, stations, and bridges are less influential because of the high use of motorized vehicles; these transport facilities contribute nothing to land value change. The graph illustrates that the highest property land values in both corridors are mostly found close to the toll gate, and both property types show the same trendline. Although the toll gate affects land value, the pattern differs between corridors: in the Corridor of Semarang-Ungaran, for instance, land value drops significantly because of traffic congestion and inaccessible conditions around the toll gate. As a whole, the toll gate has an enormous impact on land value change for all properties.
Distance to Bus Station
The public transport station in the Corridor of Semarang-Ungaran is Banyumanik Bus Station, categorized as type C, while the bus station in the Corridor of Semarang-Mranggen is Terminal Penggaron, categorized as type B. The effect of the bus station on land value shows a negative correlation, meaning that the farther a property is from the terminal, the higher its land value, except for shops/offices in the Corridor of Semarang-Ungaran, where modern marketplaces and shopping centers such as ADA Supermarket and Carrefour are nearby.
Distance to Footbridge
The footbridge in the Corridor of Semarang-Ungaran is located at the Banyumanik Terminal, while that of the Corridor of Semarang-Mranggen is located at the ADA Majapahit Supermarket. Footbridges are usually built near activity centers and therefore affect land value through shopping activities. As the scatter plot shows, there is a strong correlation between land value and the parking lots of shops/offices in both corridors: parking lots that can accommodate more vehicles raise the land value of shops/offices. In conclusion, the high dependency on motorized vehicles and the existence of the toll road have a significant impact on land value. The provision of parking lots around properties acts as an attraction that supports high land values in those areas. In contrast, public transportation infrastructure such as terminals, bus stops, and footbridges has no impact on land value.
The Correlation between Land Value Change and Land Use
From the activity points above, the average land value in the Corridor of Semarang-Mranggen in 1993 was Rp 397,727 per square meter, while in the same year it was Rp 125,000 per square meter in the Corridor of Semarang-Ungaran. The land value in the Corridor of Semarang-Mranggen was higher because it is located near the Semarang city center (CBD). In 2004, both corridors had the same land value, but in 2011 both moved upward significantly with different percentage changes, with the Corridor of Semarang-Ungaran exceeding Semarang-Mranggen. In 2015, the land value in the Corridor of Semarang-Ungaran skyrocketed far above that of the Corridor of Semarang-Mranggen, a difference of around 80%. Residential land use in both corridors decreased gradually between 1993 and 2015, with the most drastic decline in the Corridor of Semarang-Mranggen, where residential land use changed into industrial and service use. Large changes along the main corridor were followed by land use change behind the corridor. The construction encouraged by land use change reduced green space, and land value consequently dropped considerably because of the negative environmental impact. The lack of green space in the Corridor of Semarang-Mranggen lies along the river (watershed), while in the Corridor of Semarang-Ungaran it lies in the Gombel Hills. Land along the connecting road develops into a premium area with high values, and a land use mechanism applies along the corridor: land is transformed into uses with certain economic value, commonly known as commercial use.
Conclusion
From the analysis above, land value can be divided into two categories: commercial property and non-commercial property. Commercial property hosts high economic activities that can pay a high land value, while non-commercial property is used for personal interests that yield no economic value for either the landowner or the property manager. Commercial and non-commercial properties have different responses and characteristics toward urban space configuration and the existing transport infrastructure.
Both commercial and non-commercial properties are influenced by distance to the city center and by human activities outside the corridor (commuting activities). The result is in accordance with Land Rent [19,20] and Bid Rent theories [18,21]: land value increases toward road junctions, human activity centers (such as the university), and the toll gate near the Perumnas residential areas in both corridors. Non-commercial properties are highly sensitive to changes in space configuration compared with commercial properties, because non-commercial properties have no economic attraction of their own and need sufficient access to public facilities or other highly attractive land uses. Thus, non-commercial land value is influenced by surrounding public facilities and accessible roads. On the other hand, the toll gate, bus stops, terminal, and footbridges have no effect on land values. This condition is unusual compared with theories and studies in other countries, where most land values are affected by proximity to public transportation facilities. However, those theories seem difficult to apply in Indonesia because of the high dependence on private cars and motorized vehicles and the very small share of people who use public transportation.
Getting Young People to Farm: How Effective Is Thailand’s Young Smart Farmer Programme?
In 2014, the Thai government initiated the Young Smart Farmer (YSF) programme to counter the decline in the number of young people involved in farming. The YSF programme has three desired outcomes: first, to increase participants' financial independence; second, to enhance the adoption of innovative farming methods; and third, to retain participants in the long run by satisfying them. This study aimed to evaluate whether these outcomes have been achieved. A Propensity Score Matching (PSM) method was applied to analyse the data collected from programme participants (61 responses) and non-participants (115 responses) through a survey in the Prachin Buri province in Thailand. Participation was determined by education, farmland size, farming experience, and challenges to farming. Most participants (~79%) stated that they were satisfied with the programme; however, the programme did not increase financial independence or the adoption of innovative farming methods. As such, the programme might not be very effective in motivating young people to continue, return to, or enter farming. We recommend improving the programme by adjusting training and field trips to meet the needs of participants in different production systems. The programme should also be expanded beyond providing knowledge and information; it could offer additional monetary and non-monetary support to participants, such as loans for technology investments needed for farm expansion and competitive advantages.
Introduction
Thailand is facing an ageing farming population, with fewer young people continuing, returning to, or entering farming [1]. This demographic structural change is common and also found in many other countries [2]. In Thailand, the share of farmers younger than 45 years decreased from ~30% in 2008 to ~19% in 2018, while the share of those 60 years and older increased from ~26% to ~33% [3,4]. This is partly because of the economic development disparity between the agricultural sector and others, resulting in people with a farming career earning less than those with other occupations [5,6]. Farming also faces many risks, such as the volatility of agricultural markets and product prices, increasing production costs, labour shortages, deterioration of soil quality, climate change and natural disasters, and fraud by intermediaries [5,7]. Farm work is also physically and mentally exhausting, with an elevated chance of work-related accidents, while medical and pension benefits are usually poor [8]. These factors make farming an unattractive career path for young people. Young people are also reaching higher levels of education, which changes their lifestyle and employment aspirations and increases their opportunities to find off-farm employment [9,10]. Well-educated young people in particular out-migrate to urban areas, contributing to the global problem of a rural and farming exodus [11,12]. Between 2015 and 2019, average net migration into the capital, Bangkok, was a surplus (32,920 people per year), while average net migration into the other regions was a deficit (−222,500 people per year) [13].
However, the agricultural sector in Thailand is still an important source of livelihood, income, and raw material, and thus, it is essential to the national food supply. In 2019, of the total 37 million employed people, ~32% worked in the agricultural sector [14]. Thailand is also a major food product exporting country [15], meaning that Thailand currently has a food production surplus, and food security is not an issue in most regions. However, if the number of young farmers continues to decline, leaving only older farmers to deal with increased farming workloads and risks, the agricultural sector's competitiveness, sustainability, and national food security are likely to become a challenge in the future. This is because older farmers are generally less motivated than their younger colleagues to develop their farms, less open to new ideas and efficient methods, less daring regarding on-farm investments, and less productive as their health might deteriorate [9,16].
To address this decline in the number of young farmers, the Thai government and the governments of other Asian and African countries initiated capacity-building programmes for young farmers. The scope of such programmes is large, with differences in implementation, governance, and incentive strategies. Some programmes follow a top-down approach [7,17–23], while others are bottom-up [7,18,24–26]. Most programmes focus on collective incentives and development, with fewer focusing on incentives for and development of individuals [22]. Programmes offer either mainly monetary support and incentives, such as subsidies [7,17,20,23,26], or mainly non-monetary support and incentives, such as knowledge and information [18,19,21,22,24,25]. An example of the latter strategy was introduced in Thailand in 2014 under the umbrella of the Young Smart Farmer (YSF) programme, which is the focus of our study [27]; the next section provides some conclusions about these programmes.
Despite the given variety in programmes aiming to retain young people in farming employment, studies of the programmes have unambiguously concluded that they are only successful when they provide young people with a clear vision of the economic benefits of farming. This can be achieved by following these key programme implementation principles: (1) providing support that is flexible and consistent with both the agricultural sector development goals of the countries and the specific needs of young farmers [7,17,20,22,23,25,26]; (2) focusing on developing the entrepreneurial skills and the adoption of innovative information and communication technologies by young farmers [18,24]; and (3) facilitating informal and formal cooperative networks amongst young farmers and other stakeholders [7,17,19–21,23,26].
Although previous studies have explored how these principles can lead to the success of a programme, most used qualitative data and did not provide an empirical evaluation of the programmes' impact on the economic viability of participants. No consensus has been reached among the few existing quantitative studies: some found that the programmes are ineffective [21], while others found that they are effective [19,25].
To fill this gap, our study aimed to evaluate (1) whether young farmers participating in the YSF programme could make enough money from farming to be financially independent, (2) whether the YSF programme has contributed to the adoption of innovative farming methods, and (3) how satisfied participants are with the YSF programme. Addressing these aims aligns with the desired outcomes of the YSF programme and can indicate whether the YSF programme is likely to achieve its ultimate aim of increasing the number of young farmers across Thailand. We conducted a systematic impact evaluation using the Propensity Score Matching (PSM) method and used self-reported satisfaction measures from household data collected from 176 farmers (61 participants and 115 non-participants) in the Prachin Buri province in Thailand. The study results, discussion, and recommendations may contribute to directing the next phase of the programme implementation by policymakers, which will be the extension to more participants and increased effectiveness. This study may also serve as a guideline for future evaluations of similar programmes in Thailand and in other countries facing the same problem of declining numbers of young farmers.
The Young Smart Farmer Programme
According to the Farmer Development Division [27], the YSF programme has been designed to develop the farming business capabilities of young farmers. The development approach of the programme is based on the principle that farmers are the centre of the development (bottom-up approach) and relies on the process of knowledge sharing and network building among farmers as a development means. The programme has been implemented annually in every province of the country by the Department of Agricultural Extension (DOAE) since 2014.
There are three long-term aims of the programme: (1) to increase the number of young farmers by motivating young people to continue, return to, or enter farming to replace older farmers; (2) to help young farmers to become agricultural leaders in their communities; and (3) to create collaborative networks among relevant stakeholders for the development of the agricultural sector of the country.
It is expected that these aims will be achieved through the completion of two short-or medium-term outcomes of the programme: (1) to make young farmers become financially independent with their own farming businesses and (2) to enhance the adoption of innovative farming methods by young farmers.
The three primary activities of the programme are, first, to provide training, workshops, seminars, and field trips to participants, as per their needs; second, to create 77 provincial, nine regional, and one national young smart farmer networks and channels for sharing knowledge among participants; and third, to establish and support the services of 27 young farmer development learning centres.
Young farmers who can participate in the programme must have the following qualifications. First, they must be between 17 and 45 years old and have just started their own farming. Second, they must be determined to improve their farming capability and quality of life. Third, they must volunteer to participate in the programme and join all activities throughout the programme period. Fourth, they must be registered as farmers with the DOAE. Each year, 25 to 30 young farmers are recruited to participate in the programme in each province. Between 2014 and 2018, 12,569 participants nationwide were recruited [28].
Evaluation Framework
We outlined the theory of change for the YSF programme, as implemented in the Prachin Buri province in Thailand between 2014 and 2018, using a results chain to clearly demonstrate the link between inputs, activities, outputs, outcomes, and aims of the programme (Figure 1). This is an important first step in any programme impact evaluation, as it helps to define a clear evaluation question, whose answer clearly benefits future policy and an understanding of why a programme might succeed or fail in its desired outcomes [29]. We focused on the measurable desired outcomes of the programme in our collected data, that is, evaluating whether they have been achieved or not. We measured the outcome of increased financial independence by net farm income, and the outcome of enhanced adoption of innovative methods by examining whether participants adopted innovative methods other than common machinery and chemicals. As per Nordin and Lovén [2], this enabled us to indicate the probability of the programme achieving its final aim (in particular, the increase in the number of young people continuing, returning to, or entering farming), although we did not directly focus on this aim. Two hypotheses could, therefore, be formulated: Hypothesis 1 (H1). Participation in the YSF programme increases participants' net farm income. Hypothesis 2 (H2). Participation in the YSF programme helps to adopt innovative farming methods.
Satisfaction is an important proxy for the success or failure of a development programme's outcome [30], as it can show whether a programme's support has fulfilled participants' expectations and improved them in a desired outcome [30,31]. Participating farmers who are satisfied with an agricultural extension programme and their situation are more likely to continue farming [32]. We, therefore, also focused on evaluating participants' levels of satisfaction with the programme. A third hypothesis could, therefore, be formulated: Hypothesis 3 (H3). Participants in the YSF programme are satisfied with the programme.
Research Area
The study was conducted in the Prachin Buri province in central Thailand (Figure 2) because of its importance as a large agricultural production area in the central region undergoing structural change (decreasing income from agriculture and a decline in the share of people employed in the agricultural sector). The province spans 2.98 million rai, which equals 4762.36 km² (1 rai = 0.0016 km²), and is located ~136 km east of Bangkok. Most of the Prachin Buri province is used for agriculture (52.6%), especially rice (21.2%), perennial and fruit trees (16.2%), and field crops (10.8%) [33]. The agricultural sector contributes 2.2% to the Gross Provincial Product (GPP), while the service sector contributes 19.1% and the strongest sector, industry, comprising 965 large factories, contributes 78.7% [34]. The proportion of the population below 45 years accounts for 62.1% of the province's total population, while the population 60 years and older accounts for 16% [35]. Most people work in the service (47.3%) or industrial sectors (34.3%), with 18.4% working in the agricultural sector [33].
Sampling, Data Collection and Questionnaire
Data collection comprised two phases. First, young farmers (aged 17-45 years) participating in the Young Smart Farmer programme were interviewed in Thai between September and mid-October, 2018. We received a list of names, addresses, and contact numbers of all current participants from the Prachin Buri Provincial Agricultural Extension Office [36] and the Farmer Development Division [37], in the Department of Agricultural Extension. A total of 123 participants were listed, all of whom were contacted. About 50% (61 YSF programme participants) agreed to be interviewed.
Second, as a control group, non-participants within the same age range were interviewed between mid-October and late-December, 2018. We used a purposive sampling technique to select non-participants who lived in the same or neighbouring villages as the interviewed participants, with the intention to interview three times the number of nonparticipants than participants (183 people), as suggested by Olmos and Govindasamy [38]. Upon request, the village headmen provided assistance in contacting non-participants for the interviews. Although 183 people were contacted, about 81% (149 non-participants) agreed to be interviewed.
We used semi-structured questionnaires, which were slightly modified after a pilot test with seven participants in early-September, 2018. The questionnaire for the programme participants and non-participants consisted of three similar parts: (1) demographic, family, and social characteristics, (2) farming and other occupational experiences, and (3) receipt of other support from the government. The participant questionnaire included an additional section regarding the programme participation.
Data Analysis
To evaluate the impact of the YSF programme participation on participants' net farm income and adoption of innovative farming methods other than common machinery and chemicals, we applied the Propensity Score Matching (PSM) method. We followed the five suggested steps of the PSM method [38,39], as follows: (1) estimation of the binary logistics regression model and propensity scores, (2) examination of common support between the distribution of propensity score estimated for participants and non-participants, (3) matching non-participants with participants based on their similar estimated propensity scores, (4) estimation of the programme's impact, and (5) examination of matching quality and influence of unobserved factors on the estimated programme impact (Table S1 in Supplementary Materials).
The PSM method was suitable for this study because it required only comparative data on the outcome variables (net farm income and adoption of innovative farming methods) for participants and non-participants after the programme, and because it could restrict the influence of other observed factors on programme participation and the outcome variables.
To evaluate the participating young farmers' satisfaction with the programme, we asked questions on a 4-point scale about participants' perceived levels of satisfaction with each aspect of the programme (1 = very dissatisfied, 2 = dissatisfied, 3 = satisfied, and 4 = very satisfied). Participants were asked how satisfied they were with five different aspects of the programme: (1) the overall programme, (2) the programme publicity, (3) the opportunity for attending training and field trips, (4) the opportunity for networking among participants, and (5) the post-programme follow-up. We then calculated the percentage of the participants who were satisfied with each of the aspects and the average satisfaction scores for each aspect. We also applied non-parametric Kruskal-Wallis H and Mann-Whitney U tests to examine how the scores differed among participants with different characteristics.
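The Mann-Whitney U statistic used in these group comparisons can be computed from rank sums. Below is a minimal pure-Python sketch that assigns average ranks to the heavily tied 4-point scores; the satisfaction data are hypothetical, not the study's.

```python
# Pure-Python Mann-Whitney U statistic with average ranks for ties,
# as used to compare satisfaction scores between two participant groups.
# The 4-point satisfaction scores below are hypothetical.

def average_ranks(values):
    """Rank all values (1-based), giving tied values the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """U = min(U_a, U_b), with U_a = R_a - n_a(n_a + 1)/2."""
    ranks = average_ranks(group_a + group_b)
    n_a = len(group_a)
    r_a = sum(ranks[:n_a])
    u_a = r_a - n_a * (n_a + 1) / 2
    u_b = n_a * len(group_b) - u_a
    return min(u_a, u_b)

low_income = [3, 4, 3, 3, 4]   # hypothetical satisfaction scores
high_income = [2, 3, 2, 2]
print(mann_whitney_u(low_income, high_income))  # → 1.5
```

In practice the U statistic is compared against its null distribution (or a normal approximation with a tie correction) to obtain the p-value.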
Sample Description
In total, 210 responses were obtained, of which 34 were discarded because their key questions were incomplete. Of the remaining 176, 35% were from participants and 65% from non-participants. Most participants (74%) started their involvement in the YSF programme between 2016 and early 2018, while the rest (26%) joined between 2014 and 2015. Gender was nearly balanced, with men accounting for 54% of the total sample (Table 1). Participants were significantly younger than non-participants (38.7 vs. 41.3 years), better educated (higher than year 9, the compulsory education level in Thailand), and had less farming experience (7.3 vs. 13.9 years). Participants were also more likely than non-participants to adopt innovative farming methods other than common machinery and chemicals (92% vs. 78%), and to own the land on which they farmed (69% vs. 48%). Innovative farming methods here refer to adopting (1) agricultural machinery, (2) agricultural chemicals, (3) information and communication technologies (ICT), (4) biological methods for improving soil and water quality and dealing with plant diseases and pests, (5) environmentally-controlled houses for growing crops and raising livestock, (6) management of farm irrigation systems, (7) management of farmland for different usage purposes and collection of farm statistical data for production planning, (8) solar cells for generating electricity for farm use, (9) hydroponics, and (10) other more efficient cultivation and animal husbandry techniques, such as using mung bean peels to increase nitrogen in mushroom cultivation.
Notes. Dependent child = child under 20 years old; irregular weather = flood and drought; *, **, *** significant at 10%, 5%, and 1% level.
Inputs, Activities, and Outputs of the Young Smart Farmer Programme
Between 2014 and 2018, the research area received an annual budget of 3843 USD (128,082 baht) for implementing the YSF programme. This budget was calculated by taking the average of 50% of the Smart Farmer programme's budget between 2014 and 2018 and dividing it by Thailand's 77 provinces [40]. In terms of personnel input, six officials of the Prachin Buri Provincial Agricultural Extension Office were made responsible for driving and overseeing the YSF programme. In addition to these officials, experts were invited from outside agencies, such as the Prachin Buri Agricultural Research and Development Center, the Office of Prachin Buri Provincial Commercial Affairs, and the Bank for Agriculture and Agricultural Cooperatives, to deliver lectures as part of the programme.
Participants joined the programme mainly because they expected to gain knowledge and information about crop production, livestock, and edible insect culture (~34%); farm product management and marketing (~17%); and networking (~30%; Figure 3).
Figure 3.
Reasons for participation in the Young Smart Farmer programme. Notes. n is 102, as some participants had more than one expectation; for more details, see Table S2 in Supplementary Materials.
In terms of activities and outputs, knowledge and information on post-harvest management; crop, aquaculture, and livestock production; and farming business administration were disseminated during the training, field trips, and meetings. Most of the knowledge and information participants gained was related to general and online marketing (~15%); different crop production (~12%); product processing and value addition (~11%); and product branding, packing, and story creation (~9%; Figure 4). However, many participants (~44%) stated that they were still unable to fully utilise the knowledge and information gained, while the remaining ~56% could utilise it quite fully. Additionally, networks among participants at both the provincial and district levels had been formed. Some participants also networked with participants in other provinces. About 69% of participants were still active members of the network. Amongst the members, ~86% stated that they received useful knowledge and information through the network, while the remaining 14% rarely had contact with the network. A mobile application (LINE) was used as a channel for communication among participants and relevant officials. The Young Farmer Development Learning Centre located in the study area was about to open for service in early October 2018.
Satisfaction with the Young Smart Farmer Programme
Overall, most participants were satisfied with the YSF programme (79%; ~64% satisfied and ~15% very satisfied; Figure 5). Participants were particularly satisfied with the main activities of the programme: the training and field trip (~80%) and networking (~76%) opportunities provided. In addition, the participants' mean satisfaction score with the programme was 2.9 (SD = 0.64), which was closer to 3 (satisfied) than to 2 (dissatisfied) (Table S4 in Supplementary Materials). Hypothesis 3 was therefore supported (see also Table S3 in Supplementary Materials). Net farm income, farmland tenure, and farm activity had significant effects on mean satisfaction, while innovative farming methods, farm size, off-farm income, and marketing problems had none (Table S5 in Supplementary Materials). Participants with medium (5001 to 10,000 baht/rai; 1 baht = 0.03 USD) and low farm income (<5001 baht/rai) had higher levels of mean satisfaction with the overall programme, and specifically with the opportunities provided, than those with high farm income (>10,000 baht/rai). Those who did not own or who rented most of their land were more satisfied with the overall programme, while those who only cultivated rice were more satisfied with the overall programme and, specifically, with the programme publicity and the opportunities provided.
Binary Logistic Regression Model and Propensity Score Estimation
The binary model had reasonably good predictive power for farmer participation in the YSF programme, with a McFadden's R-squared of 0.37 (Table 2). The model showed that farmers were more likely to participate in the programme if they were better educated and had less farming experience, confirming the result of the bivariate analysis (Section 4.1). Those with more farmland, those not facing challenges relating to marketing and irregular weather, and those whose farms were far from district agricultural extension offices were also more likely to participate.
Notes. OR = odds ratio; *, **, *** significant at 10%, 5% and 1% level.
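McFadden's R-squared compares the fitted model's log-likelihood with that of an intercept-only (null) model. The sketch below states the formula; the log-likelihood values are hypothetical, chosen only to reproduce the reported value of 0.37.

```python
# McFadden's pseudo R-squared for a binary logistic model:
#   R2 = 1 - ln L(model) / ln L(null),
# where the null model contains only an intercept. Values near 0.2-0.4
# are conventionally read as a good fit for discrete-choice models.
# The log-likelihoods below are hypothetical, picked to give 0.37.

def mcfadden_r2(ll_model, ll_null):
    return 1 - ll_model / ll_null

print(mcfadden_r2(ll_model=-63.0, ll_null=-100.0))  # → 0.37
```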
The binary model's propensity score estimates revealed adequate common support for further matching non-participants with participants (Figure S1 in Supplementary Materials). For every propensity score estimated for participants, there was the same or a close propensity score estimated for non-participants.
Matching and Estimating the Young Smart Farmer Programme Effect on Participants' Net Farm Income and Adoption of Innovative Farming Methods
Using eight different matching algorithms, non-participants and participants with similar propensity scores were matched, and the matched sample was extracted from the total sample for each match for further estimation of the effect of the programme (Table 3).
Notes. NNM = one-to-one nearest neighbour matching; NNMR 2:1 = two-to-one nearest neighbour matching with replacement; NNMR 0.20 = one-to-one nearest neighbour matching with replacement within a 0.20 caliper; NNMR 0.25 = one-to-one nearest neighbour matching with replacement within a 0.25 caliper; GM = one-to-one genetic matching; OM = one-to-one optimal matching; FM = full matching; SUB = subclassification; 1 Numbers in parentheses = standard error; 2 Number in brackets = odds ratio. None of the estimated average net farm income differences or probabilities of adopting innovative farming methods was significant (p-value > 0.1).
The simple linear regression model after each match showed no effect of the programme on participants' net farm income. Although the estimated average net farm income difference between non-participants and participants varied across matches, the estimates all had the same direction (Table 3). Participants tended to have a lower net farm income than non-participants, by between 4589 and 8818 baht/rai. However, the estimated net farm income difference was statistically insignificant for all matches (p-value > 0.1), meaning that participants had a similar net farm income to non-participants. Hypothesis 1 was, therefore, rejected.
The binary logistic regression model after each match also revealed no effect of the programme on participants' adoption of innovative farming methods other than common machinery and chemicals. Although the estimated effects on the probability of adopting innovative farming methods had the same direction for all matches, their magnitudes differed (Table 3). Participants tended to have a 45% to 146% higher probability of adopting innovative farming methods than non-participants, although, again, the estimated effect was statistically insignificant for all matches (p-value > 0.1), meaning that participants and non-participants adopted innovative farming methods at similar rates. Therefore, hypothesis 2 was also rejected.
To validate the results, we examined the covariate balance and found that the matches were of good quality for all eight matching algorithms used (Table S6 in Supplementary Materials). When conducting Rosenbaum's sensitivity analysis, we also found that the estimated programme effects were robust and insensitive to unobserved confounders (Table S7 in Supplementary Materials).
The Young Smart Farmer Programme Participation and Satisfaction
Farmers with less farming experience had a higher probability of participation in the YSF programme than those with more experience. This was not surprising because the YSF programme targets farmers who have just started their own farming business. In addition, farmers with limited experience might desire to improve their knowledge and skills and seek out the programme for this reason. Although farmer age had no effect on participation, those who were better educated were more likely to participate, perhaps because better-educated farmers had changed careers, probably either as a lifestyle choice or to return to support ageing parents in the rural area. These farmers might lack a farming background or, in the latter case, might have left their parents' farm for another career path and, therefore, lack farming experience.
It was surprising to find that farmers with more farmland were more likely to join the programme than those with less, because young farmers generally start with smaller plots. However, some of these farmers may have farmed before or had taken over their parents' large plots. This may also have been because they prioritize networking, as many participants with large farmlands indicated in the open-ended questions about their expectations from the programme. They networked to share their knowledge of new farming practices, to form large-scale farming groups to gain bargaining power when purchasing inputs and distributing their products, and to present their needs and opinions directly to government agencies. Farmers who had not faced any farming challenges, such as marketing and irregular weather, were also more likely to participate in the programme than those who had, which may be because some of these participants were relatively new to farming and may not yet have either experienced or been aware of these challenges.
One of the programme's three desired outcomes was achieved: that of being satisfactory to the participants. We found that participants were satisfied with the overall programme and with its two main activities: the provision of training and field trips and the formation of a young smart farmer network. This may have been because the programme provided a wide range of knowledge and information, from production to distribution, as well as related technologies and farm business management, that participants may then have been able to apply later in their farming career, as they discussed (~32% of the comments, Table S8 in Supplementary Materials). They had also received helpful information and advice from their peers and other members of the established young smart farmer network, as shown in Section 4.2 and as described by Pratiwi and Suzuki [41]: the advisory role of farmers' social networks contributes to farm development.
We also found that satisfaction differed by net farm income, farmland tenure, and farm activity. In contrast to Phiboon, Cochetel, and Faysse [7], we found that farmers with a high income were the least satisfied. This may have been due to their independence from the programme's support, indicating that they had the least to gain from participation. Those who did not own land or rented most of their land were more satisfied with the overall programme, because most of them (~74%) belonged to the low-income group. Farmers with different farming types have different information needs, which influence their satisfaction with public extension services [42]. This was also shown here, with rice farmers being the most satisfied, which may again be related to the low income of the majority of rice farmers (~94%). Those who rented most of their land and produced only rice, therefore, may have gained the most from programme participation.
Impact of the Young Smart Farmer Programme
It was surprising to find that the YSF programme implemented in the study area fell short of delivering two of its three desired outcomes, in that it neither increased participants' financial independence nor motivated participants to adopt innovative farming methods other than common machinery and chemicals. We found that participants and non-participants were alike in terms of net farm income and adoption of innovative farming methods. As a result, the programme may be less likely to reach its overall aim of incentivising young people to continue, return to, or enter farming and thereby provide a replacement for older farmers. This was also found by Filloux [43], who concluded that the programme weakly influenced agricultural students in Chachoengsao, Sa Kaew, Roi Et, and Sakhon Nakhon provinces in their agricultural career choices and interests in becoming full-time farmers.
The explanations for the missing relationship between programme participation and an increase in net farm income, which differed from existing studies on similar programmes (e.g., [32,44,45]), may be related to problems with the programme's inputs, activities, and outputs, as discussed by participants (Table S9 in Supplementary Materials). First, the programme's activities still lacked continuity and focused excessively on theoretical knowledge and information transfer through training or meetings, with few practical sessions and field trips and no additional relevant services and assistance (~17% of the problems discussed). This may not be enough support for participants whose farms are struggling to survive challenges such as water shortage, lack of infrastructure, smaller plots, narrow marketing channels, and disasters, as also described elsewhere [7,17,21,46,47].
Second, the same training and field trips were provided to all participants, regardless of their levels of farming knowledge and experience, farming activities, and products (~16% of the problems discussed). Third, the contents of the programme's training and field trips were still largely determined by the agency responsible for the programme, with little involvement by participants (~12% of the problems discussed). Phiboon, Cochetel, and Faysse [7] and Tripp, Wijeratne, and Piyadasa [21] also reflected on this problem: under the bureaucracy, the agencies still primarily focused on implementing such programmes to achieve their own goals rather than the participants' goals. Fourth, as a result of the previous two problems, the contents of the training and field trips still did not fit participants' current farming purposes (~14% of the problems discussed), and therefore participants could only partially utilise the knowledge and information gained from the programme, as shown in Section 4.2. Some participants who had just started farming and whose first priority was to develop into productive producers noticed that some of the knowledge transferred instead aimed at developing them into modern entrepreneurs, and vice versa. Last, some participants were already satisfied with their farming practices and their simple and peaceful lives and, therefore, had little incentive to attend all the training and field trips provided (~11% of the problems discussed).
Furthermore, although previous studies found a positive relationship between agricultural extension programme participation and the adoption of novel farming technologies (e.g., [48][49][50]), our results did not support a similar conclusion. This may have been due to several problems, the first two already mentioned. First, the technological knowledge and information participants gained from the programme were inconsistent with and inapplicable to the farming activities of some participants. Second, the programme offered neither direct technological support (e.g., related materials, equipment, tools, machinery, and designs) nor financial support for individual participants, and participants themselves may also have had no other financial resources to pay for innovative methods. Technology-based methods, such as greenhouses and automatic irrigation systems, are relatively expensive (e.g., [32,47]) for farmers to adopt without further support. Third, some participants may have already adopted some methods (e.g., biofertilizers and biopesticides) before joining the programme and did not intend to adopt more. Last, most participants had only recently joined the programme (one to three years prior to the survey), and this period might not have been long enough to detect the impact of the programme on their adoption of innovative farming methods and increase in net farm income, as a policy's impact on individuals' economic well-being can sometimes take multiple years to be adequately measured [51].
Although our study implied that the programme's overarching aim might not have been achieved, during the discussions most participants (~62%) commented that their farming, which was mostly organic and low-chemical, could help inspire young people to take up farming because of the non-monetary benefits they gained, such as being healthy and living close to their family. It should also be noted that the programme's ineffectiveness in improving farmers' financial independence and innovative farming method adoption found in our study might not hold in other provinces of Thailand where the programme has also been implemented. In Khon Kaen Province, the programme was found to help farmers upskill their entrepreneurial capacity and develop market-oriented production, novel technology use, self-reliance, and bargaining power, and it eventually succeeded in stabilising their incomes [52].
Policy Recommendations
Participants' expectations of gaining knowledge and information, especially on a theoretical basis, from the YSF programme appeared to be met, and participants seemed to be satisfied with the knowledge and information transferred. However, knowledge and information transfer itself, as it is currently conducted, might not motivate participants to stay in the programme and continue their farming in the long run. If participants do not concretely become economically independent in their farming, they are likely to change career paths, sooner or later, particularly those with better education. Based on our findings and participants' suggestions (Table S10 in Supplementary Materials), we, therefore, recommend that the programme improve its implementation to enable participants to more fully utilise the knowledge and information transferred and to meet its two unmet outcomes, i.e., improving participants' financial independence and innovative farming method adoption, as follows.
Programmes that fail to provide support that meets the participants' needs will undermine the participants' interest in joining other similar programmes [7,21,22]. Participants in this study also suggested that the programme's activities should be based on their needs and include more hands-on practice sessions (~33% and ~9% of the suggestions, respectively). First, the programme should therefore ask participants what subjects they really want to learn and use that information each time it designs and holds training and field trips. Such training and field trips should also focus more on workshops and on visits to participants' and other farmers' farms in both the study area and nearby provinces. Second, modifying extension services' methods and content to suit each client group will help to increase service usage, performance, and satisfaction [7,21,25,35,40], and participants also suggested that the development provided by the programme should be differentiated and appropriate for each participant group with different types of farming activities (~8% of the suggestions). The programme could therefore apply a career path design to provide better-targeted training and field trip methods and content based on the production system (rice cultivation, fruit tree plantation, and organic vegetable production). Additionally, within each career path, participant development should be divided by farming knowledge and skill level, such that participants without basic knowledge and skills are separated from those with basic, intermediate, and advanced knowledge and skills. For example, newcomers could be trained to focus on how to grow crops to generate a high yield, and beginners could be trained on how to improve their product quality. Training for those with intermediate knowledge and skills could cover product processing and marketing channels, and training for the advanced could cover agribusiness and entrepreneurship.
Rural out-migration of young people and ageing farmers increases the risk of not meeting the food demand of a growing population. Motivating young people to stay as farmers is only one aspect. The other is to ensure the farms are more efficient and benefit from economies of scale, using innovative technologies. Instead of increasing or retaining the number of small-scale farms, one solution might be to convert small-scale farms into larger and more competitive farms [2]. Participants also suggested that support other than or related to training and field trips is needed and should be provided to improve their farming performance (~26% of the suggestions). Third, we therefore recommend that the programme provide loans with soft and flexible terms (e.g., low interest rate, long instalment period, and long interest-free period) to participants to make this transition into more commercial farms, consistent also with Faysse, Phiboon, and Fillous [17] and Salvago [20]. With better financial resources, farmers could buy or lease additional land from farmers who retire without a successor, and they could invest in better technologies and machinery.
Fourth, also in line with Faysse, Phiboon, and Fillous [17] and Salvago [20], the programme could provide subvention (periodic payment or lump sum) and insurance to participants to mitigate their inadequate or reduced incomes and to enable them to continue farming during their initial start-up period, when they are not competitive or sufficiently profitable, or when they face difficulties in production and product distribution, such as flooding, drought, plant disease and pest outbreak, or the current pandemic (COVID-19).
Fifth, the programme could also provide other non-monetary support to participants to directly help them in solving their structural farming problems, such as water shortages or a narrow and limited marketing channel. For instance, the programme could help participants with pond construction to reserve water for dry-season use, develop the prominent points of their products to distinguish them from those of other farmers' groups, publicise their products to the general public, and arrange for them to become a permanent supplier to a store or establish a shop for them to sell their products.
Study Limitations
Although our study's results were in line with a prior study by Filloux [43] in terms of potential ineffectiveness of the programme, our study still had some limitations. First of all, we did not directly evaluate the impact of the programme on the rates of young people entering and exiting farming as its primary aim, but we inferred this from the evaluation of the programme's impact on its desired outcomes, as was done by Nordin and Lovén [2]. Future research could gather such data over the timeframe of the programme.
Second, the sample size of our study was relatively small and limited to one area. While the methods can be applied elsewhere, the results may not generalize across the country. The implementation of the programme throughout the country is based on the same principles, but the outcomes of the programme are likely to differ from one area to another, as seen in a prior study [52].
Conclusions
The decline in the number of young farmers, which Thailand and many other countries are currently experiencing, may lead to challenges in maintaining the countries' agricultural sector competitiveness, sustainability, and food security. To cope with these potential challenges, young farmers' capacity-building programmes have been implemented in many countries, including Thailand. In Thailand, the latest such programme is the Young Smart Farmer (YSF) programme. It started in 2014, with the desired outcomes of improving young farmers' financial independence, enhancing their adoption of innovative farming methods, and obtaining their satisfaction, thereby maintaining their long-term involvement, in the hope of motivating young people to continue, return to, or enter farming. Although the YSF programme has been in place for over seven years, no rigorous impact evaluation has yet been made.

This study contributes to the literature on the role of government in retaining and increasing the number of young people in the agricultural sector. We did so by applying quantitative analytical methods to evaluate the success of the YSF programme, which specifically aimed to improve the economic status of young farmers. We applied the Propensity Score Matching (PSM) method to evaluate the impact of the YSF programme on participants' net farm income and adoption of innovative farming methods other than common machinery and chemicals, and we used farmers' self-rated satisfaction to evaluate satisfaction with each aspect of the programme. We found that the majority of participants (79%) were satisfied with the overall programme, and particularly with the training, field trips, and networking opportunities provided. Participants with low and medium farm income, those renting most of their land, and those solely producing rice were more satisfied with the overall programme.
We could not detect a significant difference in net farm income or in the probability of adopting innovative farming methods between participants and non-participants, and therefore we could not confirm that the programme was meeting two of its three desired outcomes. The programme's overarching aim of persuading young people to continue, return to, or enter farming may therefore be less likely to be achieved. Reasons for this failure might be related to the training and field trips offered through the programme, which are not targeted to specific farmer groups and may fail to meet the knowledge and information needs of farmers with different experience and skill levels. Additionally, farmers might prefer support beyond the provision of training and field trips, but this is not yet available through the programme. The results can help to improve the programme and to make it fit for the purpose of eventually halting the exodus of young farmers in Thailand. The evaluation technique and the findings of this study are also relevant to similar programmes in other countries, such as the agricultural entrepreneurship education programme in the Philippines and the farmer field school programme in Sri Lanka, which have not yet been adequately evaluated and which face similar implementation problems that reduce their effectiveness [21,24].
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/su132111611/s1: Table S1: Five steps of PSM analysis. Table S2: Reasons for participation in the Young Smart Farmer programme. Table S3: Receipt of knowledge and information from the Young Smart Farmer programme. Table S4: Satisfaction with the Young Smart Farmer programme. Table S5: Satisfaction with the Young Smart Farmer programme by participants' characteristics. Table S6: Result of calculating the absolute standardized mean difference and chi-square statistics for examining the covariate balance before and after matching. Table S7: Result of analysing Rosenbaum's sensitivity. Table S8: Participants' comments on the merits and benefits of the Young Smart Farmer programme. Table S9: Participants' comments on the problems of the Young Smart Farmer programme. Table S10: Participants' recommendations on the Young Smart Farmer programme. Figure S1: Distribution of propensity scores predicted for participants and non-participants.
Author Contributions: Conceptualization, P.J. and K.K.Z.; methodology, P.J.; data curation, P.J.; software, P.J.; formal analysis, P.J.; writing-original draft preparation, P.J.; writing-review and editing, P.J. and K.K.Z.; visualization, P.J.; supervision, K.K.Z. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Informed consent was obtained from all respondents involved in the study. The data are not identifiable.
Data Availability Statement:
The data that support the findings of this research are available in the supplementary materials of this article.
"Economics"
] |
Stereotactic body radiotherapy for low-risk prostate cancer: five-year outcomes
Purpose Hypofractionated, stereotactic body radiotherapy (SBRT) is an emerging treatment approach for prostate cancer. We present the outcomes for low-risk prostate cancer patients with a median follow-up of 5 years after SBRT. Method and Materials Between Dec. 2003 and Dec. 2005, a pooled cohort of 41 consecutive patients from Stanford, CA and Naples, FL received SBRT with CyberKnife for clinically localized, low-risk prostate cancer. Prescribed dose was 35-36.25 Gy in five fractions. No patient received hormone therapy. Kaplan-Meier biochemical progression-free survival (defined using the Phoenix method) and RTOG toxicity outcomes were assessed. Results At a median follow-up of 5 years, the biochemical progression-free survival was 93% (95% CI = 84.7% to 100%). Acute side effects resolved within 1-3 months of treatment completion. There were no grade 4 toxicities. No late grade 3 rectal toxicity occurred, and only one late grade 3 genitourinary toxicity occurred following repeated urologic instrumentation. Conclusion Five-year results of SBRT for localized prostate cancer demonstrate the efficacy and safety of shorter courses of high dose per fraction radiation delivered with SBRT technique. Ongoing clinical trials are underway to further explore this treatment approach.
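The Phoenix failure definition used above scores biochemical failure at the first follow-up PSA that rises at least 2 ng/mL above the post-treatment nadir. It can be expressed as a simple scan over a PSA series; the values below are hypothetical illustrations, not patient data.

```python
# Phoenix ("nadir + 2") definition of biochemical failure: failure is
# scored at the first PSA value that is at least 2 ng/mL above the
# lowest PSA recorded so far. PSA series (ng/mL) are hypothetical.

def phoenix_failure(psa_series, threshold=2.0):
    """Return the index of the first biochemical failure, or None."""
    nadir = psa_series[0]
    for i, psa in enumerate(psa_series):
        nadir = min(nadir, psa)
        if psa >= nadir + threshold:
            return i
    return None

print(phoenix_failure([4.1, 1.2, 0.6, 0.5, 0.7]))  # → None (no failure)
print(phoenix_failure([4.1, 1.0, 0.4, 1.1, 2.6]))  # → 4 (2.6 >= 0.4 + 2)
```

Patients without failure contribute censored observations to the Kaplan-Meier biochemical progression-free survival estimate.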
Background
Prostate cancer is thought to have unique radiobiology, characterized by a low α/β ratio relative to surrounding normal tissues [1,2]. A growing body of evidence from clinical studies using hypofractionated radiation provides support that the α/β ratio for prostate cancer is lower than that for the bladder and rectum, and that consequently a therapeutic gain could be achieved using fewer, high-dose fractions (see reviews by Dasu [3] and Macias and Biete [4]). High-dose-rate (HDR) brachytherapy can deliver radiation to a tightly constrained treatment volume using large doses per fraction. Recent multi-institutional findings reported by Martinez et al. for early stage prostate cancer show a 5-year biochemical disease-free survival of about 90% for HDR brachytherapy, which is comparable to their own low-dose-rate (LDR) brachytherapy outcomes, with lower late toxicity levels [5][6][7].
Stereotactic body radiotherapy (SBRT) has recently emerged as an alternative technique to deliver hypofractionated radiotherapy to the prostate, comparable in many respects to HDR brachytherapy, but with a noninvasive approach [8][9][10][11][12][13][14]. The concept is not entirely novel. In the 1980s, prostate cancer patients were treated in the United Kingdom with 6 fractions of 6 Gy each, delivered over three weeks. Good disease control with no major early or late morbidity was obtained [15]. Innovations in image-guidance technology, the ability to automatically correct for the movement of the prostate during treatment, and delivery of highly-conformal beam profiles have greatly enhanced the capability of delivering high dose fractions to a well-defined target, with sharp dose fall-off towards the bladder and rectum [16][17][18].
King et al. at Stanford University began treating low-risk prostate cancer patients with the CyberKnife system (Accuray Inc., Sunnyvale, CA) in late 2003, using five fractions of 7.25 Gy (total 36.25 Gy). At a median follow-up of 33 months for the first 41 patients, the urethral/rectal toxicity profile was comparable to that from dose-escalated external beam radiotherapy (EBRT) [12]. Friedland and Freeman et al. in Naples, Florida, began their SBRT program in early 2005, treating low- and intermediate-risk patients with 5 fractions of 7.0 Gy (total 35 Gy). Outcomes from their first 112 patients showed a biochemical control rate of 97% at 24 months median follow-up and toxicity similar to or better than published outcomes of EBRT [9].
Given the intense level of interest in academic and community practices, the ramifications for the management of prostate cancer, and the potential positive economic impact on prostate cancer treatments, we felt it would be both timely and of significant value to examine outcomes from patients with the longest follow-up available to date with the aim of determining disease control and toxicity for SBRT at a median of 5 years. In this report, we present for the first time the results from our combined experience.
Patient Characteristics
The Stanford prostate SBRT program began in December 2003. Eligible patients had newly diagnosed, biopsy-proven prostate cancer presenting with low-risk features. The criteria for low-risk classification included a pre-treatment PSA of 10 ng/mL or less, Gleason score of 3+3 or lower and clinical stage T1c or T2a/b. Patients with a Gleason score of 3+4 were included if present in 2 or fewer cores and involving less than 5 mm aggregate tumor length. Patients with prior treatment (hormone therapy or transurethral resection of prostate) were excluded. The Naples prospective program began in February 2005. Eligibility criteria were similar to those of the Stanford program, except that it included patients with Gleason scores of 3+4 in addition to those with Gleason scores of 3+3. For the current study, we included only the Naples patients with Gleason scores of 3+3 or lower, to increase the homogeneity of this combined study population. Staging work-up included a bone scan and CT scan of the abdomen and pelvis. Both centers had IRB approval for enrolling patients in their clinical trial.
The current patient cohort consists of consecutively treated patients with the longest follow-up participating in the Stanford [12] and Naples studies [9]. Two patients were lost to follow-up within 12 months of treatment and were not included. Two others died of non-prostate cancer related disease at 12 and 51 months after treatment. This study is therefore composed of 41 patients with a median follow-up of 5 years (4.2-6.2 years). The median patient age was 66 years (range 48 to 83 years). The median initial PSA was 5.6 ng/mL (range 0.7 to 10 ng/mL).
Treatment Planning and Delivery
Three to four gold fiducial markers were placed in the prostate under transrectal ultrasound guidance for image-guided positioning and motion tracking. Treatment planning CT scans were performed at a slice thickness of 1.25 mm, either on the same day (Stanford) or one week after fiducial placement (Naples). MRI scans were obtained for all Naples patients, with preferred sequences of T2* GRE or T1 post Gd, using a slice thickness of 1-2 mm. Planning CTs were used either alone (Stanford) or fused with MRI images (Naples), to differentiate the prostate and the proximal 1 cm of the seminal vesicles (the gross tumor volume, or GTV) from the rectum, urogenital diaphragm, bladder, distal seminal vesicles, and other surrounding structures. The clinical target volume consisted of a 3 mm expansion anteriorly and laterally and a 1 mm posterior expansion. The planning target volume (PTV) consisted of an additional 2 mm expansion anteriorly and laterally and 2 mm posteriorly, to account for errors in target definition and delivery.
All patients were treated with the CyberKnife system, composed of a 6 MV linear accelerator mounted on a robotic arm, with two orthogonal kilovoltage X-ray imagers that provide real-time stereoscopic image guidance and automatic correction for movements of the prostate throughout treatment. Typically, 150-200 non-coplanar beams were delivered in each treatment session. Patient positioning and target tracking were accomplished by registering the location of the fiducials in the real time images to their location in the planning CT. The robot automatically corrected the accelerator's aim to account for both translational and rotational movement of the patient or prostate during the treatment.
Treatment for the Stanford patients consisted of 5 fractions of 7.25 Gy for a total dose of 36.25 Gy. The prescription dose covered at least 95% of the planning target volume, normalized to the 88-92% isodose line. The rectal dose-volume goals were <50% of the rectum receiving 50% of the prescribed dose, <20% receiving 80% of the dose, <10% receiving 90% of the dose, and <5% receiving 100% of the dose. The Naples patients received 5 fractions of 7 Gy each, for a total dose of 35 Gy. The planning objective was also to deliver the prescribed dose to at least 95% of the PTV. For the rectum, the V36 Gy constraint was <1 cm³; for the bladder, the V37 Gy constraint was <10 cm³. The Stanford rectal dose-volume guidelines were followed whenever possible. Treatments were given over 5 consecutive days for all but 3 patients in the combined cohort. Serum PSA was obtained at each follow-up. Toxicity and quality of life measures for Stanford patients were assessed using the EPIC scale. Naples patients were assessed with the American Urological Association (AUA) and Sexual Health Inventory for Men (SHIM) surveys. Toxicities were subsequently scored based on Radiation Therapy Oncology Group (RTOG) urinary and rectal toxicity criteria [19], and toxicities requiring intervention were noted. (The authors acknowledge that the RTOG scoring system may be insensitive to subtle changes in urinary or bowel function.) Biochemical failure was assessed using the nadir+2 (Phoenix) definition [20].
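The Stanford rectal dose-volume goals amount to four threshold checks on a dose-volume histogram. The sketch below is for illustration only, not clinical software: the voxel doses are invented, and real treatment-planning systems derive DVHs from 3D dose grids.

```python
# Illustrative check of the Stanford rectal dose-volume goals against a
# sampled DVH. All dose values below are invented for demonstration.
PRESCRIBED = 36.25  # Gy, Stanford prescription dose

# Each goal: (fraction of prescribed dose, maximum rectal volume fraction
# allowed to receive at least that dose); all limits are strict ("<").
RECTAL_GOALS = [(0.50, 0.50), (0.80, 0.20), (0.90, 0.10), (1.00, 0.05)]

def rectum_meets_goals(voxel_doses_gy):
    """voxel_doses_gy: doses received by equal-volume rectal voxels (Gy)."""
    n = len(voxel_doses_gy)
    for dose_frac, vol_limit in RECTAL_GOALS:
        threshold = dose_frac * PRESCRIBED
        vol_frac = sum(d >= threshold for d in voxel_doses_gy) / n
        if vol_frac >= vol_limit:  # strict "<" goal, so equality fails
            return False
    return True
```

A plan where only 2% of rectal voxels reach the prescription dose passes all four goals; a plan where 10% of the rectum receives the full dose fails.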
PSA Response
The 5-year biochemical progression-free survival rate was 92.7% (95% CI = 84.7% to 100%, Figure 1). PSA fell from a pre-treatment mean (± SD) of 5.4 ± 2.4 ng/ml to a mean post-treatment value of 0.34 ± 0.35 ng/ml at last follow-up for non-recurring patients. Median PSA nadir was 0.3 ng/ml. Comparing non-recurring Stanford patients (treated with 36.25 Gy) to Naples patients (treated with 35 Gy), the mean PSA at last follow-up was significantly lower for the Stanford group (0.18 ± 0.14 ng/ml vs. 0.51 ± 0.46 ng/ml, p = 0.002). The mean follow-up for the Stanford patients was about 4.5 months longer than for the Naples patients (5.17 vs. 4.78 years). Three patients developed biochemical progression at 33, 37 and 42 months, respectively. Two patients received the 35 Gy dose; the third received 36.25 Gy. In each case, biopsy confirmed pathologic evidence of malignancy within the prostate gland and a negative metastatic work-up. The remaining patients continued to have stable or declining PSA levels at last follow-up.
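The Phoenix (nadir + 2 ng/mL) definition used above is simple to state programmatically: a failure is the first follow-up at which PSA rises at least 2 ng/mL above the lowest value observed so far. The PSA series below are invented for illustration, not patient data.

```python
def phoenix_failure_index(psa_series):
    """Return the index of the first biochemical failure under the Phoenix
    (nadir + 2 ng/mL) definition, or None if no failure occurs."""
    nadir = float("inf")
    for i, psa in enumerate(psa_series):
        if psa >= nadir + 2.0:
            return i
        nadir = min(nadir, psa)
    return None

# Invented example: PSA falls to a nadir of 0.3, then rebounds past 2.3.
print(phoenix_failure_index([5.4, 2.0, 0.8, 0.3, 0.4, 2.5]))  # failure at visit 5
print(phoenix_failure_index([5.4, 1.0, 0.3, 0.2]))            # None: no failure
```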
Toxicity
As previously reported, patients tolerated treatments very well, resuming normal activities within one week of completion. Acute symptoms of dysuria, urinary urgency, frequency, nocturia and/or tenesmus typically resolved within one month of treatment completion. Late toxicities are summarized in Table 1. No patient has experienced grade 3 or greater late rectal toxicity. Only one patient developed late grade 3 urinary toxicity following repeated urologic instrumentation, including cystoscopy and urethral dilatation. No urinary incontinence has been observed. Twenty-five percent of patients reported mild (grade 1) and 7% moderate (grade 2) urinary symptoms following treatment. King et al. [12] previously reported less frequent grade 1-2 urinary toxicity when SBRT treatments were delivered on non-consecutive days (QOD) vs. daily (QD). As the majority of patients in this study received QD treatment, a similar comparison was not possible.
Discussion
This report demonstrates that SBRT can achieve high rates of durable disease control for patients with low-risk prostate cancer while resulting in low levels of bladder and rectal toxicity. The current results extend prior independently conducted studies by the authors [9,12], demonstrating the potential of SBRT monotherapy to provide durable disease control with few serious complications in low-risk prostate cancer patients. Our 5-year progression-free survival rate of 93% compares favorably with that obtained with surgery, LDR or HDR brachytherapy [21][22][23][24][25][26].
In a recent update of the Stanford experience, which included 67 low-risk patients [27], King et al. succinctly reviewed the rationale for hypofractionation in the management of prostate cancer. At a median follow-up of 2.7 years, the PSA relapse-free survival was 94%, and toxicity was equal to or lower than observed in dose-escalation studies. Disease control rates above 90% are entirely consistent with predictions based on an α/β ratio for prostate cancer of 1.5 Gy. Using the linear-quadratic radiobiologic model, 36.25 Gy yields an equivalent dose at 2 Gy per fraction, or EQD2, of 91 Gy for this α/β.
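The EQD2 figure quoted above follows directly from the standard linear-quadratic conversion, EQD2 = D · (d + α/β) / (2 + α/β), where D is total dose and d is dose per fraction. A quick sketch confirming the arithmetic:

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equivalent dose in 2-Gy fractions under the linear-quadratic model."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# 5 x 7.25 Gy (36.25 Gy total) with alpha/beta = 1.5 Gy for prostate:
print(round(eqd2(36.25, 7.25, 1.5)))  # -> 91, matching the value in the text
```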
In addition, both disease control and toxicity outcomes with SBRT compare favorably to other treatments for low-risk prostate cancer. In a study comparing outcomes for radical prostatectomy and IMRT to a dose of at least 72 Gy [28], no significant difference in 5-year biochemical disease-free survival (bDFS) rates was detected for low-risk patients (prostatectomy resulted in a bDFS of 92.8% vs. 85.3% for IMRT, p = 0.20). Similar 5-year bDFS rates, ranging from 76% to 92% for radical prostatectomy, 69% to 89% for external beam radiotherapy at doses of 66 to 72 Gy, and 83% to 88% for seed brachytherapy, have been reported in retrospective comparisons of these various treatments [21][22][23][24][25][26]. A recent report of a multi-institutional retrospective study comparing HDR brachytherapy to seed brachytherapy showed bDFS to be about 90% for both modalities. Somewhat higher 5-year bDFS rates, in the 92-95% range, have been obtained in other studies of surgery, high-dose and hypofractionated EBRT, and seed brachytherapy for low-risk patients [29][30][31][32]. Thus, the 5-year bDFS of 92.7% obtained in the current study is clearly within the range of disease control expected using modern surgical and high-dose radiation techniques.
In the coming years, the long-term outcomes of several other studies of SBRT for organ-confined prostate cancer will be reported. An update of the SBRT series reported by Katz [11], with 42 months median follow-up, was presented at ASTRO 2010 [33], and 5-year data from this study should be available in 2011. An additional 114 low- to intermediate-risk prostate patients were treated with SBRT in Naples in 2006, so those data will reach 5-year maturity next year. Acute toxicity results from a prospective study underway at the University Hospitals Case Medical Center were presented at the 2009 ASCO meeting [34]. Georgetown has also treated prostate cancer using SBRT; early data were presented at the 2010 ASCO meeting [35]. Two prospective studies funded by Accuray, examining the effects of delivering either a homogeneous, EBRT-like dose distribution or an HDR-like, heterogeneous distribution [10], should complete enrollment in the next 6 months, adding another 600 patients to the collective data pool. A phase III study comparing 12-fraction versus 5-fraction SBRT for localized prostate cancer is currently under review by the RTOG, and a proposed phase III study from the University of Miami will compare extended fractionation (26 fractions) versus accelerated hypofractionation (5 fractions) for low- to intermediate-risk disease. As data from these various studies mature, we will develop a clearer picture of long-term outcomes following SBRT.
Conclusion
The current analysis is the first report of 5-year outcomes of SBRT for low-risk prostate cancer, and biochemical disease control is comparable to other available therapies, with equal to or better toxicity profiles. In addition, the treatment can be completed in a time period that is notably shorter (1-2 weeks) than conventional radiotherapy (8-9 weeks) and neither hospitalization nor surgical recovery is involved. These characteristics of SBRT may benefit patients by reducing travel costs and lost work time, allowing a more immediate return to normal, daily routines, and potentially reducing health care costs. We look forward to future multicenter studies that will examine outcomes with this treatment approach.
Detections of IoT Attacks via Machine Learning-Based Approaches with Cooja
Once hardware becomes "intelligent", it is vulnerable to threats. IoT ecosystems are therefore susceptible to a variety of attacks and are challenging to secure due to their heterogeneity and dynamic nature. In this study, we propose a method for detecting IoT attacks based on ML approaches that produce the final detection decision. We implemented three sample attacks in the IoT via Contiki OS to generate a real dataset of IoT-based features, containing a mix of data from malicious and normal nodes in the IoT network, to be used in the ML-based models. On this novel dataset, the multiclass random decision forest model achieved 98.9% overall accuracy in detecting IoT attacks, compared to 87.7%, 93.2%, and 87.1% for the decision tree jungle, decision forest tree regression, and boosted decision tree regression models, respectively. Thus, the decision tree-based approach efficiently manipulates and analyzes the KoÜ-6LoWPAN-IoT dataset, generated via the Cooja simulator, to detect inconsistent behavior and classify malicious activities.
Introduction
The Internet of Things (IoT) is a network of physical objects containing sensors, actuators, microcontrollers, and smart appliances that gather and transfer information and interact with their surroundings [1], [2], allowing these devices to generate and exchange data with minimal human intervention. It is one of the most promising technologies, and the world is already beginning to utilize various IoT technologies. IoT devices communicate with each other via various protocols [3] and interact with a wide range of applications, including smart cities, building automation, safety, surveillance systems, logistics, healthcare, economy, disaster response, and agriculture [3], [4], [5]. The IoT therefore offers a large number of attractive qualities that have made us rely on it in our daily applications, with both best-effort and real-time services [6], [7].
The IoT cloud provides capabilities for collecting, processing, managing, and storing massive amounts of data in real time [8], [9]. This data may be easily accessed remotely by industries, governments, monitoring tools, and related services, allowing them to make decisions as needed [10], [11]. It is essentially a powerful, high-performance network of servers designed to perform high-speed data processing for billions of connected devices [12].
IoT technologies share certain properties: heterogeneity, auto-configuration, a dynamic ecosystem, intelligence, large scale, and connectivity [4], [13], [14], [15]. For example, the IoT ecosystem includes extremely different technologies and protocols, adaptive protocols, and a variety of factors that may be tuned to adapt to environmental changes. These large-scale components work together in a cooperative and smart way to share their collected data and services [16]. In many cases, the connected devices are required to offer secure and reliable services to an applicant [17].
The day-by-day development of technologies expands the characteristics and techniques of the IoT ecosystem, raising new security concerns [18], [19] as well as vulnerabilities that cannot be fully addressed by traditional security solutions.
Nowadays, the IoT faces an increase in threats and security vulnerabilities. Current security techniques may defend against specific IoT attacks, but traditional approaches may be inefficient in the face of technological advancement and a growing variety of attack types and severity levels. It is therefore important to combine IoT and Machine Learning (ML) technologies to enhance their cooperation in many aspects. Enabling ML in the IoT to learn and analyze the behaviors of IoT devices, objects, and systems from prior information and experience may allow the IoT ecosystem to effectively manage the unexpected deterioration frequently caused by anomalous conditions. ML methods have seen significant technical development, opening up numerous new research directions to solve current and future problems in various sciences [20], [21].
The IoT is a master plan that intends to interconnect things to the Internet in order to increase their usefulness [19]. It was necessary to find a way to integrate the IEEE 802.15.4 protocol for Low-Power Wireless Personal Area Networks (LoWPANs) with the IPv6 network protocol, whose huge address space allows billions of devices to connect to the Internet. The invention of 6LoWPAN technology was a suitable answer to this problem [18], [19], allowing the IoT concept to become a reality. However, this was merely the beginning of a series of problems and issues, such as security [18]. Because it cannot provide its own security measures, 6LoWPAN is susceptible to a range of attacks that exhaust node resources and damage the network [18], [22].
A powerful, dynamically improved, and up-to-date security solution is necessary for next-generation IoT systems. In this paper, we utilize smart technologies (ML) to find security solutions for smart environments (IoT) that make them more secure and reliable. The rapid growth of the IoT exposes it to many issues and threats. ML approaches are a strong technique for detecting and classifying inconsistent, abnormal, and harmful actions, and for detecting faulty IoT devices.
The main contribution of this study is summarized as follows: • Proposing a method to detect IoT attacks that relies on ML-based approaches. This paper is structured as follows: Section 2 previews the three IoT attacks implemented as samples during the simulation phase, together with related work. Section 3 gives an overview of the 6LoWPAN protocol stack for IoT networks. Section 4 explains the methodology for detecting attacks in the IoT, the proposed method, and its implementation. Section 5 describes the tools we use to carry out our work. Section 6 discusses the decision tree-based model and results. Section 7 concludes this work.
Related work
The 6LoWPAN protocol stack is vulnerable to attack because IoT devices are connected to an unsecured Internet, so providing security in the IoT is critical [23]. An attacker can capture, clone, tamper with, or even destroy LoWPAN nodes [24]. 6LoWPAN channels are therefore generally vulnerable to a variety of security risks. The characteristics of 6LoWPAN technologies may provide attractive services compared to their peers [25], yet may also make them more vulnerable to attacks due to heterogeneity, dynamic ecosystems, and similar properties. Although the link layer offers encryption [24], this may not be sufficient to secure both data and signaling packets: harmful packets may be encrypted without being detected. Data may be encrypted to maintain confidentiality between the endpoints [26], but it is difficult to know whether the data sent is reliable and has not been tampered with, dropped, or breached via malicious action. Attacks may occur at different layers with different severity.
In this study, we preview three attacks to which the IoT ecosystem may be exposed and which put IoT devices at a critical point: denial of service (DoS), the black hole attack (BHA), and the on-off attack (OOA). A DoS attack is an attempt to prevent the targeted user from accessing resources. In RPL, this attack may occur via UDP packet flooding [27], [29]: the attacker node sends too many requests to the root (sink), preventing normal users from accessing it in the usual way [28], [29]. The BHA is one of the most dangerous attacks in RPL: the malicious node drops the packets received from its neighbors instead of forwarding them to their destination [30], [32]. It may drop all packets (a complete black hole) or only some (a selective forwarding attack); the latter is cleverer because it is not easily observed and the network topology is unaffected [31], [32]. Trustworthiness in IoT devices is critical, and the OOA is a type of attack that undermines trust in the IoT, so that devices and objects no longer trust each other [33], [34], [35]. The OOA is a kind of selective attack (inconsistent behavior) in which the malicious node switches from malicious to normal and back again to avoid being classified as a low-trust node, allowing it to remain undiscovered while inflicting harm and making nodes suspicious of their neighbors [34], [35].
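Why the on-off attack evades naive trust scoring can be seen with a toy example. The sketch below is illustrative only (not from the paper, and not a real trust-management scheme): a node alternating good and bad phases keeps its moving-average trust above a plausible threshold, while an always-malicious node is flagged immediately.

```python
# Toy moving-average trust score over a window of recent interactions.
# behaviors: 1 = cooperative interaction, 0 = malicious drop.
def trust_after(window, behaviors):
    recent = behaviors[-window:]
    return sum(recent) / len(recent)

# On-off node: three good interactions, then one bad, repeated.
onoff = [1, 1, 1, 0] * 5
always_bad = [0] * 20
print(trust_after(8, onoff))       # 0.75: stays above a 0.5 trust threshold
print(trust_after(8, always_bad))  # 0.0: flagged immediately
```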
On the topic of IoT security, there have been a variety of related studies, and researchers are still working in this field. The existing literature offers numerous security approaches for the IoT. Many approaches detect IoT attacks using traditional methods aimed at specific attacks. The authors in [29] proposed an intrusion detection system (IDS) mechanism to detect DoS attacks, the authors in [31] proposed an IDS mechanism to detect the BHA in the IoT, and the authors in [35] proposed IDS mechanisms to detect on-off attacks. These works [29], [31], [35] use different methods and features to detect specific attacks. This may be effective for a particular attack, but implementing a separate IDS mechanism for each attack, especially as technology advances and attack types multiply, may be inefficient: it may consume device resources and prevent extension to new attacks. Currently, many researchers employ ML-based approaches to solve security issues, and this technique has become an ingenious solution. The authors in [36], [37], [38] utilize ML methods to detect attacks. Public datasets such as DS2OS [39], NSL-KDD [40], UNSW-NB15 [41], and Bot-IoT-2018 [42] may be employed in ML models. However, reusing the same dataset and the same ML techniques across multiple studies leads to similar results, and such datasets may be unrelated to the characteristics and features of the IoT. Thus, using features related to IoT security is better for evaluation.
Our method differs from existing work on IoT security in many aspects: implementation, simulation, attack types, dataset generation, parameters/features, protocols used, and ML techniques. In this paper, we implemented three attacks on the IoT and generated a novel dataset based on IoT (6LoWPAN) features produced via Contiki OS, called the KoÜ-6LoWPAN-IoT dataset. We then applied tree-based ML algorithms to this dataset.
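The core of any tree-based algorithm is choosing a feature/threshold split that best separates the classes. A minimal sketch of that idea follows, using Gini impurity as a single decision-tree node would; the per-node feature rows (UDP packets sent, average power) are invented values merely inspired by the scenarios in this paper, not the KoÜ-6LoWPAN-IoT dataset itself.

```python
# Minimal single-node decision-tree split selection by weighted Gini impurity.
def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, labels):
    """rows: list of equal-length feature tuples; labels: 0/1 per row.
    Returns (feature_index, threshold) minimizing weighted Gini impurity."""
    best = (None, None, float("inf"))
    n = len(rows)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [labels[i] for i in range(n) if rows[i][f] <= t]
            right = [labels[i] for i in range(n) if rows[i][f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (f, t, score)
    return best[0], best[1]

# Invented per-node features: (udp_packets_sent, avg_power_mW).
rows = [(47, 1.2), (52, 1.1), (44, 1.3), (2587, 4.8)]
labels = [0, 0, 0, 1]  # 1 = malicious node
print(best_split(rows, labels))  # -> (0, 52): split on packet count
```

A full random forest repeats this split search recursively over bootstrapped samples and random feature subsets; this sketch shows only the single-split building block.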
6LoWPAN protocol stack
IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) is a particular instance of a low-power lossy network (LLN) that allows tiny devices with restricted resources, compliant with the IEEE 802.15.4 standard, to connect to IPv6 networks [18]. It supports end-to-end IPv6 connectivity, allowing a direct connection to the Internet for a wide range of (heterogeneous) devices, including tiny ones [18], [19]. Interoperability is an important consideration when selecting a wireless protocol; it implies that applications do not need to know the limits of the physical connections that transport their packets. 6LoWPAN devices can communicate with any wireless 802.15.4 devices over any other IP network connection (e.g., Ethernet or Wi-Fi), in contrast to technologies such as ZigBee, whose devices can only communicate with other ZigBee devices [43].
At the network layer, the Internet Engineering Task Force (IETF) Routing Protocol for Low-Power and Lossy Networks (RPL) is the most popular routing protocol for 6LoWPAN [44], used in both academia and industry. Contiki OS accordingly supports work on the IoT and on constrained wireless sensor networks that operate on a 6LoWPAN protocol stack [45].
In comparison to the normal Internet stack, the 6LoWPAN stack contains an extra layer, known as the LoWPAN adaptation layer. The adaptation layer rests above the IEEE 802.15.4 layer, immediately below the network layer. It offers header compression, fragmentation and reassembly, and packet forwarding services, allowing IPv6 connections to be provided to extremely tiny devices linked to the Internet [46]. IPv6 packets are thus encapsulated by the adaptation layer before being sent to the underlying link layer.
Figure 1 illustrates the structure of the 6LoWPAN stack and the protocols used. The application layer in the 6LoWPAN stack contains lightweight protocols such as the Constrained Application Protocol (CoAP), which was created for the IoT and inspired by HTTP, on the assumption that UDP can be used without impeding security (RFC 8323). CoAP over UDP's message layer supports reliable delivery, basic congestion control, and flow control. It was designed with simplicity in mind, with a minimal code footprint and a small, lightweight message size [46]. As recommended by the IETF, due to the 802.15.4 MAC/PHY frame size limits, UDP is a better fit for the 6LoWPAN stack at the transport layer than standard TCP, whose header can reach 60 bytes (RFC 8323). RPL is a routing protocol designed for low-power and lossy networks and has become the preferred routing protocol for the IoT. It is a distance-vector routing protocol that routes data to a destination (sink) along a short (optimal) path. RPL was designed to be highly adaptive to network conditions and to provide alternate routes [47]. One of the main goals of RPL is to construct the network topology [48]: the routes resulting from a Directed Acyclic Graph (DAG) form the network topology. There is only one Destination-Oriented Directed Acyclic Graph (DODAG) per root (sink), and it carries the data to the root [47].
RPL uses four control messages in the formation and maintenance of the network topology: DIS, DIO, DAO, and DAO-ACK, acronyms for DODAG Information Solicitation, DODAG Information Object, Destination Advertisement Object, and Destination Advertisement Object Acknowledgement, respectively. A node can utilize the DIS to explore for DODAGs in its general vicinity. The DIO contains data that enables a node to discover an RPL instance, learn its configuration parameters, choose a DODAG parent set, and keep the DODAG up to date [47]. A node uses the DAO to communicate destination information upward along the DODAG; the DAO message is unicast by the child to the selected parent. The DAO-ACK is a message sent back to the DAO sender [48]. Figure 2 shows the diagram of control messages in the RPL.
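The DIO-driven parent selection described above can be sketched with a toy breadth-first model. This is illustrative pseudologic, not Contiki code: each node adopts, as its preferred parent, the first in-range neighbor it hears advertising a rank, and then advertises rank parent + 1 (real RPL ranks come from an objective function, not a simple hop count).

```python
# Toy DODAG formation: BFS from the root, mimicking DIO propagation.
def build_dodag(links, root):
    """links: dict node -> set of neighbor nodes (symmetric).
    Returns dict node -> (preferred_parent, rank)."""
    rank = {root: 0}
    parent = {root: None}
    frontier = [root]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in sorted(links[node]):
                if nb not in rank:  # first DIO heard wins in this toy model
                    rank[nb] = rank[node] + 1
                    parent[nb] = node
                    nxt.append(nb)
        frontier = nxt
    return {n: (parent[n], rank[n]) for n in rank}

# Four-node example; node 1 is the root (sink).
links = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3}}
print(build_dodag(links, 1))
```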
Internet Control Message Protocol (ICMPv6)
Every IPv6 node must implement ICMPv6, since it is an integral part of IPv6. IPv6 nodes utilize ICMPv6 to report packet-processing errors and to conduct additional internet-layer operations, including diagnostics such as ICMPv6 "pinging" and multicast membership reporting (RFC 1885).
Methodology
This study depends entirely on simulation to detect attacks in the IoT, in all stages from simulation through to results. We divided our method into phases: a simulation phase, dataset collection and manipulation, a pre-processing phase, a decision tree-based phase, and a results phase. In the simulation phase, we implemented three attacks on the IoT, ordered by degree of difficulty in implementation and detection: DoS, BHA, and OOA. All scenarios are configured on 6LoWPAN stacks for the IoT. Each scenario consists of eight nodes: node 1 is the root (server/sink), nodes 2-7 are normal nodes, and node 8 is malicious (colored red). Normal and malicious nodes either request service from the root (server), or the root (sink) collects their data. The attacker node may be implemented at various layers: the malicious DoS code is implemented in the transport layer, the malicious BHA code is implemented in the network layer, and for the OOA, some MAC-layer functions such as the duty cycle are controlled in addition to the transport layer. While the simulation runs in each scenario, the radio messages and the power parameters are captured, because a malicious node directly affects power consumption, and the features of the radio messages change from one attack to another.
Tracing the attack behaviors shows that malicious nodes consume the most power compared to normal nodes. In addition, the radio messages include the full matrix of 6LoWPAN protocol parameters/features, which are directly affected by any malicious activity. Therefore, the 6LoWPAN protocol stack traces (radio messages) and the power properties are used as criteria and inputs (the dataset) to our models.
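The observation that malicious nodes consume the most power already suggests a simple screening heuristic before any tree model is trained. The sketch below flags nodes whose measured power is a statistical outlier relative to the cohort; the per-node power values are invented for illustration, not measurements from the paper.

```python
# Flag nodes whose power consumption is more than z standard deviations
# above the cohort mean. Values below are invented for illustration.
import statistics

def flag_high_power(power_mw, z=2.0):
    """power_mw: dict node_id -> average power (mW). Returns flagged IDs."""
    mean = statistics.mean(power_mw.values())
    sd = statistics.pstdev(power_mw.values())
    return [n for n, p in power_mw.items() if sd and (p - mean) / sd > z]

power = {2: 1.1, 3: 1.2, 4: 1.0, 5: 1.3, 6: 1.1, 7: 1.2, 8: 4.9}
print(flag_high_power(power))  # -> [8]: the malicious node stands out
```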
DoS Attacks
Denial-of-service attacks are a critical issue for resource-constrained IoT devices. DoS attacks are designed to make a machine or network resource unavailable to its users (clients, senders, etc.). In the IoT, a DoS occurs when the attacker sends too many requests to the main server/host, making the real users of the server unable to use it. The attacker node floods traffic or requests, causing the network traffic to overflow and preventing normal requests and traffic from entering the network. The malicious node thereby indirectly prevents other nodes from gaining access to the server: it tries to deny any normal node access to the attacked node, causing that node to work improperly.
Figure 4. DoS scenario
To implement the DoS scenario, we set up eight nodes distributed circularly in the network area, as shown in Figure 4.
Node 1 is a server, nodes 2-7 are normal nodes, and node 8 is the malicious node, colored red. All normal and malicious nodes (clients) send their requests to the server, and the server responds to them, as in Figure 4. As noted above, the blue line between the server (node 1) and the malicious node (node 8) represents the radio traffic. The flooding of radio traffic between the server and the malicious node is obvious, caused by the many requests from the malicious node: the number of UDP packets produced by the malicious node is extremely high compared to the normal nodes.
In this scenario, two request periods are defined: one for the normal nodes and another for the malicious node. The normal nodes send requests (UDP packets) to the server every minute (the normal case), while the malicious node sends its requests to the server every second. In the normal nodes, a timer fires a request to the target (server) at a predefined 1-minute period; in the malicious node, the timer is set to a predefined 1-second period.
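The two timer periods above determine how lopsided the traffic becomes. A minimal sketch (illustrative Python, not the Contiki firmware) compares the number of requests the two kinds of node fire over the 45-minute DoS window:

```python
# Illustrative model of the two request timers: normal nodes fire every
# 60 seconds, the DoS node every second.
def requests_sent(duration_s, interval_s):
    """Number of timer-driven requests fired during `duration_s` seconds."""
    return duration_s // interval_s

WINDOW = 45 * 60                          # 45-minute simulation window (s)
normal = requests_sent(WINDOW, 60)        # one request per minute
malicious = requests_sent(WINDOW, 1)      # one request per second
print(normal, malicious, malicious // normal)   # → 45 2700 60
```

So even before any buffering delay, the malicious node emits 60 times as many requests as a normal node, which is the flooding visible in the radio-traffic figures.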
Figure 5 illustrates the nodes' output and the requests exchanged between clients and the server. The node output window consists of three columns: Time, ID, and Message. Time records the node's time events, ID refers to the node ID, and Message carries the request together with the destination ID and the running count of requests made.
Figure 5. DoS nodes output
We notice that malicious node 8 sends very many requests to the server (node 1), which the server may still answer. In the second line we can observe that the DoS attack is active: the malicious node sends a request to the server at 44 minutes and 48 seconds (44:48), which the server may receive, and sends another request at 44:49. The number of malicious-node requests reached 2587 by time 44:48, compared with 47 requests from normal node 4 by time 44:58. This means the malicious node sends a request every second, continuously flooding the server with requests, in contrast to the normal nodes. For example, node 4 sends a request to the server at 44:58 and again at 45:03; normal nodes send their requests to the server every minute, which is the normal case when there is no delay in the buffer.
Blackhole attacks
In a special case of black hole attacks, the malicious node drops some data packets while forwarding others successfully; this is called selective forwarding. In the other case, the malicious node forwards no data packets at all; this is called a "complete black hole attack." When the special case is implemented, the network topology remains un-isolated because the malicious node still forwards some packets to other nodes, whereas the malicious node in a complete black hole attack isolates part of the topology explicitly, as in our scenario. Here, we implemented a complete black hole attack that isolates several nodes in the topology.
In this scenario, the sink node collects data from the sender nodes, which direct their packets to it. In the black hole scenario we set up eight nodes: node 1 is the sink node, node 8 is the malicious node, and the other nodes are normal.
We used multi-hop nodes because this makes the black hole more effective at disrupting the network topology. We placed the malicious node in a strategic position that separates several nodes, which must communicate with the sink via the malicious node. Specifically, some nodes are located within the direct radio range of the sink node while others are not, and data packets from nodes outside radio coverage are routed to the sink through intermediate nodes. All data packets from the sender nodes are destined for the sink.
As shown in Figure 6, nodes 2, 3, 4, and 5 are located within the sink's radio coverage, while nodes 6 and 7 are outside it. The malicious node is the link between the nodes inside and outside the sink's radio area, so data packets from nodes 6 and 7 to the sink pass through the malicious node. With the malicious node in this strategic position, several nodes become isolated from the network topology, as shown in Figure 7.
Each attack executes and implements its malicious code in its own way, to examine and test its intended purpose, and the malicious code can be implemented in different layers. In this case, we implemented and developed the malicious code in the network layer. In the malicious node of the blackhole scenario, we set several global variables to zero to drop all packets: uip_len, uip_flags, uip_ext_len, and uip_ext_bitmap. The uip_len variable holds the length of the packet in the uip_buf buffer, into which incoming packets are placed, while the uip_flags variable is used for communication between the IPv6 stack and the application program, such as UDP. Many operational routers may be configured to discard all packets carrying a hop-by-hop options header (HBH), but major difficulties remain (RFC 8200). IPv6 packets can carry extension headers; if an HBH header is present, it must be processed before the packet is forwarded (Contiki team), and it can be handled by each node along the packet's delivery path until the packet reaches its destination. These parameters are configured in the malicious node so that the created drop function discards all packets. The logic places conditions on the global variables: whenever a global variable is greater than or equal to zero, control passes to the drop function, which resets these variables to zero. These variables are normally greater than zero, since their values are updated during each processing step, so whenever their values change, the malicious code returns them to zero. As a result, the malicious node neither receives and forwards the data packets generated by other nodes nor processes its own generated packets: it drops both incoming and outgoing packets. The effect of the malicious node on the network topology is complete isolation; the global IPv6 parameters proved effective in cutting the route between the malicious node and the destination.
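The drop logic can be sketched as a toy model. This is a hedged Python illustration only: the real attack edits Contiki's C globals in the network layer, and the names below (uip_len and friends) merely mirror those globals.

```python
# Toy model of a complete-blackhole node: every arriving packet updates the
# buffer-length state, the (always-true) drop condition fires, and the state
# is reset to zero, so nothing is ever forwarded.
class BlackholeNode:
    def __init__(self):
        self.uip_len = 0      # mirrors Contiki's packet-length global
        self.forwarded = []   # packets a well-behaved node would relay
        self.dropped = 0

    def receive(self, packet):
        self.uip_len = len(packet)     # normal stacks update this on receipt
        if self.uip_len >= 0:          # the drop condition from the text
            self.drop()
            return
        self.forwarded.append(packet)  # unreachable: the blackhole never relays

    def drop(self):
        self.uip_len = 0               # reset the "global" back to zero
        self.dropped += 1

node = BlackholeNode()
for pkt in (b"data-from-node-6", b"data-from-node-7", b"own-packet"):
    node.receive(pkt)
print(node.dropped, len(node.forwarded))   # → 3 0
```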
ON-OFF attacks
An OOA is a sort of selective attack designed to avoid the attacker being classified as an untrusted node [34]. The malicious node switches its behavior from harmful to normal and back again, allowing it to remain unnoticed while launching attacks. This attack therefore has two statuses: the ON status, called "attack," is the critical case, while the OFF status is called "normal." The attack exploits the dynamic features of trust by abusing the time domain of the status and behaving inconsistently [33]. The attacker swaps between ON and OFF: when the attack is ON, the malicious node launches attacks, and when it is OFF, it does nothing [33], [35]. An OOA attacker often interacts with various neighbors so that the same node yields incompatible opinions of its trustworthiness.
ContikiMAC is a radio duty-cycling technique that employs periodic wake-ups to monitor neighboring packet streams. If a packet transmission is detected during a wake-up, the receiver is kept turned ON [49] so that the packet can be received; when a packet is received correctly, the receiver sends an acknowledgement. The transceiver (transmitter-receiver) must be switched fully between OFF and ON: ON to send and receive radio, OFF to save power. To achieve low power consumption, ContikiMAC nodes therefore sleep most of the time and wake up intermittently to check for radio activity [49].
The purpose of this scenario is to reproduce the properties of an OOA and thereby create a realistic dataset on which our model can be trained to detect OOA. The malicious node in our scenario switches its behavior between attacker and normal whenever its duty cycle is ON, by randomly sending both trusted and untrusted packets.
In this scenario, we initialized eight nodes placed randomly in the network area, as shown in Figure 8. Node 1 is a server, node 8 is malicious, and the other nodes are normal. Node 1 is exposed to any malicious node because it is always active (ON status); it expects to receive its data from neighboring nodes in a position of trust, without any doubt. Node 8 is the malicious node, which alternately generates and sends inconsistent data. In the malicious node, the duty cycle is set to equal ON and OFF shares, 50%-50%; this ratio makes malicious behavior easier to detect when the radio status is ON. Both normal and malicious nodes request a service from the server.
Figure 8. OOA Scenario
The malicious code was implemented only in node 8. This malicious node alternately produced trusted and untrusted packets addressed to node 1. Node 1 was set up to have ON status all the time, and it cannot tell whether the data sent by the malicious node is reliable or unreliable.
In the malicious node, we created two functions: "Trusted_Function" and "Untrusted_Function." Trusted_Function sends trusted packets to node 1, while Untrusted_Function sends untrusted ones. In other words, Untrusted_Function sends UDP packets that are non-standard and carry different parameters (e.g., payload) than their peers, and it sends them at an abnormal time. For example, all normal nodes send their UDP packets every minute, and the trusted function in the malicious node does the same, whereas Untrusted_Function sends its UDP packets every 30 seconds. By running these functions and generating trusted and untrusted packets randomly, the malicious node casts doubt on neighboring nodes through its inconsistent behavior. Node 1 is the service provider for the other nodes and is vulnerable to any malicious node because it is always in the ON status (active). It is especially exposed to attacks because it receives huge numbers of packets, particularly from malicious nodes, and it may pick up malicious code from them and spread it to other nodes.
We use the ContikiMAC implementation of Radio Duty Cycling (RDC) in the MAC layer. ContikiMAC is a duty-cycling mechanism that allows nodes to keep their radios off as much as possible to achieve low power consumption and save energy [49]. The default radio duty-cycling mechanism in Contiki 2.7 [49] uses a power-efficient wake-up mechanism with a set of timing constraints to let devices keep their transceivers off, and it is active by default when nodes are initialized in the Cooja network area. The duty-cycle function can be configured as desired, and we called it in the OOA scenario to keep the malicious node active and node 1 at risk. The malicious node alternately launches the malicious code (Untrusted_Function) and the normal code (Trusted_Function), and it remains undetected while its status is active. For the node vulnerable to attack, the parameter of this function was set to 1 (100%), meaning node 1 keeps its radio transceiver ON, with high packet reception and power consumption; for the malicious node, the parameter was set to 0.5 (50%). Figure 9 illustrates the output of the nodes in the OOA scenario. This screenshot was captured after more than an hour in the Cooja simulator; the malicious node can be observed switching its behavior between trusted and untrusted at random, as highlighted in the rectangle.
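Under the stated configuration (node 1 at 100% radio duty cycle, the malicious node at 50%, and trusted or untrusted behavior chosen at random while ON), the malicious node's slot-by-slot behavior can be sketched as follows. This is an assumption-laden Python illustration, not the Cooja code.

```python
import random

def ooa_step(rng, duty_cycle=0.5):
    """One wake-up slot of the malicious node: OFF, or ON with a random face."""
    if rng.random() >= duty_cycle:            # OFF half of the duty cycle
        return "off"
    # ON half: behave inconsistently to stay below trust thresholds
    return "trusted" if rng.random() < 0.5 else "untrusted"

rng = random.Random(42)
slots = [ooa_step(rng) for _ in range(10_000)]
on_ratio = 1 - slots.count("off") / len(slots)
print(round(on_ratio, 2))        # close to the configured 0.5 duty cycle
```

Because the ON slots mix trusted and untrusted sends, a neighbor watching any short window sees inconsistent behavior, which is exactly what makes OOA hard for trust-based schemes to pin down.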
Dataset collection and manipulation
At this stage, we collected all the data generated by Contiki OS and captured by the 6LoWPAN analyzer and the Power Trace tool. Table 2 summarizes the number of observations/samples and the number of features collected and processed, representing the radio-message and power features in each scenario. Analyzing the radio messages improves security, while analyzing the power improves reliability. From the three scenarios we obtained three datasets, with 9912, 12696, and 25072 captured samples from DoS, BHA, and OOA, respectively. These three datasets were merged into one dataset, called the KoÜ-6LoWPAN-IoT dataset. The datasets were not simply concatenated one after another: at the start of the merged file, a 20% sample was copied, record by record, from each attack dataset, and the remaining 80% of each dataset was then appended in turn. The reason is that in the ML phase, when the data is split into training and testing sets, a plain concatenation would have drawn the test data from the DoS samples only, since the dataset is large and the test set was not extracted randomly from all attack samples.
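Our reading of this merging scheme can be sketched as follows (the exact record order in the paper's unified file may differ): the first 20% of each attack's samples form the head of the file, and the remaining 80% of each dataset is appended afterwards.

```python
# Interleaved-head merge: 20% of every attack first, then the remainders.
def merge_datasets(datasets):
    head, tail = [], []
    for rows in datasets.values():
        cut = len(rows) * 20 // 100          # this attack's 20% share
        head.extend(rows[:cut])
        tail.extend(rows[cut:])
    return head + tail

datasets = {
    "DoS": [("DoS", i) for i in range(9912)],
    "BHA": [("BHA", i) for i in range(12696)],
    "OOA": [("OOA", i) for i in range(25072)],
}
merged = merge_datasets(datasets)
print(len(merged))                            # → 47680
# The head (20% of each attack) contains every class, so a test split taken
# from the front of the file is no longer DoS-only.
head_len = sum(len(r) * 20 // 100 for r in datasets.values())
print(head_len, sorted({label for label, _ in merged[:head_len]}))
# → 9535 ['BHA', 'DoS', 'OOA']
```

Notably, 20% computed this way comes to 9535 records, which matches the test-set size reported in the pre-processing phase.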
Pre-processing phase
Data preprocessing is the practice of preparing raw data for use in a machine learning model; it is the first and most critical step in developing one. Real-world data usually contains noise and missing values and arrives in an unsuitable format that cannot be used directly by machine learning models. Preprocessing is therefore a necessary step for manipulating the data and preparing it for a machine learning model, and it improves the model's accuracy and efficiency. The first thing needed to develop a machine learning model is a dataset, because the model depends completely on data: a dataset is a collection of data on a certain topic in an appropriate format, like the IoT attacks dataset generated via Contiki OS to detect malicious activity and inconsistent behavior. The train-test split is a very important part of ML, used to estimate the performance of machine learning algorithms; we split the dataset into 80% for training and 20% for testing. The IoT dataset contains a huge number of attack samples collected from the DoS, BHA, and OOA attacks: 47680 observations in total, with 84 features from the radio messages and 12 from the power trace. The 9912 DoS samples were captured over 45 minutes, the 12696 BHA samples over 1 hour, and the 25072 OOA samples over 2 hours. The training set contained 38142 samples and the test set 9535. The features are a mixture of categorical and numerical data.
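The 80/20 split itself is then a one-liner; the paper reports 38142 training and 9535 test samples out of 47680, so the exact rounding used there differs slightly from the plain version sketched here.

```python
# Plain 80/20 split with the test set taken from the front of the file,
# where the merged dataset keeps a slice of every attack class.
def train_test_split_simple(rows, test_frac=0.2):
    n_test = int(len(rows) * test_frac)
    return rows[n_test:], rows[:n_test]     # (train, test)

rows = list(range(47_680))
train, test = train_test_split_simple(rows)
print(len(train), len(test))     # → 38144 9536
```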
Decision tree-based phase
At this stage, we selected four machine learning algorithms whose operation and structure depend on decision trees (decision tree-based). The main reason is that these algorithms have demonstrated in practice that they output results accurately and can work on the real IoT dataset produced by Contiki OS, which some machine learning algorithms cannot handle until the categorical data is converted into numerical data using one-hot encoding or a similar method. Such conversion increases the number of features, which slows the model down.
Training time also matters: algorithms that are not decision tree-based need more time to finish training than those that are. For example, a neural network needs a long time to train on this data; AMLS takes more than 1 hour and 15 minutes to train such a model on 47680 samples. The reason is that neural networks are computationally expensive and require a graphics processing unit (GPU) to finish training, unlike decision tree-based algorithms, which are less computationally expensive and do not require a GPU. Also, when an external source requests the model to predict and classify new data, the response time of a model trained with the decision tree-based approach is much faster. For example, when the random decision forest model was requested via the Postman tool, its response time was at most 6 seconds, while some other models needed more than 240 seconds.
In this study, we utilized four algorithms that base their decisions on trees: two for classifying the attacks and two for regression to predict attacks in the IoT ecosystem. For classification we used the multiclass random decision forest and multiclass decision tree jungle models, and for regression we used decision forest tree regression and boosted decision tree regression. The general parameter configuration is the same in all algorithms: the number of decision trees is set to 50, and the maximum depth of the trees is 96.
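As a toy illustration of this shared configuration (not the AMLS modules themselves), here is an ensemble of N_TREES = 50 threshold "trees" voting on whether a synthetic request rate looks malicious; MAX_DEPTH = 96 is shown only as configuration, since each toy tree here is a depth-1 stump.

```python
import random

# Toy forest: 50 stumps, each fitted on its own bootstrap-ish sample, voting
# by majority. The real models are AMLS's forest/jungle modules.
N_TREES, MAX_DEPTH = 50, 96

def fit_stump(sample):
    """Threshold halfway between the two classes seen in this sample."""
    normal = max(x for x, y in sample if y == 0)
    attack = min(x for x, y in sample if y == 1)
    return (normal + attack) / 2

def predict(stumps, x):
    return int(sum(x >= t for t in stumps) * 2 >= len(stumps))  # majority vote

rng = random.Random(0)
data = [(rng.uniform(0, 1), 0) for _ in range(100)]    # normal request rates
data += [(rng.uniform(2, 3), 1) for _ in range(100)]   # flooding rates
stumps = [fit_stump(rng.sample(data, 60)) for _ in range(N_TREES)]
accuracy = sum(predict(stumps, x) == y for x, y in data) / len(data)
print(accuracy)   # → 1.0 on this cleanly separable toy data
```

Majority voting over many weak trees is what gives the forest and jungle models their robustness on noisy feature sets like this one.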
Contiki OS
Contiki is a networked operating system designed to function on hardware with severe memory, power, and processing constraints, with an emphasis on low-power wireless IoT devices. Contiki contains the 6LoWPAN network stack, which provides the routing protocol for low-power and lossy networks over IPv6, with 6LoWPAN header compression and an adaptation layer for IEEE 802.15.4 links [45].
Cooja simulator
Contiki includes the Cooja framework. Cooja is a powerful, Java-based network simulator used in the IoT for simulating sensor networks; it lets us write sensor-node code in the C language [48].
6LoWPAN analyzer tool
The 6LoWPAN analyzer is a tool built on the Cooja framework that captures radio messages and saves them as packets in a file with the .PCAP extension, recording all the packet data in detail. Once this tool is activated while the Cooja simulator runs a scenario, it captures the traffic and saves the PCAP file automatically. The PCAP file can then be opened with Wireshark to inspect the details of the packets closely.
Wireshark
Wireshark is a network packet analyzer that displays packet data in detail [48]. It can view packet data interactively from a live network or from a previously stored capture file such as a PCAP; the PCAP format is one of Wireshark's native capture file formats, which it can both read and write. Wireshark is used to monitor network traffic and keep a close eye on what is happening in the network, and it lets us export the data to CSV for further processing and analysis, as we did in our model to detect IoT attacks. By default it displays four packet fields: source, destination, protocol, and the information/control message. In our study, we enabled all the packet features and parameters, 84 features in total. The reason is that the attacks affect certain packet parameters, which makes these features more accurate for detecting attacks and for distinguishing the behavior of malicious nodes from normal ones. Since the packet features differ from one another and are numerous, they aid our model in detecting misbehavior well.
Power trace tool
Power tracing is a run-time power-profiling method that estimates each node's power usage via power-state tracking, by calculating the time each component spends in each power state. Cooja utilizes the ContikiMAC low-power radio duty-cycling mechanism [18]. The goal of radio duty cycling is to switch the radio off as much as possible, while still being able to communicate, in order to save power. A node cannot receive transmissions from neighbors while its radio transceiver is turned off, so to communicate while keeping the radio off as much as possible, the radio must periodically wake up to receive packets from neighbors [18]. Nodes in a duty-cycled network do three things: transmit packets, receive packets, and periodically wake up so that they can receive packets from neighbors. The parameters calculated by the power trace tool are printed in each scenario and converted to a CSV file, because malicious activity in the network affects these parameters in some types of attack. For example, malicious activity may keep the radio state ON all the time, which consumes much energy and may expose the node to further malicious activity. The radio should be kept off (duty-cycled) as much as possible to decrease power consumption and prevent other dangerous issues.
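The power-trace bookkeeping described above can be sketched numerically: energy is time-in-state times per-state current times supply voltage. The current draws below are illustrative values for a Tmote Sky-class mote (an assumption, not taken from the paper), with rtimer ticks at 32768 per second and a 3 V supply.

```python
# Sketch of power-state energy accounting, as power trace does per node.
RTIMER_SECOND = 32768            # rtimer ticks per second in Contiki
VOLTAGE = 3.0                    # assumed supply voltage (V)
CURRENT_MA = {                   # assumed per-state current draw (mA)
    "cpu": 1.8, "lpm": 0.0545, "tx": 17.7, "rx": 20.0,
}

def energy_mj(ticks_per_state):
    """Energy in millijoules: sum over states of time * current * voltage."""
    return sum(
        ticks / RTIMER_SECOND * CURRENT_MA[state] * VOLTAGE
        for state, ticks in ticks_per_state.items()
    )

# A node whose radio never duty-cycles (rx on for a whole minute) versus one
# that keeps the transceiver off most of the time:
always_on = energy_mj({"cpu": 60 * 32768, "rx": 60 * 32768})
duty_cycled = energy_mj({"cpu": 60 * 32768, "rx": 3 * 32768})
print(round(always_on, 1), round(duty_cycled, 1))   # → 3924.0 504.0
```

A node whose radio is stuck ON, as under some of the attacks above, shows up immediately as an energy outlier in the power-trace columns.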
Azure Machine Learning Studio (AMLS)
AMLS is a platform that provides machine learning algorithms as separate modules for creating and deploying ML workflows on Azure. It is a cloud solution that helps speed up and manage machine learning projects: a set of services and technologies aimed at assisting developers in building and deploying machine learning models. Machine learning experts, data scientists, and engineers can use it in their workflows to design, train, and manage models, and teams deploying machine learning operations inside their company can use it to move models into production in a safe and auditable environment. Experiments can be built from scratch in a popular programming language such as Python or R; in addition, AMLS provides ready-made modules, containing practical and common artificial-intelligence algorithms, that make it easier to build and test a model. We utilized AMLS in this study because it provides a flexible and extensible framework for machine learning: each stage of the process is handled by a different type of module, which may be updated, added, or removed without affecting the rest of the experiment.
Discussions and Results
The decision tree-based ML models analyzed and evaluated the data by classifying and predicting normal versus malicious nodes, as in Figure 10. For the classification models, the evaluation reports estimates derived from the confusion matrix. The multiclass random decision forest model achieved an overall accuracy, averaged precision, and averaged recall of 98.9%, 98%, and 97.1%, respectively, compared with 82.5%, 82%, and 44.1% for the multiclass decision tree jungle. For the regression models, the evaluation metrics are Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Squared Error (RSE), and Coefficient of Determination (CD). Decision forest tree regression achieved an MAE of 0.138 and an RMSE of 0.143, with an RSE of 6.75% and a CD of 93.2%; boosted decision tree regression achieved an MAE of 0.12 and an RMSE of 0.246, with an RSE of 19.83% and a CD of 80.1%.
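The four regression metrics can be written out under their usual definitions (a sketch; AMLS may normalize slightly differently). Note that CD = 1 - RSE under these definitions, which is consistent with the reported pairs (6.75% RSE vs 93.2% CD, and 19.83% vs 80.1%).

```python
import math

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    rse = ss_res / ss_tot            # residual error relative to target variance
    cd = 1 - rse                     # coefficient of determination (R^2)
    return mae, rmse, rse, cd

# Toy targets/predictions just to exercise the definitions:
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0.5, 1, 1, 2, 1.5]
mae, rmse, rse, cd = regression_metrics(y_true, y_pred)
print(round(mae, 3), round(rmse, 3), round(rse, 3), round(cd, 3))
# → 0.167 0.289 0.125 0.875
```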
As shown in Figure 11, the multiclass random decision forest model obtained 98.9% overall accuracy in identifying IoT attacks on the real IoT feature-based dataset, compared with 87.7%, 93.2%, and 87.1% for the decision tree jungle, decision forest tree regression, and boosted decision tree regression, respectively. The multiclass random decision forest thus produces the best results among the compared algorithms, which does not preclude the development of lightweight custom ML algorithms for future IoT challenges.
Conclusions
The cooperation among technologies increases security in all its aspects, and enabling ML-based techniques in the search for IoT security solutions is a strong point: ML-based techniques make IoT devices more secure and reliable despite the heterogeneity and difficult operating conditions of IoT devices. As summarized in this study, we have proposed a method to detect IoT attacks that depends on ML-based approaches. In particular, we contributed the following:
• Implementing three different IoT attacks in the Cooja simulator: denial-of-service attacks (DoS), black hole attacks (BHA), and ON-OFF attacks (OOA).
• Generating a novel dataset based on IoT features.
• Applying the novel IoT dataset to decision tree-based models and presenting the results.
Figure 2. RPL mechanism
Figure 3 illustrates the proposed model for detecting the IoT attacks and the implementation phases, from the simulation phase through to the evaluation of the results.
Figure 3. Proposed model for detecting the IoT attacks and implementation phases
EAI Endorsed Transactions on Internet of Things, 04 2022, Volume 7, Issue 28, e1
Figure 7. Effect of BHA on topology
Figure 9. OOA mote output
Figure 11. The overall accuracy of the decision tree-based models.
In summary, we examined three attacks in the IoT ecosystem (DoS, BHA, and OOA) to generate a novel IoT feature-based dataset produced via the Cooja simulator. The ML approaches rely on decision tree-based models, which have proven efficient at manipulating, examining, and classifying the malicious activity in the IoT features generated via the Cooja simulator. As a result, the multiclass random decision forest model achieved 98.9% overall accuracy in detecting IoT attacks on the KoÜ-6LoWPAN-IoT dataset, compared to 87.7%, 93.2%, and 87.1% for the decision tree jungle, decision forest tree regression, and boosted decision tree regression, respectively; it thus achieved the highest overall accuracy. Designing lightweight custom ML algorithms to solve IoT security problems remains of interest and tends to be on the minds of developers.
Table 1 shows the general configuration in the Cooja network area.
Table 2. Observations and features in each attack.
Saliva exposure reduces gingival keratinocyte growth on TiO2-coated titanium
Bioactive, nanoporous TiO2 coating has been shown to enhance cell attachment on titanium implant surfaces. The aim of this study was to evaluate whether saliva proteins affect epithelial cell adhesion on TiO2-coated and non-coated titanium. Grade V titanium discs were polished, and half of the discs were provided with a TiO2 coating produced in sol with a polycondensation method. Half of the TiO2-coated and non-coated discs were treated with pasteurized saliva for 30 min. After the saliva treatment, the total protein amounts on the surfaces were measured. Next, the hydrophilicity of the discs was measured with water contact angle measurements. Further, the gingival keratinocyte adhesion strength was measured after 2 and 6 h of cultivation using serial trypsinization. In addition, cell growth and proliferation were measured after 1, 3, and 7 days of cell culture. Finally, cell morphology, spreading, and adhesion protein signals were detected with high-resolution confocal microscopy. As a result, the in-sol-coated TiO2 surface had significantly higher hydrophilicity than non-coated titanium, while both non-coated and TiO2-coated surfaces showed a significant increase in hydrophilicity after saliva treatment. Importantly, the amounts of adhered saliva proteins were equal between TiO2-coated and non-coated surfaces. Adhesion strength against enzymatic detachment was weakest on non-coated titanium after saliva exposure. Cell proliferation and cell spreading were highest on TiO2-coated titanium, but saliva exposure significantly decreased cell proliferation and spreading on the TiO2-coated surface. To conclude, even though saliva exposure makes titanium surfaces more hydrophilic, it seems to neutralize the bioactive TiO2 coating and decrease cell attachment to the TiO2-coated surface.
Introduction
Dental implant materials have been developed to achieve appropriate biocompatibility. Important material properties that affect cell and tissue integration to the implant surface include, for example, surface roughness, nanotexture, chemistry, and surface wettability [1,2]. Nevertheless, under oral conditions dental materials are in contact with saliva most of the time, and saliva exposure can modify the aforementioned surface properties, as saliva is able to form a thin film on biomaterials [3]. Saliva consists mostly of water, but it also contains various salivary proteins, minerals, enzymes, and serum albumin [4,5]. Saliva includes over a thousand different proteins: proline-rich proteins, statherins, cystatins, histatins, amylase, and mucins, to name the major families [6]. These salivary proteins can adhere to dental material surfaces and change the surface properties, and they bind preferentially to surfaces with high roughness values [7].
Peri-implantitis is a biofilm-associated disease that occurs when oral microbes invade the peri-implant area, causing inflammation in the peri-implant mucosa and initiating peri-implant bone resorption [8,9]. Many factors predispose to peri-implantitis; poor oral hygiene, a history of periodontitis, smoking, and diabetes have often been reported [10]. One crucial factor behind peri-implantitis is that the soft tissue barrier around the implant abutment is weaker than around the natural tooth. The gingival fibres in the connective tissue cannot attach directly to the implant surface; rather, they form a capsule-like structure, which allows oral bacteria easier access into the deeper peri-implant tissue [11,12]. However, the epithelium is able to attach to the implant surface in a manner similar to the natural tooth, via hemidesmosomes and the basal lamina [13]. The main function of hemidesmosomes is cell adhesion, binding the cytoplasmic plaque to the basal lamina; in addition, hemidesmosomes take part in cell signalling [14]. Important molecules in the basal lamina are the laminins, of which laminin-332 plays the most important role in gingival epithelial cell adhesion [15]. Laminins can bind to membrane-penetrating integrins and thus affect cell spreading, growth, and migration. Laminin-332 binds specifically to integrin α6β4 [16-19], which in turn binds intracellular adapter proteins, forming a connection between the intracellular cytoskeleton and the extracellular basal lamina [20].
Nanoporous, bioactive TiO2 coatings have been shown to have favourable effects on soft tissue cell attachment to titanium and zirconia surfaces [21][22][23]. The benefits of sol-gel-derived TiO2 coatings include that they are thin, hydrophilic, bioactive, non-resorbable and rather easy to produce [24]. The dip-coating sol-gel method used in many previous studies has limitations when coating objects with variable surface shapes. This study uses an in-sol-produced TiO2 coating, which is based on polycondensation and facilitates coating a wider selection of implant components. In addition, it allows faster coating procedures in normal laboratory circumstances without the need for special equipment. Moreover, this coating has been shown to produce nanotopography on titanium surfaces and to increase cell spreading and adhesion on the abutment surface in vitro [25,26].
However, even though the cell response to the TiO2 surface has been shown to be favorable in vitro, it ought to be noted that as the implant crown or abutment is connected, at least the coronal part will be in contact with saliva. This can cause significant changes in bioactive surface properties and thus affect cell and tissue adherence [1]. The aim of this study was to determine whether there is a difference in surface properties after saliva exposure between nanoporous TiO2-coated titanium and non-coated titanium surfaces, and whether saliva exposure affects gingival keratinocyte attachment on TiO2-coated and non-coated titanium.
Half of the discs were coated with a sol-gel-derived TiO2 coating made directly in sol with the polycondensation technique as described earlier by Riivari et al. [25,26]. To produce the sol, two solutions were prepared. Solution 1 consisted of 28.4 g of titanium isopropoxide (98%, Acros Organics) mixed with 21.2 g of ethanol (95%). Solution 2 was mixed from 4.5 g of 2-ethoxyethanol (99%, Acros Organics), 1.8 g of hydrogen chloride (HCl, 1 M) and 16.7 g of ethanol. Solution 2 was pipetted into solution 1 while mixing effectively. The produced sol was transparent and was left to age at 0 °C for 24 h. Until the coating procedure, the sol was kept at −18 °C.
The polished titanium discs were coated with a layer of TiO2 sol and set in a freezer for two hours (−18 °C). Thereafter, the discs were washed twice with ethanol, placed in a ceramic bowl and heated in an oven to 500 °C, where the discs were kept for 10 min. Acetone and ethanol washes were then applied for 5 min each, and the discs were sterilized in an autoclave.
Saliva coating
Paraffin-wax-stimulated whole saliva was collected from 7 healthy non-smoking adult volunteers for 10 min. The bacteria were eliminated from the saliva by pasteurization. First, the saliva was centrifuged (9500 rpm, 40 min), followed by pasteurization at 60 °C for 30 min. After this, the solution was centrifuged again and divided into smaller portions. After pasteurization, the solutions were tested and no microbial growth was detected.
The titanium discs were covered with 1 ml of saliva diluted in PBS (1:1) and shaken for 30 min, followed by washing three times with PBS.
Protein adsorption
After saliva exposure, the amounts of adsorbed saliva proteins on coated and non-coated surfaces were determined. 100 µl of warmed SDS buffer (2%, 95 °C) was added to the titanium discs (n = 3) and incubated for 5 min. The detachment of proteins was promoted by brushing; all the solution was collected into Eppendorf tubes, boiled for 7 min and centrifuged for 2 min. The solutions were diluted with PBS (1:20) and 150 µl was pipetted into a 96-well plate with 150 µl of Micro BCA™ Protein Assay Kit (Thermo Scientific™), followed by 2 h of incubation (+37 °C). The total protein amounts were measured at a wavelength of 562 nm with a Multiskan FC reader (Thermo Scientific), and the values were compared to a standard curve.
Water contact angles measurements
The surface hydrophilicity of TiO2-coated and non-coated titanium with and without saliva exposure was measured with water contact angle measurements using the sessile drop method (Attension Theta, Biolin Scientific). Altogether, six drops of distilled water per group were used at room temperature (n = 6 technical replicates). Each drop was imaged for 10 s after deposition and the mean contact angle value was determined.
Cell cultures
Spontaneously immortalized human gingival keratinocytes (hGKs), obtained from a gingival biopsy by Mäkelä et al. [27], were used at passage 20. The hGKs were cultured in keratinocyte serum-free medium (SFM) (Gibco®, Thermo Fisher, USA).
Cell adhesion strength against enzymatic detachment
To measure the adhesion strength against enzymatic detachment, the hGKs were cultured at a density of 12,500 cells/cm² on NC, NC-S, TiO2 and TiO2-S surfaces for 2 and 6 h (n = 6/group/time point). Attachment strength was measured with serial trypsinization as described earlier by Meretoja et al. [22]. After 2 and 6 h of cell attachment, the discs were washed with PBS and immersed in trypsin solution (0.005% trypsin (Gibco, Invitrogen) diluted in PBS (1:5)). The discs were incubated at room temperature for 20 min, replacing the trypsin after 1, 5 and 10 min and collecting the solution into cryotubes. Afterwards, the discs were treated with undiluted trypsin at 37 °C for 5 min to detect the number of adherent cells. To all tubes, 500 µl of TE-Triton X-100 was added, and the tubes were frozen at −70 °C. The amount of released DNA was then measured with a PicoGreen dsDNA kit (Molecular Probes Europe, Netherlands). The fluorescence values were detected at wavelengths of 490 and 535 nm. The percentage of detached cells was calculated by comparing the amount of detached cells to the amount of adherent cells.
Cell attachment and proliferation
Long-term cell attachment and growth were studied by cultivating hGKs on the titanium discs for 1, 3 and 7 days (n = 6/group/time point), followed by treatment with Alamar Blue (Thermo Fisher, USA) diluted in SFM. The Alamar Blue solution was incubated on the discs for 3 h in a CO2 incubator at 37 °C. Thereafter, 200 µl from each specimen was used to measure the absorbance of the solution at wavelengths of 569 and 594 nm (Multiskan FC, Thermo Scientific). The cell amounts were calculated by comparing the absorbance values to a standard curve.
Cell spreading and hemidesmosomes formation
After one day of cell culture, the discs were fixed with paraformaldehyde (4%) for 15 min, washed once with PBS and stored at 4 °C. Later, the discs were treated with 300 µl of Triton X-100 in PBS (0.5%, 15 min). The primary antibodies [laminin γ2 (1:100, sc-7652, Santa Cruz Biotechnology) and integrin β4 (1:100, ab182120, Abcam)] were mixed with horse serum in PBS (30%) and the discs were covered with the antibody dilution overnight. The next day, the discs were washed three times with PBS and covered for one hour with a secondary antibody dilution [Anti-Rabbit 488 and Anti-Goat 555 (both from ThermoFisher Scientific), DAPI (nucleus staining, 1:200) and Phalloidin Atto (1:400, Sigma-Aldrich) in 30% horse serum in PBS]. After staining, the discs were washed in PBS and mounted on microscope glass using Mowiol (Sigma-Aldrich). A spinning disc confocal microscope (63x Zeiss Plan-Apochromat, 3i CSU-W1 Spinning Disk) was used to image the stained discs. Cell area was measured for 30 cells from each group using the ImageJ Fiji program.
Data analyses
The data analysis was performed with the GraphPad Prism program. One-way analysis of variance (ANOVA) with Tukey's multiple comparisons test was used for normally distributed data, and the Kruskal-Wallis test otherwise, to analyse the significance of differences. Confocal images were analysed with ImageJ (Fiji).
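As an illustration of this analysis pipeline, the same tests can be run in Python with SciPy (the study itself used GraphPad Prism). The group values below are invented placeholder numbers, not data from this paper, and the variable names are assumptions.

```python
# Hypothetical re-implementation of the statistical tests described above.
# The four groups and their values are made-up placeholders, NOT measured data.
from scipy import stats

nc     = [62.1, 65.3, 60.8, 63.9, 61.5, 64.2]  # e.g. contact angles, non-coated
tio2   = [41.0, 39.5, 42.3, 40.1, 38.8, 41.7]  # TiO2-coated
nc_s   = [18.2, 17.5, 19.1, 16.9, 18.8, 17.7]  # non-coated + saliva
tio2_s = [30.4, 29.8, 31.2, 28.9, 30.7, 29.5]  # TiO2-coated + saliva

# For normally distributed data: one-way ANOVA (Tukey's post-hoc comparisons
# are available as scipy.stats.tukey_hsd in recent SciPy versions).
f_stat, p_anova = stats.f_oneway(nc, tio2, nc_s, tio2_s)

# Otherwise: the non-parametric Kruskal-Wallis test.
h_stat, p_kw = stats.kruskal(nc, tio2, nc_s, tio2_s)
```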
TiO2 coating and saliva exposure increase hydrophilicity
To determine whether the TiO2 coating and adhered saliva proteins affect surface hydrophilicity, water contact angle measurements were performed. The TiO2-coated surface had significantly lower contact angle values compared to non-coated titanium. In addition, both non-coated and TiO2-coated surfaces with saliva exposure showed a significant decrease in contact angles. These results indicate that saliva exposure increases the hydrophilicity of both surfaces, but the effect is more pronounced on non-coated titanium (Fig. 1A).
Total saliva protein adsorption is equal between TiO2-coated and non-coated titanium
To determine whether there is a difference in saliva protein adherence on TiO2-coated and non-coated titanium, the total protein amount was measured after 30 min of saliva treatment. The amounts of adhered saliva proteins were equal between TiO2-coated and non-coated surfaces, indicating that the observed differences are not a result of variance in saliva protein adherence (Fig. 1B).
Saliva exposure reduces cell adhesion strength on non-coated surfaces
To determine adhesion strength against enzymatic detachment, the number of detached cells was measured after 2 and 6 h of cell culture using 1, 5, 10 and 20 min of serial trypsinization. Non-coated titanium with saliva exposure had significantly higher detachment levels after one minute of trypsinization at both 2 and 6 h of cell culture, indicating weaker cell adhesion on saliva-treated titanium. The difference was also significant between saliva-treated titanium and saliva-treated TiO2-coated titanium after 5 min of trypsinization. TiO2-coated titanium with or without saliva exposure showed no significant difference in adhesion strength (Fig. 2).
Highest cell proliferation on TiO2-coated titanium without saliva exposure
To measure whether saliva exposure affects cell growth and proliferation, hGKs were cultivated on the samples for 1, 3 and 7 days. After the first day, the proliferation level was significantly higher on TiO2-coated titanium compared to all other groups. Also, after one week there were significantly more cells on TiO2-coated titanium compared to non-coated titanium and TiO2-coated titanium with saliva exposure. While the saliva-treated TiO2-coated surface had significantly lower proliferation than the same surface without saliva treatment, the non-coated surface with saliva exposure showed the opposite trend, with similar or higher proliferation compared to the non-treated surface (Fig. 3).
Saliva exposure reduces cell spreading on the TiO2-coated surface
In order to study whether the results from the adhesion measurements correlated with cell spreading, confocal microscope imaging of cell morphology was performed (Fig. 4). Cell spreading was analyzed based on the actin staining. More spread cells with higher density were found on TiO2-coated titanium. After saliva exposure, cell spreading was significantly lower on both non-coated and TiO2-coated titanium (Fig. 4E). To study the expression of laminin-332, which binds specifically to integrin α6β4, the laminin γ2 subunit and integrin β4 were stained. In line with cell spreading, the signal level of laminin γ2 was significantly lower on both saliva-treated surfaces compared to TiO2-coated titanium (Fig. 4F). Likewise, the TiO2 surface after saliva exposure had a significantly lower integrin β4 signal compared to non-coated and TiO2-coated titanium without saliva exposure (Fig. 4G).
Discussion
This study evidenced higher cell proliferation, cell spreading and signals of the important adhesion proteins laminin γ2 and integrin β4 on TiO2-coated titanium. The result is in line with earlier studies showing that the TiO2 coating produced in sol is able to enhance epithelial cell attachment and growth on titanium surfaces [25,26]. Earlier studies have also revealed positive effects of sol-gel-coated titanium on fibroblast and soft tissue adherence [22,23,28], indicating an overall enhanced cell response on the bioactive TiO2 surface. Enhanced cell attachment on the titanium surface is important, as more uniform cell adhesion forms a stronger barrier against oral microbes and consequently could decrease the risk of peri-implant infections. However, the results showed that saliva exposure decreases cell attachment and growth on both surfaces and seems to neutralize the positive effects of the TiO2 coating, equalizing the cell adhesion and proliferation levels. As for cell attachment to saliva-treated surfaces, fibroblast adhesion and proliferation have been shown to be weaker on implant surfaces after saliva treatment
[29][30][31]. Hirota et al. [32] also found that saliva contamination of commercially pure titanium decreased osteoblast growth and spreading on the titanium surface. Reduced osteoblast activity was also found by Kunrath et al. [3,7]. However, the same study [32] demonstrated that if the discs were treated with UV, the negative effects of saliva contamination were avoided. As the results indicated weaker cell attachment after saliva exposure on the TiO2-coated titanium surface, proper saliva control while placing the implant abutment is crucial to avoid saliva contamination.
In line with this study, earlier studies have also reported increased hydrophilicity on TiO2-coated surfaces [21,33]. In addition, this study demonstrated a decrease in water contact angle (WCA) after saliva treatment for both TiO2-coated and non-coated titanium, indicating a more hydrophilic surface after saliva exposure. Schweikl et al. likewise demonstrated lower contact angles on saliva-treated titanium compared to PBS-washed titanium [34]. This increase in hydrophilicity after saliva exposure may be due to adhered water molecules on the titanium surface, since saliva is mostly composed of water [5]. Hirota et al. measured a WCA of around 40° on saliva-treated cpTi [32], whereas our study showed a WCA below 20° on non-coated saliva-treated titanium. However, Kunrath et al. demonstrated a loss of hydrophilic properties of the titanium surface after saliva exposure [7,35]. As WCAs have been shown to be similar on cpTi and titanium alloy after saliva treatment [36], the differences in contact angle results may be due to variations in saliva treatment methods. Besides hydrophilicity, the favorable cell response of the TiO2-coated surface is thought to result from its nanotopography and from calcium phosphate growth on its surface [24].
In this study, no significant difference was found in total protein amounts between the hydrophilic TiO2 surface and non-coated titanium. This is in line with previous studies, in which salivary and serum protein pellicle formation on dip-coated cpTi and non-coated titanium showed similar profiles [37]. Serum protein adsorption on nanoporous TiO2-coated zirconia has also been tested, and no significant difference in serum protein adsorption was found either [38]. According to this study, the protein adsorption seems to neutralize the bioactive effects of the TiO2 coating and equalize the surface properties between TiO2-coated and non-coated surfaces.
Conclusion
All in all, this study demonstrated lower adhesion and proliferation levels on the in-sol-derived TiO2 surface after saliva exposure. Thus, proper saliva control during abutment placement is suggested to avoid saliva exposure of the whole abutment surface. Even though adhered saliva proteins seem to affect cell adhesion strength and growth, clinical studies are needed to evaluate the clinical outcome of in-sol-derived TiO2 coatings.
Fig. 1 Saliva proteins adsorb equally and increase the hydrophilicity of the surfaces. (A) Water contact angle measurements and (B) salivary protein adsorption on non-coated and TiO2-coated titanium. NC = non-coated, TiO2 = TiO2-coated, NC-S = non-coated with saliva exposure, TiO2-S = TiO2-coated with saliva exposure. Mean ± SD, technical replicates, ANOVA.
A Reduced Complexity Quasi-1d Viterbi Detector
This paper develops a reduced-complexity quasi-1D detector for optical storage devices and digital communication systems. Superior performance of the proposed detector is evidenced by simulation results.
Introduction
The recent literature is rich in improvements to multi-user detection systems, such as digital communication and optical storage systems. Examples include turbo encoding/decoding algorithms [1] for digital communication and non-coherent ultra-wideband (UWB) detectors in the context of distributed wireless sensor networks [2]. In this paper, however, we focus on optical storage systems.
The perpetual push for higher track density necessitates two-dimensional optical storage (Two-DOS) systems to have a large number of tracks in a single group. In the current stage, the number of tracks within the group is chosen to be 11 [3]. The complexity of the two-dimensional (2D) Viterbi detector (VD) grows exponentially with both the target length N_g and the number of tracks N_r in a single group. Hence, truncating the channel memory by means of pre-filtering techniques does not sufficiently reduce the complexity of the 2D VD for the current Two-DOS system. For example, even if we shorten the channel memory by setting N_g = 3, the detector remains impractical because the number of states for the full-fledged 2D VD reaches 2^22 for N_r = 11. For this reason, in this paper, we develop a quasi-one-dimensional (quasi-1D) VD, which exploits cross-track decisions as feedback to facilitate the implementation of reduced-complexity 2D Viterbi-like detectors for systems with a large number of tracks per group.
Decision Feedback Equalization
Decision feedback equalization is a nonlinear detection technique that is quite popular in digital communication systems [4,5]. Figure 1 shows the block diagram of a discrete-time decision feedback equalizer (DFE). In the figure, h_k is the discrete-time channel symbol response, the noise is additive white Gaussian (AWGN) with variance σ², and w_k and f_k represent the taps of the forward and feedback equalizers, respectively. The forward equalizer shapes the channel into a prescribed target g_k, which is constrained to be causal and whose first tap g_0 is constrained to be one. The feedback equalizer has a strictly causal impulse response f_k that should match g_k for all k ≥ 1 in order to cancel the causal intersymbol interference (ISI), i.e. the ISI due to the symbols that have already been detected. By removing the causal ISI, the DFE uses a threshold comparator to make the bit decision based on the input of the slicer. Though the DFE is the optimum detector that has no detection delay [6], its performance lags behind that of the VD for the following two main reasons.
• Error propagation: Any decision error at the output of the slicer causes a corrupted estimation of the causal ISI, which is generated by the feedback equalizer. The result is that a single error makes the detector less tolerant of noise for a number of future decisions. This phenomenon is referred to as error propagation and degrades the performance of the detector.
• Energy reduction: Even in the absence of error propagation, the DFE is still sub-optimum compared to the VD in terms of performance. This is because in the decision process the DFE subtracts the causal ISI and thus ignores the signal energy embedded in this causal ISI component. In other words, some signal energy that is beneficial for optimum detection is neglected. The adverse effect on detection performance is referred to as energy reduction. To minimize the energy reduction effect due to neglecting the energy of the causal ISI, the target is designed to have minimum-phase characteristics, i.e. the energy of the target is optimally concentrated near the time origin.
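The DFE loop described above can be sketched in a few lines. This is an illustrative sketch only: the function name, the binary ±1 alphabet and the ideal noiseless input are assumptions, not a reference implementation.

```python
import numpy as np

def dfe_detect(z, g):
    """Slicer-based DFE sketch: cancel causal ISI using past decisions.

    z : samples equalized to the causal target g (with g[0] == 1).
    g : target response; g[1:] are the feedback (causal ISI) taps.
    Returns the detected +/-1 bit sequence.
    """
    a_hat = np.zeros(len(z))
    for n in range(len(z)):
        # feedback equalizer output: causal ISI estimated from past decisions
        isi = sum(g[k] * a_hat[n - k] for k in range(1, len(g)) if n - k >= 0)
        # threshold comparator (slicer) on the ISI-free sample
        a_hat[n] = 1.0 if z[n] - isi >= 0 else -1.0
    return a_hat

# noiseless sanity check: bits shaped by the target are recovered exactly
g = np.array([1.0, 0.5, 0.25])
a = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0])
z = np.convolve(a, g)[:len(a)]
assert np.array_equal(dfe_detect(z, g), a)
```

With noise, a single wrong slicer decision corrupts the `isi` estimate for the next `len(g) - 1` samples, which is exactly the error propagation effect described above.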
Fixed-Delay Tree Search
Unlike the DFE, which makes the bit decision instantly, the fixed-delay tree search (FDTS) detection technique makes the bit decision after a delay of D [7,8]. In this technique, the bit decision is based on a sequence of D + 1 input samples before the detector and uses the maximum-likelihood (ML) decision rule with a delay of D. The ML decision exploits part or all of the signal energy embedded in the causal ISI components, and thus reduces the energy reduction effect compared to the DFE. The choice of the parameter D is governed by the compromise between performance and complexity. If D + 1 is smaller than the target length N_g, the FDTS is referred to as fixed-delay tree search with decision feedback (FDTS/DF) [8]. In fact, the FDTS can be considered a generalization of the DFE, since the FDTS is essentially equivalent to the DFE when D = 0. Similar to the DFE, the FDTS first uses the forward equalizer to shape the channel into a known target.
Then, the noiseless input of the detector is d(n) = Σ_i g_i a(n − i), where the g_i represent the coefficients of the target, whose length is N_g, and a(n) is the channel input bit at time index n. The FDTS uses a fixed-depth ML decision rule implemented as a tree search algorithm. The tree representation with depth D = 2 is shown in Figure 2 for illustration. Each branch corresponds to one input bit at a particular time. A sequence of branches through the tree diagram is referred to as a path. Each possible path corresponds to one input sequence and vice versa. At time index n, the tree diagram spans D + 1 bits. Thus, at each time index, the tree contains 2^(D+1) paths that represent all the possible 2^(D+1) input sequences. Detection based on the smallest Euclidean distance between the detector input z(n) and the desired noiseless detector input d(n) is optimum in the ML sense when the noise component of the detector input is white and Gaussian.
Thus, similar to the trellis diagram of the VD, the Euclidean distance is defined as the branch metric for each branch, and the summation of the branch metrics associated with each path is called the path metric. Since the FDTS performs ML detection based on a sequence of samples, it chooses the path whose path metric is minimum as the most likely transmitted sequence and releases the first bit associated with this path as the detected bit. More specifically, the FDTS operates recursively as follows [8]:
• Path memory: at the beginning of the nth step, the tree structure has a depth of D. Each path retains the path metric obtained from the previous iteration.
• Path extension: at the nth step, the tree structure is extended such that the depth is increased to D + 1. The new input sample z(n) is used to compute the branch metrics.
• Path selection: after computing all the path metrics for the extended paths, the first bit of the path that has the smallest path metric is selected and released as the detected bit. Then, the half of the total paths that are incompatible with the detected bit is discarded. As a result, the tree structure that remains has a depth of D.
As time progresses, the root node moves along the ML path and a fixed-size identical tree structure is maintained at each time index. Therefore, the complexity of the FDTS is kept constant over time. Similar to the VD, the ML decision rule makes the FDTS unduly complicated if D is large. An efficient and simple realization of the FDTS for systems using run-length-limited (RLL) (1, k) codes can be found in [9,10].
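The recursion above can be sketched as follows. For brevity this sketch rescans all 2^(D+1) candidate tails from the decided prefix at each step instead of retaining path metrics, so it is an illustrative (and inefficient) rendering of the FDTS decision rule; all names are invented.

```python
import numpy as np
from itertools import product

def fdts_detect(z, g, D):
    """Fixed-delay tree search with delay D (illustrative sketch).

    At each step, the 2^(D+1) candidate tails over the next D+1 bits are
    scored by squared Euclidean distance to the observed samples; the
    first bit of the best path is released, which prunes the half of the
    paths incompatible with it (here the tree is rebuilt each step).
    """
    decided = []
    N = len(z)
    for n in range(N):
        best, best_bit = None, 1.0
        for tail in product([-1.0, 1.0], repeat=min(D + 1, N - n)):
            path = decided + list(tail)
            # path metric: sum of branch metrics over the look-ahead window
            m = 0.0
            for t in range(n, min(n + D + 1, N)):
                d = sum(g[k] * path[t - k] for k in range(len(g)) if t - k >= 0)
                m += (z[t] - d) ** 2
            if best is None or m < best:
                best, best_bit = m, tail[0]
        decided.append(best_bit)  # release the first bit of the best path
    return np.array(decided)

# noiseless sanity check; with D = 0 the decision reduces to the DFE's
g = np.array([1.0, 0.5, 0.25])
a = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0])
z = np.convolve(a, g)[:len(a)]
assert np.array_equal(fdts_detect(z, g, D=2), a)
```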
Sequence Detection with Local Feedback
Many detection techniques with sequence feedback, such as the DFE and FDTS/DF, use the detected bits as the input of the feedback equalizer, resulting in the error propagation problem. Nevertheless, this problem can be reduced by resorting to local feedback [11,12]. Local feedback is based on the trellis structure, and uses the path memory associated with the current state instead of the past decisions to estimate the causal ISI. Local feedback guarantees that the branch metric of the correct path is the ML metric, as long as the correct path has not been discarded in favor of some incorrect path [11]. As a result, it improves the performance of detectors with sequence feedback at the price of requiring a large memory to store the path associated with each state.
Complexity of 2D VD
2D partial-response (PR) equalization is used to shape the 2D channel into a known 2D target with controlled ISI and intertrack interference (ITI). This controlled ISI and ITI is left to be handled by the 2D VD. The noiseless input of the 2D VD is given by d(n) = Σ_i G_i a(n − i), where the G_i are the target matrices, whose number is the target length N_g, and a(n) is the channel input vector at time index n. As indicated earlier, the complexity of the 2D VD grows exponentially with both the target length N_g and the number of tracks N_r in a single group. For a better understanding, the trellis structure for the case of target length N_g = 3 and number of tracks per group N_r = 2 is shown in Figure 3. In this figure, '+' and '−' represent the bits '+1' and '−1', respectively. The trellis is assumed to start at the node S0, and then becomes steady at instant n = 3 (i.e. n = N_g). Here, the labels of the states represent the channel memory, across the tracks of the group, associated with the paths that pass through these states. At each time index n, the trellis consists of 2^(N_r(N_g−1)) states. Each branch specifies the channel memory associated with the state that the branch originates from and the possible channel input vector a(n). Therefore, each branch corresponds to one possible noiseless detector input d(n). For binary channel input bits, each state possesses 2^(N_r) incoming and 2^(N_r) outgoing branches, and thus there are in total 2^(N_r N_g) incoming and 2^(N_r N_g) outgoing branches at each time index of the trellis.
From Figure 3 it is clear that, even in this simple 2D case, the trellis is much more complicated than in the one-dimensional (1D) case with the same target length. Thus, the practical implementation of a 2D Viterbi-like detector for large N_r also requires a significant reduction of the complexity arising from the cross-track direction. In [13], a technique using the Viterbi detector track-by-track, with decision feedback to estimate the ITI between tracks, was proposed. We call this detector the DFE-VD. It uses a set of sub-2D VDs, each corresponding to one track. In the bit decision process for a given track, the known bits just above (or below) the current track are used as feedback to calculate part of the ITI. These known bits can be previously detected bits, or can be zeros if the upper (or lower) track is the guard-band.
The branch metric is then computed by subtracting the effect of these known bits. However, in this track-by-track technique, only the ITI from the upper track(s) or only that from the lower track(s) is estimated by feedback, and the remaining ITI estimates still depend on the trellis states. As a result, the number of states must be larger than that of a 1D VD with the same target length. Moreover, this redundant complexity does not benefit performance much, since the detector still makes its decisions based only on the input samples from the current single track. An improved detector is the stripe-wise Viterbi detector (SWVD) [3,14]. This detector consists of a set of sub-2D VDs, each dealing with one stripe that consists of a limited number of tracks. The number of stripes is equal to the number of tracks in a single group. The preliminary decisions from one sub-2D VD are used for estimating the ITI in the next sub-2D VD, which is shifted up (or down) by one track. This procedure is continued for all the stripes, and the full procedure from the bottom to the top (or top to bottom) of the group is considered one iteration. Note that at least two iterations are required in order to estimate the ITI from both the upper and lower tracks. Unlike the DFE-VD, which resorts to the trellis states to estimate the ITI from the lower (or upper) track(s), the SWVD uses the preliminary decisions from the previous iteration to estimate this ITI. This additional decision feedback not only reduces the complexity but also improves the performance compared with the DFE-VD, since its decisions exploit the input information from both the upper and lower track(s) as well as from the current track. However, the use of iterations increases complexity as well as latency. Our new proposal, in contrast, is a non-iterative reduced-complexity detector that is applicable to any 2D system.
Causal ITI Target
In this subsection, we introduce the causal ITI target as a starting point for the development of our reduced-complexity 2D Viterbi-like detectors. Conventionally, the causal and anticausal ISI refer to the ISI from the past and future bit decisions, respectively [6]. Similarly, we refer to the causal and anticausal ITI as the ITI resulting from the lower and upper tracks, respectively.
The concept of causal ITI was first used in the multi-channel DFE [15]. Similar to the structure shown in Figure 1, this multi-channel DFE consists of a multi-channel forward filter, a multi-channel feedback filter, and a decision block. The multi-channel forward filter is designed to constrain the channel to have causal ISI and ITI only. The multi-channel feedback filter is designed to remove the causal ISI based on the previous bit decisions. The causal ITI is left to be handled by the decision block. Motivated by this, we propose the causal ITI target, such that the 2D target matrices are constrained to be right-triangular matrices. It should be noted that this target is the basis for the development of our reduced-complexity 2D Viterbi-like detectors. As a starting point for our development, we first examine the suitability of the causal ITI target in Two-DOS. Figure 4 shows the performance of the full-fledged 2D VD for four different targets with N_r = 5 and target length N_g = 3. In the figure, the diagonal elements of G_0 in the causal ITI target are constrained to be 1s to avoid trivial solutions for the target and equalizer. We use a fixed 2D target and a 2D monic constrained target, which are reasonable targets for Two-DOS as described in the last chapter, as reference targets. Note that we impose a symmetry constraint, which constrains all the tracks within the same group to suffer the same amount of ITI, in the design of the 2D monic constrained target. In other words, after the finite-length equalizer, all the tracks within the same group ideally suffer the same amount of ITI. However, due to the presence of guard-bands serving as the boundaries of the group, before the finite-length equalizer not all the tracks suffer the same amount of ITI. In addition, the 2D monic constrained target only allows ITI from adjacent tracks. Therefore, the symmetry constraint burdens the design of the finite-length equalizer and results in residual ISI and ITI. Note that the causal ITI target does not have
this symmetry constraint, and allows ITI not only from the adjacent tracks. Therefore, compared with the 2D monic constrained target, the causal ITI target burdens the finite-length equalizer less and is expected to achieve better performance. Figure 4 shows that the causal ITI target outperforms all the other targets at every SNR. This result indicates that it is reasonable to use the causal ITI target for Two-DOS. More importantly, based on this target, we propose reduced-complexity 2D Viterbi-like detectors that are quite different from the DFE-VD and SWVD, since the latter two detectors suffer ITI from both the lower and upper tracks.
Principle of Quasi-1D VD
Since the causal ITI target contains ITI only from the lower tracks, the bits in the upper tracks do not affect the desired output. Based on this idea, a set of 1D VDs is used to detect the bits, each dealing with one track. More specifically, as shown in Figure 5, the first 1D VD, which deals with the lowest track, is processed with no delay, and its bits are detected after a delay D. The second 1D VD, which deals with the second-lowest track, is processed with the delay D in order to use the detected bits from the lowest track to estimate all the ITI in the second-lowest track. The third 1D VD, which deals with the third-lowest track, is processed with a delay D after the second 1D VD, and the detected bits from the lowest two tracks are used to estimate the ITI in the third-lowest track. This procedure continues for all the tracks. Since the bit detection does not need to consider interference from the upper tracks, this detector is distinct from the DFE-VD and SWVD. Compared with the DFE-VD, this detector has less computational complexity, since fewer states are needed for bit detection. More importantly, the quasi-1D VD has better BER performance, since it uses all, while the DFE-VD uses only part, of the input information needed in the cross-track direction. As illustrated in Figure 6, the quasi-1D VD outperforms the DFE-VD significantly no matter what target is chosen for the DFE-VD. Compared with the SWVD, as mentioned previously, it has much lower complexity since it has no iterative procedures.
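The bottom-up procedure can be sketched as follows, under the causal ITI target (target matrices triangular so that track r suffers ITI only from the tracks below it). The exhaustive 1D detector stands in for a real 1D VD, and all names, dimensions and the triangular orientation (track 0 lowest) are assumptions for illustration only.

```python
import numpy as np
from itertools import product

def ml_detect_1d(z, g):
    """Exhaustive ML sequence detection for a short 1D ISI channel;
    stands in for a 1D Viterbi detector in this sketch."""
    best, best_a = None, None
    for a in product([-1.0, 1.0], repeat=len(z)):
        m = 0.0
        for t in range(len(z)):
            d = sum(g[k] * a[t - k] for k in range(len(g)) if t - k >= 0)
            m += (z[t] - d) ** 2
        if best is None or m < best:
            best, best_a = m, np.array(a)
    return best_a

def quasi_1d_vd(Z, G):
    """Quasi-1D detection under a causal ITI target (sketch).

    Z : (N_r, N) equalized samples, row 0 = lowest track.
    G : list of N_g target matrices; G[i] is lower-triangular in this
        indexing, so track r suffers ITI only from tracks r' < r.
    Tracks are detected bottom-up: the ITI from already-detected lower
    tracks is cancelled, then a 1D detector handles the remaining ISI
    channel given by the diagonal taps G[i][r, r].
    """
    N_r, N = Z.shape
    A = np.zeros((N_r, N))
    for r in range(N_r):
        # causal ITI estimated from the already-detected lower tracks
        iti = np.array([sum(G[i][r, rp] * A[rp, n - i]
                            for i in range(len(G)) for rp in range(r)
                            if n - i >= 0) for n in range(N)])
        g_r = [G[i][r, r] for i in range(len(G))]  # per-track ISI taps
        A[r] = ml_detect_1d(Z[r] - iti, g_r)
    return A

# noiseless two-track sanity check
G = [np.array([[1.0, 0.0], [0.4, 1.0]]), np.array([[0.3, 0.0], [0.2, 0.3]])]
a = np.array([[1.0, -1.0, 1.0, -1.0], [-1.0, -1.0, 1.0, 1.0]])
Z = np.array([[sum(G[i][r, rp] * a[rp, n - i]
                   for i in range(2) for rp in range(2) if n - i >= 0)
               for n in range(4)] for r in range(2)])
assert np.array_equal(quasi_1d_vd(Z, G), a)
```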
Link with the QR Detector
Our quasi-1D VD is developed for the Two-DOS system, which is a multiple-input multiple-output system having a large temporal span of the channel. Obviously, this quasi-1D VD is applicable to multiple-input multiple-output systems having an arbitrary temporal span of the channel. In many wireless communication systems, the multiple-input multiple-output channel is assumed to be flat-fading [16,17], i.e., the temporal span N_h = 1. In such systems, the channel is characterized by a single matrix instead of a sequence of matrices as in the Two-DOS system: z = Ha, where z and a are the (N_2 × 1) channel output vector and the (N_1 × 1) channel input vector, respectively, N_1 and N_2 being the numbers of transmit and receive antennas, and H is the (N_2 × N_1) flat-fading channel matrix. For the sake of simplicity, the time index is ignored here. Then, QR decomposition of the channel matrix yields H = QR, where Q is an (N_2 × N_1) orthonormal matrix constructed to make the (N_1 × N_1) matrix R right triangular [19]. Pre-multiplying the channel output vector z with Q^H yields the vector ẑ = Q^H z = Ra plus noise. Note that if the noise in z is additive white Gaussian noise (AWGN), the noise in ẑ remains AWGN, since Q^H Q is the (N_1 × N_1) identity matrix. Comparing R with the causal ITI target discussed in the previous subsection, we find that R can be seen as a special case of causal ITI targets. Then, as in the quasi-1D VD, the first element from the bottom of the channel input vector a is detected first. The detected element is used to estimate the interference for the detection of the second element from the bottom of a. This procedure continues until all the elements in a are detected.
This detector is commonly referred to as the QR detector and has been investigated in multiple-input multiple-output flat-fading channels [19][20][21]. The QR detector is also applicable in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems [20,22], since the channel at each sub-carrier of MIMO-OFDM systems is considered as a multiple-input multiple-output flat-fading channel. Note that our proposed quasi-1D VD is suitable for any multiple-input multiple-output channel with arbitrary positive N_h, while the QR detector is only applicable to multiple-input multiple-output flat-fading channels, i.e., N_h = 1. Therefore, the QR detector is considered as a special case of our proposed quasi-1D VD.
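Under the flat-fading assumption, the QR-detector procedure described above can be sketched as follows. The dimensions, noise level, and BPSK alphabet are illustrative assumptions; `numpy.linalg.qr` supplies the decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flat-fading MIMO setup (N1 transmit, N2 receive antennas).
N1, N2 = 4, 6
H = rng.standard_normal((N2, N1))
a = rng.choice([-1.0, 1.0], size=N1)            # BPSK input vector
z = H @ a + 0.01 * rng.standard_normal(N2)      # channel output with AWGN

# QR decomposition: H = QR with Q orthonormal and R upper (right) triangular,
# so z_hat = Q^H z = R a + noise, and the noise stays white.
Q, R = np.linalg.qr(H)                          # reduced QR: R is N1 x N1
z_hat = Q.T @ z                                 # real-valued case: Q^H = Q^T

# Detect from the bottom element up, cancelling the interference from the
# already-detected elements (R's triangular structure makes each step
# depend only on elements detected earlier).
a_hat = np.zeros(N1)
for i in range(N1 - 1, -1, -1):
    s = z_hat[i] - R[i, i + 1:] @ a_hat[i + 1:]
    a_hat[i] = np.sign(s / R[i, i])

print("detected correctly:", bool(np.all(a_hat == a)))
```

The bottom-up loop is exactly the successive cancellation used by the quasi-1D VD, with R playing the role of the causal ITI target.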
Performance of the Quasi-1D VD
As shown in Figure 6, though the quasi-1D VD has much lower complexity than the DFE-VD and SWVD, it causes significant detraction from optimality. We consider three factors that affect the performance of the quasi-1D VD: target length, error propagation and energy reduction. In Figure 7, "L4" and "L5" indicate that the lengths of the targets are four and five, respectively; otherwise, the length of the target is three. "No EP" means detectors that do not suffer error propagation; in simulation, "No EP" is achieved by using the correct input bits to estimate the ITI. The length of the equalizer is 31 in all the simulations. As illustrated in Figure 7, the BER performance is not significantly improved by increasing the target length. Further investigation shows that all the elements in the target matrices g_3 and g_4 approach zero, therefore confirming that there is no need to increase the channel memory beyond two. Figure 7 also shows that error propagation degrades the performance by 1 dB at a BER of 10^-4. Thus, the energy reduction should be the dominant factor that degrades the performance.
Conclusions
In this paper, we have first briefly reviewed prior work on detectors with sequence feedback. Then, by constraining the target with causal ITI, we have developed a quasi-1D VD, a computationally efficient technique whose complexity grows only linearly with the number of tracks. This is a significant complexity reduction compared to the conventional 2D VD, whose complexity grows exponentially with the number of tracks. We have shown that the quasi-1D VD improves over the DFE-VD and SWVD in terms of complexity. Further, we have shown that the widely known QR detector is a special case of our proposed quasi-1D VD. However, we have found that the quasi-1D VD still causes significant detraction from optimality in the Two-DOS system. Therefore, effective compensation techniques are needed to ensure reliable data recovery. To achieve this goal, we have investigated the factors that might degrade the performance. Our simulation results implied that the energy reduction is the dominant factor that degrades the performance of the quasi-1D VD. Therefore, in the next chapter, we develop some effective techniques to reduce the effect of this energy reduction problem. In addition, the effect of error propagation still needs to be minimized, since it degrades the performance by roughly 1 dB at a BER of 10^-4.
Figure 1.
Figure 1. Block diagram of a discrete-time decision feedback equalizer.
Figure 2.
Figure 2. Tree representation with depth D = 2 for the uncoded binary channel input data.
Figure 3.
Figure 3. Trellis structure for a channel with N_g = 3 and N_r = 2.
Figure 4.
Figure 4. BER performance for different target constraints.
Figure 5.
Figure 5. Principle of the quasi-1D VD. The solid lines represent the input and output of the sub-VDs; the dashed lines represent the feedback coming from the output of the previous sub-VDs.
Figure 6.
Figure 6. Performance comparison of different detection techniques.
In such flat-fading systems, the channel is characterized by a single matrix instead of a sequence of matrices as in the Two-DOS system. Let N_1 and N_2 represent the number of transmit and receive antennas, respectively, in multiple-input multiple-output wireless communication systems. Then, the channel output vector at a given time is given by z = Ha. (1)
Figure 7.
Figure 7. BER performance of the quasi-1D VD with different target lengths.
Semi-Supervised Cross-Modal Retrieval Based on Discriminative Comapping
Most cross-modal retrieval methods based on subspace learning just focus on learning the projection matrices that map different modalities to a common subspace and pay less attention to the retrieval task specificity and class information. To address the two limitations and make full use of unlabelled data, we propose a novel semi-supervised method for cross-modal retrieval named modal-related retrieval based on discriminative comapping (MRRDC). The projection matrices are obtained to map multimodal data into a common subspace for different tasks. In the process of projection matrix learning, a linear discriminant constraint is introduced to preserve the original class information in different modal spaces. An iterative optimization algorithm based on label propagation is presented to solve the proposed joint learning formulations. The experimental results on several datasets demonstrate the superiority of our method compared with state-of-the-art subspace methods.
Introduction
In real applications, data are often represented in different ways or obtained from various domains. As a consequence, data with the same semantics may exist in different modalities or exhibit heterogeneous properties. With the rapid growth of multimodal data, there is an urgent need for effectively analyzing the data obtained from different modalities [1][2][3][4][5]. Although there is much attention to multimodal analysis, the most common method is to ensemble the multimodal data to improve the performance [6][7][8][9]. Cross-modal retrieval is an efficient way to retrieve data across different modalities. The typical example is to take an image as a query to retrieve related texts (I2T) or to search images by utilizing a textual description (T2I). Figure 1 shows the detailed process for the I2T and T2I tasks. The results obtained by cross-modal retrieval are more comprehensive compared with the results of traditional single-modality retrieval.
Generally, the semantic gap and the relevance measure impede the development of cross-modal retrieval. Although there are many approaches to this problem, their performance still cannot reach a satisfactory level. Therefore, methods [10][11][12][13][14][15][16] have been proposed to learn a common subspace by minimizing the pairwise differences to make different modalities comparable. However, task specificity and class information are often ignored, which leads to low retrieval performance.
To solve the problems mentioned above, this paper proposes a novel semi-supervised joint learning framework for cross-modal retrieval by integrating common subspace learning, task-related learning, and class discriminative learning. Firstly, inspired by canonical correlation analysis (CCA) [7] and linear least squares, a couple of projection matrices are learnt by coupled linear regression to map the original multimodal data to the common subspace. At the same time, linear discriminant analysis (LDA) and task-related learning (TRL) are used to keep the data structure in different modalities and the semantic relationship in the projection space. Furthermore, to mine the category information of unlabelled data, a semi-supervised strategy is utilized to propagate the semantic information from labelled data to unlabelled data. Experimental results on three public datasets show that the proposed method outperforms the previous state-of-the-art subspace approaches. The main contributions of this paper can be summarized as follows: (1) The proposed joint formulation seamlessly combines semi-supervised learning, task-related learning, and linear discriminative analysis into a unified framework for cross-modal retrieval. (2) The class information of labelled data is propagated to unlabelled data, and the linear discriminative constraint is introduced to preserve the interclass and intraclass similarity among different modalities. The remainder of the paper is organized as follows. In Section 2, we briefly overview the related work on the cross-modal retrieval problem. The details of the proposed methodology and the iterative optimization method are introduced in Section 3. Section 4 reports the experimental results and analysis. Conclusions are finally given in Section 5.
Related Work
Because cross-modal retrieval plays an important role in various applications, many subspace-based methods have been proposed by establishing the intermodal and intramodal correlation. Rasiwasia et al. [7] investigated the retrieval performance of various combinations of image features and textual representations, which cover all possibilities in terms of the two guiding hypotheses. Later, partial least squares (PLS) [17] has also been used for the cross-modal matching problem. Sharma and Jacobs [18] used PLS to linearly map images from different views into a common linear subspace, where the images have a high correlation. Chen et al. [19] solved the problem of cross-modal document retrieval by using PLS to transform image features into the text space, and the method easily achieved the similarity measure between two modalities. In [20,21], the bilinear model and generalized multiview analysis (GMA) have been proposed and performed well in the field of cross-modal retrieval.
In addition to CCA, PLS, and GMA, Mahadevan et al. [22] proposed a manifold learning algorithm that can simultaneously reduce the dimension of data from different modalities. Mao et al. [23] introduced a cross-media retrieval method named parallel field alignment retrieval, which integrates a manifold alignment framework from the perspective of vector fields. Lin and Tang [24] proposed a common discriminant feature extraction (CDFE) method to learn the difference within each scattering matrix and between scattering matrices. Sharma et al. [21] improved LDA and marginal Fisher analysis (MFA) to generalized multiview LDA (GMLDA) and generalized multiview MFA (GMMFA) by extending from single-modality to multimodalities. Inspired by the semantic information, Gong et al. [25] proposed a three-view CCA to deeply explore the correlation between features and their corresponding semantics in different modalities.
Furthermore, other methods, such as dictionary learning, graph-based learning, and multiview embedding, have been proposed for the cross-modal problem [26][27][28][29]. Zhuang et al. [30] proposed SliM2 by adding a group sparse representation to the pairwise relation learning to project different modalities into a common space. Xu et al. [31] proposed that dictionary learning and feature learning should be combined to learn the projection matrix adaptively. Deng et al. [32] proposed a discriminative dictionary learning method with common label alignment by learning the coefficients of different modalities. Wei et al. [33] proposed a modal-related method named MDCR to solve the modal semantic problem. Wu et al. [34] utilized spectral regression and a graph model to jointly learn the minimum error regression and latent space. Wang et al. [35] proposed an adversarial learning framework, which can learn modality-invariant and discriminative representations of different modalities; in this framework, the modality classifier and the feature projector compete with each other to obtain a better pair of feature representations. Cao et al. [36] used multiview embedding to obtain latent representations for visual object recognition and cross-modal retrieval. Zhang et al. [37] utilized a graph model to learn a common space for cross-modal retrieval by adding the intraclass and interclass relationships to the projection process. The main purpose of these methods is to address the correlation in the distance measure, but the class information and task specificity are not well handled. Therefore, how to solve these two problems at the same time for different tasks is particularly important. Based on this idea, we learn two couples of projections for different retrieval tasks and apply a linear discriminative constraint to the projection matrices. To achieve this goal, we combine task-related learning with linear discriminative analysis through semi-supervised label propagation.
Figure 2 shows the flowchart of our method. Experimental results on three open cross-modal datasets demonstrate that our cross-modal retrieval method outperforms the latest methods.
Methodology
To improve the retrieval performance, we introduce the discriminative comapping and pay more attention to different retrieval tasks and class information preservation. Here, we focus on the I2T and T2I retrieval tasks, and it is easy to extend our method to the retrieval of other modalities. The Objective Function. Define the image data as I = [I_l; I_u] ∈ R^(n×p) and the text data as T = [T_l; T_u] ∈ R^(n×q), where I_l ∈ R^(n_l×p) and T_l ∈ R^(n_l×q) denote the n_l labelled images with p dimensions and their texts with q dimensions, and I_u ∈ R^(n_u×p) and T_u ∈ R^(n_u×q) represent the n_u unlabelled images and their texts. Let D = {(I_i, T_i)}_{i=1}^{n} be the n pairs of image and text documents, where D_l = {(I_i, T_i)}_{i=1}^{n_l} and D_u = {(I_i, T_i)}_{i=1}^{n_u} denote the labelled and unlabelled documents, respectively. S = [S_l; S_u] ∈ R^(n×c) is the semantic matrix, where c is the number of categories, S_l is the label matrix of the labelled data with one-hot coding, and S_u contains the pseudo-labels of the unlabelled data. The goal of our method is to learn two couples of projection matrices that project data from different modalities into a common space for different tasks. Then, cross-modal retrieval can be performed in the common space.
We propose a novel modal-related projection strategy based on semi-supervised learning for task specificity. Here, the pairwise closeness of the multimodal data and the semantic projection are combined into a unified formulation. For I2T and T2I, the minimization forms are obtained as in equations (1) and (2), where V and W stand for the projection matrices for modalities I and T, respectively. The linear discriminant constraint is introduced into equations (1) and (2) to preserve the class information in the latent projection subspace. We denote m_i as the mean of the labelled samples in the i-th class and m as the mean of all labelled samples. The intraclass scatter matrix can then be defined as S_w = Σ_i Σ_{x∈class i} (x − m_i)(x − m_i)^T. The objective function is represented as in equation (3), where W ∈ R^(d×k) is the projection matrix and d is the dimension of the basis vectors. According to equation (3), the linear discriminant constraint can be transformed into W S_(w−t) W^T, where S_(w−t) is S_w − cS_t. The intraclass scatter of I is represented as S_w^I, and the interclass scatter of I is S_t^I. Under the multimodal condition, our method utilizes LDA projections to preserve the class information of each modality. The corresponding formula is given in equation (4), where A and B denote S_w^I − c_1 S_t^I and S_w^T − c_2 S_t^T, respectively. We add equation (4) to equations (1) and (2), respectively, and then obtain the objective functions of I2T and T2I in equations (5) and (6), where λ is a tradeoff coefficient to balance the pairwise information and the semantic information, and μ_1 and μ_2 are regularization parameters to balance the structure information of the image and text. According to equations (1) and (2), the structure projection of I and T is the same as the semantic projection. Consequently, our method can bridge the feature and semantic spaces. This can decrease the projection loss and improve the performance of cross-modal retrieval.
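For concreteness, the scatter matrices entering the discriminant constraint can be computed as in the following sketch. It uses the standard LDA definitions; the paper's exact normalization, and whether its "interclass scatter" S_t is the between-class or total scatter, are assumptions here (between-class is used below):

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class scatter S_w and between-class (interclass) scatter S_t.

    X: (n, d) labelled feature matrix; y: (n,) integer class labels.
    S_w = sum over classes i of sum over x in class i of (x - m_i)(x - m_i)^T,
    S_t accumulates the class-mean deviations about the global mean m.
    """
    m = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))
    S_t = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        m_c = Xc.mean(axis=0)
        D = Xc - m_c
        S_w += D.T @ D                       # within-class spread
        diff = (m_c - m)[:, None]
        S_t += len(Xc) * (diff @ diff.T)     # between-class spread
    return S_w, S_t
```

Given these, the constraint term of equation (3) is W S_(w−t) W^T with S_(w−t) = S_w − cS_t.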
We introduce a semi-supervised learning strategy. To propagate the label information from the labelled data, we utilize the radial basis function (RBF) kernel to evaluate the pairwise similarities between the unlabelled data after projection, and these similarities are then regarded as label information to be updated in the optimization process until the results converge. For any data points x_i and x_j, the kernel function is defined as follows, where β is the kernel parameter.
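A minimal sketch of the kernel evaluation, assuming the common Gaussian parameterization (the paper only states that β is the kernel parameter, so the exact form of the width term is an assumption):

```python
import numpy as np

def rbf_similarity(xi, xj, beta=1.0):
    """Gaussian (RBF) kernel similarity between two projected samples.

    Assumed form: k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * beta^2)).
    """
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.exp(-np.dot(d, d) / (2.0 * beta ** 2)))
```

In a label-propagation step, these pairwise similarities between labelled and unlabelled projections would be normalized into soft pseudo-labels for S_u.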
Algorithm
Optimization. e objective functions of equations (5) and (6) are nonconvex, so the iteration method is used to update each variant when other variants are fixed alternatively.
For any matrix M ∈ R^(N×d), the partial derivatives of equation (5) are represented as in equations (8) and (9); similarly, the partial derivatives of equation (6) are given in equations (10) and (11). According to equations (8)-(11), our method can be solved by gradient descent. Algorithm 1 describes the optimization of the cross-modal learning. After the projection matrices for the I2T and T2I tasks are obtained, I and T can be mapped into the common space, where cross-modal retrieval is performed.
Experiments
To evaluate the performance of the proposed method (MRRDC), we conduct comparison experiments with several other methods on three public datasets.
Wikipedia Dataset.
This dataset consists of 2,866 image-text pairs labelled with one of 10 semantic classes. In this dataset, 2,173 pairs are selected as the training set, and the rest form the testing set. In our experiments, we use the public dataset provided by Rasiwasia et al. [7] (wiki-R), where images are represented by 128-dimensional SIFT descriptor histograms [38] and the texts are represented by 10-dimensional features derived from an LDA model [39]. At the same time, we also use the dataset provided by Wei et al. (wiki-W) [40], where 4,096-dimensional CNN features [41] are used to represent the images and 100-dimensional LDA features are utilized to represent the texts.
Pascal Sentence Dataset
This dataset consists of 1,000 image-text pairs from 20 categories. We randomly choose 30 pairs from each category as training samples and use the rest as test samples. The image features are 4,096-dimensional CNN features, and the text features are 100-dimensional LDA features [42].
INRIA-Websearch
This dataset contains 71,478 pairs of images and text annotations from 353 classes. We remove the pairs which are marked as irrelevant and select the pairs that belong to any one of the 100 largest categories. Then, we obtain a subset of 14,698 pairs for evaluation. We randomly select 70% of the pairs from each category as the training set (10,332 pairs), and the rest are treated as the testing set (4,366 pairs). Similarly, images are represented with 4,096-dimensional CNN features, and the textual tags are represented with 100-dimensional LDA features.
Evaluation Metrics.
To evaluate the performance of the proposed method, two typical cross-modal retrieval tasks are conducted: I2T and T2I. In the test phase, the projection matrices are used to map the multimodal data into the common subspace. Then, the data of different modalities can be retrieved. In all experiments, the cosine distance is adopted to measure the feature similarities. Given a query, the aim of each cross-modal task is to find the top-k nearest neighbors among the retrieval results. The performance of the algorithms is evaluated by the mean average precision (mAP), which is one of the standard information retrieval metrics. To obtain the mAP, the average precision (AP) is calculated as AP = (1/R) Σ_{r=1}^{k} P(r)σ(r), where R is the number of relevant items in the test dataset, P(r) is the precision of the top r retrieved items, and σ(r) = 1 if the r-th retrieved item is relevant and σ(r) = 0 otherwise. Then, the value of mAP is obtained by averaging the AP over all queries. The larger the mAP, the better the retrieval performance. Besides the mAP, precision-recall curves and per-class mAP performance are used to evaluate the effectiveness of the different methods.
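The AP and mAP computation described above can be sketched directly; the ranking itself (by cosine distance in the common subspace) is assumed to have been done already, so the input is just the relevance flags σ(r) of the ranked list:

```python
def average_precision(relevant_flags, R):
    """AP = (1/R) * sum over ranks r of P(r) * sigma(r).

    relevant_flags: sigma(r) for r = 1..k, 1 if the r-th result is relevant.
    R: total number of relevant items for this query in the test set.
    """
    hits, score = 0, 0.0
    for r, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            score += hits / r          # P(r): precision at rank r
    return score / R

def mean_average_precision(per_query_flags, per_query_R):
    """mAP: average of AP over all queries."""
    aps = [average_precision(f, R) for f, R in zip(per_query_flags, per_query_R)]
    return sum(aps) / len(aps)
```

For example, a ranked list with relevance flags [1, 0, 1] and R = 2 yields AP = (1/1 + 2/3) / 2.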
Comparison Methods.
To verify that our method has good performance, we compare it with ten state-of-the-art methods: PLS [18], CCA [7], SM [7], SCM [7], GMLDA [21], GMMFA [21], MDCR [33], JLSLR [34], ACMR [35], and SGRCR [37]. PLS, CCA, SM, and SCM are typical methods that utilize pairwise information to learn a common latent subspace, where the similarity between different modalities can be measured by metric methods directly. These approaches make the paired data in the multimodal dataset closer in the learned common subspace. GMLDA, GMMFA, and MDCR are based on semantic category information via supervised learning. Due to the use of label information, these methods can easily learn a more discriminative subspace.
Results and Analysis.
This may be because the projection matrices preserve more discriminative class information via semi-supervised learning. The common subspace of our method is more discriminative and effective because it further exploits the class semantics of intramodality and intermodality similarity simultaneously. From Table 1, we also find that, in most cases, GMMFA, GMLDA, MDCR, and MRRDC perform better than PLS, CCA, SM, and SCM, and that images with CNN features have an advantage over shallow features. The first result arises because PLS, CCA, SM, and SCM only use pairwise information, while the other approaches add class information to their objective functions, which provides better separation between different categories in the latent common subspace. The second result is due to the powerful semantic representation of CNN features. The precision-recall curves on wiki-R, wiki-W, Pascal Sentence, and INRIA-Websearch are plotted in Figure 3. Figure 4 shows the mAP scores of the comparison approaches and our method; the rightmost bar of each subfigure shows the average mAP score. For most categories, the mAP of our method outperforms that of the comparison methods. From these experimental results, we can draw the following conclusions: (1) Compared with the current state-of-the-art methods, our method improves the average mAP greatly. Our method consistently outperforms the compared methods, which is due to the fact that MRRDC learns projection matrices in task-related and linearly discriminative ways for different modalities, so that different modalities can preserve semantic and original class information. Besides, both the labelled and unlabelled data of all the different modalities are explored. The labelled information can be propagated to the unlabelled data during the training process.
Algorithm 1: Optimization for MRRDC.
Input: all image feature matrices I ∈ R^(n×p), all text feature matrices T ∈ R^(n×q), and the corresponding semantic matrix S = [S_l; S_u].
Initialize: V_i, W_j, i = 0, j = 0, and set the parameters λ, μ_1, μ_2, ε_1, ε_2, σ and the maximum iteration number. σ is the step size in the alternating updating process; ε_1 and ε_2 are the convergence thresholds.
Until t > maximum iteration number.
Output: V_i, W_j.
(2) In most cases, GMLDA and GMMFA outperform CCA since GMLDA and GMMFA add category information to their formulation, which makes the common projection subspace more suitable for cross-modal retrieval. (3) Compared with the shallow features, CNN features have great advantages for the I2T task, which is because CNN features can easily obtain the semantic information from original images directly.
To further verify the effectiveness of the proposed MRRDC, we also provide the confusion matrices for single-modal retrieval and the query examples for I2T and T2I in Figures 5 and 6, respectively. Intuitively, from Figure 5, our method achieves high precision in each category, which shows that the projection space is discriminative. We also observe from Figure 6 that, in many categories, our proposed method successfully obtains the best retrieval results for the query samples.
Convergence.
Our objective formulation is solved by an iterative optimization algorithm. In practical applications, a fast retrieval speed is necessary. In Figure 7, we plot the convergence curves of our optimization algorithm, i.e., the objective function values of equations (5) and (6) at each iteration, on the wiki-W and Pascal Sentence datasets, respectively. In this figure, the curves decrease monotonically at each iteration, and the algorithm generally converges within about 20 iterations on these datasets. This fast convergence ensures the high efficiency of our method.
Conclusion
In this paper, we propose an effective semi-supervised cross-modal retrieval approach based on discriminative comapping. Our approach uses different couples of discriminative projection matrices to map different modalities to the common space, where the correlation between different modalities can be maximized for different retrieval tasks. In particular, we use labelled samples to propagate the category information to unlabelled samples, and the original class information is preserved by using linear discriminant analysis. Therefore, the proposed method not only uses the relationship between different retrieval tasks but also keeps the structure information of the different modalities. In the future, we will mine the correlation between different modalities and focus on unsupervised cross-modal retrieval for unlabelled data.
Data Availability
The data supporting this paper are from the reported studies and datasets in the cited references.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Influence of Stress Jump Condition at the Interface Region of a Two-Layer Nanofluid Flow in a Microchannel with EDL Effects
The influence of stress jump conditions on a steady, fully developed two-layer magnetohydrodynamic electro-osmotic nanofluid flow in a microchannel is investigated numerically. One half of the microchannel is filled with a nanofluid, while the other half is occupied by a porous medium saturated with the nanofluid. The Brinkman-extended Darcy equation is used to describe the nanofluid flow in the porous region. Electric double layers are examined in both regions, whereas at the interface, Ochoa-Tapia and Whitaker's stress jump condition is imposed. The non-dimensional velocity, temperature, and nanoparticle volume fraction profiles are examined by varying the physical parameters. Additionally, the Darcy number, as well as the coefficient in the stress jump condition, is investigated for its profound effect on the skin friction and Nusselt number. It is concluded that taking into account the change in shear stress at the interface has a significant impact on such fluid flow problems.
Introduction
Two-layer flow in a microchannel is essential in practical applications like crude-oil extraction, thermal insulation, solidification of castings, and several other geophysical applications. Another example is the design of micro-electromechanical systems (MEMS). In addition, fluid flow properties exhibit unusual behaviors in a microchannel compared to a macro-scale channel. Consequently, it is of significant importance to scientifically study the two-layer microchannel flow, particularly taking into account the possible effect of the EDL. For this reason, many research studies have been conducted on flows through a microchannel, considering the electric double layer effects for Newtonian fluids [1][2][3][4] and non-Newtonian fluids [5][6][7][8][9][10]. However, most of the works mentioned above are concerned with single-layer flow. The flow attributes of immiscible liquids are significant in biochemical and biological investigation processes [11]. A laminar fluid interface is rendered when two or more immiscible liquids stream in microfluidic devices. In most cases, the influences of the fluid interface are noteworthy and cannot be neglected in the investigation of biological sample separation. Research studies that have investigated this include the work of Gao et al. [12], who obtained theoretical and experimental results for the two-fluid electro-osmotic flow in microchannels, although the Maxwell stress balance condition at the interface was not taken into account. Later, Gao et al. [13] modified the interface condition by including the shear stress balance, which results in a jump at the interface arising from the specific surface charge density. Further work [14][15][16] includes the investigation of two-layer microchannel flow along with the electro-osmotic effect, using the shear stress balance interface condition.
Recently, Niazi and Xu [17] used nanofluids to assess the electro-osmotic effect in two-layer microchannel flow. They used Buongiorno's model [18] to construct a mathematical model and obtained analytic solutions for their problem. Mainly, they concluded that the flow behavior was altered dramatically in the presence of Brownian diffusion, thermophoresis diffusion, and viscosity. Tahir et al. [19] used the optimal homotopy approach to analyze the performance of a hybridized two-phase ferromagnetic nanofluid of ferrite nanoparticles and their effects on heat transmission in the flow of the hybrid nanofluid. Based on their investigation, it can be concluded that the thermophysical characteristics and Curie temperature with two or more ferrites suspended in two or more base fluids can be enhanced. An in-depth analysis by Hammad et al. [20] covers the numerous uses of nanofluids, as well as the implications of variables such as nanoparticle type and size, which may open up new prospects for commercial applications.
Porous media are also critical for exploring the applications described above. For example, thickening alloys that do not have a eutectic composition result in the separation of the frozen and liquid portions of the casting. In this instance, the partially frozen regions can be thought of as a porous medium with varying permeability. While porous media have been used for a wide range of commercial and geological purposes, there are opportunities to investigate alternative uses, particularly for energy systems such as compact heat exchangers, heat pipes, electronic cooling, and solar collectors. For certain applications, it is not necessary to entirely fill the system with the porous medium; partial filling is adequate. In comparison to a system that is totally filled with porous media, partial filling reduces the pressure drop. In addition, partial filling prevents contact between the porous material and the surface, reducing heat loss from the porous material to the surface. Such a criterion is necessary in a system where the primary objective is to improve the thermal coupling between the porous medium and the fluid flow, while reducing the system's thermal coupling with the surrounding environment. For instance, the objective of Mohamad's [21] solar air heater was to increase the rate of heat transfer from the porous medium, which is heated by solar radiation, to air, while minimizing heat loss to the ambient environment. In addition, partial filling helps to decrease the pressure drop. Partially filling a channel with porous media drives the flow to exit from the core area to the outer region, depending on the permeability of the medium. This decreases the thickness of the boundary layer and therefore increases the rate of heat transfer. The porous medium also alters the effective thermal conductivity and heat capacity of the flow, and the solid matrix increases the rate of radiative heat transfer in a gas-based system.
Hence, increases in heat transfer occur through three mechanisms: flow redistribution, thermal conductivity adjustment, and modification of the medium's radiative properties. Beavers and Joseph [22] pioneered this type of study by modeling flow in a porous material using Darcy's law. The effects of the interfacial layer on fluid mechanics and heat transmission are discussed in further detail in [23,24]. These articles investigate non-Darcian effects in porous-media flow via the Brinkman-Forchheimer-extended Darcy equation. In [24], the authors presented an exact solution for the flow field at the interface; in their proposed model, the fluid layer lies between a semi-infinite porous body and an impermeable outer boundary. Nield [25] demonstrated that velocity shear is continuous across the porous side of the contact. This is not always the case for solid sections, as the averaged velocity shears do not always coincide. Subsequently, Kuznetsov [26], and Ochoa-Tapia and Whitaker [27,28], developed the strategy for matching the Brinkman-extended Darcy law to Stokes' equations, which requires a discontinuity in the stress while retaining continuity in the fluid flow. They determined that solving at the interface using the Ochoa-Tapia and Whitaker conditions resolves the over-determination problem demonstrated in Nield [25].
We intend to investigate fluid flow in a microchannel half filled with porous media, in light of the practical implications of two-layer fluid flow in a microchannel. In the region with a porous layer, the Brinkman-extended Darcy law is used to mathematically model the fluid flow, whereas Buongiorno's model is used in the other zone. For this problem, we use the stress jump boundary condition at the interface, which had been overlooked in prior research, as well as the effects of the electric double layer (EDL) and magnetic field.
Utilizing the interface stress jump condition, it is possible to correct for the overestimation of the physical parameters involved in the problem. The Darcy number and stress jump condition variations are critical in analyzing heat and mass transport in this two-layer fluid flow problem.
Problem Formulation
We analyze the flow of an electro-osmotic fluid within a microchannel divided into two distinct regions (I and II). The elongated rectangular microchannel is horizontally positioned, with a width W that is adequately greater than its height H (W/H > 4; see Dauenhauer and Majdalani [29]). The length of the microchannel, L, is assumed to be sufficient to prevent end effects from the apertures. H 1 and H 2 , with H 1 + H 2 = H, are the heights of the lower and upper layers, respectively. Based on the aforementioned assumptions, the interface between the immiscible fluids is planar. The parallel flow proposition can also be used to reduce the problem to two dimensions (2D). In Figure 1, the Cartesian coordinate system (x, y, z) is used, with x along the streamwise direction, y parallel to the surfaces and normal to x, and z perpendicular to the parallel plates. The zeta potentials, temperatures, and nanoparticle volume fractions at the lower and upper walls are represented as ζ 1 , T w , C w and ζ 2 , T w , C w , respectively. Region I receives nanofluid containing Al 2 O 3 nanoparticles, whereas Region II has porous media saturated with TiO 2 . Table 1 lists the physical parameters of the fluid and nanoparticles. The Buongiorno model is used to simulate nanofluid flow in Region I. The Brinkman-extended Darcy law is employed to describe the flow of nanofluid in the porous layer region. The steady-state laminar flow is considered to be one-dimensional, driven by the electric field arising from the electric double layer (EDL) and the applied pressure. The governing equations are modeled after the Navier-Stokes equations, with the driving force deriving from the electric and magnetic fields, along with a pressure gradient.
The mathematical models representing the physical phenomena in the two regions are as follows: Region I: Region II: Here, ψ 1 and ψ 2 represent the dimensional electrostatic potentials in the two regions, and Φ 1 and Φ 2 are the viscous dissipation factors in the two regions. The general forms of Φ 1 and Φ 2 are as follows:
1. The direction of the flow is assumed to be along the x-axis.
2. The flow velocity in the z-direction is negligible, since the length of the microchannel L is much larger than its height H; hence w i ≈ 0.
3. The velocity component in the y-direction is considered to be zero, i.e., v i = 0.
4. The flow is assumed to be unidirectional along the x-axis, but its properties change with respect to the z-axis; hence V i = (u i (z), 0, 0).
5. The body force, F i = ρ ei E + J i × B, represents the sum of the electro-osmotic and electromagnetic forces, where E = (E x , E y , 0) is the electric field, B = (0, 0, B 0 ) is the applied magnetic field, and J i is the ion current density.
6. The inertial effects in the porous region of the microchannel (Region II) are negligible.
7. Region I of the channel is filled with nanofluid, while Region II is filled with the porous medium saturated with nanofluid, having uniform permeability.
8. Proceeding from the analysis presented in [26], the stress jump condition is utilized at the interface. Simultaneously, the electric potential, temperature, nanoparticle concentration, and flux at the interface are presumed to be continuous. Finally, the no-slip condition is applied at the velocity boundaries, while the temperature and nanoparticle concentration are assumed to have constant distributions on the boundaries.
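Since the governing equation set is elided here, it may help to recall the generic form of this interface condition. In the Ochoa-Tapia and Whitaker formulation the velocity is continuous while the shear stress jumps; a conventional way of writing this at the interface z = 0 is the following (the grouping of constants in the paper's own equations may differ):

```latex
u_1\big|_{z=0^-}=u_2\big|_{z=0^+},\qquad
\mu_{\mathrm{eff}}\left.\frac{\partial u_2}{\partial z}\right|_{z=0^+}
-\mu_1\left.\frac{\partial u_1}{\partial z}\right|_{z=0^-}
=\frac{\beta\,\mu_1}{\sqrt{\kappa}}\;u\big|_{z=0},
```

with β the adjustable stress jump coefficient and κ the permeability of the porous medium, as in the boundary conditions stated in the text.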
In light of the above assumptions, Equations (1)-(10) now take the form, Region II: Here, α i = k n f i /(ρ i c p i ) f is the thermal diffusivity, with i = 1, 2 representing Region I and Region II, f is the heat capacity ratio between the two regions of the microchannel, and µ eff = µ 2 /ε, where ε is the porosity. The boundary conditions for the above governing equations in the two regions are as follows: when z = −H 1 : when z = 0: when z = H 2 : where β is the adjustable stress jump coefficient and κ is the permeability of the porous medium. The Poisson-Boltzmann equation [29] relates the electrostatic potential ψ i near the surface to the cumulative electrical charge per unit volume ρ ei at any point in the fluid.
When the electrical potential is sufficiently small in comparison to the thermal energy of the ions, the Debye-Huckel linear approximation holds, i.e., |k B T| ≫ |zeψ i (z)|, and Equation (26) is reduced to
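In the standard Debye-Huckel step, the hyperbolic sine of the scaled potential is replaced by its argument. With the conventional symmetric-electrolyte form of the Poisson-Boltzmann equation (the paper's Equation (26) is elided here, and its constants may differ), the linearization reads

```latex
\frac{d^{2}\psi_i}{dz^{2}}
=\frac{2 n_0 z e}{\varepsilon_i}\,
\sinh\!\left(\frac{z e \psi_i}{k_B T}\right)
\;\xrightarrow{\;|ze\psi_i|\,\ll\, k_B T\;}\;
\frac{d^{2}\psi_i}{dz^{2}}
=\frac{2 n_0 z^{2} e^{2}}{\varepsilon_i k_B T}\,\psi_i
\equiv \kappa_D^{2}\,\psi_i ,
```

where ε i is the permittivity and κ D −1 is the Debye length. The linearized equation has solutions that are combinations of e^{±κ D z}, which is what makes an exact solution for the electrostatic potential possible.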
Problem Non-Dimensionalization
To transform the governing equations to dimensionless forms, we introduce the following similarity transformations: Substituting the non-dimensional variables defined in Equation (28), the fluid flow region is changed to [−h 1 , h 2 ], with h 1 = H 1 /H and h 2 = H 2 /H, and the governing equations now take the form: Region II (0 ≤ η ≤ h 2 ): The corresponding boundary conditions reduce to, when η = −h 1 : when η = 0: when η = h 2 : where ζ i = ẑe 0 ζ̂ i /(k B T̂) is the dimensionless zeta potential and k i is the electro-osmotic parameter.
To quantify the differences in physical properties, the following ratios are defined: where the physical parameters of the two regions are related as follows: The required ratios are calculated using the values from Table 1.
Skin Friction Coefficient and Nusselt Number
For the heat and mass transfer analyses we calculated the skin friction coefficient and Nusselt number as follows: where i = 1, 2 denotes Regions I and II, τ wi denotes the shear stress, and q wi denotes the heat flux, which can be calculated using Substituting Equations (28) and (44) into Equation (43), we get where Re i = Hρ i U ai /µ i is the Reynolds number. The relationship between the two regions' Reynolds numbers is defined by where λ ρ = ρ 2 /ρ 1 .
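For reference, since Equations (43) and (44) are elided above, the conventional definitions of these wall quantities take the following form (normalizations may differ from the paper's by constant factors):

```latex
C_{f i}=\frac{\tau_{w i}}{\rho_i U_{a i}^{2}},\qquad
\tau_{w i}=\mu_i\left.\frac{\partial u_i}{\partial z}\right|_{\text{wall}},\qquad
Nu_i=\frac{H\,q_{w i}}{k_{f i}\,\Delta T},\qquad
q_{w i}=-k_{f i}\left.\frac{\partial T_i}{\partial z}\right|_{\text{wall}},
```

which, after non-dimensionalization, leaves C fi proportional to 1/Re i , consistent with the Reynolds number Re i = Hρ i U ai /µ i appearing in the text.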
where F represents the field variables ψ, u, θ, and φ, and N represents the number of grid points. Here, 10 −8 is the predefined tolerance error. To confirm our findings, we replicated those of Niazi and Xu [17] by setting β = 0, Da = 1, and γ = 1. The comparisons of the velocity and temperature profiles are shown in Figure 2, which validates the results of the current problem. In this analysis, the values of the parameters are selected based on the properties of the nanofluid given in Table 1. These values can vary depending upon the values of other parameters, to keep the system stable. For some parameter values, such as β and Se 1 , we have referred to the papers by Kuznetsov [26] and Niazi et al. [17]. Figure 3 illustrates the velocity profiles calculated for various values of β and Da. The interface is located at η = 0. Following the analysis in [26], we chose values of β between −0.8 and +0.8, and a Darcy number of order 10 −1 or less. Figure 3a demonstrates that a change in the interfacial stress can fundamentally alter the velocity profiles. When the stress at the interface increases, the slope of the tangent to the velocity distribution at η = 0 changes dramatically. Gradually increasing the stress jump coefficient β reduces the velocity noticeably; this effect is particularly strong when β is negative. Additionally, when the stress jump coefficient is varied, there is no change in the velocity profile near the upper wall. The Darcy number's effect on the velocity profile is depicted in Figure 3b. The curve computed for Da = 10 −2 contains three segments. One portion lies within the momentum boundary layer adjacent to the boundary at η = −1, while another lies within the momentum boundary layer adjacent to the interface at η = 0. As per the classical Darcy law, the fluid velocity increases in between the two boundary layers but stays unchanged in the porous layer.
Additionally, as it enters the porous layer, the velocity decreases more rapidly in this third section. Similarly, the curves corresponding to Da = 10 −3 and Da = 10 −4 are almost identical; the difference is simply not visible due to the low velocity in the porous layer. There is no point on the Da = 10 −1 curve where the velocity is constant. This is because, as the Darcy number increases, the width of the momentum boundary layers decreases. The influences of the physical ratios on the flow characteristics are displayed in Figure 4. It is observed in Figure 4a that, with an increase in the ratio of electric conductivity (λ ε ), the average velocity increases in Region I, while it decreases in Region II with the porous medium. The fluid in Region I conducts electricity better than that in Region II. Further, in Figure 4b, an increase in the viscosity ratio (λ µ ) decreases the velocity throughout the channel; for a larger viscosity ratio, the velocity is smaller in both Region I and Region II. The reason for this is that when the viscosity ratio λ µ > 1, the fluid viscosity in Region I is greater than that in Region II, resulting in a larger value of the average velocity in Region I.
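The convergence criterion quoted earlier (iterate until max_j |F_j^(new) − F_j^(old)| < 10^−8 for each of the field variables ψ, u, θ, and φ) can be sketched on a model problem. The sketch below applies a Jacobi finite-difference iteration to a single linearized-potential equation ψ'' = k²ψ with Dirichlet wall values; it illustrates the tolerance test only, not the paper's coupled system, and all parameter values are illustrative.

```python
import numpy as np

def solve_linearized_potential(k=5.0, zeta_lo=1.0, zeta_hi=0.5,
                               n=201, tol=1e-8, max_iter=200_000):
    """Jacobi iteration for psi'' = k^2 psi on [-1, 1], Dirichlet BCs.

    Sweeps until max_j |psi_j^(new) - psi_j^(old)| < tol, with the
    predefined tolerance tol = 1e-8, as described in the text.
    """
    z = np.linspace(-1.0, 1.0, n)
    h = z[1] - z[0]
    psi = np.zeros(n)
    psi[0], psi[-1] = zeta_lo, zeta_hi          # wall zeta potentials
    for _ in range(max_iter):
        new = psi.copy()
        # central differences: (psi[j-1] - 2 psi[j] + psi[j+1])/h^2 = k^2 psi[j]
        new[1:-1] = (psi[:-2] + psi[2:]) / (2.0 + (k * h) ** 2)
        if np.max(np.abs(new - psi)) < tol:     # convergence test
            return z, new
        psi = new
    raise RuntimeError("did not converge")
```

The same stopping rule carries over unchanged when the sweep updates several coupled field arrays instead of one.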
The significant influence of the stress jump coefficient and Darcy number on the temperature profile was examined, and is shown in Figure 5. It can be observed that the temperature decreases throughout the channel for larger values of β, as displayed in Figure 5a. This figure demonstrates the significant effect of the stress jump on the non-dimensional temperature profile. A peak in θ(η) is observed for the considerably smaller value β = −0.8, and the curve flattens for very large values of β. This illustrates that an increase in the stress jump coefficient reduces the temperature throughout the channel. An opposite behavior of θ(η) is seen in the case of the Darcy number, as shown in Figure 5b: for lower values of the Darcy number, the temperature profile decreases significantly in the two regions. It is also observed that the temperature profile is more pronounced in Region II than in Region I, which indicates that the heat transfer rate is higher in the porous layer. Figure 6 shows the variation in the temperature profile for distinct values of the Brinkman number and viscosity ratio. An increase in the Brinkman number Br 1 tends to increase the temperature profile, as in Figure 6a. A higher value of Br 1 slows the conduction of the heat produced, and hence the temperature rise is more considerable. The viscosity ratio shows the opposite trend in θ(η) compared to the Brinkman number, as given in Figure 6b. As the value of the viscosity ratio λ µ increases, the temperature profile decreases throughout the channel. Physically, as λ µ increases, so does the amount of molecular conduction in the second region. As a result, the temperature of Region II decreases, which results in a decrease in the temperature throughout the channel, as illustrated in Figure 6b. However, when the viscosity ratio λ µ is small (λ µ ≤ 1), the position of the maximum value of θ(η) shifts towards Region II.
For larger values of λ µ , the position shifts towards Region I. This shift occurs because the fluid interface must satisfy the boundary condition of continuous thermal flux.
Figure 6. Temperature, θ(η), for different values of the physical ratios λ n f and λ µ , when β = 0.05.
Figure 7 illustrates the evolution of the nanoparticle volume fraction φ(η) as the stress jump coefficient (β) and Darcy number (Da) increase. As shown in Figure 7a, the volume fraction of nanoparticles decreases rapidly as β decreases, particularly for negative values of β. Nevertheless, an opposite behavior is seen in the φ(η) profile in the case of the Darcy number, as given in Figure 7b. An increase in the Darcy number causes a significant decrease in the nanoparticle profile. In contrast, smaller values of the Darcy number have a minimal impact on φ(η) and give almost flat curves for Da = 10 −3 and Da = 10 −4 . The impacts of the physical ratios and Brinkman number on the nanoparticle volume fraction are shown in Figure 8. It is observed that the influences of Br 1 and λ µ on φ(η) show opposite trends. Figure 8a shows that φ(η) decreases as the Brinkman number increases, due to the increased fluid viscosity; meanwhile, increasing the value of λ µ accelerates the movement of nanoparticles toward the upper wall, resulting in a decrease in the nanoparticle volume fraction, as illustrated in Figure 8b.
The variations in the skin friction coefficient, C f i , with κ 1 , β, and Da are presented in Figures 9 and 10, respectively. Since it has been observed that β and Da produce a significant effect on the velocity profile (Figure 3), this pattern also holds for the variation of C f i , but the orientation on the upper wall is the inverse of that on the lower wall. When β < 0, the local skin friction (C f 1 ) rises for increasing values of κ 1 , and this increase becomes more evident for larger values of κ 1 . On the other hand, for β ≥ 0, there is a slight increase in skin friction (C f 1 ) for larger values of κ 1 , but for smaller values of κ 1 this increase is negligible. At the upper wall, variation in the β parameter has no effect on the skin friction coefficient (C f 2 ) when the electro-osmotic parameter is changed. Figure 10 illustrates the increase in the skin friction coefficient on the lower wall, and its decrease on the upper wall, as κ 1 increases. This is because the coordinates are set in such a way that, at the interface between the two layers, the signs on the top and bottom walls are reversed. An increase in the Darcy number increases C f 1 significantly, but for smaller values of the Darcy number this increase is not prominent; at the upper wall, the decrease in C f 2 is evident for increasing values of the Darcy number. The influence of β and Da on the Nusselt number (Nu i ), for several values of κ 1 , is shown in Figures 11 and 12, respectively. As can be seen from these figures, increasing the value of κ 1 decreases the Nusselt number on the top wall while increasing it on the bottom wall. Physically, the increase in κ 1 reduces the EDL effect, which enhances the fluid motion. Thus, it causes more heat conduction than heat convection, causing the decrease in the Nusselt number at the upper wall.
On the other hand, for larger values of β and Da, the effect on Nu i is more evident, both at the upper and lower wall. As a result, calculations that do not take the increase in stress into account, may suffer a significant loss of accuracy.
Conclusions
A physical analysis, including the EDL effect, was performed on two-layer nanofluid flow in a microchannel partially filled with a porous medium. The mathematical model for the two-layer nanofluid flow is developed using Buongiorno's model. At the interface, the jump boundary condition proposed by Ochoa-Tapia and Whitaker is used to match the Brinkman-extended Darcy equation to the Stokes equation. The finite difference method is employed to solve and investigate the nonlinear system of differential equations. Exact solutions are obtained for the electrostatic potential and velocity. In contrast to the work of Niazi and Xu [17], we have considered a microchannel partially filled with a porous medium and used the stress jump condition at the interface. Two momentum boundary layers are formed in the porous region, for uniform permeability of the porous medium, as illustrated graphically. One of these boundary layers forms immediately adjacent to the impermeable boundary, while the other forms immediately adjacent to the interface. The fluid velocity is constant between these momentum boundary layers. For this reason, the Darcy number and stress jump coefficient show significant effects on the velocity, temperature, and nanoparticle concentration. In addition, increasing the physical ratio of viscosity decreases the velocity and temperature profiles, while the opposite trend is observed in the nanoparticle volume fraction profile. Additionally, the effects of the Darcy number and stress jump coefficient on the skin friction, and on the Nusselt number at the upper and lower microchannel walls, are evident. As a result, it is concluded that the stress jump boundary condition is critical for solving fluid flow problems in a wide variety of practical applications.
Author Contributions: A.R. conceived the idea and drafted the paper; M.R.u.H. performed the numerical analysis of the data; H.X. contributed to problem supervision; and S.X. contributed to resource allocation. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The data presented in this study are available on request. The data include the MATLAB code and the figures.
Acknowledgments: The authors of this paper would like to express their gratitude to the Huaiyin Institute of Technology, for providing research facilities and encouraging faculty members to engage in research activities along with teaching.
Conflicts of Interest: The authors declare no conflict of interest.
Nomenclature
The following abbreviations are used in this manuscript: C 1 , C 2 volumetric fractions of nanoparticles; C f 1 , C f 2 local skin friction coefficients; D B 1 , D B 2 Brownian diffusion coefficients; D T 1 , D T 2 thermophoretic diffusion coefficients; P pressure, Pa; T 1 , T 2 non-dimensional nanofluid temperatures in two regions, K; V 1 , V 2 non-dimensional velocities of the fluid, m/s; Br 1 , Br 2 Brinkman numbers; B 0 magnetic field in z-direction; Da Darcy number; (c p ) f , (c p ) s fluid and nanoparticle specific heats; e charge of a proton; C 0 reference volume fraction for nanoparticles; C w volume fraction for nanoparticles on the microchannel walls; E s non-dimensional external electric field parameter; E x , E y electric field in x- and y-directions, respectively; F 1 , F 2 body forces caused by uniform electromagnetic field; H channel height; H 1 , H 2 channel heights of the two regions; h 1 , h 2 non-dimensional heights of two regions; Ha 1 , Ha 2 Hartman numbers; k B Boltzmann constant; k f 1 , k f 2 fluid's thermal conductivity in two regions; k n f the ratio of the fluid's thermal conductivities; L microchannel length; n 0 bulk ionic concentration; N B1 , N B2 Brownian motion parameters; N T1 , N T2 thermophoresis parameters; Nu 1 , Nu 2 local Nusselt numbers;
Comparison of kaon and pion valence quark distributions in a statistical model
We have calculated the Bjorken-x dependence of the kaon and pion valence quark distributions in a statistical model. Each meson is described by a Fock state expansion in terms of quarks, antiquarks and gluons. Although Drell-Yan experiments have measured the pion valence quark distributions directly, the kaon valence quark distributions have only been deduced from the measurement of the ratio $\bar{u}_K(x)/\bar{u}_\pi(x)$. We show that, using no free parameters, our model predicts the decrease of this ratio with increasing x.
Introduction
A determination of parton distribution functions (PDFs) in mesons is important as a test of QCD. A Drell-Yan experiment has measured the pion valence quark distributions [1], but the kaon valence quark distributions have only been deduced from the measurement of the ratio ū K (x)/ū π (x) [2]. Theoretical calculations of pion PDFs have used the Dyson-Schwinger equations (DSE), the Nambu-Jona-Lasinio (NJL) model, instantons, constituent quark models, and statistical models. For a recent review see Holt and Roberts [3]. Kaon PDFs have been calculated with the NJL model [4,5], a meson cloud model [6], a valon model [7], and the Dyson-Schwinger equations [8,9].
In section 2 we summarize the statistical model, review our calculation for pion PDFs, and describe our calculation for kaon PDFs. In section 3 we compare our valence quark ratio to experiment and other theoretical calculations.
Statistical model
Zhang and collaborators [10,11,12] have used a simple statistical model to calculate parton distribution functions in the proton. They considered the proton to be an ensemble of quark-gluon Fock states, and used detailed balance between states to determine the state probabilities. A Monte-Carlo program was used to generate momentum distribution functions for each state, from which proton PDFs were determined. Their model, using no free parameters, predicted an integrated asymmetry (d̄ − ū) = 0.124, consistent with the experimental value 0.118 ± 0.012 measured in deep inelastic scattering (DIS) [13]. Their results were also in good agreement with the d̄(x) − ū(x) distributions measured in DIS [14] and Drell-Yan experiments [15,16].
Statistical model for the pion
We have used Zhang et al.'s statistical model to calculate PDFs for the pion that are in good agreement with experiment and other calculations [17]. We assumed a light sea of ūu, d̄d and gluons. The Fock state expansion for the pion is with i the number of ūu pairs, j the number of d̄d pairs and k the number of gluons. The leading term in the expansion, {ijk} = {000}, represents the valence quark state ud̄. The probability of finding the pion in the state {ijk} is denoted ρ ijk . Detailed balance between any two Fock states requires that in which N(A → B) is the transfer rate of state A into state B. Transfer rates between states are assumed to be proportional to the number of partons that can split or recombine. Taking into account the processes q ↔ qg, g ↔ q̄q, and g ↔ gg, This equation, together with the normalization condition (2), determines the ρ ijk . The π + sea is flavor symmetric, since the equation is symmetric in i and j.
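To see how detailed balance alone fixes the state probabilities, consider a stripped-down toy version with a single occupation number (the gluon count k on top of the two valence quarks) rather than the full {ijk} lattice. The rates below are illustrative choices in the spirit of "proportional to the number of partons that can split or recombine", not the paper's actual rates.

```python
from math import fsum

def detailed_balance_chain(rate_up, rate_down, kmax=60):
    """Solve rho_k * rate_up(k) = rho_{k+1} * rate_down(k+1) along a chain
    of states k = 0..kmax, then normalize so the probabilities sum to 1."""
    rho = [1.0]
    for k in range(kmax):
        rho.append(rho[-1] * rate_up(k) / rate_down(k + 1))
    norm = fsum(rho)
    return [r / norm for r in rho]

# Toy rates: any of the (2 + k) partons may emit a gluon (k -> k + 1);
# any of the k gluons may be reabsorbed by one of the remaining
# (k + 1) partons (k -> k - 1).
rho = detailed_balance_chain(lambda k: 2 + k, lambda k: k * (k + 1))
n_bar = fsum((2 + k) * r for k, r in enumerate(rho))  # mean parton number
```

With these particular toy rates the chain solves to a Poisson distribution with one gluon on average, so n̄ = 3. The full model's coupled recursion in i, j, k is handled the same way, state by state, together with the normalization condition.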
The average number of partons in the pion, n̄ π , is given by For the pion we assumed massless partons, and used the Monte Carlo event generator RAMBO [18] to determine the momentum distribution f n (x) for each n-parton state, n = 2 + 2(i + j) + k. The flavor distributions for each Fock state are and for the gluons The PDFs are found by summing these distribution functions over all values of {ijk} and The valence quark distribution function is and the momentum distribution functions satisfy In order to compare our valence quark distributions to experiment, we carried out an evolution in Q 2 . We determined the starting scale of our distributions by requiring that the first and second moments of our valence quark distribution at Q 2 = 4 GeV 2 be equal to those found by Sutton et al. [19]. This gave us a starting scale of Q 2 0 = 1.96 GeV 2 . We used Miyama and Kumano's code BF1 [20] for the evolution. Our results, shown in Fig. 1, are in reasonable agreement with experiment. The non-zero values at x ≈ 1 are contributed by the first term in the Fock state expansion, which consists of two massless partons, for which the momentum distribution is constant.
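RAMBO itself is compact enough to sketch. The minimal reimplementation below (variable names are ours, not the published code's) generates one flat massless n-parton phase-space point and returns light-cone momentum fractions, one common choice of momentum fraction; the paper's exact definition of x may differ.

```python
import math
import random

def rambo_massless(n, sqrt_s=1.0, seed=0):
    """One flat massless n-parton phase-space point (RAMBO algorithm [18]).

    Returns light-cone momentum fractions x_i = (E_i + p_{z,i}) / sqrt_s
    in the overall rest frame; they satisfy sum(x_i) = 1.
    """
    rng = random.Random(seed)
    # Step 1: n isotropic massless four-vectors with E = -log(r1 * r2).
    q = []
    for _ in range(n):
        c = 2.0 * rng.random() - 1.0                      # cos(theta)
        phi = 2.0 * math.pi * rng.random()
        e = -math.log(rng.random() * rng.random())
        s = math.sqrt(1.0 - c * c)
        q.append((e, e * s * math.cos(phi), e * s * math.sin(phi), e * c))
    # Step 2: boost and rescale so the total momentum is (sqrt_s, 0, 0, 0).
    Q = [sum(v[mu] for v in q) for mu in range(4)]
    M = math.sqrt(Q[0] ** 2 - Q[1] ** 2 - Q[2] ** 2 - Q[3] ** 2)
    b = [-Q[k] / M for k in (1, 2, 3)]
    gamma, scale = Q[0] / M, sqrt_s / M
    a = 1.0 / (1.0 + gamma)
    fracs = []
    for v in q:
        bq = b[0] * v[1] + b[1] * v[2] + b[2] * v[3]
        energy = scale * (gamma * v[0] + bq)
        pz = scale * (v[3] + b[2] * v[0] + a * bq * b[2])
        fracs.append((energy + pz) / sqrt_s)
    return fracs
```

Histogramming such fractions over many events, separately for each multiplicity n, gives the f n (x) distributions; for the two-parton valence state the light-cone fraction of an isotropic back-to-back pair is uniformly distributed, consistent with the constant two-parton momentum distribution noted above.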
Statistical model for the kaon
As for the pion, we assume that the kaon has a light sea of ūu, d̄d and gluons. The Fock state expansion for the K + is Again, assuming that transfer rates between states are proportional to the number of partons that can split or recombine, and including the processes q ↔ qg, g ↔ q̄q, and g ↔ gg, We find the average number of partons in the kaon to be n̄ K = 5.23. The K + sea is asymmetric, with ⟨ūu⟩ = 0.424 and ⟨d̄d⟩ = 0.685. As in the case of the proton, the valence u quark provides more pathways for annihilation of the ū quarks in the sea, whereas the d̄ quarks can only annihilate on other sea quarks. Detailed balance then requires an excess of d̄d over ūu.
For the kaon we also used RAMBO to determine momentum distributions for each n-parton state. If all partons are considered to be massless, including the s̄, PDFs for the kaon differ from those of the pion, because n̄ K > n̄ π . The kaon's momentum is shared among more partons than the pion's momentum, so the momentum fraction contributed by its valence quarks, ∫ 0 1 x u K (x) dx = 0.46, (16) is less than the momentum fraction contributed by the pion's valence quarks. This is seen in the valence PDFs shown in Fig. 2. The momentum distributions at x ≈ 1 are determined by the probability of finding the meson in the leading term of its Fock state expansion, the valence state, which is 0.14 for the pion and 0.10 for the kaon.
We also considered the case in which the s̄ was given its current quark mass M = 100 MeV, while all other partons were considered massless. We then have two distributions: f nM (x) for the massive s̄, and f n0 (x) for all the other partons, considered massless. We determined PDFs for the kaon, and compare them to the pion's in Fig. 3. As expected, the u K (x) distribution shifts to lower x, and the s̄ K (x) distribution peaks at higher x. Parton numbers and momentum fractions for the pion and the kaon are shown in Table 1. Table 1: Parton numbers and momentum fractions for the pion and the kaon, calculated at the starting scale Q 2 0 = 1.96 GeV 2 . For the kaon, ⟨x⟩ 0 is the momentum fraction calculated assuming a massless s̄, and ⟨x⟩ M is the momentum fraction calculated for an s̄ mass M = 100 MeV.
Comparison with experimental ratio and other theoretical calculations
Badier et al. [2] determined the valence quark ratio ū K (x)/ū π (x) from Drell-Yan experiments with K − and π − beams incident on a platinum target. By symmetry this ratio is equal to the valence quark ratio u K (x)/u π (x) for the positively charged mesons we have discussed above. We compare our results to experiment in Fig. 4. Evolution has very little effect on the ratios. The massless-parton calculation agrees better with experiment than the calculation that included the s̄ mass. However, as seen in Table 1, the momentum fraction carried by the u K (x) distribution drops from 23% (massless s̄) to 20% (massive s̄), and the momentum distribution shifts to lower x, as seen in Fig. 3. Both effects cause the ratio to increase for x ≤ 0.2 and decrease for larger values of x. In Fig. 5 we compare our results to other calculations, and to experiment. Figure 5: Calculations of the valence quark distribution ratio u K + (x)/u π + (x) compared to experiment [2]. The solid line is our calculation using M = 100 MeV for the s̄ mass. The long-dashed curve is our calculation using massless partons. The short-dashed line is the DSE prediction of Nguyen et al. [8]. The dot-dash curve is the NJL calculation of Davidson and Arriola [5]. The dotted curve is the valon model calculation of Arash [7] for a massive s̄.
The best agreement with experiment is the recent DSE calculation of Nguyen et al. [8], which used a full Bethe-Salpeter amplitude. Other calculations predict an x-dependence similar to ours. Davidson and Arriola, using an NJL model, found the same trend of a decreasing ratio with increasing x, as did the earlier NJL calculation of Shigetani et al. [4]. Arash, using a valon model with equal masses for the u and s̄ valence quarks in the kaon, found reasonable agreement with experiment, but for a massive s̄ the ratio did not agree as well. These models do not fit the experimental values of the ratio in the mid-range of x as well as the DSE calculation. The models predict a wide range of values for the ratio as x → 0. Data for the ratio in the region x < 0.2 are needed to test the models. In their review article, Holt and Roberts [3] have noted that the experimental data for the ratio are 'not of sufficient quality to test and verify our understanding of pion and kaon structure'. It is important to make higher-statistics measurements of both the pion and kaon parton distributions, and to extend the measurements to lower x.
Conclusions
We have used a simple statistical model, developed for the calculation of parton distribution functions in the proton, to calculate the parton distribution functions of the pion and the kaon. We find that the ratio of valence quark distributions, u K (x)/u π (x), shows the expected decrease with increasing x.
Aesthetics and neural network image representations
We analyze the spaces of images encoded by generative neural networks of the BigGAN architecture. We find that generic multiplicative perturbations of neural network parameters away from the photo-realistic point often lead to networks generating images which appear as “artistic renditions” of the corresponding objects. This demonstrates an emergence of aesthetic properties directly from the structure of the photo-realistic visual environment as encoded in its neural network parametrization. Moreover, modifying a deep semantic part of the neural network leads to the appearance of symbolic visual representations. None of the considered networks had any access to images of human-made art.
Introduction
Among the many strands of contemporary Machine Learning, a prominent place is taken by generative neural networks [1][2][3][4]. These neural networks aim to generate new, unseen examples based on a given dataset and thus aim to learn the variability structure of the data. Of particular interest for the present investigation are neural networks which generate photo-realistic images of the natural and human environment. In order to do so, they have to incorporate extensive knowledge about the structure of the visual photo-realistic world. This information is encoded in a nontrivial way in the weights of the neural network layers. The goal of this work is to explore global properties of this neural encoding. To the best of our knowledge, such properties have not been investigated so far.
In the present paper, we show two surprising features of these neural encodings. First, moving away from the photo-realistic world in a generic (multiplicative) manner leads in many cases to the emergence of "artistic rendition" and aesthetic properties as perceived by humans. Second, upsetting a part of deep semantic information leads to the appearance of imagery which can be interpreted as symbolic visual representations. These results may have far-reaching interdisciplinary consequences, touching upon our understanding of the neural basis of aesthetics (neuroaesthetics) [5][6][7][8][9][10] and the theory and philosophy of art; given the similarities between deep convolutional networks and the visual cortex [11][12][13], they may also inspire novel investigations within cognitive neuroscience.
The two main classes of generative neural networks are Generative Adversarial Networks (GAN) 1, which appear in a multitude of variants 2, as well as Generative Autoencoders, e.g. Variational Autoencoders (VAE) 3, Wasserstein Autoencoders (WAE) 4 and many others. At the end of training, these constructions provide a generator network G_θ with a given set of "optimal" weights (i.e. neural network parameters) θ = θ*. The resulting network generates an image given as input a set of latent variables {z_i} and, optionally (depending on the particular construction), the class c to which the generated image should belong. The latent variables {z_i} are usually drawn from some random distribution and encode the variability of the overall space of images. Most of the focus in this domain of Machine Learning research is concentrated on finding the optimal architecture and training procedure so that the generated images best represent the space of images given as training data. In the present paper, however, we pursue an orthogonal line of investigation and study how the generated space of images changes as we move in the space of neural network parameters.
Indeed, a particular generator neural network G_{θ=θ*} with the specific set of optimal weights θ*, obtained through training on the ImageNet dataset 14, may be understood as providing a neural network representation of the space of photo-realistic natural images (including also human-made structures, objects, vehicles, food etc., but no art).
Similarly, we can view each particular choice of parameters θ of the generator neural network G_θ as encoding some specific space of images. We thus have a mapping

θ ⟼ {space of images generated by G_θ}.

The above-mentioned optimal weights θ* get mapped to the space of photo-realistic natural images. The goal of this paper is to investigate in what way the space of images changes as we move away from the photo-realistic point θ = θ* in the space of neural network parameters.
Aesthetics and artistic rendition
For the experiments performed in the present paper we utilized the generator part of the BigGAN-deep-256 network 15 trained on the ImageNet dataset 14. The general structure of the network is shown in Fig. 2. It consists of an entry stage, followed by 13 groups of layers, which also receive shortcut connections directly from the entry stage, followed by the output stage (see the original paper 15 for details). The generator network has around 55 million trainable parameters θ. As the photo-realistic point, we take the weights θ* of the pretrained model 16.
In order to move away from θ*, we employ a multiplicative random perturbation of the neural network parameters

θ = θ* • (1 + α · random), (3)

where • denotes element-wise multiplication and random is of the same shape as θ* and is drawn from a normal distribution with zero mean and unit standard deviation. We take the constant α = 0.35 so that we move noticeably away from the photo-realistic point θ*. The precise value of α does not matter much.
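As a concrete illustration, the multiplicative perturbation described above can be sketched in a few lines of NumPy. The weight dictionary and layer names below are toy stand-ins, not the actual BigGAN-deep-256 variables; in practice the same operation would be applied to each of the ~55 million pretrained parameters.

```python
import numpy as np

def perturb_weights(theta_star, alpha=0.35, seed=0):
    """Multiplicative random perturbation of pretrained weights.

    Each parameter tensor is scaled element-wise by (1 + alpha * eps),
    with eps drawn from a standard normal of the same shape.
    """
    rng = np.random.default_rng(seed)
    return {name: w * (1.0 + alpha * rng.standard_normal(w.shape))
            for name, w in theta_star.items()}

# Toy stand-in for the pretrained weight dictionary theta*.
theta_star = {"conv1": np.ones((3, 3)), "bias1": np.zeros(4)}
theta = perturb_weights(theta_star, alpha=0.35, seed=0)
```

Because the perturbation is multiplicative, parameters that are exactly zero stay zero and small weights receive proportionally small modifications, which matches the remark above about small weights not getting disproportionally large changes.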
It is important to contrast moving in latent space {z_i} for a fixed generative neural network, which is often studied in the Machine Learning literature, with moving in the space of weights θ, which we do in (3). In the former case, each point {z_i} corresponds to a single image generated by the fixed generator network G_{θ*}. Thus, when varying {z_i}, one moves in the given fixed photo-realistic space of images associated to G_{θ*}. In contrast, in the case studied here and given by (3), each point θ is a different generator network G_θ, and thus corresponds to a different visual universe of images which can potentially be generated by G_θ.
In Fig. 3 we show images generated by five networks G_θ with weights θ given by random perturbations of the form (3) with random seeds chosen from the range 0-10 and, for comparison, images produced by the original photo-realistic network G_{θ=θ*}. The images represent a stupa, espresso, dial telephone and seashore. The respective latent noise {z_i} inputs in each row of Fig. 3 were identical for all networks. The two leftmost images in the top row of Fig. 1 were also obtained using (3). A striking feature of the obtained images is that they seem to give an "artistic rendition" of the original photo-realistic objects. The perturbation of the space of parameters θ away from the point θ* clearly breaks the fine-tuning necessary for the photo-realistic rendition of the images by the original network, which is of course not surprising. What is quite unexpected, however, is that this manner of breaking leads to aesthetically pleasing and interesting images, at least for a range of object classes. The deformations of photo-realism are reminiscent of the kind of simplifications that a human artist would employ when painting or making a rough sketch. Indeed, many of the obtained images could arguably be mistaken at first glance for paintings or sketches made by a human. Moreover, in the majority of cases the utilized colour palette and colour transitions appear balanced and aesthetic; they do not strike us as artificial, which would be a natural expectation given the random character of the perturbation (3) of the photo-realistic network parameters θ*. Most probably, the multiplicative character of the perturbation (3) helps in this respect, as small weights do not get disproportionally large modifications.
Another intriguing feature is that quite often one can discern a particular style characteristic of a specific perturbed neural network, which differentiates it from neural networks obtained through other perturbations. This can be seen as a certain visual consistency in the columns of Fig. 3.
Let us also mention some limitations of these results, which have to be kept in mind. Only a subset of ImageNet classes (architectural, some objects, landscapes) behaves equally well under these deformations. Most probably, the other ones require more fine-tuned weights. Indeed, generated images for certain classes exhibit some pathologies even at the photo-realistic point θ*, i.e. for the original pretrained network. Imposing further weight perturbations in these cases may easily "break" the images. Furthermore, we do not claim that every randomized perturbation leads to an aesthetic result. However, quite a lot do (in particular, the examples shown in Fig. 3 were chosen just out of consecutive random seeds 0-10). As a further test of genericity, in the Supplementary Fig. S1 we show examples of a stupa and espresso for a wide range of consecutive random seeds, without any human selection. Let us note that occasionally one encounters visually stunning examples such as those shown in Fig. 1. One might make here an analogy with the wide spectrum of artistic talent in the human population.
Finally, we would like to emphasize that the appearance of aesthetic properties should be interpreted in the context of the immense dimensionality of the space of parameters (∼55 million). In such a high-dimensional space, any two randomly chosen directions of deformation are essentially mutually orthogonal. Therefore, if any qualitative property repeats itself under random sampling even in a subset of cases, it is, in our opinion, a significant observation and the aforementioned property can be considered as being to a large degree generic.
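The near-orthogonality claim is easy to verify numerically: for independent standard-normal directions in d dimensions, the cosine similarity concentrates around zero with spread of order 1/√d. A smaller dimension is used below to keep the check fast; the effect is only stronger at ~55 million dimensions.

```python
import numpy as np

# Two independent random directions in a high-dimensional space are
# almost surely near-orthogonal: their cosine similarity has standard
# deviation roughly 1/sqrt(d).
rng = np.random.default_rng(0)
d = 1_000_000  # stand-in for the ~55-million-dimensional parameter space
u = rng.standard_normal(d)
v = rng.standard_normal(d)
cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(abs(cos))  # on the order of 1/sqrt(d), i.e. about 1e-3 here
```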
What do the above experiments tell us?
Firstly, we may infer that the property of being perceived as aesthetic by a human may be related to the very nature of the photo-realistic world. Indeed, the original network was exposed only to photo-realistic images and did not have any contact with human art. The perturbations of the neural parametrization of the space of images leading to the ones shown in Fig. 3 were generic (multiplicative) random perturbations which were not biased by any further image input or optimization procedure.
Secondly, this property is firmly tied to the neural network representation G_θ of spaces of images of the human visual environment, which, apart from the particular values of the parameters θ, incorporates as a kind of structural prior the specific generator architecture of the BigGAN-deep-256 model.
Thirdly, the observed interplay of aesthetics and neural parametrization ties in with the hypothesis of [5][6][7] that the perception of aesthetics is linked with features of the human visual system in the brain. This may go beyond being just an analogy, as there are already indications that the higher stages of human visual processing are quite well correlated with deeper levels of convolutional neural networks [11][12][13]. We will return to this point in more detail in the Discussion section.
Finally, we believe that the above findings could be of potential interest for the humanities, in particular for the theory and philosophy of art and aesthetics. In this respect, the results of the experiments performed in the present paper could be treated as providing an unexpected piece of evidence of a latent possibility of a biological (non-cultural) origin of some kinds of "artistic renditions".
Differences with other approaches
It is important to contrast the results obtained in the present paper with some other approaches linking artistic renditions and neural networks as superficially they may seem similar.
A very well known construction is the so-called Neural Style Transfer 17, where a given input image is transformed into the style of a second image (the style image), typically an image of a painting or work of art, with the similarity in style measured by a deep neural network pretrained on an image classification task. Alternatively, GANs have also been trained on art (see e.g. 18) to generate new images based on the given artistic styles. These techniques use explicit input of human-made art to produce new images similar in style, which was of course their key goal.
The aim of our investigation was, however, quite different, and the "artistic character" of the images described in the present paper appeared spontaneously as an a priori unexpected byproduct. Our results were obtained using only photo-realistic images of the natural world and the human environment, without any contact with human or machine-made art. They thus provide a realization of the emergence of aesthetic properties directly from a neural network parametrization of a photo-realistic world.
An approach perhaps closest in spirit to ours is the hand-picking of exceptional latent variables {z_i} for the photo-realistic model θ* in order to generate surreal images (see 19 for a discussion). That procedure really exploits the deficiencies of the generative photo-realistic model in order to produce artistically interesting images. The more modern text-to-image generators DALL-E and DALL-E 2 from OpenAI 20 can generate stunning images, especially through paradoxical input text (which can be thought of as an analog of the exceptional latent variables {z_i} mentioned above), and hence receive essential input from a human. Moreover, their training data is much richer and includes, in particular, human art. Our result is conceptually quite different, as we show the essentially generic appearance of aesthetic/"artistic" images in the neighbourhood of the photo-realistic point θ*, without any human intervention (see e.g. also Supplementary Fig. S1) and without any contact with human art.
Visual symbolic representations
Another surprising feature of the generative neural representation of the space of images provided by the BigGAN-deep-256 model is that it allows one to exhibit a certain kind of visual symbolic representation. In order to see that, we first heuristically identify the location of some high-level semantic information in the neural encoding of G_θ.
In contrast to the usual neural networks used for classification, the flow of information in a generative network generally runs from the most semantic/global features (incorporating here the class c given as input) to the low-level pixel-based visual output. We may therefore expect that closer to the input we have more high-level semantic information. In a subsequent experiment, we substitute the parameters of the second block of layers, which we denote by B_2 (see Fig. 2), with values drawn randomly from normal distributions:

θ_L ∼ N(μ_L, σ_L) for each layer L ∈ B_2, (4)

where the parameters μ_L and σ_L of the normal distributions are taken from the statistics of θ* for the corresponding weights of the given layer L ∈ B_2. This can be viewed as upsetting only a deep semantic part of the neural representation of the space of images. We will comment on the specific choice of the second block B_2 further below. Images generated by neural networks G_θ constructed through (4) for five random seeds in the range 0-10 are shown in Fig. 4. At first glance, individually the images may seem haphazard and quite disconnected from the original photo-realistic objects, but viewing them side by side we observe surprising similarities. Indeed, an overall distinctive characteristic of the original object seems preserved, like the round shape of the espresso or the triangular form of the volcano. It is, however, articulated using quite different and varying graphics primitives and materials. A similar phenomenon, but on a slightly more subtle level, occurs also for the stupa in the first row. There, the overall shape morphs either into some quasi-architectural form or into a person-like depiction. The dial telephone in the bottom row is the most extreme. Here one cannot really identify by eye a strong dominant feature, so its visual representations may be difficult to interpret, although one could perhaps put forward some arguments for certain specific networks.
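A minimal sketch of this block-resampling procedure, assuming a simple name-prefix convention for identifying which tensors belong to B_2 (the real BigGAN-deep-256 checkpoint uses its own variable naming):

```python
import numpy as np

def resample_block(theta_star, block_prefix="B2.", seed=0):
    """Replace every weight tensor in one block by i.i.d. normal draws
    whose mean and standard deviation match that tensor's own statistics
    in the pretrained model; all other blocks are left untouched.
    The "B2." prefix convention is illustrative only."""
    rng = np.random.default_rng(seed)
    theta = {}
    for name, w in theta_star.items():
        if name.startswith(block_prefix):
            theta[name] = rng.normal(w.mean(), w.std(), size=w.shape)
        else:
            theta[name] = w.copy()
    return theta

# Toy two-block weight dictionary standing in for the pretrained model.
theta_star = {"B1.conv": np.arange(6.0).reshape(2, 3),
              "B2.conv": np.arange(8.0).reshape(2, 4)}
theta = resample_block(theta_star, "B2.")
```

Only the per-layer first and second moments of θ* survive in the resampled block; all structure within those tensors is scrambled, which is what "upsetting only a deep semantic part" amounts to here.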
The above deformations can thus be understood as inducing a visual symbolic representation, where a dominant strong characteristic of the original object is realized in terms of completely unrelated materials and ingredients. We expect this interpretation to hold under the condition that such a very strong, simple dominant feature exists for the object class in question.
The fact that the dominant visually prominent feature is still present after the modification of the weights in (4) indicates that it must be encoded also in the undeformed parts of the network. From this point of view, the second block of layers B_2 seems to play a privileged role in the neural network representation G_θ, as deforming it does not destroy that feature but rather swaps in varying local visual ingredients while still preserving some sharpness and locally detailed depiction.
Upsetting only the first block B_1 in the same way loses any resemblance to the original objects, while doing the same for subsequent blocks leads first to a loss of the local photo-realistic depiction and sharpness still seen in Fig. 4, and for still further blocks the original object becomes more and more directly recognizable. Consequently, from the point of view of the visual symbolic representations, the block B_2 is essentially singled out.
The fact that the phenomenon is mostly restricted to a subset of the neural network architecture should not be understood as a problem. First, we do not expect that all blocks/layers in a deep neural network play an equivalent role. The differentiation of their roles is in fact a very interesting feature (recall that e.g. the visual system in the brain has a clear non-interchangeable modular structure). Second, the subset is quite sizeable, as the dimensionality of the B_2 parameter space in (4) is still very large, equal to around 8.5 million. Last but not least, we find it extremely intriguing that examples of such visual symbolic representations can indeed be realized in an artificial neural network context.
Discussion
In this paper we studied the global properties of a generative neural network parametrization of spaces of images. We found that essentially generic deviations of the neural network parameters from the photo-realistic point θ* quite often lead to neural networks which generate images that may appear aesthetic to humans. In many cases these images are difficult to distinguish at first glance from images of paintings or sketches made by a human, even though the neural network did not encounter any human-made art.
The above observation shows that aesthetic properties could arise in an emergent way directly from the nature of the photo-realistic visual environment through the character of its neural network representation. What is particularly intriguing about this result arises from its tension with the belief that aesthetic perception is intimately linked to the human observer and appears to us as very subjective. Yet the artificial neural network construction presented in this paper in some sense objectivizes this quality. This opens up numerous questions. What is the interplay between subjectivity and objectivity in aesthetic perception? To what extent and at what level can we draw an analogy between aspects of the neural parametrization and the biological roots of aesthetic perception in the human brain [8][9][10]? In particular, how does this fit with the hypothesis of [5][6][7] that aesthetic perception is related to the human visual system in the brain?
On the one hand, as already mentioned, there is research showing that activations in the human visual cortex as measured by fMRI are quite well correlated with features in a deep convolutional network [11][12][13]. Also, the RGB encoding used in images input to the artificial neural networks already takes into account some very elementary aspects of human color vision.
On the other hand, the cited results [11][12][13] were obtained for discriminative/classification networks and, as far as we know, there is no similar investigation for generative networks. Indeed, in the latter case the information flow goes in the opposite direction, as the generative networks produce images (thus intuitively mimicking visual imagination), while the human brain in the studies [11][12][13] perceives them. Of course, the human brain visual system, with its bidirectional information flow, is certainly quite different in detail from a standard feedforward convolutional neural network. Paradoxically, this may increase the potential relevance of generative neural networks, as they may be considered as modelling the top-down pathway in perception (e.g. along the lines of 21). In addition, one should note that there are marked similarities between perception and visual imagery seen in neuroimaging studies 22 (see also 23 for an extended discussion).
On a higher, more qualitative level, a common feature of the analysis of aesthetic perception motivated by neuroscience in [5][6][7] is its emphasis on the essences of particular concepts, characteristic of the brain seeking constancy in its environment and thus abstracting away transient particularities. From this point of view, a pictorial representation which is closer to the internalized essence is more likely to be perceived as aesthetic. Photo-realistic details, on the other hand, are specific to particular object instances and tend to lower the aesthetic appeal.
In this sense, one can view the randomized perturbations away from the photo-realistic point as dispensing with the fine-tuned particular details, which, due to the uncorrelated nature of the perturbations, would get averaged out. The outcome could thus indeed be interpreted as generating more essence-like depictions. But this is certainly not the whole story, as just performing Gaussian blurring on images does not make them aesthetic or essence-like. The neural network randomized perturbations must therefore act in a more subtle way, and the concrete form of the generator neural network parametrization G_θ somehow manages to capture some finer aspects of human aesthetic perception. Indeed, the emergence of visually appealing forms and colour transitions probably depends crucially on properties of the convolutions appearing in the neural encoding, the overall colour structure of the natural environment and the specific mixing induced by the randomized modification of weights.
In addition, the specific randomized perturbations leave an imprint on the overall style of images generated by a particular deformed network. One could think of this as an analog of inter-subject "artistic" variability.
In this respect, it is interesting to speculate to what extent natural randomness and stochasticity in the nervous system 24,25 could be relevant in the context of the present observations. One could expect that randomness would lead to more robust (essence-like?) concepts. Indeed, in the artificial neural network context it has been shown that adding random noise to neural networks during training and evaluation increases their resilience to adversarial examples 26,27. This type of randomness, however, would be associated with intra-subject (or here intra-network) variability and is not directly represented in the constructions of the present paper.
The second main result of this paper is that randomly scrambling a specific part of the deep semantic structure of the neural network parameters θ* can lead to visual symbolic representations, where a dominant visual feature of a particular object is realized in terms of atypical and nonstandard visual ingredients.
This result is quite intriguing, as symbolic representations are an important component appearing throughout human culture, ranging from a key element of artistic expression (see e.g. 28,29) to the way that psychoanalysis interprets dreams 30,31, with some important psychological concepts manifesting themselves encoded in various proxy objects, persons or scenes. In this context, we should nevertheless emphasize that the type of symbolic representation appearing in the present work is very much simplified, restricted just to some visual characteristics and completely blind to any aspect of cultural meaning, as the original generative neural network's world was just the purely visual environment. Even with these caveats, however, we find it very surprising that an analog of a symbolic representation can arise naturally in an artificial neural network context.
As a side remark, let us note that all the constructions in the present paper involve various kinds of randomized "rewirings" of the connection strengths of the artificial neural network. If one were to look for brain states where randomness is enhanced, then a natural example would be the psychedelic state, where increased neural signal diversity was measured 32, in accordance with the "entropic brain" picture 33,34. Perhaps some analogies could be pursued in this direction.
Finally, we would also like to make a methodological comment. The method of analysis of the neural network encoding used above is in fact akin to the classical practice in neuroscience/neurology of analyzing the cognitive characteristics of patients with various brain lesions as a window on the functioning of the corresponding subsystems of the brain. In the present paper, we essentially induced an artificial lesion in the generator network by substituting the values of a subset of neural network weights with completely random numbers. Subsequently, we examined the resulting neural network output. We expect that this technique may be quite useful for analyzing the structure of deep neural network knowledge representations for very complex models, although here our focus was slightly different, as we emphasized more the qualitatively novel "positive" aspects (the visual symbolic representations) rather than the breakdown of photo-realism.
We believe that the obtained results and the consequent questions could foster new research on the borderline of cognitive neuroscience, (neuro)aesthetics and artificial neural networks. Moreover, we hope that both of the two main results of the present paper will be of potential interest for the humanities, wherein they can be considered as proofs of concept showing the possible roots of some key human phenomena.
Figure 1. Selected images generated by neural networks obtained through various ways of randomized modifications from a BigGAN network generating photo-realistic images (for further examples see https://neuromorphic.art).
Figure 2. Schematic structure of the BigGAN-deep-256 generator network G_θ taking as input a 128-dimensional vector of latent variables {z_i} and one of 1000 ImageNet classes c. The blue blocks are residual blocks with two 1 × 1 and two 3 × 3 convolutions as well as four conditional batch normalization layers which receive shortcut connections from the entry stage. The purple block is a "self-attention" block. The blocks B_2, B_4, B_6, B_8, B_11 and B_13 increase image dimensionality by factors of 2. See 15 for details.
Figure 3. Images generated by neural networks with weights given by (3), realizing various deviations from the photo-realistic point θ = θ*. Each column corresponds to a distinct neural network. None of the networks had access to any human-made art. Far right: corresponding photo-realistic images generated by the original BigGAN-deep-256 network. The inputs to the different networks were identical.
Figure 4. Images generated by neural networks with weights given by (4), upsetting the deep semantic structure of the representation of the space of images. Each column corresponds to a distinct neural network. Most of the images exhibit dominant features of the original object realized in terms of different ingredients (see text). Far right: corresponding photo-realistic images generated by the original BigGAN-deep-256 network. The inputs to the different networks were identical.
"Computer Science",
"Art"
] |
Presence of a cryptic Onchocerca species in black flies of northern California, USA
Black flies (Diptera: Simuliidae) serve as arthropod vectors for various species of Onchocerca (Nematoda: Onchocercidae) that may be associated with disease in humans, domestic animals, and wildlife. The emergence of zoonotic Onchocerca lupi in North America and reports of cervid-associated zoonotic onchocerciasis by Onchocerca jakutensis highlight the need for increased entomological surveillance. In addition, there is mounting evidence that Onchocerca diversity in North America is far greater than previously thought, currently regarded as the Onchocerca cervipedis species complex. This study reports new geographic records and black fly vector associations of an uncharacterized Onchocerca species. To better understand the biodiversity and geographic distribution of Onchocerca, 485 female black flies (2015: 150, 2016: 335) were collected using CO2-baited traps from February to October 2015–2016 in Lake County, northern California, USA. Individual flies were morphologically identified and pooled (≤ 10 individuals) by species, collection date, and trap location. Black fly pools were processed for DNA extraction, followed by PCR and sequencing targeting the NADH dehydrogenase subunit 5 (nd5) gene of filarioids. Among the pools of black flies, there were 158 individuals of Simulium tescorum (2015: 57, 2016: 101), 302 individuals of Simulium vittatum (sensu lato [s.l.]) (2015: 82, 2016: 220), 16 individuals of Simulium clarum "black" phenotype (2015: 5, 2016: 11), and 13 individuals of S. clarum "orange" phenotype (2015: 6, 2016: 7). PCR analysis revealed that the percentage of filarioid-positive pools was 7.50% (n = 3) for S. tescorum, 3.75% (n = 3) for S. vittatum (s.l., likely S. tribulatum), 7.69% (n = 1) for S. clarum "black" phenotype, and no positives for S. clarum "orange" phenotype.
Genetic distance and phylogenetic analyses suggest that the northern California Onchocerca isolates belong to the same species reported in black flies from southern California (average pairwise comparison: 0.32%), and seem closely related to Onchocerca isolates of white-tailed deer from upstate New York (average pairwise comparison: 2.31%). A cryptic Onchocerca species was found in Lake County, California, and may be a part of a larger, continentally distributed species complex rather than a single described species of North America. In addition, there are at least three putative vectors of black flies (S. clarum, S. tescorum, S. vittatum) associated with this cryptic Onchocerca species. A comprehensive reassessment of North American Onchocerca biodiversity, host, and geographic range is necessary.
Background Onchocerca Diesing, 1841, a genus of filarial nematodes, is a globally distributed, vector-borne parasite that infects a wide variety of hosts, including both animals and humans [1]. Well-known species of Onchocerca include Onchocerca volvulus (Leuckart, 1893), the agent of river blindness in humans, and the zoonotic parasite Onchocerca lupi Rodonaja, 1967, the agent of canine ocular onchocerciasis [2]. Onchocerca species are transmitted via blood-sucking dipteran vectors, including black flies (Simuliidae) and biting midges (Ceratopogonidae), to definitive mammalian hosts [1].
Despite the zoonotic potential and possible deleterious impacts to host health of most Onchocerca species, little is known about the clinical and ecological significance of the ungulate parasite Onchocerca cervipedis Wehr and Dikmans, 1935, commonly known as the "foot worm." Described nearly a century ago [3], O. cervipedis has an extensive distribution range from areas of Central America to Canada, and infects a variety of cervids including the white-tailed deer Odocoileus virginianus (Zimmermann, 1780); mule deer Odocoileus hemionus (Rafinesque, 1817); moose Alces americanus Clinton, 1822; elk or wapiti Cervus canadensis Erxleben, 1777; and caribou Rangifer tarandus (Linnaeus, 1758); as well as the antilocaprid pronghorn Antilocapra americana (Ord, 1815) [4][5][6][7][8][9][10][11][12][13][14][15][16]. Onchocerca cervipedis has long been assumed to be the only Onchocerca species to infect these North American ungulates; however, there is mounting evidence that suggests otherwise. Recent studies have shown that Onchocerca isolates from the skin of white-tailed deer from New York [17] were genetically distinct from isolates of moose from northern Canada [15]. In addition, cryptic Onchocerca DNA was discovered in black fly vectors of southern California, and blood analysis supports the notion of a possible Cervidae host [18]. Therefore, all previous reports on Onchocerca across the Americas, including ungulate host and vector associations, require a comprehensive re-evaluation [15,17,18].
In order to shed further light on the cryptic diversity of species within Onchocerca from North America, we molecularly screened putative black fly vectors trapped in Lake County, California, USA, for filarial nematode DNA. We discuss these results in the current context of known cryptic biodiversity and historical biogeography of Onchocerca in North America.
Black fly collection
Lake County, California, was the designated area targeted for black fly collection. Lake County is located in one of the broad valleys of northern California (122°50′W, 39°00′N) and contains the largest freshwater lake entirely within California, Clear Lake [19]. Through coordination with the Lake County Vector Control District, female black flies were caught with CDC-style miniature CO2-baited mosquito traps (John W. Hock Company, Gainesville, FL, USA). Dry ice kept in a cooler served as the source of CO2, and traps were set overnight at various locations around the shores of Clear Lake, weekly or biweekly, between April 2015 and October 2016 (Fig. 1). Once collected, the black flies were morphologically identified to species/species-complex level according to taxonomic keys [20]. Adult S. clarum black flies were recognized by a distinct three-striped scutal pattern and were differentiated by stripe color type. All samples were stored at −80 °C until further analysis.
Molecular screening and sequencing
Individual flies were morphologically identified and pooled (≤ 10 individuals) by species, collection date, and trap location (Table 1; Fig. 1). DNA extraction from pools of black flies was performed manually using the Qiagen DNeasy Blood and Tissue Kit (Qiagen, Valencia, CA, USA). Briefly, black flies were macerated with sterile plastic pestles in an Eppendorf tube and homogenized with ATL buffer and proteinase K. Samples were then incubated in a dry heat block for 45 min at 56 °C, and then centrifuged for 5 min at 8000×g. The remaining protocol steps followed the manufacturer's instructions. DNA lysates were kept refrigerated at −20 °C until further processing.
Potential PCR products were subjected to agarose gel electrophoresis to determine whether an amplicon was present. An E.Z.N.A. Cycle Pure Kit (Omega Bio-tek, Norcross, GA, USA) was used to purify the DNA according to the manufacturer's protocol. Purified products were then directly sequenced with the same primers using the BigDye Terminator Cycle Sequencing Kit.
Phylogenetic analysis
Sequences were aligned and edited using MEGA X software [22]. Phylogenetic trees of the partial nd5 gene (427 bp) were constructed using the maximum likelihood method and the Tamura-Nei model with gamma distribution, with 2000 bootstrap replicates. All Onchocerca nd5 sequences available through GenBank were included. Dirofilaria immitis (Leidy, 1856) and Dirofilaria repens Railliet and Henry, 1911, both within the family Onchocercidae, were used as outgroups.
Taxonomy of simuliid vectors and mammalian hosts for Onchocerca
The taxonomy of black flies and artiodactyl mammalian hosts followed the most recent and comprehensive literature [20,23,24].
All seven generated nd5 sequences were deposited in GenBank (accession numbers: MZ420192-98). Phylogenetic analysis showed strong support that the Lake County Onchocerca isolates from northern California are conspecific with the isolates from Los Angeles in southern California (94% bootstrap support) and likely belong to an uncharacterized species (Fig. 2). In addition, the upstate New York Onchocerca isolates appear to be closely related to both Californian isolates (92% bootstrap support) (Fig. 2). Other Onchocerca isolates or species that have been reported from North American wildlife, namely O. cervipedis sensu Verocai et al. [15] of moose from Canada, and O. lupi reported from companion animals, coyotes, and humans in North America [25][26][27], were not included within this clade.
Pairwise distance data (Table 2) also show strong support for each of the three geographic isolates being closely related to one another. Of the three, the two Californian isolates are most similar to each other, with a pairwise distance averaging 0.32% (0.00-2.54%). The New York Onchocerca isolate had an average pairwise distance of 2.31% (2.12-3.27%) when compared to the Lake County isolates, and 2.34% (2.12-3.27%) when compared to the Los Angeles isolates. In contrast, when Lake County isolates were compared to O. cervipedis sensu Verocai et al. [15] isolates, there was a pairwise distance of 10.04% (9.64-10.64%). These genetic distances are similar to interspecific Onchocerca comparisons, such as with O. lupi (average 11.75%; range 11.24-11.86%), rather than to intraspecific comparisons (Table 2; Fig. 3). The majority of pairwise comparisons fall outside the range of ~2.00-5.00% (Table 2; Fig. 3), which is comparable to other studies contrasting interspecific and intraspecific divergence based on pairwise distances at the partial cox-1 gene of the genus Onchocerca [2]. However, when the New York isolate is compared to either Californian isolate, all pairwise comparisons fall within the range of ~2.00-5.00%. While evidence clearly indicates that all Californian isolates are conspecific (Table 2; Fig. 3), the phylogenetic relationships among the New York and Californian isolates remain ambiguous. Table 3 shows the average and range of percent identity among Lake County Onchocerca isolates and the other isolates shown in Table 2, using BLAST analysis.
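The intra- versus interspecific reasoning above can be sketched numerically. The following is a minimal illustration only, not the analysis pipeline used in the study: it computes uncorrected p-distances on toy alignments (the values in Table 2 derive from the real nd5 sequences) and applies the ~2.00-5.00% ambiguity range discussed in the text.

```python
# Sketch: uncorrected pairwise p-distances between aligned sequence fragments,
# classified against the ~2-5% "gray zone" separating intra- from
# interspecific divergence discussed in the text. The sequences below are
# toy examples, not the deposited MZ420192-98 isolates.

def p_distance(seq_a, seq_b):
    """Fraction of differing sites between two equal-length aligned sequences."""
    assert len(seq_a) == len(seq_b)
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

def classify(distance, lower=0.02, upper=0.05):
    """Label a pairwise distance relative to the 2-5% ambiguity range."""
    if distance < lower:
        return "likely intraspecific"
    if distance > upper:
        return "likely interspecific"
    return "ambiguous (2-5% range)"

# Toy 100-bp alignments: b differs from a at 1 site, c at 10 sites.
a = "A" * 100
b = "T" + "A" * 99          # 1% divergence
c = "T" * 10 + "A" * 90     # 10% divergence

print(classify(p_distance(a, b)))  # likely intraspecific
print(classify(p_distance(a, c)))  # likely interspecific
```

Under this scheme the Californian pair (0.32%) falls clearly in the intraspecific zone, the Lake County versus O. cervipedis comparison (10.04%) in the interspecific zone, and the NY-CA comparisons (~2.3%) in the ambiguous band, mirroring the discussion above.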
Discussion
Our study identified cryptic Onchocerca DNA in three different Simulium species in northern California, USA. We discovered that Onchocerca isolates found in black flies in Lake County, northern California, belong to the same cryptic Onchocerca species previously found in black flies in Los Angeles County, southern California [18]. Corroborating the findings from southern California, Onchocerca DNA was detected in two black fly species: S. vittatum (s.l.) and S. tescorum [18] (Table 1). In addition, a third black fly species, S. clarum of the "black" phenotype, was shown to carry the same cryptic Onchocerca DNA (Table 1).
Phylogenetic analyses of the nd5 gene demonstrate that the cryptic Onchocerca found in southern and northern California black flies (present study; [18]) and the equally cryptic Onchocerca isolate found in New York, northeastern USA [17], represent a single clade with little genetic divergence (Fig. 2). However, a definitive conclusion on whether the Californian isolates are conspecific with the New York isolate cannot yet be drawn (Table 2; Fig. 3). Further studies targeting a multilocus approach could help shed light on the exact phylogenetic relationships and taxonomic status of these geographically distant isolates. This notion is best exemplified by comparing the nd5 gene to the cox-1 gene, which appears to exhibit greater diversity within the cryptic Onchocerca isolates [18]. In addition, at this stage it is not possible to conclude that the cryptic species present in northern California belongs to the originally described O. cervipedis. In the original description of the species by Wehr and Dikmans [3], the authors used specimens from two different locations and at least two different hosts, including Odocoileus virginianus and Odocoileus hemionus from Montana, USA, and O. hemionus from British Columbia, Canada. To further elucidate this taxonomic conundrum, isolates from these hosts and locations should be collected, morphologically re-evaluated, molecularly characterized, and subsequently compared to the many isolates within the Onchocerca complex.
Molecular screening and putative vectors of cryptic Onchocerca isolates
The finding of cryptic Onchocerca DNA through molecular screening of arthropod vectors (i.e., xenomonitoring) provides a straightforward approach to understanding more about parasite biodiversity, geographic distribution, and putative vector associations. Moreover, xenomonitoring of North American parasites allows for concurrent monitoring of other similar Onchocerca species, such as the zoonotic O. lupi, that are of current public health concern [28]. However, despite these advantages, implication of a given arthropod species in the transmission of Onchocerca should be interpreted cautiously until further demonstrated by recovering infective third-stage larvae or parasite DNA from the head of the vector, and/or experimentally. Comparable to Verocai et al. [18], our results showed that the positivity rate for Onchocerca DNA was low in the black fly populations. This is similar to other filarial nematode studies that revealed low prevalence rates of O. lupi in southern California [28], O. volvulus in Africa [29,30], and Wuchereria bancrofti (Cobbold, 1877) in American Samoa and Guinea [31][32][33].
Our study also provided evidence for an additional black fly species as a probable vector of this Onchocerca species. Although three black fly species have been implicated as possible intermediate hosts for this Onchocerca, it should be noted that the CO2 trapping method utilized may impact the abundance and species composition of black flies caught [34]. According to the literature, S. clarum has been reported to feed on a variety of mammals (horses, cattle, rabbits, and humans) and birds [35,36]. The finding of DNA of an Onchocerca species possibly associated with cervid host(s) suggests that these mammals may serve as a blood source for this dipteran, similar to S. tescorum and S. vittatum, as suggested by Verocai et al. [18,20]. However, S. clarum is restricted to the California Central Valley region near the present study site of Lake County [20]. Similarly, S. tescorum has been reported with a limited range, spanning only California and Arizona [20,23]. This means that even if these two vectors are competent hosts for this Onchocerca species, they would only contribute to transmission within their more restricted distribution. In contrast, species within the S. vittatum complex, which includes S. tribulatum, have a widespread distribution across North America, including both California and New York [23].
Definitive hosts of cryptic Onchocerca isolates
While relevant literature suggests that this Onchocerca isolate is associated with cervid hosts [17,18], there is a lack of experimental data to definitively confirm this hypothesis. However, the recent discoveries of at least two or more genetic Onchocerca isolates in North America hypothesized to be associated with at least three cervid hosts (i.e., mule deer, white-tailed deer, and moose) raise many questions regarding Onchocerca-host assemblages. Of these three cervid hosts, only the mule deer's range encompasses southern California, including Los Angeles County [37][38][39]. Thus, it was reasonably hypothesized that the mule deer could be the putative host of the Onchocerca isolate from southern California if the parasite is truly associated with cervid hosts [18].
Fig. 3 The number of base substitutions per site was calculated and the evolutionary divergence estimated between sequences. Each bar represents the total number of pairwise comparisons of the nd5 gene (nucleotide sequence divergence) from 50 different Onchocerca species or isolates. Evolutionary analysis was done using MEGA X and a Tamura-Nei model with gamma distribution. Blue bars indicate supposed intra-isolate comparisons and orange bars indicate supposed inter-isolate comparisons of all Onchocerca species or discovered isolates. Lake County, CA and Los Angeles, CA isolate comparisons have been treated as intraspecific. Gray bars indicate NY-CA isolate comparisons.
Table 3 Average percent identity of Lake County isolates compared to other known Onchocerca isolates, using NCBI BLAST analysis, at the nd5 gene level. Onchocerca isolates are broken down by region (Lake County, CA; Los Angeles, CA; and Ithaca, NY) or by the species they derive from (O. lupi; Onchocerca sp.). Onchocerca lupi was chosen because it is a North American Onchocerca species that is not considered part of the hypothesized Onchocerca cervipedis species complex.
Lake County also falls within the range of the mule deer [37]; however, unlike southern California, Lake County is also home to the Californian tule elk, Cervus elaphus nannodes Merriam, 1905 [40]. This elk subspecies was hunted to near extinction in the late 1800s and now has a thriving population in California. According to the most recent data, about 6000 tule elk populate California, including many herds that live near the Lake County region of northern California where black flies were sampled for the current study [40][41][42]. While no blood meal analysis was completed, it is possible that these cervids could be a blood meal source for black flies and consequently a potential host to the hypothesized O. cervipedis species complex [8]. Ideally, adult worms or microfilariae should be sampled from necropsied elk hosts and molecularly analyzed to confirm their definitive host status. Species within the O. cervipedis complex have been reported from a variety of locations across North America in six ungulate hosts: pronghorn from Idaho [9]; moose from Alaska, Alberta, British Columbia, and the Northwest Territories [12,[14][15][16]43]; elk from Montana [8]; mule deer from Arizona, California, Montana, Utah, Wyoming, and British Columbia [3-5, 7, 8, 10, 18, 44-52]; white-tailed deer from Arizona, Missouri, Montana, New York, Oregon, Pennsylvania, and British Columbia, and also from Costa Rica [3,5,6,8,13,17,46,50,[53][54][55][56][57]; and caribou from Alaska and British Columbia [11,15]. Additional records of Odocoileus from Colorado, Idaho, and Montana were reported as "deer," without species designation [58][59][60][61][62]. Therefore, it can be inferred that sample collection should begin in these reported locations and include all six ungulate hosts when obtaining biological samples.
Recovery of nematodes from necropsy, with subsequent morphological and DNA identification, will confirm parasitic infection of a definitive host and aid in interpreting the distribution of cryptic Onchocerca isolates.
Evolutionary history and ecological considerations of cryptic Onchocerca isolates
Currently, it is hypothesized that the two, and possibly more, known Onchocerca species (i.e., O. cervipedis sensu Verocai et al. [15] and the clade comprising the Californian and New York isolates [17,18]) are the result of independent expansion events from Palearctic ungulate hosts colonizing across the Bering Land Bridge into the Nearctic [63][64][65]. It is currently unknown whether the finding of at least two Onchocerca species reflects a small, incomplete sampling of a larger species diversity or the true representation of diversity in North America. Nevertheless, there is substantial evidence from eastern Asia of prior underestimation of Onchocerca species diversity and richness. For instance, Onchocerca suzukii Yagi, Bain and Shoho, 1994, Onchocerca eberhardi Uni et al., 2007, and Onchocerca takaokai Uni, Fukuda and Bain, 2015, have recently been described from wild ungulates of Japan [66][67][68]. Furthermore, Onchocerca borneensis Uni, Mat Udin and Takaoka, 2020 [69], was described in bearded pigs of Borneo, with additional molecular evidence suggesting that two closely related parasites, Onchocerca dewittei Bain, Ramachandran, Petter and Mak, 1977, and Onchocerca japonica Uni, Bain and Takaoka, 2001, which were considered subspecies of the former, are in fact separate species [69]. Indeed, it is feasible that the North American Onchocerca species complex, about which much is still unknown, could comprise undescribed Onchocerca diversity, similar to the pattern witnessed in Asian suids and ungulates. Moreover, host-parasite biogeography appears to play a critical role in Onchocerca diversification. As noted by Uni et al. [69], O. borneensis and O. dewittei infect Sus barbatus Müller and Sus scrofa vittatus Boie in the Indomalayan region, whereas O. japonica and O. dewittei infect different subspecies of the same host species in the Palearctic and Indomalayan regions.
Thus, when reevaluating Onchocerca in the North American landscape, collecting specimens from sympatric and allopatric host ranges may yield more complete information about parasitic diversity.
Conclusion
A cryptic Onchocerca species was found in Lake County, California, which is likely conspecific with isolates previously characterized from southern California. Putative vectors of this cryptic parasite include S. tescorum and S. vittatum. In addition, S. clarum was identified as a previously unrecognized potential vector. In order to understand the true biodiversity of the genus Onchocerca in North America, a complete continental re-evaluation of definitive hosts, vector associations, and geographic distribution is necessary, integrating classical and molecular methods.
Long-wavelength macromolecular crystallography – First successful native SAD experiment close to the sulfur edge
Phasing of novel macromolecular crystal structures has been a challenge since the start of structural biology. Making use of anomalous diffraction from natively present elements, such as sulfur and phosphorus, for phasing has been possible for some systems, but hindered by the need to access longer X-ray wavelengths in order to make the most of the anomalous scattering contributions of these elements. Presented here are the results from the first successful experimental phasing study of a macromolecular crystal structure at a wavelength close to the sulfur K edge. This has been made possible by the in-vacuum, long-wavelength-optimised experimental setup at the I23 beamline at Diamond Light Source. In these early commissioning experiments only standard data collection and processing procedures have been applied; in particular, no dedicated absorption correction has been used. Nevertheless, the success of the experiment demonstrates that the capability to extract phase information can be improved even further once data collection protocols and data processing have been optimised.
Introduction
Structural biology as a field for understanding biological functions on an atomic level has expanded greatly during the sixty years of having high resolution models of macromolecules available. At the time of writing, the number of deposited structures in the protein data bank is larger than 120,000, of which most (>109,000) are based on data from X-ray crystallographic experiments [1].
The phase problem remains a major challenge in macromolecular X-ray crystallography. The intensities measured in a diffraction experiment contain information only on the amplitudes of the complex structure factors, not on their phases. Without the phase information, the Fourier transformation needed to calculate electron density maps in real space is not possible.
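The loss of phase information can be demonstrated with a toy one-dimensional example. This is a sketch for illustration only, using NumPy's FFT as a stand-in for the crystallographic Fourier transform:

```python
# Sketch of the phase problem in one dimension: the measured intensities
# |F(h)|^2 are identical for two "structures" whose Fourier coefficients
# differ only in phase, so intensities alone cannot fix the density map.
import numpy as np

rng = np.random.default_rng(0)
rho = rng.random(32)              # toy 1-D "electron density"
F = np.fft.fft(rho)               # complex structure factors

# Scramble the phases but keep the amplitudes: same diffraction intensities.
random_phases = np.exp(1j * rng.uniform(0, 2 * np.pi, F.shape))
F_scrambled = np.abs(F) * random_phases

print(np.allclose(np.abs(F) ** 2, np.abs(F_scrambled) ** 2))  # True: intensities identical

rho_wrong = np.fft.ifft(F_scrambled).real
print(np.allclose(rho, rho_wrong))                            # False: the maps disagree
```

The two "datasets" are indistinguishable in intensity yet invert to different densities, which is exactly why experimental or computational phase recovery is required.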
Most macromolecular crystal structures are nowadays solved by molecular replacement. With the growing database of macromolecular models, homologous molecules can be used for calculating initial phase estimations. The key tools for molecular replacement were developed in the 1960s [2] and the method has become predominant for macromolecular phasing today. However, the use of experimental phasing has continued to be needed for novel structures for which no homologous protein models are available or for validation purposes, when the risk of strong phase bias from a molecular replacement solution cannot be excluded.
In the first macromolecular structure determinations, the crystallographic phase problem was solved by the multiple isomorphous replacement (MIR) method. In MIR, electron-rich elements are typically bound to the macromolecule so that changes in the measured diffraction intensities come from these introduced elements, while causing minimal disturbance to the remaining protein structure [3]. With multiple derivatives, an unambiguous estimation of phases can be made. The concept of such difference measurements arising from a subset of atoms within each unit cell has remained the main approach to experimental phasing of macromolecules, albeit in slightly different forms.
Anomalous diffraction was identified early on as a possible way of phasing macromolecules [4] and provides the benefit of not requiring multiple isomorphous crystal structures from several heavy atom derivatives. It was first successfully applied to phase the structure of the small protein crambin, using the anomalous diffraction from the sulfurs intrinsically present in the molecule [5]. This type of single-wavelength experiment became known as the single-wavelength anomalous diffraction (SAD) method. Also, during the 1980s, synchrotron light sources with the option to tune the X-ray wavelength enabled the use of anomalous diffraction at multiple wavelengths to solve the phase problem. This method uses the changes of the anomalous and dispersive contributions to the structure factors around an absorption edge of elements present in the crystal structure. This technique is called the multiple-wavelength anomalous dispersion (MAD) method [6,7].
While MIR requires multiple heavy atom derivatives in isomorphous crystal forms, SAD and MAD can be performed on one crystal and thereby avoid the need for isomorphous crystals. However, the presence of anomalous scatterers is needed. These scatterers can be introduced by soaking, co-crystallisation or biological incorporation of modified amino acids, such as selenomethionine. An alternative is to make use of anomalous scattering from naturally occurring elements. For metalloproteins, absorption edges typically lie within the wavelength range accessible at standard macromolecular crystallography beamlines (λ = 0.9-2.5 Å). However, the edges of sulfur, which is present in the amino acids cysteine and methionine, and phosphorus, which forms part of the RNA or DNA backbone, are at significantly longer wavelengths, at λ = 5.02 Å and 5.78 Å, respectively. Therefore sulfur- and phosphorus-based native SAD has remained inaccessible to many projects due to the very small anomalous signals present at shorter wavelengths. The anomalous signal increases approximately with the cube of the wavelength towards the sulfur and phosphorus K edges. Hence, long-wavelength native SAD experiments offer an opportunity to solve the phase problem directly from crystals without additional labelling. In recent years a combination of improved experimental setups at third-generation synchrotrons has allowed successful native SAD studies of increasingly complicated structures using standard beamlines at wavelengths between 1.8 and 2.3 Å [8][9][10]. Only recently have new instruments started to offer access to even longer wavelengths (2.7-3.3 Å), as at P13, PETRA III [11] and BL-1A, Photon Factory [12].
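The cube-of-wavelength rule quoted above can be put into numbers for the wavelength pair used in this study. This is a back-of-the-envelope sketch, not a rigorous calculation; the ~46-fold predicted gain is in line with the ~45-fold increase in absorption cross section mentioned later in the Results:

```python
# Rough cube-law scaling of the anomalous signal with wavelength, as
# described in the text: moving from 1.38 A to 4.96 A boosts the signal
# by roughly (4.96/1.38)^3.
def anomalous_gain(lambda_long, lambda_short):
    """Approximate relative gain in anomalous signal (cube-of-wavelength rule)."""
    return (lambda_long / lambda_short) ** 3

gain = anomalous_gain(4.96, 1.38)
print(f"~{gain:.0f}x")  # ~46x
```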
During the 1990s, proof-of-principle experiments were performed by H. Stuhrmann to utilise very long wavelengths for maximised anomalous differences. These experiments were performed in chambers with helium or air-gapped sample stages surrounded by vacuum, to minimise background scattering, and showed how some of the challenges of longer-wavelength diffraction setups could be addressed [13][14][15][16]. However, it was not possible to overcome all of them at this stage, and further improvements to the beamline instrumentation were needed [17].
A dedicated macromolecular crystallography beamline for long-wavelength X-ray diffraction experiments, I23, has been built at Diamond Light Source. I23 is designed to minimise background scattering and absorption by performing experiments in an in-vacuum end station, including the detector and sample environment. The semi-cylindrical Pilatus 12M detector covers a large 2θ range of diffraction angles up to ±100°. Cooling of the crystals is realised by conductive links through the multi-axis goniometer in kappa geometry. Samples are transferred through a shuttle-based air-lock system adapted from cryo-electron microscopy [18].
Here we present results from the ongoing commissioning work of this novel beamline at Diamond Light Source. A SAD experiment on a crystal from the protein thaumatin from Thaumatococcus daniellii was performed at a wavelength of 4.96 Å. While studies at similar wavelengths have previously been published [13][14][15][16], we show the first successful phasing experiment at such a long wavelength, only 0.06 Å below the theoretical sulfur K edge.
Crystallisation and sample handling
Thaumatin crystals were prepared as described in [18], changing the potassium/sodium tartrate concentration to 0.7 M. The crystal (approximately 110 × 60 × 60 μm³) used for data collection was harvested using a sample mount laser-cut from 10 μm thick glassy carbon Sigradur® (HTW, Thierhaupten, Germany) and plunge-frozen in liquid nitrogen.
Data collection
A reference dataset was collected at a wavelength of 1.38 Å over a total range of 90°, followed by 400° of data at a wavelength of 4.96 Å. For both datasets, diffraction images of 0.1° with 0.1 s exposure were recorded with the in-vacuum Pilatus 12M detector in a continuous sweep. Datasets were collected with an unfocused beam of 300 × 300 μm² in size, illuminating the whole crystal throughout the data collection. The flux of 1.6 × 10¹¹ and 4.6 × 10¹¹ photons/s, respectively, was determined by a diode positioned after the beam-defining slits. The temperature of the goniometer head was 43 K at the time of data collection, with an estimated temperature rise of 6 K across the thermal interface of the sample holder. Studies to accurately determine the sample temperature are currently being conducted.
Dose estimations were done with RADDOSE-3D [19], using a model of the crystal geometry generated in OpenSCAD [20]. The Bijvoet ratio of thaumatin at the different wavelengths was estimated as in [5].
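As a rough illustration of such a Bijvoet-ratio estimate, a Hendrickson-and-Teeter-style formula can be evaluated with plausible numbers for thaumatin. The atom counts, the f'' value near the sulfur K edge, and Z_eff below are assumptions chosen for illustration, not values taken from the paper; they land near the 8.8% ratio quoted in the Results.

```python
import math

def bijvoet_ratio(n_anom, n_protein_atoms, f_double_prime, z_eff=6.7):
    """Hendrickson/Teeter-style estimate of <|dF|>/<F> for a SAD experiment:
    sqrt(N_A / (2 N_P)) * (2 f''_A / Z_eff)."""
    return math.sqrt(n_anom / (2 * n_protein_atoms)) * (2 * f_double_prime / z_eff)

# Illustrative (assumed) numbers: 17 S atoms in thaumatin, ~1600 non-H
# protein atoms, f'' ~ 4 e- close to the sulfur K edge, Z_eff ~ 6.7.
ratio = bijvoet_ratio(17, 1600, 4.0)
print(f"{100 * ratio:.1f}%")  # ~8.7%, of the order of the 8.8% quoted later
```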
Data processing and phasing
Data were processed with XDS [21]. No further attempts beyond the strict absorption correction model used in the CORRECT step of XDS were undertaken. The anomalous signal (|F(+) − F(−)|/σ) as a function of resolution was calculated with XSCALE [21]. For comparison of anomalous signals between the two different wavelength datasets, the first 90° of data for the λ = 4.96 Å dataset was processed and reported separately. For all further work, the complete 400° data range was used for the λ = 4.96 Å dataset. Substructure determination was performed with SHELXD [22,23], using the λ = 4.96 Å dataset with 10,000 trials searching for 9 sites. Heavy atom sites were refined and used for phasing in SHARP [25], with density modification using DM and SOLOMON [26].
Initial model building
The density-modified map together with the heavy atom sites were used to manually place the cysteine and methionine residues. Polyalanine chains were extended from these positions in Coot [27]. The model and phases were improved by iterating between phenix.phase_and_build [24] and manual model building.
Once roughly half the model had been accounted for, Buccaneer [28] was able to trace the remaining residues, with only minor register and connectivity errors to correct manually.
Refinement
Iterating between manual model building and refinement with phenix.refine [24,29] started from the Buccaneer model for the λ = 4.96 Å dataset and from PDB entry 4zg3 [18] for the λ = 1.38 Å dataset. MolProbity geometry validation [30] was used throughout the refinements. The selection of Rfree reflections was imported from PDB entry 4zg3 for the λ = 1.38 Å dataset, while for the λ = 4.96 Å dataset a randomised selection of 10% of the reflections was used. B-factors were modelled isotropically per atom for the λ = 1.38 Å dataset, while the λ = 4.96 Å dataset was refined with one isotropic B-factor per amino acid and with secondary structure restraints. The model and structure factors of the λ = 4.96 Å dataset have been deposited in the Protein Data Bank under entry name 5TCL.
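The cross-validation bookkeeping described above (a random ~10% free set, with R-factors computed separately over the working and free reflections) can be sketched as follows. The data are synthetic and the function names hypothetical; this is not how phenix.refine is implemented internally:

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """Crystallographic R = sum(||Fo| - |Fc||) / sum(|Fo|)."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

rng = np.random.default_rng(42)
f_obs = rng.uniform(10, 100, size=5000)                 # toy observed amplitudes
f_calc = f_obs * (1 + rng.normal(0, 0.2, size=5000))    # imperfect model amplitudes

# Flag a random ~10% of reflections as the cross-validation (free) set;
# those reflections are never used in refinement, only for R-free.
is_free = rng.random(5000) < 0.10
r_work = r_factor(f_obs[~is_free], f_calc[~is_free])
r_free = r_factor(f_obs[is_free], f_calc[is_free])
print(f"R-work {r_work:.3f}, R-free {r_free:.3f}")
```

Keeping the free set fixed across refinement rounds (here, by importing it from an earlier deposition such as 4zg3) prevents the model from gradually fitting its own validation data.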
Distance comparison between the sulfur positions in the model and the SHELXD sites, used for phasing, was performed with phenix.emma [24].
Electron density map preparation
Phases from the density-modified SHARP output were combined with the structure factor amplitudes from XDS using CAD [31] to generate anomalous difference maps with FFT [32]. This output was used for figure preparation with PyMOL [33]. Map correlation was calculated with get_cc_mtz_mtz in the phenix package [24], comparing maps calculated from the above-mentioned merged file with the 2Fo − Fc map from the λ = 4.96 Å dataset phenix.refine output.
Results and discussion
The λ = 1.38 Å dataset, collected first, acts as a crystal quality indicator, with the crystal diffracting to a resolution of 1.5 Å and a strong asymptotic I/σ(I) (ISa) of 65 (further statistics available in Table 1). The absorbed X-ray dose of 0.2 MGy was calculated by RADDOSE-3D for this first dataset. The λ = 4.96 Å dataset exposed the crystal to a 50 times higher dose of 11.4 MGy, which is still well within the Henderson limit of 20 MGy [34]. The maximum resolution at this very long wavelength is limited to 3.2 Å by the detector geometry, rather than by the sample, and strong spots are seen to the edge of the corner detector panels. The detector geometry, with an aspect ratio of 2:1, limits the data completeness achievable with a single-axis goniometer at the highest resolution, depending on sample orientation and symmetry, which in this case leaves the outer shell with a low completeness of 75%. The ISa of 16 for the λ = 4.96 Å dataset also indicates a reduction of data quality compared to the λ = 1.38 Å dataset. This is also manifested in the R-factors in the same resolution shell (data not shown), which are significantly higher for the λ = 4.96 Å dataset. As the increase of the absorption cross section is around 45-fold when changing wavelength from λ = 1.38 Å to 4.96 Å, standard absorption correction protocols as used in XDS are no longer good enough to accurately correct for the absorption effects from the sample, sample mount and solvent. In fact, the overall transmission through a path length of 60 μm in a protein crystal is less than 20%, so the overall data quality is surprisingly good. Nevertheless, a strong anomalous signal, as predicted from the Bijvoet ratio of 8.8% for thaumatin, is present in the data despite the decreased data quality. This is highlighted in Fig. 1 with the anomalous signal as a function of resolution for the two different wavelengths.
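The geometric resolution limit mentioned above follows directly from Bragg's law. A quick check, a sketch assuming the full ±100° 2θ coverage quoted for the detector, reproduces the 3.2 Å figure:

```python
import math

def d_min(wavelength, two_theta_max_deg):
    """Bragg's law resolution limit: d = lambda / (2 sin(theta_max))."""
    theta = math.radians(two_theta_max_deg / 2)
    return wavelength / (2 * math.sin(theta))

# At lambda = 4.96 A with 2-theta up to 100 deg, the geometric limit
# comes out near the 3.2 A quoted in the text.
print(f"{d_min(4.96, 100):.2f} A")  # 3.24 A
```

The same geometry at λ = 1.38 Å would allow ~0.9 Å, which is why the short-wavelength reference dataset is sample-limited (1.5 Å) rather than detector-limited.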
The sulfur substructure determination with SHELXD for the λ = 4.96 Å dataset was successful, as indicated by the separation of a second population with higher correlation coefficients in Fig. 2, with a success rate of 579 hits in 10,000 trials (5.8%). Due to the relatively low resolution limit of the long-wavelength dataset of d_min = 3.2 Å, disulfide bridges are not resolved and can be considered as super-sulfurs occupying single sites. Hence, 9 sites were found, as thaumatin contains 8 disulfide bridges and one methionine. A comparison of the substructure atom positions with the refined thaumatin sulfur positions shows that the substructure sites are positioned in between the two sulfur positions from the two cysteine residues forming the disulfide bridges (Table 2) and on the methionine. Phasing and density modification as performed with SHARP gave a map that carries several characteristic features of the protein backbone and aromatic residues, as seen in Fig. 3. The anomalous difference Fourier map using these initial phases indicates the positions of all the sulfurs in thaumatin (Fig. 4). The experimental map connectivity and side chain features are generally good. The map correlation coefficient between the initial map based on the experimental phases and the final map after refinement is 0.725. This experimental electron density map at 3.2 Å resolution allowed a first protein model to be placed for manual building and subsequent model completion and refinement. In this first proof-of-principle experiment there has not been much optimisation of density modification parameters, or of the absorption correction during post-processing. Even without these optimisations, the results show that datasets can be collected at such long wavelengths and be successfully used for substructure determination, phasing and refinement.
Table 1 Data processing statistics from XDS and XSCALE with Friedel mates treated as separate reflections. The datasets were collected on the same crystal, with the shorter wavelength collected first. # Calculated as in [5]. § Accumulated dose as calculated by RADDOSE-3D for the whole crystal, which fit inside the beam for all rotations.
Fig. 1. The anomalous signal, as reported by XSCALE, plotted as a function of resolution for the two datasets. Because the λ = 4.96 Å dataset covers a larger rotation range than the λ = 1.38 Å dataset, data for both 90° and 400° are shown. The lower anomalous signal for the 90° section of the λ = 4.96 Å dataset at higher resolution is an effect of the lowered completeness due to the detector geometry.
Fig. 2. 10,000 substructure solution attempts, searching for 9 sites, with SHELXD, plotted as CCweak vs CCall for the λ = 4.96 Å dataset.
Model refinement against the λ = 1.38 Å dataset gave R-factors of 17/19% (R/Rfree) with mostly favourable geometry, as seen in Table 3. For the λ = 4.96 Å dataset, no water molecules were built, nor any split conformations, except for one tyrosine side chain that otherwise caused strong positive difference map peaks. The reduced resolution of the long-wavelength dataset resulted in a lower number of reflections to refine against, and only one B-factor per amino acid was refined. This resulted in refinement R-factors of 20/25% (R/Rfree).
Conclusions
This study shows that the I23 in-vacuum experimental setup enables crystallographic phasing of macromolecules at a wavelength close to the sulfur K edge. This first successful structure determination was performed with standard crystallographic software packages, without specific adaptation to the unconventional experimental conditions other than the detector geometry. This opens the door to harnessing the increased phasing power at these wavelengths by optimising data collection and data processing protocols. Dedicated absorption correction models will yield major improvements, while optimised data collection strategies, such as making use of multi-axis goniometry, low-dose high-multiplicity data collection and inverse-beam datasets, will help to plan phasing experiments adequately. Altogether this promises much improved data quality and novel experiments that previously were outside the reach of any other experimental setup.
Validation of stratospheric water vapour measurements from the airborne microwave radiometer AMSOS
We present the validation of a water vapour dataset obtained by the Airborne Microwave Stratospheric Observing System AMSOS, a passive microwave radiometer operating at 183 GHz. Vertical profiles are retrieved from the spectra by an optimal estimation method. The useful vertical range extends from the upper troposphere up to the mesosphere, with an altitude resolution of 8 to 16 km and a horizontal resolution of about 57 km. Flight campaigns were performed once a year from 1998 to 2006, measuring the latitudinal distribution of water vapour from the tropics to the polar regions. The obtained profiles clearly show the main features of stratospheric water vapour in all latitudinal regions. The data are validated against a set of instruments comprising satellite, ground-based, airborne remote sensing and in-situ instruments. It appears that AMSOS profiles have a dry bias of 0 to -20% when compared to satellite experiments. A comparison between AMSOS and the in-situ hygrosondes FISH and FLASH has also been performed. A matching in the short overlap region in the upper troposphere between the lidar measurements of the DIAL instrument and the AMSOS dataset allowed water vapour profiling from the middle troposphere up to the mesosphere.
Introduction
Water vapour is important for our environment and climate. It is a key element in the radiative budget of the earth's atmosphere and is the largest contributor to the greenhouse effect, due to its strong absorption in the troposphere. In the stratosphere, water vapour is a source for the formation of polar stratospheric clouds and of the OH radical, and thus it is involved in the process of ozone depletion. In the mesosphere it is destroyed by photolysis. As a long-lived and variable trace gas it provides the possibility to study atmospheric motion. The importance of knowledge about this key parameter is evident (SPARC, 2000) (WMO, 2007).
A very common technique to measure water vapour is passive remote sensing in the infrared or microwave regions by satellite, aircraft or ground-based instruments. Other techniques use in-situ sensors such as FISH (Zöger et al., 1999) and FLASH (Sitnikov et al., 2007) or Frost-Point-Hygrometers (Vömel et al., 2007) from balloon or aircraft, or active remote sensing with differential absorption lidar (Ehret et al., 1999). Satellite observations have been made by UARS/HALOE (Russell III et al., 1993), UARS/MLS (Lahoz et al., 1996), ERBS/SAGE-II (Chiou et al., 1997), SPOT4/POAM-III (Lucke et al., 1999), AURA/MLS (Schoeberl et al., 2006), ENVISAT/MIPAS (Fischer et al., 2008) and Odin/SMR (Urban et al., 2007) over the last two decades and deliver an excellent three-dimensional global coverage of the water vapour distribution. From aircraft, a two-dimensional section of the water vapour distribution in the atmosphere along the flight track is obtained. Because stratospheric water vapour has a latitudinal dependence, the main distribution patterns can be measured by a flight from northern latitudes to the tropics. Ground-based instruments (Deuber et al., 2004) (Nedoluha et al., 1995) or balloon soundings determine the one-dimensional distribution on a continuous time basis and are thus very interesting for local trend analyses. With the Airborne Microwave Stratospheric Observing System (AMSOS), carried by a Learjet-35A of the Swiss Airforce, we measured the latitudinal distribution of water vapour from the tropics to the North Pole during one week per year from 1998 to 2006. A former version of the instrument had been flown from 1994 to 1996 (Peter, 1998). The instrument was flown in spring or autumn, during dynamically active stratospheric periods associated with the transition between polar night and polar day. Measurements inside the polar vortex as well as in the tropics, including one overflight of the equator, were accomplished.
The dataset overlaps several satellite experiments in time. In a previous work (Feist et al., 2007) this dataset was compared to the ECMWF model.
In this paper we first present the AMSOS retrieval characteristics of the version 2.0 data and the 9-year AMSOS water vapour climatology of the northern hemisphere. Secondly, we present the validation of the data, which has been performed against already validated datasets from satellite experiments (Harries et al., 1996), (Rind et al., 1993), (Nedoluha et al., 2002), (Milz et al., 2005), (Raspollini et al., 2006) and the ground-based station MIAWARA (Deuber et al., 2005) for the whole profile range, as well as against in-situ and lidar measurements in the Upper Troposphere-Lower Stratosphere (UTLS) region. The advantage of this dataset is its coverage of all latitudes from -10° to 90° North, from the UTLS region up to the mesosphere, for early spring and autumn periods, with a good horizontal resolution of 57 km. The data are useful for studies of atmospheric processes and for validation. The profiles are available for download at http://www.iapmw.unibe.ch/research/projects/AMSOS.
AMSOS water vapour measurement and retrieval
2.1 Measurement method

AMSOS measures the rotational emission line of water vapour at 183.3±0.5 GHz (Vasic et al., 2005) by up-looking passive microwave radiometry (Janssen, 1993). Performing observations at this frequency depends on atmospheric opacity. Figure 1 shows a set of spectra measured at different flight altitudes during the ascent of the aircraft over the tropics. Under humid conditions, as encountered in the tropics, the line is saturated up to an altitude of more than 9 km. In polar regions, on the other hand, it is possible to make good-quality measurements at flight levels down to approximately 4 km. Under very dry conditions in the winter months it is even possible to retrieve stratospheric water vapour from the alpine research station Jungfraujoch in Switzerland, at 3.5 km altitude, for about seven percent of the time (Siegenthaler et al., 2001).
The AMSOS instrument was flown with a broadband Acousto-Optical Spectrometer (AOS) of 1 GHz bandwidth during all missions, a broadband digital FFT spectrometer with the same bandwidth, and, in 2005 and 2006 only, a narrowband digital FFT spectrometer with a bandwidth of 25 MHz. In this work only profiles from the AOS are presented.
AMSOS profile retrieval setup, characteristic and error analysis
For the retrieval of water vapour profiles from the measured spectra we need knowledge of the relationship between the atmospheric state x and the measured signal y. This is described by the forward model function F:

y = F(x, b) + ε,  (1)

To find a solution for the inverse problem, we use the optimal estimation method (OEM) according to Rodgers (Rodgers, 2000). The forward model is split into a radiative transfer part F_r, calculated by the Atmospheric Radiative Transfer Simulator (ARTS), and a sensor modelling part F_s, which is handled by the software package Qpack. The implementation of the retrieval algorithm is also done with Qpack. Forward model parameters b include instrument properties influencing the measurement, namely the antenna beam pattern, sideband filtering, observation angle, attenuation due to the aircraft window and standing waves, as well as atmospheric parameters like pressure and temperature profiles, other species, spectral parameters and line shape. ε is the measurement noise.

Fig. 2. Characterisation of the AMSOS retrieval with the averaging kernel functions. The flight altitude is marked with the black dash-dotted line. The gray dashed lines mark the independent layers. To make the width of the averaging kernel functions directly visible, two functions are plotted as thick lines (a). The vertical resolution is between 8-16 km and increases with altitude (b). AMSOS profiles for an altitude range between 15 and 60 km can be retrieved from the AOS spectrometer, as seen in the measurement response (c). The total error (total = smoothing + observation) is less than 20% for the useful part of the profile (d).
Inverse problems are often ill-posed; a best estimate x̂ of the real state is obtained by minimising the so-called cost function

J(x) = [y - F(x, b)]^T S_y^{-1} [y - F(x, b)] + (x - x_a)^T S_x^{-1} (x - x_a),  (2)

with the help of a priori information x_a of the retrieval quantity x, the measured spectrum y and their covariance matrices S_x and S_y. The best estimate x̂ is found by an iterative process with the Marquardt-Levenberg approach:
x_{i+1} = x_i + (K_i^T S_y^{-1} K_i + S_x^{-1} + γ D)^{-1} [K_i^T S_y^{-1} (y - F(x_i, b)) - S_x^{-1} (x_i - x_a)],  (3)

where K_i = ∂F(x_i, b)/∂x_i is the Jacobian, γ a trade-off parameter and D a diagonal scaling matrix.
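A minimal numerical sketch of this Marquardt-Levenberg iteration, assuming the standard Rodgers (2000) form with precomputed inverse covariances (illustrative only, not the Qpack implementation):

```python
import numpy as np

def ml_oem_step(x_i, y, F, K, x_a, Sx_inv, Sy_inv, gamma, D):
    """One Marquardt-Levenberg step of the OEM iteration (Rodgers, 2000)."""
    lhs = K.T @ Sy_inv @ K + Sx_inv + gamma * D
    rhs = K.T @ Sy_inv @ (y - F(x_i)) - Sx_inv @ (x_i - x_a)
    return x_i + np.linalg.solve(lhs, rhs)

# Demo on a linear forward model F(x) = K x, where a single Gauss-Newton
# step (gamma = 0) reaches the optimal-estimation solution exactly.
rng = np.random.default_rng(1)
K = rng.normal(size=(5, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = K @ x_true
x_a = np.zeros(3)
Sx_inv, Sy_inv, D = np.eye(3), 10.0 * np.eye(5), np.eye(3)
x1 = ml_oem_step(x_a, y, lambda x: K @ x, K, x_a, Sx_inv, Sy_inv, 0.0, D)
```

For the real, nonlinear radiative transfer problem, γ is raised or lowered between iterations depending on whether the cost function decreased, and the Jacobian K_i is recomputed at each step.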
The character of a retrieval is derived from the averaging kernel matrix A = A(K, S_x, S_y). A = ∂x̂/∂x describes the sensitivity of the retrieved profile to the true state. A also provides the measurement response, a measure of how much the retrieved profile depends on the measurement and how much on the a priori profile, obtained by integrating over the rows of A. Another term often used in this context is the a priori contribution; the sum of measurement response and a priori contribution is 1. The full width at half maximum of each averaging kernel function, represented by a row of the matrix A, provides the vertical resolution. The averaging kernel matrix A, the measurement response and the vertical resolution of a retrieval from AMSOS are shown in Fig. 2a-c. Between approximately the flight altitude and 60 km the measurement response is more than 80%. The vertical resolution ranges between 8-16 km, increasing with altitude. The trace of A is an indicator of the number of independent points in the profile and is between 4-6 for AMSOS. These independent layers are marked by the dashed gray lines in Fig. 2. There is one layer in the troposphere and one in the mesosphere; the remaining four are in the stratosphere. Qpack takes into consideration the model uncertainties, the measurement error and the error due to the a priori profile, called the smoothing error. The total error (total = observation + smoothing) is of the order of 10-15% for the altitudes with a priori contribution less than 20% (Fig. 2d). The observation error is due to the remaining thermal noise on the spectrum, and the smoothing error is due to the covariance of the a priori information. The smoothing error is almost double the observation error.
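The diagnostics described here (measurement response as the row sums of A, vertical resolution as the full width at half maximum of each kernel, number of independent layers via the trace) can be sketched as follows; the helper and the toy Gaussian kernel matrix are assumptions for illustration, not Qpack output:

```python
import numpy as np

def kernel_diagnostics(A, z):
    """Measurement response (row sums), degrees of freedom (trace) and
    vertical resolution (FWHM of each averaging kernel row, on grid z)."""
    response = A.sum(axis=1)
    dof = np.trace(A)
    fwhm = []
    for row in A:
        above = np.where(row >= row.max() / 2.0)[0]
        fwhm.append(z[above[-1]] - z[above[0]] if len(above) > 1 else np.nan)
    return response, dof, np.array(fwhm)

# Toy kernel matrix: Gaussian rows (sigma = 5 km), scaled to a response of 0.9
z = np.arange(0.0, 80.0, 2.0)            # altitude grid in km
A = np.array([np.exp(-0.5 * ((z - zc) / 5.0) ** 2) for zc in z])
A *= 0.9 / A.sum(axis=1, keepdims=True)
response, dof, fwhm = kernel_diagnostics(A, z)
```

For such a smoothing kernel the trace comes out at a handful of degrees of freedom for dozens of grid levels, mirroring the 4-6 independent layers quoted for AMSOS.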
2.3 Water vapour a priori information, covariance matrix and model parameters

An important issue for processing our AMSOS dataset was the selection of an appropriate a priori water vapour profile to constrain the retrieval algorithm to a reasonable solution. We chose to use a global mean of monthly means of the ERA40 climatology from ECMWF, from the ground up to 45 km. H2O vmr profiles from ERA40 were derived from the specific humidity field according to standard conversion equations. To build the mean profile we introduced a latitudinal weight to avoid overweighting polar profiles, since the number of ECMWF grid points per latitude is constant. From the statistics of these 425,000 profiles we set up the covariance matrix S_x, as shown in Fig. 3a. The standard deviation (Fig. 3b) in the stratosphere is lower than 10%, and in the troposphere it rises up to 80%. This change is directly visible in the diagonal elements of the covariance matrix S_x. For altitudes above the ERA40 grid we used the US Standard Atmosphere (US Committee on Extension to the Standard Atmosphere, 1976) as a priori information. The changeover is made at the intersection point of the ERA40 profile and the US Standard profile, at about 45 km altitude. From an earlier study (Feist et al., 2007) we know that ERA40 values at the top of the stratosphere diverge from observations. For the temperature and pressure profiles we used data from ECMWF, continued by CIRA86 (Rees et al., 1990) for the altitude levels above the top of the ECMWF atmosphere. Spectral parameters are taken from the HITRAN96 (Rothman et al., 1998) molecular spectroscopy database.
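The standard conversion from specific humidity to volume mixing ratio mentioned here amounts to scaling the water mass mixing ratio by the ratio of molar masses; a minimal sketch (function name is our own):

```python
def specific_humidity_to_vmr(q):
    """Convert specific humidity q (kg water / kg moist air) to the H2O
    volume mixing ratio: vmr = (q / (1 - q)) * M_dry / M_h2o."""
    M_DRY = 28.9644    # g/mol, dry air
    M_H2O = 18.01528   # g/mol, water
    return (q / (1.0 - q)) * (M_DRY / M_H2O)

# e.g. a stratospheric specific humidity of 5e-6 kg/kg is roughly 8 ppm vmr
vmr = specific_humidity_to_vmr(5e-6)
```

In the stratosphere q is tiny, so the (1 - q) term is nearly 1 and the conversion is effectively a constant factor of about 1.61.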
Additionally, a baseline, which originates from a standing wave between the mixer and the aircraft window and results in a sinusoidal modulation of the spectrum with a frequency of 75 MHz, is retrieved, together with a constant offset of the spectrum.
Spectra pre-integration
To reduce thermal noise to approximately 1% before retrieving a profile, we had to pre-integrate several spectra. It is important to integrate spectra that were measured under similar conditions. The most critical parameters that can change quickly during flight are the flight altitude and the instrument's elevation angle. The elevation angle depends on the aircraft's roll angle as well as on the position of the instrument's mirror. Only spectra with a maximum roll angle difference of ±0.1°, a mirror elevation difference of ±0.1° and a flight altitude within ±100 m were integrated. To avoid integration of spectra over too large a distance, spectra were only considered if they were measured within 10 min. This selection finally determines the horizontal resolution along the track of 57 km ± 30 km of the AMSOS dataset. The remaining noise that overlays the spectrum determines the diagonal elements of the covariance matrix of the measurement error S_y.

Fig. 4. The AMSOS dataset. Each plot (a)-(f) is devoted to one of the AMSOS missions 1-5 and 9, from Western Africa to the North Pole in the seasons spring and autumn, and contains a graph with the measured vertical water vapour distribution plotted versus latitude. Graphs (g) and (h) are both for mission 8, from Europe to Australia, plotted once versus latitude and once versus longitude. Only data with a measurement response larger than 50% have been included. Gaps are due to bad quality caused by instrumental problems or due to measurements of ozone at 176 GHz. Profiles are averaged to 1° in latitude or 1° in longitude, respectively.
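The selection criteria above can be sketched as a greedy grouping of consecutive spectra; the record layout below is hypothetical, since the actual file format is not described in the text:

```python
import numpy as np

def preintegrate(spectra, meta, max_roll=0.1, max_elev=0.1,
                 max_alt=100.0, max_dt=600.0):
    """Greedily group consecutive spectra measured under similar conditions,
    then average each group (thermal noise drops roughly as 1/sqrt(N))."""
    groups, start = [], 0
    for i in range(1, len(meta) + 1):
        close = i < len(meta) and (
            abs(meta[i]["roll"] - meta[start]["roll"]) <= max_roll
            and abs(meta[i]["elev"] - meta[start]["elev"]) <= max_elev
            and abs(meta[i]["alt"] - meta[start]["alt"]) <= max_alt
            and meta[i]["t"] - meta[start]["t"] <= max_dt
        )
        if not close:
            groups.append(np.mean(spectra[start:i], axis=0))
            start = i
    return groups

# Hypothetical records: time (s), roll/mirror elevation (deg), altitude (m).
meta = [{"t": 60.0 * k, "roll": 0.0, "elev": 0.0,
         "alt": 10000.0 if k < 2 else 10500.0} for k in range(4)]
spectra = np.ones((4, 8))
groups = preintegrate(spectra, meta)
```

Here the altitude change of 500 m breaks the sequence into two integration groups; each group is averaged into one spectrum that enters the retrieval.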
AMSOS campaigns and dataset
The AMSOS dataset presented here contains 4100 profiles, shown in Fig. 4, from flight campaigns (missions) between 1998 and 2006, as summarised in Table 1 and Fig. 5. The instrument often participated as part of international campaigns during this period. In most cases the flight route was planned to cover as many latitudes as possible between the equator and the North Pole. The AMSOS flight track is indicated in blue for each mission in Fig. 5, where every plot is dedicated to one campaign. During participation in the SCOUT-O3 Darwin campaign in 2005 our track was in east-west direction, including an overpass of the equator (see Fig. 5f).
Every flight mission, presented in Fig. 4 by altitude-latitude plots, shows a very dry stratosphere with no more than 4 ppm volume mixing ratio over the tropics up to 40 km. In the mid-latitudes and polar region, values of 5 ppm are reached down to 20 km. In the upper stratosphere, the increase of water vapour generated by methane oxidation resulted in measurements of up to 7 ppm at 50 km. Above this height we observed decreasing water vapour, induced by photolysis. In the November and February/March missions, numbered 2, 3, 4 and 9 (see Fig. 4b, c, d and f), the water vapour maximum subsided to a level of 35 km above the Arctic. In the tropics the tropopause extends to higher altitudes than in Arctic regions. This effect can also be seen in the water vapour distribution: for example, in Fig. 4e the high values at the bottom of the plot, in red, extend to 17 km in the tropics and 13 km in the Arctic. For additional dynamical discussion see (Feist et al., 2007).
Comparison technique
When comparing data from two remote sensing instruments, their vertical resolutions have to be considered. Let us assume the instrument to compare with has a higher resolution. Applying the averaging kernels A of AMSOS according to Eq. (4) reduces its vertical resolution to that of the lower-resolved profile and smooths out fine structures.
x_LR = x_a + A (x_HR - x_a),  (4)

where x_HR is the high-resolution profile from the comparative instrument, x_LR its equivalent reduced-resolution profile and x_a the a priori profile of AMSOS. This is a technique already used for comparisons between low- and high-resolution remote sounders. In our case a small modification was necessary, due to the character of the water vapour profile and our ability to measure in the upper troposphere. Below the hygropause, water vapour increases exponentially. The term (x_HR - x_a) in Eq. (4) can become very large for different hygropause levels in x_HR and x_a. An averaging kernel function corresponding to a certain altitude level is small but not necessarily zero below the hygropause level, as shown in Fig. 6, and can consequently contribute significantly to the values of the smoothed profile in the upper stratosphere. Since our a priori profile is global and the altitude level of the hygropause changes with latitude, this effect is encountered quite often. To get rid of it, we apply the averaging kernels from the hygropause level upwards, and below it we take the direct difference of the profiles, with x̂ the retrieved AMSOS profile and x_LR, x_HR the reduced-resolution and original instrument profiles, respectively. This approach is used to compare the AMSOS measurements with the higher-resolved limb sounding profiles from satellite observations. In the case of the comparison to the microwave ground station MIAWARA we do not have to apply the averaging kernels, because the two instruments already have similar vertical resolutions. For the comparisons to the in-situ instruments FISH and FLASH we picked the AMSOS value at the corresponding altitude level. Concerning the differential absorption lidar, we plotted the independent AMSOS and DIAL profiles in the overlap region in the UTLS.
Independent in the sense that the DIAL profile was not used as a priori information in the AMSOS retrieval to provide better knowledge of the tropospheric water vapour distribution, as was done in (Gerber et al., 2004).
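Equation (4) and the hygropause-split difference described above can be sketched as follows (variable names are our own):

```python
import numpy as np

def smooth_to_amsos(x_hr, x_a, A):
    """Eq. (4): x_LR = x_a + A (x_HR - x_a)."""
    return x_a + A @ (x_hr - x_a)

def profile_difference(x_amsos, x_hr, x_a, A, z, z_hygropause):
    """Apply the averaging kernels only from the hygropause upwards;
    below it, take the direct difference of the profiles."""
    x_lr = smooth_to_amsos(x_hr, x_a, A)
    return np.where(z >= z_hygropause, x_amsos - x_lr, x_amsos - x_hr)
```

With an identity kernel matrix the smoothed profile equals the high-resolution one and both branches coincide; with a realistic smoothing kernel, only the branch above the hygropause is affected by the a priori profile.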
Comparisons with other instruments
For an ideal validation study, instruments measuring water vapour at almost the same place and time are needed. By flying directly over a ground-based station, the constraints of place and time can easily be satisfied, as is the case when flying in parallel with another aircraft. When crossing the footprint of a satellite-based instrument, a certain space and time frame has to be selected, as the satellite and aircraft paths do not cross at the same time, only nearby.
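A space-time collocation selection such as the 500 km / 10 h window used later in the paper can be sketched with a great-circle distance check; the record layout is hypothetical:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def is_collocated(p, q, max_km=500.0, max_hours=10.0):
    """Space-time collocation check; p, q are dicts with lat/lon (deg)
    and time t (s)."""
    return (haversine_km(p["lat"], p["lon"], q["lat"], q["lon"]) <= max_km
            and abs(p["t"] - q["t"]) <= max_hours * 3600.0)
```

Filtering all AMSOS/satellite profile pairs through such a predicate yields the collocation pairs discussed in the satellite comparison section.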
Comparative datasets were obtained from the instruments introduced above, among them FLASH (Sitnikov et al., 2007) and Odin/SMR. In the case of the MIPAS instrument, the comparison was available using two different datasets, from the European Space Agency (ESA) and from the Institut für Meteorologie und Klimaforschung (IMK), Karlsruhe, Germany. This set of satellite experiments observing at different times makes the AMSOS instrument also useful for cross-validation studies with the technique given in (Hocke et al., 2006). During the transfer flight of the SCOUT-O3 Darwin campaign, the Learjet flew in parallel with two aircraft, the DLR Falcon 20 and the Russian Geophysica M55. Onboard the Falcon, a Differential Absorption Lidar (DIAL) (Ehret et al., 1999) system was operated to measure the water vapour above the aircraft up to an altitude of about 17 km. This gave an overlap region with the AMSOS profile in the upper troposphere, letting us combine the water vapour profiles from two different systems. Finally, we compared our data with the instruments FISH and FLASH, which both use the Lyman-α line in the UV and perform in-situ measurements from the Geophysica aircraft.
An overview of all the instruments is given in Table 2. The whole set of instruments used for comparison include different remote sensing and in-situ techniques, passive and active methods, occultation, limb and up-looking, ground-based, airborne and satellite borne, and cover the electromagnetic spectrum from the ultraviolet to the microwave region.
Validation with observations from satellites
For the purpose of validation at all altitudes we compared the dataset to the six satellite experiments mentioned in Sect. 3.2. Figure 5 shows all the collocation pairs with satellite sensors for all AMSOS missions. During each flight mission we can find at least one collocation of an AMSOS profile and a satellite experiment within a radius of 500 km and a time difference of 10 h. We are aware that these criteria can cause problems in the presence of the vortex edge with strong PV gradients; nevertheless, with more stringent criteria the number of collocations would decrease rapidly. We found about 10 matching profiles in the first four AMSOS missions with SPOT4/POAM-III and 2 with ERBS/SAGE-II. These two satellite experiments are solar occultation instruments and thus only performed measurements during sunrise and sunset, while the AMSOS instrument was flying mostly during daytime. With the UARS/HALOE instrument, which also performed solar occultation measurements, only two collocations were found, in mission 5. (For better visibility, the color code of the satellite plots was adapted to account for a 10% dry bias of the AMSOS instrument.) In the same mission there are more than forty coinciding measurements with ENVISAT/MIPAS, which is a full-time measuring instrument. In the last two AMSOS missions, several track crossings with the AURA satellite resulted in more than 75 collocation pairs with the MLS instrument. The comparison is made over the altitude region where the satellite profile and the AMSOS profile overlap and where the measurement response is larger than 50%. Profile differences were plotted in relative units according to

Δ = 100% · (x̂_AMSOS - x_sat) / x_sat.

In Fig. 7 the thick red line is the mean relative difference of all the single difference profiles, shown in dotted blue. The offset is negative when AMSOS measures drier values and positive when AMSOS has a wet bias. The comparison to the POAM-III instrument (Fig. 7a) shows a relative difference of -35% at 90 hPa, which decreases in magnitude to -10% at 1 hPa with respect to AMSOS. SAGE-II (Fig. 7b) shows a bias of -22% at 90 hPa, which turns to positive values in the lower stratosphere before the mean difference stabilises at -12% between 10 and 5 hPa. HALOE (Fig. 7c) also shows a -29% offset at the 90 hPa level and a quasi-constant offset of -10% up to 0.1 hPa. HALOE and SAGE-II, with only two collocations each, did not yield additional statistical information, but they nevertheless show the same typical features in the mean difference profile as the others.
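The mean relative-difference profiles discussed here can be computed as follows (sign convention assumed such that a negative value means AMSOS is drier, consistent with the text):

```python
import numpy as np

def relative_difference(x_amsos, x_ref):
    """Relative difference in percent; negative where AMSOS is drier."""
    return 100.0 * (x_amsos - x_ref) / x_ref

def mean_difference_profile(amsos_profiles, ref_profiles):
    """Mean over all single difference profiles (one per collocation);
    this is the thick line drawn through the individual comparisons."""
    d = relative_difference(np.asarray(amsos_profiles),
                            np.asarray(ref_profiles))
    return d.mean(axis=0)
```

Applied per pressure level over all collocation pairs of one satellite instrument, this yields one mean difference profile per comparison panel.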
In the case of the MIPAS instrument we compared to two different, independent retrievals: on the one hand the IMK retrieval (Fig. 7d), on the other hand the ESA operational retrieval (Fig. 7e). Collocations with IMK profiles do not cover latitudes north of 66° N. Both profile sets show a similar behaviour. Again the 90 hPa level is offset by -20% (IMK) to -25% (ESA). At 30 hPa this changes to -5% (IMK) and no offset (ESA). Between 10 and 0.1 hPa the offset is -15% in the mean for both; in the case of the ESA retrieval it decreases slightly over this altitude range.
The two profiles of the Odin/SMR instrument compare well with the AMSOS instrument. The difference lies between -15% and +5% in the altitude range of 60 to 0.1 hPa and is slightly positive between 1 and 10 hPa. But here, too, the number of collocations is too low to draw a statistical conclusion.
The AURA/MLS instrument, which uses the same observation frequency as AMSOS, also shows a clear offset of -20% at the 90 hPa level. Throughout the stratosphere the offset is similar to the HALOE comparison at -10%, increasing to -20% in the lower mesosphere between 0.1 and 1 hPa. This may be due to an increasing a priori contribution in the AMSOS profiles at these altitudes. In the tropics the MLS data show another enhanced water vapour layer at 20 km (cyan coloured) which is not seen in the AMSOS data. The vertical extent of this layer is too small and the enhancement in water vapour is not large enough to be observed by AMSOS, due to its limited altitude resolution.
In mission 9, which again shows an Arctic-to-tropics cross section, both datasets show a rapid change between typical Arctic and tropical profiles between 40° N and 45° N, seen in the change in altitude of the stratospheric water vapour maximum. A very dry mesospheric part in the Arctic is also seen in both datasets.
Latitude dependence
As seen in Sect. 3.3, all the mean difference profiles of the comparisons with satellite data show a negative peak at the 90 hPa level. It seems to be a characteristic of the AMSOS profiles to be very dry around the hygropause. Analysing the locations of the collocations, most of them originate in the mid-latitudinal to polar region. For the collocations between AMSOS and AURA/MLS, and between AMSOS and ENVISAT/MIPAS, both of which cover subtropical and more northerly latitudes, we separated the profile comparisons into two geographical regions, the first from 90° N to 45° N and the second from 45° N to 0° N. As shown in Fig. 9, the mean difference profile is dependent on latitude. The characteristic peak is visible only in the profiles north of 45° N. In the case of the MIPAS instrument the two mean difference profiles differ only in the UTLS region up to 10 hPa, while the two MLS regional mean difference profiles are offset from each other by less than 10% over the whole altitude range. The origin of the peak at 90 hPa can be explained by a shift in altitude of the hygropause in the AMSOS profiles. The reason why this appears only in the polar profiles lies in the a priori profile of AMSOS and its covariance matrix. In polar regions the hygropause is located at a lower altitude level than in tropical or mid-latitudinal regions. The hygropause in the a priori profile used is at 90 hPa and represents more of a sub-tropical case. The constraint of the a priori profile by the covariance matrix S_x in the upper troposphere keeps the hygropause of the AMSOS profiles at a certain altitude level. This leads to too high AMSOS values and a positive difference profile below 120 hPa. The retrieval algorithm compensates for this with too low values above, leading to the negative peak in the difference profile at 90 hPa. The positive peak at 30 hPa is a further compensation. This oscillating effect disappears in tropical and mid-latitudinal regions.
Using different a priori profiles, one for each typical region (Arctic, mid-latitude and tropical), would improve the retrieval around the hygropause. However, since the retrieval is largely dependent on the a priori information, the use of different a priori profiles would make the whole dataset inconsistent and would split it into sub-datasets, one per a priori profile. To avoid this we decided to use only one a priori profile. Tests with a polar a priori profile, which has the hygropause located at a lower altitude level, might improve the retrieval, but an oscillating structure still remains in the difference profile when comparing to satellites.
Validation with the ground-station MIAWARA
There was one coinciding measurement with the ground-based microwave radiometer MIAWARA (Deuber et al., 2005) in Bern, Switzerland, on 16 November 2005. The left-hand side of Fig. 10 shows both the AMSOS and the MIAWARA profile together with the a priori profiles used for the retrievals. Taking the relative difference (Fig. 10, right) results in an agreement of -17 to 0% at pressure levels between 30 and 0.2 hPa, where the measurement response of both instruments is larger than 0.5. The shape of the difference profile follows the shape of the difference in the a priori profiles. Thus the difference of -17% at the 11 hPa level can be explained by the difference of 10% in the a priori profiles.
Validation of AMSOS upper tropospheric humidity with lidar profiles
In each panel of Fig. 11 we plot the corresponding profiles from the two different measurement techniques, lidar and microwave, covering different altitude regions. Profiles are averaged over 1° in longitude. The lidar profile from the DIAL instrument reaches the upper troposphere, where the AMSOS instrument starts to be sensitive to water vapour. Four cloudless cases from the mid-latitudes to the subtropics are presented here. The lidar profiles fall within the 2σ error of the AMSOS profile. The different vertical gradients in the profiles, clearly visible in the first case of Fig. 11 (41° N, 17° E), originate in the limited vertical resolution of the microwave instrument.
Validation with in-situ hygrometers FISH and FLASH
On the transfer flight of the SCOUT-O3 Darwin campaign we had the possibility to fly in parallel with the Lyman-α hygrometers FISH and FLASH, carried by the aircraft Geophysica-M55. The measurements for this comparison were averaged to one degree in longitude along the flight track. No FLASH measurements are available between 10° E and 50° E. As shown in Fig. 12b, the aircraft Geophysica-M55 was flying above the hygropause level and, except for the path between 110° and 130° longitude, the absolute values are similar and fit within the error bars of the AMSOS instrument (see Fig. 12a). In the last part the in-situ instruments are at the border of the AMSOS 2σ error. Looking in more detail at the path between 110° and 130°, we can identify in the ECMWF profile in Fig. 12c a small, very dry layer near the hygropause, where Geophysica was located. Due to its limited altitude resolution, AMSOS did not detect this feature of the water vapour profile.

Fig. 12. Comparison of AMSOS profiles with in-situ measurements from the FLASH and FISH sondes onboard the aircraft Geophysica during the SCOUT-O3 Darwin campaign. Plot (a) shows the measured volume mixing ratio of the instruments, and plot (b) the modelled data from ECMWF at the Geophysica-M55 flight altitude level. The sharp jumps in vmr at 35°, 60°, 70°, 80°, 100° and 120° longitude are due to ascent and descent of the aircraft through the troposphere. The values match within the 2σ error bars until 100° E; then the in-situ instruments measured concentrations near the border of the error bars. The correlation coefficient between AMSOS and FLASH is 0.63 and between AMSOS and FISH 0.52. Plot (c) shows a thin water vapour layer in the ECMWF profile which is not seen in the AMSOS profile. The FISH and FLASH measurements were made within this layer, leading to the larger difference from AMSOS in the last part of the flight.
In general it is difficult to compare a point measurement with a smeared measurement of much lower altitude resolution, so it does not make sense to give an absolute value for a certain difference. Nevertheless, the in-situ hygrosondes fit within the AMSOS error bars. The correlation coefficient between AMSOS and FLASH is 0.63 and between AMSOS and FISH 0.52.
Conclusions
The AMSOS water vapour dataset consists of more than 4000 profiles from the UTLS region up to the mesosphere, covering all latitudes from tropical to polar regions with a horizontal resolution of 57 km. The airborne instrument was operated for approximately one week each year between 1998 and 2006. The main features of the vertical water vapour distribution are clearly seen in the radiometer data despite the limited altitude resolution of 8-16 km. The upper tropospheric part with the strong gradient in water vapour is visible, as well as the water vapour maximum, the main feature in the stratosphere, as a footprint of methane oxidation and transport by the Brewer-Dobson circulation. The water vapour minimum, also known as the hygropause, appears over the tropics at a higher altitude level than over the Arctic. In the late winter missions of 1999 and 2000 and the late autumn missions of 2001 and 2006, with the polar vortex present, lower water vapour values were measured in the Arctic upper stratosphere compared to the late summer missions of 1998 and 2002. This is due on the one hand to the subsidence of air over the pole by the Brewer-Dobson circulation, and on the other hand to the polar vortex, which builds a barrier against the transport of mid-latitudinal air masses towards the pole.
Validation of the whole dataset, across the different years of measurements and over the whole geographical region, was successfully carried out against a large set of different instrument types using different data collection methods and different data processing algorithms. Comparisons with satellite-borne passive remote sensing instruments show a dry bias of the AMSOS instrument on the order of 0 to -20%. Besides a constant offset, the bias also depends on latitude. A typical mean difference profile in the Arctic has a sharp peak at the 90 hPa level, while this does not appear in the tropical profiles, as seen in the comparisons with MLS and MIPAS. Despite the limited statistical information, the characteristic peak is also visible in comparisons with HALOE, SAGE-II and POAM-III data, which have collocations only in the Arctic. The global a priori of the AMSOS dataset and its covariance matrix constrain the tropospheric part too strongly, which leads to a shift in altitude of the hygropause level in the retrieved polar AMSOS profiles. Our a priori profile represents a subtropical, mid-latitudinal water vapour distribution rather than an Arctic one. Since the retrieval depends strongly on a priori information, a single a priori was chosen so as to have a consistent dataset from the Tropics to the Pole based on the same a priori profile. As for the in-situ instruments FISH and FLASH during the SCOUT-O3 campaign in 2005, they match within the AMSOS error bars under unexceptional conditions, as was the case for flight legs 1-4. If the sondes fly inside small, fine water vapour structures, as during flight legs 5 and 6, then AMSOS is not able to resolve this fine structure, but the measurement points of the hygrometers are still on the order of the 2σ error bars. A match between lidar profiles from DIAL and the AMSOS microwave profiles in the upper troposphere for non-cloudy situations was also found during SCOUT-O3.
Thus the combination of a lidar and a microwave radiometer made it possible to measure water vapour from the troposphere up to the mesosphere during the SCOUT-O3 campaign.
Mesospheric water vapour profiles up to an altitude of 75 km retrieved from a different spectrometer will be added to the dataset later.
Interdomain Interaction Reconstitutes the Functionality of PknA, a Eukaryotic Type Ser/Thr Kinase from Mycobacterium tuberculosis
Eukaryotic type Ser/Thr protein kinases have recently been shown to regulate a variety of cellular functions in bacteria. PknA, a transmembrane Ser/Thr protein kinase from Mycobacterium tuberculosis, when constitutively expressed in Escherichia coli resulted in cell elongation and therefore has been thought to be regulating morphological changes associated with cell division. Bioinformatic analysis revealed that PknA has N-terminal catalytic, juxtamembrane, transmembrane, and C-terminal extracellular domains, like known eukaryotic type Ser/Thr protein kinases from other bacteria. To identify the minimum region capable of exhibiting phosphorylation activity of PknA, we created several deletion mutants. Surprisingly, we found that the catalytic domain itself was not sufficient for exhibiting phosphorylation ability of PknA. However, the juxtamembrane region together with the kinase domain was necessary for the enzymatic activity and thus constitutes the catalytic core of PknA. Utilizing this core, we deduce that the autophosphorylation of PknA is an intermolecular event. Interestingly, the core itself was unable to restore the cell elongation phenotype as manifested by the full-length protein in E. coli; however, its co-expression along with the C-terminal region of PknA can associate them in trans to reconstitute a functional protein in vivo. Therefore, these findings argue that the transmembrane and extracellular domains of PknA, although dispensable for phosphorylation activities, are crucial in responding to signals. Thus, our results for the first time establish the significance of different domains in a bacterial eukaryotic type Ser/Thr kinase for reconstitution of its functionality.
Signal transduction in living organisms plays a pivotal role in controlling several aspects of cellular processes such as metabolism, cell growth, cell motility, cell division, and differentiation. These processes need to be tightly regulated to ensure signaling fidelity, and this synchronization in eukaryotes is mediated primarily through phosphorylation of serine, threonine, and/or tyrosine residues catalyzed by protein kinases. Although they exhibit similarities in their catalytic domains, different kinases may contain additional regions, allowing a multitude of mechanisms for their control. The role of these Ser/Thr or Tyr protein kinases in eukaryotic signal transduction is well established and widely documented (1). Considerably less is known about the prevalence and role of these protein kinases in bacteria and Archaea, where phosphorylation events are predominantly carried out by two-component His kinases along with response regulators, which do not share any sequence similarity with Ser/Thr or Tyr kinases (2). In fact, with the advent of genome sequencing, the presence of genes encoding eukaryotic type Ser/Thr protein kinases in diverse bacterial species raises the possibility of their indispensable involvement in complex networks of signal transduction cascades. Although the phosphorylation activity of these bacterial kinases has been intensively studied, experimental proof of their physiological function and regulation is scarce. Thus, a detailed study of eukaryotic type kinases in bacteria is indeed essential to gain insight into their contribution to signaling events. In this context, we focused on the dreadful pathogen Mycobacterium tuberculosis, the causative agent of tuberculosis. The medical impact of tuberculosis (3), which infects nearly one-third of the world's human population and causes considerable mortality annually, provides the rationale for investigating these regulatory proteins in the signaling process.
The genome of M. tuberculosis has unveiled the presence of a family of 11 eukaryotic type Ser/Thr kinases (4). All of these kinases, except PknG and PknK, encode predicted receptors with a single transmembrane helix dividing the protein into N-terminal intracellular and C-terminal extracellular domains (5). Unlike in most eukaryotes, the N terminus contains a kinase domain, which is linked to the transmembrane region through a juxtamembrane linker of variable length (6). The C-terminal domain outside the cell presumably binds signaling ligands and is attached to the transmembrane sequence. The architecture of these eukaryotic type kinases is very similar throughout bacteria and typical of receptor-like kinases in plants (7). The majority of them (PknA, PknB, PknD, PknE, PknF, PknG, PknH, PknI, and PknL) have been shown to catalyze autophosphorylation, and substrates for a few of them have been identified (8-18). In addition, the crystal structure of the catalytic domain of PknB has been solved, which provided valuable information regarding the regulatory mechanism of this kinase (19, 20).
In an earlier study, we reported the cloning and characterization of PknA from M. tuberculosis and indicated its involvement in regulating morphological changes in the process of cell division (9). Our recent study with this kinase further indicated its role in regulating the functionality of FtsZ, the protein involved in the process of cytokinesis (21). However, very little structural information on PknA is available as yet that could aid a more elaborate understanding of this kinase, especially of the molecular mechanism underlying the regulation of its phosphorylation activity and thereby its functionality. In this study, we report the identification of the catalytic core of PknA, which is capable of autophosphorylation as well as substrate phosphorylation. We further demonstrate that the autophosphorylation is a bimolecular reaction. It occurs in trans and follows the universal activation mechanism like PknB. Interestingly, unlike PknB, the juxtamembrane region is an integral part of the kinase domain in constituting the catalytic core of PknA. However, for reconstituting the functionality of PknA, the catalytic core itself is not sufficient. Furthermore, we unambiguously establish here that the catalytic core together with the transmembrane and C-terminal extracellular domains is critical for PknA function. Thus, we provide here for the first time experimental evidence for the functional significance of the various domains of a mycobacterial eukaryotic type Ser/Thr kinase.
Two point mutants of PknA, T172A and T174A (threonine substituted with alanine at amino acid residues 172 and 174), were generated using the Expand long template PCR system (mixture of Pwo and Taq DNA polymerases; Roche Applied Science) following the overlap extension method (22). For each mutation, two external (CATATGAGCCCCCGAGTTGG/TCATTGCGCTATCTCGTATCGG; all sequences are 5′ to 3′) and two internal primers (AGCGCCCGTGGCCCAGACC/GGTCTGGGCCACGGGCGCT for T172A; CGTGACCCAGGCCGGCATG/CATGCCGGCCTGGGTCACG for T174A; all sequences are 5′ to 3′, and underlined bases indicate mismatch) were used. To create the mutations, two sets of primary (pMAL-PknA as the template) and one set of secondary (mixture of primary reaction products as the template) PCRs were carried out. PCR products containing the desired mutations were digested with XhoI/XmaI and incorporated into the corresponding sites of pMAL-PknA. The kinase-dead mutant of PknA, p19kpro-K42N, described earlier (9), was also used in this study. The same mutation was also introduced in the Δ339-431 mutant to obtain p19kpro-K42N-Core. Mutations were confirmed by sequencing using an automated DNA sequencer. All constructs and the wild type were transformed into E. coli strain DH5α to build up the DNA for further processing.
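As a sanity check on overlap-extension mutagenesis, the two internal primers of each pair must be exact reverse complements of one another. A small sketch (a hypothetical helper, not part of the original protocol) verifying this for the primer sequences quoted above:

```python
# Translation table mapping each DNA base to its Watson-Crick complement.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (input and output 5'->3')."""
    return seq.translate(COMPLEMENT)[::-1]

# Internal mutagenic primer pairs quoted in the text (both written 5'->3').
PAIRS = {
    "T172A": ("AGCGCCCGTGGCCCAGACC", "GGTCTGGGCCACGGGCGCT"),
    "T174A": ("CGTGACCCAGGCCGGCATG", "CATGCCGGCCTGGGTCACG"),
}

for name, (fwd, rev) in PAIRS.items():
    # Each internal pair must anneal perfectly over its full length.
    assert revcomp(fwd) == rev, name
```

Both pairs from the text pass this check, as expected for overlap-extension primers.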
Expression of Recombinant Proteins-Cells harboring MBP-PknA or different MBP fusion constructs were grown in LB broth at 37°C and induced with 0.3 mM IPTG at an A600 of 0.5. Cells were harvested after 3 h, resuspended in lysis buffer (20 mM Tris·Cl, pH 7.5, 200 mM NaCl, 1 mM EDTA containing 0.15 mM phenylmethylsulfonyl fluoride, 1 µg/ml pepstatin, and 1 µg/ml leupeptin), and sonicated. The supernatant fraction was then loaded on an amylose column, and the fusion protein was finally eluted with lysis buffer containing 10 mM maltose. For purification of PknA deletion mutants as GST fusion proteins, overnight cultures (~15 h at 37°C in LB broth containing 100 µg/ml ampicillin) were reinoculated and grown to an A600 of ~0.6. Cells were then induced with 0.4 mM IPTG, harvested after 3 h, and suspended in lysis buffer (50 mM Tris, pH 8.0, 150 mM NaCl containing 1 mM phenylmethylsulfonyl fluoride, 1 µg/ml pepstatin, and 1 µg/ml leupeptin). Cells were sonicated, the supernatant fraction was loaded onto a glutathione-Sepharose 4B affinity column, and protein was eluted with lysis buffer containing 10 mM glutathione. For use as controls, MBP-βgal/GST proteins were prepared in a similar manner and purified from E. coli cells transformed with the empty vectors.
Kinase Assay-The ability of PknA or its mutants, as purified fusion proteins, to autophosphorylate and to phosphorylate substrates was determined in an in vitro kinase assay. Aliquots (usually 1 µg/20 µl reaction volume) of fusion protein were mixed with 1× kinase buffer (50 mM Tris·Cl, pH 7.5, 50 mM NaCl, 10 mM MnCl2), and the reaction was initiated by adding 2 µCi of [γ-32P]ATP. Following incubation at room temperature for 20 min, the reaction was stopped by adding SDS sample buffer (30 mM Tris·Cl, pH 6.8, 5% glycerol, 2.5% β-mercaptoethanol, 1% SDS, and 0.01% bromphenol blue). Samples were boiled for 5 min and resolved on 8-12.5% SDS-PAGE. Gels were stained with Coomassie Brilliant Blue, dried in a gel dryer (Bio-Rad) at 70°C for 2 h, analyzed in a phosphorimaging device (Molecular Imager FX, Bio-Rad), and also exposed to x-ray films (Eastman Kodak Co.).
Co-transformation-E. coli (strain DH5α) cells were co-transformed with two incompatible plasmids, pMAL/pGEX-KG and p19Kpro, harboring different constructs. The presence of different antibiotic selections (ampicillin in pMAL/pGEX and hygromycin in p19Kpro) facilitated the co-expression of foreign proteins in E. coli using these incompatible plasmids (21, 23). The E. coli cells harboring pMAL or pGEX with the gene(s) of interest were grown in the presence of ampicillin (100 µg/ml) and made competent using standard methods (24). The cells were then transformed with p19Kpro containing the desired gene(s) and plated on LB agar with both ampicillin (75 µg/ml) and hygromycin (200 µg/ml). Clones obtained were cultured in LB broth in the presence of both antibiotics and induced with 0.2 mM IPTG (37°C/3 h). Cells were further processed for microscopy or Western blotting.
Western Blotting-Purified fusion proteins (~800 ng) or samples obtained in co-transformation experiments were resolved on 8-10% SDS-PAGE and transferred at 250 mA for 45 min to a nitrocellulose membrane (0.45 µm) in a mini-transblot apparatus (Bio-Rad) using Tris-glycine-SDS buffer (48 mM Tris, 39 mM glycine, 0.037% SDS, and 20% methanol, pH ~8.3). Primary antibodies (anti-MBP from New England Biolabs and anti-Thr(P) from Cell Signaling Technology) used for the different immunoblots were either commercially available or raised as mentioned elsewhere (9). Horseradish peroxidase-conjugated anti-rabbit IgG secondary antibody (GE Healthcare) was chosen depending on the primary antibody used, and blots were subsequently processed with the ECL detection system (GE Healthcare) following the manufacturer's recommended protocol.
Pulldown Assay-Purified fusion proteins (GST-(253/363), GST-(363/431), GST-(Δ1-252) and MBP-Core; 100 µg of each protein/reaction) were mixed in 600 µl of binding buffer (200 mM NaCl, 1 mM EDTA in 20 mM Tris·Cl, pH 8) and incubated for 1 h at 4°C with gentle mixing. This was followed by the binding reaction (1 h at 4°C) in the presence of amylose beads (25 µl) and glutathione (final concentration of 10 mM to avoid nonspecific binding of GST to the amylose resin). Following washing of the beads (five times with binding buffer containing 0.1% Tween 20; 1 ml/wash), the resin-bound proteins were extracted and subsequently processed for Western blotting using anti-GST/anti-MBP antibody.
Solid Phase Interaction Assay-The association between purified GST and MBP fusion proteins was identified by a solid phase interaction assay as described elsewhere (20). Briefly, purified MBP-Core, after dialysis against PBS, was incubated (2 h at 25°C with gentle mixing) with a 10-fold molar excess of biotin-NHS. The mixture was extensively dialyzed and used as a source of biotinylated protein after estimation (25). For binding studies, different GST constructs (GST-(253/363), GST-(363/431), and GST-(Δ1-252); 1 µg/well) were coated in microtiter plates for 3 h at 37°C. Following washing with PBS containing 0.5% Tween (PBS-T), the wells were blocked with 1% BSA in PBS-T for 1 h at 37°C. After extensive washing with PBS-T, the wells were incubated (1 h at 37°C) with different concentrations of biotinylated MBP-Core. The interaction was finally monitored by addition of streptavidin-HRP (1 µg/ml for 30 min at 37°C), and enzyme activity in each well was detected using 3,3′,5,5′-tetramethylbenzidine as the substrate. Binding of the biotinylated core to immobilized Δ1-252 was assessed in the presence of different concentrations of nonbiotinylated Δ1-252 to determine the specificity of the interaction. In each case, as controls, an equivalent amount of BSA instead of the fusion proteins was adsorbed to the wells of the microtiter plates.
Bioinformatic Analysis-The multiple sequence alignment of the protein sequences retrieved from the mail server at the National Institutes of Health was carried out using the ClustalX1.81 program (26). The secondary structure of the protein was predicted using the PSIPRED server (27), and for tertiary structure prediction the automated protein homology-modeling server "Swiss Model" was used (28). Using LSQMAN (Least Square Manipulation; see Ref. 29), the predicted structure of PknA was superimposed on the PknB structure, which had been complexed with a nucleotide triphosphate analog. For 260 Cα atoms the root mean square deviation was 0.47 Å. The superimposed structure was generated using PyMOL (30).
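The superposition quality reported above (root mean square deviation of 0.47 Å over 260 Cα atoms) reduces to a simple formula once the two structures are already superimposed. A minimal sketch, for illustration only and not the LSQMAN implementation:

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two already-superimposed
    coordinate sets, given as equal-length lists of (x, y, z) tuples."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate sets must be the same length")
    # Sum of squared inter-atom distances over all paired atoms.
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

Here the paired coordinates would be the 260 matched Cα atoms of the PknA model and the PknB crystal structure after least-squares superposition.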
Kinase Domain Together with the Juxtamembrane Region Constitutes the Catalytic Core of PknA-Analysis of the nucleotide-derived amino acid sequence of PknA indicated the presence of a highly conserved catalytic domain, followed immediately by a juxtamembrane region rich in alanine and proline. The juxtamembrane region leads to a hydrophobic stretch of 23 amino acids constituting the putative transmembrane domain and the C-terminal extracellular domain. On aligning the sequence with PknB, another mycobacterial Ser/Thr kinase located adjacent to PknA in the genome and with a known crystal structure, the catalytic domain was found to be highly homologous (~78% homology with ~42% identity). Furthermore, the predicted secondary structure (using the "PSIPRED" server) and the modeled tertiary structure of the PknA catalytic domain (using the "Swiss-Model") revealed a remarkably high similarity with PknB. However, the homology between the juxtamembrane regions of these two kinases was considerably less (~47% homology with ~15% identity) (Fig. 1A). Because the catalytic domain of PknB is sufficient for autophosphorylation as well as substrate phosphorylation (31), it was intriguing to ask whether the catalytic domain of PknA alone could perform a similar function. To gain insight into this aspect, several deletion mutants were constructed as outlined in Fig. 1B. The deletion mutants were purified as MBP fusion proteins, and expression was confirmed by immunoblot analysis with the rabbit polyclonal antisera against MBP-PknA (Fig. 1C, upper panel). The autophosphorylation status of the deletion mutants was analyzed by an in vitro kinase assay (Fig. 1C, middle panel) and further confirmed by immunoblotting with anti-Thr(P) antibody (Fig. 1C, lower panel). On analyzing the kinase activity, we found that only the Δ339-431 deletion mutant of PknA, harboring the catalytic domain and juxtamembrane region, retained autophosphorylation ability.
However, unlike PknB, further shortening of this domain (Δ269-431) by deleting the putative juxtamembrane region completely abolished the kinase activity (Fig. 1C, middle panel). This observation was further supported by the fact that the anti-phosphothreonine antibody recognized only the Δ339-431 protein among all the mutants and the MBP-β-galactosidase control (Fig. 1C, lower panel). Thus, our results argue that the catalytic domain along with the juxtamembrane region (residues 1-338) is required for PknA autophosphorylation, and hereafter we designate this mutant as the "core" of PknA.
The boundary of the catalytic domain was identified on the basis of primary sequence alignment. However, on careful analysis of the predicted secondary and tertiary structures of PknA, using PknB as the template, it was found that the Δ269-431 deletion construct missed more than half of the αI helix, which has been found to be involved in a four-helix bundle in PknB (20). As can be seen in Fig. 2A, the superimposed structure of PknA with PknB highlights the presence of a helix (αI) toward the end of the catalytic domain. The boundary of the helix was further marked by analyzing the predicted secondary structure of PknA (Fig. 2A, inset), and it extended to VRAG (ending at residue 277), corresponding to VHNG (ending at residue 279) in PknB (Fig. 1A). Therefore, to analyze the role of this region in the activity of PknA, another deletion mutant, Δ278-431 (retaining the helix), was constructed. The mutant was tested for its autophosphorylation ability by in vitro kinase assay as well as by immunoblotting with anti-Thr(P) antibody. As shown in Fig. 2B, no incorporation of γ-32P occurred even with 10 µg of the Δ278-431 mutant; however, 2.5 µg of the Δ339-431 mutant (core) exhibited phosphorylation (compare lanes 2, 4, and 6). This observation was further supported by immunoblotting with anti-Thr(P) antibody, which did not recognize the Δ278-431 mutant (Fig. 2C, lane 2). These observations thus established that, unlike in PknB, the catalytic domain alone was insufficient for phosphorylation in PknA.
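The homology figures cited in this section (e.g. ~42% identity between the PknA and PknB catalytic domains) are column-wise statistics over a pairwise alignment. A minimal sketch of how percent identity is computed from two aligned, gapped sequences; a toy example, not the ClustalX scoring:

```python
def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity over two aligned, gapped sequences of equal length.
    Columns where both sequences have a gap are excluded from the denominator."""
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must have the same length")
    cols = [(a, b) for a, b in zip(aln_a, aln_b) if not (a == "-" and b == "-")]
    # A column counts as identical only if both residues match and are not gaps.
    matches = sum(1 for a, b in cols if a == b and a != "-")
    return 100.0 * matches / len(cols)
```

For a toy alignment such as "MKV-A" against "MKVQA", four of the five scored columns match, giving 80% identity; similarity percentages additionally credit columns with biochemically similar residues.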
The Core Mimics the Catalytic Activities of PknA-To examine the autophosphorylation pattern of the core, in vitro phosphorylation assays were carried out. The core was capable of phosphorylating itself in a concentration-dependent manner like the full-length PknA, although 5 µg of the protein exhibited phosphorylation comparable with 800 ng of the full-length protein (Fig. 3A) (9). Furthermore, the effect of divalent cations was also consistent with full-length PknA, as autophosphorylation was detectable only in the presence of Mn2+ and to some extent with Mg2+ (Fig. 3B). Similarly, the potent kinase inhibitors (ammonium molybdate, sodium tungstate, and sodium vanadate) affected the phosphorylation of the core (Fig. 3C). All these results therefore strongly suggest that the autophosphorylation behavior of the core mimics that of the MBP-PknA (wild type) protein.
Thr-172 and Thr-174 in the Activation Loop of PknA Are the Phosphorylating Residues-Protein kinases exhibit a multitude of mechanisms for their regulation. The best understood aspect of regulation elucidated in recent years is phosphorylation of a residue(s) located in a particular segment in the center of the kinase domain, termed the activation segment or T-loop (32). The activation loop in several kinases has been found to be highly disordered in the crystal structure and is capable of undergoing large conformational changes when the kinase switches between active and inactive states (33). The crystal structure of PknB has highlighted the importance of the activation loop in supporting the universal activation mechanism of the kinase (18). The loop thus identified consists of two threonines actively participating in the activation of the protein (34). We therefore examined whether PknA undergoes activation through the same mechanism. Notably, on aligning the sequence of PknA with that of PknB, two threonines corresponding to the phosphothreonines in the activation loop of PknB were located (Fig. 1A). The modeled three-dimensional structure of PknA also showed the presence of a disordered activation loop (data not shown). Interestingly, comparable threonines have also been found to exist in other mycobacterial Ser/Thr kinases, like PknD, PknE, and PknF (30).
MARCH 21, 2008 • VOLUME 283 • NUMBER 12
To analyze the role of these two mapped threonines of the putative activation loop (DFGIAKAVDAAPVTQTGMVMGTAQYIAPE) of PknA or its core (Δ339-431) in kinase activity, they were mutated one at a time to alanine (T172A and T174A). These single mutants were purified as MBP fusion proteins, and their autophosphorylation abilities were monitored. Both mutations affected the autophosphorylation ability of PknA (Fig. 4A). However, the effect of T172A was predominant over that of T174A, as evident from the difference in signal intensity (Fig. 4A, compare lanes 2, 4, and 6 or 3, 5, and 7). Both mutations likewise affected the autophosphorylation ability of the core, confirming the results obtained with the full-length protein (Fig. 4B). Interestingly, the more profound effect of both mutations on the core, compared with that on PknA, presumably points toward a difference in stability between the proteins. In PknB, replacement of both threonines with alanine equally affected the autophosphorylation ability of the protein (33). Thus, our results establish the direct regulatory role of the activation loop threonines in the autophosphorylation of PknA and favor the universal activation mechanism for this kinase.
PknA Core Exhibits Autophosphorylation in Trans-We have reported earlier that PknA is an active eukaryotic type Ser/Thr protein kinase and that it predominantly phosphorylates at threonine residues (9). However, the biochemical mechanism underlying phosphorylation is poorly understood. The available literature indicates that autophosphorylation of protein kinases occurs either through an intramolecular (cis) or intermolecular (trans) association (35). To resolve the underlying mechanism of autophosphorylation of PknA, we tested the ability of the protein to phosphorylate its inactive version. For this purpose, the full-length protein (MBP-PknA) was mixed with increasing concentrations (2-8-fold excess) of a kinase-inactive version of the PknA core in which the Mg2+-ATP orienting lysine is changed to asparagine (MBP-Core-K42N), and the samples were subjected to the kinase assay. As shown in Fig. 5A, the K42N mutant, which is unable to autophosphorylate itself (lane 5), undergoes phosphorylation on incubation with the wild type protein (lanes 2-4). Interestingly, a decrease in the phosphate content of the wild type protein on incubation with increasing concentrations of K42N indicates that the mutant was capable of suppressing the intrinsic autophosphorylation ability of PknA in a dose-dependent manner. However, no such compromise in the PknA phosphorylation ability could be observed when the wild type protein was incubated with an excess of MBP-βgal (Fig. 5A, lane 6), which further confirmed the authenticity of this experiment.
To substantiate the intermolecular mechanism of autophosphorylation of this kinase, we carried out in vivo co-expression experiments in E. coli by transforming PknA or K42N (in the p19kpro vector) along with the core (in the pMAL-c2X vector). Because PknA is predominantly phosphorylated at threonine residues (9), an anti-Thr(P) antibody recognizes PknA or its core (Fig. 5B, left upper panel, lane 1) (21) but not its kinase-inactive variant K42N (Fig. 5B, left upper panel, lane 3). Loading of both proteins was confirmed with the anti-MBP-PknA antibody on the same blot following its stripping (Fig. 5B, left lower panel, lanes 1 and 3). We found that the anti-Thr(P) antibody, used in Western blotting of cell lysates prepared from E. coli cells co-transformed with K42N and the PknA core, recognized both proteins (Fig. 5B, right panel, lane 4). The recognition of K42N by the antibody was specific because no phosphosignals corresponding to the mutant protein could be detected in the absence of the PknA core (Fig. 5B, right panel, lane 5). Thus, both lines of evidence strongly indicate that autophosphorylation of PknA is a bimolecular reaction.
PknA Core Is Capable of Substrate Phosphorylation-After establishing the autophosphorylation behavior of the core region, it was tempting to ask whether the core is able to transfer phosphate to substrates known to be phosphorylated by full-length PknA. To investigate this aspect, purified core protein was incubated with [γ-32P]ATP and casein. As shown in Fig. 6, in addition to an autophosphorylated band of the PknA core, substrate phosphorylation was evident (lane 3). In earlier reports, we established that the cell division protein FtsZ is a natural substrate of PknA (21). To check the phosphorylation of FtsZ from M. tuberculosis (mFtsZ) by the PknA core, a kinase assay was carried out after mixing both proteins. As expected, in addition to the PknA core, a phosphorylated band corresponding to mFtsZ could be seen (Fig. 6, lane 5). These results suggest that, unlike in PknB and PknF (30), the juxtamembrane region encompassing residues 269-338 is indispensable not only for autophosphorylation of PknA but also for its substrate phosphorylation ability.
Interaction between Core and C-terminal Domains Is Crucial for the Functionality of PknA-The available literature indicates that in eukaryotes, different domains of Ser/Thr protein kinases interact with each other for functionality (36). Although this aspect has not yet been elucidated in any prokaryote, such interaction between the domains of PknA, a sensor kinase, is expected. We therefore focused our attention on evaluating the association between the core and the C-terminal domains (transmembrane and extracellular regions) of PknA. For this purpose, while the core (Δ339-431) was tagged with MBP (MBP-Core), the other domains (juxta-transmembrane, 253/363; extracellular, 363/431; and juxta-transmembrane-extracellular, Δ1-252) were expressed as GST fusion proteins. In vitro interactions between them were examined in pulldown assays. The GST-tagged proteins (GST-(253/363), GST-(363/431), and GST-(Δ1-252)) were incubated (one at a time) with MBP-Core and passed through amylose resin. As shown in Fig. 7A, Western blotting of the samples eluted from the column with anti-GST antibody highlighted GST-(253/363) as well as GST-(Δ1-252) (upper panel), and anti-MBP antibody recognized the MBP-Core (lower panel). Surprisingly, the anti-GST antibody recognized an ~31-kDa band in addition to GST-(Δ1-252) (~53 kDa), which seems to be a cleaved product of the fusion protein (Fig. 7A, lane 2). However, no band corresponding to GST-(Δ1-252) could be detected on incubating the protein with amylose resin in the absence of the MBP-Core, indicating the specificity of the interaction (Fig. 7A, upper panel, compare lanes 1 and 2). GST-(363/431), on the other hand, showed negligible interaction with the MBP-Core protein (Fig. 7A, upper panel, lane 3). The results of the in vitro interaction studies were further confirmed in solid phase binding assays (see "Experimental Procedures"). As shown in Fig. 7B, following affinity purification, binding of biotinylated MBP-Core to GST-(253/363) or GST-(Δ1-252) exhibited saturation kinetics (half-maximal binding = ~4 ng/ml; dissociation constant = 0.08 ± 0.01 nM). On the other hand, GST-(363/431) exhibited hardly any binding. Binding of GST-(253/363) or GST-(Δ1-252) could not be observed when BSA replaced the MBP-Core. Because both GST-(253/363) and GST-(Δ1-252) indicate interaction of the PknA core with its transmembrane domain, we further monitored the ability of unlabeled GST-(Δ1-252) to inhibit the binding of biotinylated MBP-Core. Fifty percent inhibition of binding was achieved at ~2 µg/ml of GST-(Δ1-252), whereas use of MBP-βgal (pMAL-c2 vector as negative control) had no significant effect (Fig. 7C). Although the involvement of the overlapping 87 amino acids (residues 253-339, spanning the juxtamembrane region) among the constructs (core, MBP-(Δ339-431), and GST-(253/363) or GST-(Δ1-252)) used in this study could not be ruled out, all lines of evidence strongly argue in favor of an interaction between the core and the transmembrane domain of PknA.
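The saturation kinetics described above follow a simple one-site binding model, B = Bmax·c/(Kd + c), in which binding is half-maximal exactly when the ligand concentration equals Kd. A minimal sketch with illustrative parameter values (the quoted Kd of 0.08 nM), not a fit to the actual assay data:

```python
def one_site_binding(conc, bmax, kd):
    """One-site saturation binding: B = Bmax * c / (Kd + c).
    conc and kd must be in the same units; returns the bound signal."""
    if conc < 0 or kd <= 0:
        raise ValueError("concentration must be >= 0 and Kd > 0")
    return bmax * conc / (kd + conc)

# At conc == Kd the model gives exactly half-maximal binding,
# which is how Kd is read off a saturation curve like Fig. 7B.
half = one_site_binding(0.08, 1.0, 0.08)  # Kd = 0.08 nM (illustrative)
```

In practice Bmax and Kd would be estimated by nonlinear least-squares fitting of this hyperbola to the measured binding signal at each MBP-Core concentration.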
The role of the different domains of PknA in its functionality was further analyzed. For this purpose, a heterologous expression system was utilized. Earlier studies had already established that constitutive expression of PknA results in elongation of E. coli cells (9). We therefore utilized this distinguishing property of the full-length protein (p19kpro-PknA) to elucidate the interaction between the different domains of PknA. Because the core emulates the catalytic activities of the wild type protein, we examined the effect of its expression on the morphology of the E. coli cells. Surprisingly, we found that constitutive expression of p19kpro-Core did not alter the cell phenotype (Fig. 8, compare a and b). This observation, together with the identification of a possible in vitro interaction between the core and the C-terminal region, especially the transmembrane domain, led us to speculate that their self-association is required for the functionality of the protein. Despite their in vitro interaction (see Fig. 7, A and B), co-transformation of p19kpro-Core and pMAL-253/363 in E. coli did not restore the morphological phenotype (Fig. 8d). On the other hand, the elongated morphology of the cells could be restored by co-expression of the p19Kpro-Core and pMAL-Δ1-252 constructs (Fig. 8f). Furthermore, p19kpro-Core, when transformed with pMAL-363/431, did not affect the cell shape (Fig. 8e), suggesting the importance of both the transmembrane and extracellular domains in the functional reconstitution of PknA. The altered phenotype of the cells thus supported our hypothesis that the domains associate in trans to reconstitute a functional protein in vivo. However, there remained the possibility that the genetic interaction or reconstitution was mediated by the overlapping cytoplasmic portion of the kinase, i.e. the juxtamembrane region. To test this possibility, the nonoverlapping construct pGEX-Δ1-338 (transmembrane-extracellular domains) was co-transformed with p19Kpro-Core.
Interestingly, the phenotype was restored as shown in Fig. 8f, inset. We therefore concluded that the transmembrane domain could be involved in mediating the interaction; however, signaling by the complete molecule cannot take place in the absence of the sensory cues from the extracellular domain.
DISCUSSION
Protein kinases play a cardinal role in phosphorylation of proteins and in the process regulate a variety of crucial activities in prokaryotes as well as in eukaryotes. In recent years, genes for eukaryotic-type phospho-signaling systems were identified in bacteria, and several studies are now focused on unraveling their physiological roles (37). In this scenario, we concentrated on PknA, one of the 11 such Ser/Thr protein kinases present in the genome of the dreadful pathogen M. tuberculosis. PknA has been implicated in regulating morphological changes associated with cell division (9, 21). As with other bacterial eukaryotic-type Ser/Thr protein kinases, the presence of catalytic, juxtamembrane, transmembrane, and extracellular domains in PknA has been revealed through bioinformatic analysis, but how these domains collaborate toward its functionality is not yet known. We therefore undertook a structure-function analysis of this kinase, concentrating on the identification of the region responsible for the catalytic activity, its mechanism of activation, and finally the interactions between different domains required for the functionality of PknA.
Sequence alignment, as well as the modeled three-dimensional structure of PknA, has indicated its high order of homology with the catalytic domain of PknB (Figs. 1A and 2). However, in the juxtamembrane regions of the two kinases, the similarity is considerably less (Fig. 1A). Full-length PknA (431 amino acid residues) has been shown to exhibit autophosphorylation as well as substrate phosphorylation abilities (4). Truncation of the protein to the cytosolic region harboring the catalytic and juxtamembrane domains did not hamper such activities, and the protein behaved similarly with respect to the effect of divalent cations (Fig. 3B) or inhibitors (Figs. 1C, 3, and 6). However, unlike PknB and PknF (30), further trimming of the protein to the catalytic domain alone completely abolished the enzymatic activity of PknA (Figs. 1C and 2B). Strikingly, the homology shared by the juxtamembrane region rich in alanine and proline with the Ala/Ser/Thr/Pro/Gly region of Pkn2, the Ser/Thr protein kinase from Myxococcus xanthus (38), and its requirement for the catalytic activity, corroborate our earlier phylogenetic prediction (9). Thus, from this study it is apparent that the catalytic domain along with the juxtamembrane region is the minimum span required for autophosphorylation as well as substrate phosphorylation abilities of PknA, and together they constitute the "catalytic core." Although such an indispensability of the juxtamembrane region has so far not been reported for any mycobacterial Ser/Thr kinase, its involvement in activation of several eukaryotic kinases is well known. For instance, in receptor tyrosine kinases and the transforming growth factor-β receptor Ser/Thr kinase, the juxtamembrane region serves as a key auto-inhibitory element regulating kinase activity (39). In the human epidermal growth factor receptor, the juxtamembrane region has been shown to be indispensable for allosteric kinase activation and productive monomer interactions within a dimer (40).
In this scenario, our observation may point toward a dimerization interface in the juxtamembrane region of PknA, distinct from that already predicted in its catalytic domain (15), imparting the kinase activity. Furthermore, it can also be hypothesized that the juxtamembrane region constituting part of the core could be involved in providing structural integrity or stability to the kinase. Nonetheless, the indispensability of the juxtamembrane region seems to be a distinctive feature of PknA, which led us to elucidate the regulation of catalytic activity as well as its functionality.
It is well known that most kinases control phosphorylation status through the activation loop. Sequence comparison has identified certain structural determinants explaining the reason behind their activation through phosphorylation. It has been suggested that kinases containing a catalytic aspartate preceded by an arginine residue, termed RD kinases, are activated by phosphorylation in the activation segment (33). Analysis of the PknA sequence revealed the presence of the RD motif, suggesting the existence of a parallel mechanism of activation as has been observed with PknB (20). Interestingly, we also identified the activation segment carrying the two predicted phosphorylating threonines on the basis of sequence alignment with PknB (Fig. 1A). We therefore mutated these two threonines and, as expected, could identify Thr-172 and Thr-174 as the phosphorylating residues (Fig. 4). Thus, it could be predicted that activation of PknA occurs by a universal activation mechanism like other eukaryotic kinases (32).
Autophosphorylation of Ser/Thr kinases in eukaryotes is known to occur either by the cis or the trans mechanism (41,42). In fact, very little information is available regarding the mechanism of autophosphorylation of bacterial Ser/Thr kinases. To gain insight into this aspect, in vitro phosphorylation assays were performed utilizing the kinase-active and -inactive versions of full-length PknA and the PknA core, respectively. It is apparent from the experiment depicted in Fig. 5 that the autophosphorylation occurs in trans. Furthermore, in vivo phosphorylation of the full-length kinase-inactive mutant (p19Kpro-K42N) by the co-expressed kinase-active core (pMAL-Core) emphasized the prevalence of intermolecular autophosphorylation of PknA (Fig. 5B). Thus, our results established that PknA exhibits bimolecular autophosphorylation like the majority of histidine kinases in bacteria and most of the eukaryotic receptor kinases (43-45).
Previously we have reported that PknA, when expressed in E. coli under a constitutive promoter, resulted in cell elongation (9). Surprisingly, the core, which is enzymatically active, upon transformation in E. coli was unable to bring about the phenotypic changes (Fig. 8b) observed with the full-length protein. It was therefore hypothesized that there could be an interaction/association between different domains of PknA, which may lead to the phenotypic changes. To test this possibility, the core was co-transformed with either the transmembrane or the extracellular domain. Both the core and transmembrane domains were found to interact in vitro (Fig. 7); however, co-transformation of the two did not restore the phenotype, indicating the insufficiency of the transmembrane domain alone in reconstitution of the functionality of PknA (Fig. 8d). On the other hand, co-transformation of the extracellular domain along with the core showed neither any interaction (Fig. 7) nor any phenotypic change (Fig. 8e). Interestingly, co-expression of the core with the transmembrane and regulatory regions exhibited interaction (Fig. 7) as well as restored the elongation phenotype (Fig. 8f, inset), suggesting association of these domains in trans and reconstitution of a fully active protein in this heterologous setting. All these observations therefore indicated the indispensability of the domains for reconstitution of the functionality of PknA and therefore the prevalence of a distinct mechanism for this kinase. Such self-association and functional reconstitution have also been reported for the Ser/Thr protein kinase fused (Fu), a component of the Hedgehog signaling complex in Drosophila (36). Alternatively, the interdomain interaction/association might have restored the ability of PknA to recognize regulatory cues that lead to its transformation from the inactive to the active state through conformational changes, as has been reported for the transforming growth factor-β receptor Ser/Thr kinase (39).
Finally, our results for the first time signify the importance of each domain of a bacterial eukaryotic type Ser/Thr kinase toward reconstitution of its functionality. All these lines of evidence underscore the molecular mechanism of regulation of PknA and hence may provide an insight into the mechanism of signal transduction in mycobacteria.
Affordable Open-Source Quartz Microbalance Platform for Measuring the Layer Thickness
The layer thickness measurement process is an indispensable companion of vacuum sputtering and evaporation, and the quartz crystal microbalance is a well-known and reliable method for monitoring film thickness. However, most commercial devices use very simple signal processing methods, offering only a readout of the frequency change and an approximate sputtering rate. Here, we present our concept of an instrument that gives better control over the process parameters and is easy to replicate. The project builds on open-source designs and our own ideas, fulfilling all the requirements of a measuring system and contributing to the open-source movement through its added value and the replacement of obsolete technologies with contemporary ones. The device provides an easy way to extend existing sputtering machines with a proper controller based on our work. The device described in the paper can be readily reproduced when needed, being a proven design for a fast, inexpensive, and reliable thin-film thickness monitor.
Introduction
A quartz microbalance uses the direct piezoelectric effect, known since 1880 [1], which describes the formation of electric induction in a solid under the influence of stresses. In this phenomenon, the polarization of the resulting electric field depends on the type and direction of the stresses, not only on the absolute value of the stress tensor. The relationship between the electric field strength and the value of the tensor is direct and linear [2]. The linearity of this phenomenon makes piezoelectric components ideal transducers and enables their use in sensor and actuator designs. A small disc is excited by an external oscillator and resonates at a known frequency related to its mass. During a controlled process, we deposit additional material on the crystal, increasing this mass. The frequency is fed back to the control system and processed as an analog signal, which allows us to measure the change in mass, and thus the thickness of the deposited material. A quartz microbalance is not only a sensor but also an actuator [3]: the driven crystal generates an oscillating shear motion at its surface. Deposited material adsorbs onto the crystal surface, changing its mass and therefore the vibration frequency. The material used in piezoelectric transducers is quartz, mainly α-quartz [4], an allotropic variety characterized by a trigonal unit cell [5]. The transducer uses the linear dependence of the vibration frequency on the mass [6].
The quartz crystal microbalance (QCM) is a popular instrument for measuring layer thickness in microscopy and thin-film formation. The technology is also used to measure biofilms [7]. In addition, its applications include not only measurements of film thickness but also measurements of biological substances such as viruses [8]. QCM with dissipation monitoring (QCM-D) can be used not only as a biosensor but also as an immunosensor [9,10]. Beyond life science applications, the device is used to measure the properties of different layers of materials, among others, titanium carbide [11] and graphene [12], or to detect heavy metal ions [13]. Proper setups allow users to measure vapors [14,15], viscosity [16], or even lubrication in nanomachines [17,18]. Measurement of water contamination [19] or nanoaerosols [20] is another example of the use of QCM in more complex sensor systems. The technology can even be used in space: NASA prepared reports examining Mars samples [21] using this measurement method.
We discovered that many thickness monitors are closed solutions that cannot be modified or tailored to particular needs. Every change to commercial sensor systems of this class requires fitting all hardware elements, which in this case are mainly custom-made electronics consisting of high-sensitivity, high-quality parts. Moreover, commercial devices have strict requirements on design and cabling [22]. Adding low-quality components to the system worsens accuracy, while even top-tier electronics only ensure that there is no additional impact on the measuring loop [23]. Therefore, the expected resolution might not be attainable. Our work solves this problem by reducing the hardware and shifting the crucial elements into software.
Today, measurement systems can be integrated into small devices that contain the sensor head with the quartz crystal holder, the oscillator driver, and the signal analysis and processing unit, usually a simple microprocessor. Due to the progressive simplification of microcontroller development, film thickness monitoring can become a relatively easy task. For this reason, we present an open-hardware and ready-to-3D-print sensor head design, as well as an electronic system with a detailed signal description necessary to build the whole QCM thickness monitor.
System Architecture
The creation of the system described below requires: purchasing an appropriate measuring head or making it according to the model provided by the authors [24]; making a circuit board according to the electronic diagram or Gerber files created by openQCM, publicly available on their website [25]; downloading and uploading the code to the microcontroller, publicly available on the OpenQCM GitHub [26]; and downloading and running the desktop application provided by the authors [27] or other compatible software; an example can be found in the same OpenQCM GitHub.
The described measurement system consists of the following elements (Figure 1a): • Sensor head, taken from another existing gauge, or an open-source fused deposition modeling (FDM) 3D-printed replacement or alternative to the sensor head (Figure 2). The sensor head is a structure consisting of a quartz holder, a brass spring, and a positioning flange inside a plastic dielectric sleeve, which enables the quartz crystal to resonate. When the sensor head is not dielectric, the electrode itself needs to have a galvanic connection with a transmission cable. The oscillator driver capacitors discharge all charges generated on the surface. Therefore, we recommend 3D printing according to the project. The original sensor head consists of the elements presented in Figure 1b: steel housing with hollow cooling channels (4), plastic sleeve (7), preload spring (8), quartz crystal (4) with an electrode (9) connected to a spring, and an antenna connector, replaced with an SMA type (1, 2).
The possibilities of 3D printing allow us to integrate the sleeve into one solid part (Figure 2). The device is used for short-term application of thin layers; for continuous operation, it must additionally be equipped with a cooling system, especially as the 3D-printed polymer case has thermal properties worse than the original metallic one. The design shows a conventional QCM head without the possibility of cooling. Four holes were made in the corners to fit additional electrostatic protection and ensure proper discharging via the connector. Our tests showed that temperature does not affect the measurements on short time scales. The main purpose of the head is to fix the crystal stably and securely.
Detailed Device Design
The crystals used in the device are 6 MHz AT-cut quartz crystals 12.5 mm in diameter (Colnatec CNT05RCSG), but the design allows the use of 14 mm crystals as well, since the sensor head is slightly larger than in other designs. These crystals are popular standard replacements for film thickness monitoring systems. We recommend gold coatings for the electrode to achieve better mechanical properties of the electrical connection. Other custom coatings are sometimes used in special cases, such as nanofiber coatings for safrole vapor detection [14] or many types of plastic (such as polyvinylidene fluoride) or ceramic (such as hydroxyapatite) for bioanalytics [28]. The electrical connection is crucial for the presented solution because electrical connection problems are the most common. During our tests, we encountered muffled signals; the reason was poor electrical connections caused by the scratched surface of a quartz crystal and an insufficiently stiff and precise joint between the spring and the cable connector. To avoid such problems, we recommend using standard BNC and SMA antenna connectors. The SMA connectors should be locked with screws to prevent them from twisting; this is important because the surface of the connector is itself an electrode. Therefore, the connection between the sensor casing and the oscillator should remain intact. We recommend using coaxial 50-ohm cables; the shorter the cable, the less the signal is damped.
The coaxial cable transfers the feedback signal from and to the oscillator. It is highly recommended to have a high-quality and possibly short cable. Too long a cable increases signal damping and requires additional amplifiers. Because of the need to overcome the The possibilities of 3D printing allow us to integrate the sleeve into one solid ( Figure 2). The device is used for short-term application of thin layers, but in the case of continuous operation, it must be additionally equipped with a cooling system, especially when the 3D-printed polymer case has thermal properties worse than the original metallic one. The design shows a conventional QCM head without the possibility of cooling. Four holes were made in the corners to fit additional electrostatic protection to ensure proper discharging via the connector. Our tests provided us with data that temperature does not affect measurements on a short time scale. The main purpose of the head is to fix the crystal stably and securely.
Detailed Device Design
The crystals used in the device are 6 MHz AT quartz crystals 12.5 mm in diameter (Colnatec CNT05RCSG), but the design allows one to use 14 mm crystals as well-the sensor head is slightly larger than in other designs. These quartzes are popular standard replacements for film thickness monitoring systems. We recommend the use of gold coatings for the electrode to achieve better mechanical properties of the electrical connection. Other custom coatings are sometimes used in special cases, such as nanofiber coatings for safrole vapor detection [14] or many types of plastic (such as polyvinylidene fluoride) or ceramic (such as hydroxyapatite) for bioanalytics [28]. The electrical connection is crucial for the presented solution because electrical connection problems are the most common. During our tests, we encountered muffled signals. The reason was poor electrical connections caused by the scratched surface of a quartz crystal and the insufficiently stiff and precise joint between the spring and the cable connector. Due to possible problems, we recommend using standard BNC and SMA antenna connectors. The SMA connectors should be locked with screws, to prevent them from twisting. It is important because the surface of a connector is an electrode itself. Therefore, the connection between the sensor casing and the oscillator should remain intact. We recommend using coaxial 50-ohm cables. With the shorter cable, the signal is less damped.
The coaxial cable transfers the feedback signal from and to the oscillator. It is highly recommended to use a high-quality and possibly short cable: too long a cable increases signal damping and requires additional amplifiers. Because of the need to cross the vacuum-air barrier, the cable should be as short as possible while still allowing easy positioning of the head inside the evaporator device. The wire is divided into two segments: from the crystal to the cable connector inside the vacuum, and from the cable connector to the oscillator outside the evaporator device. The connector should not have galvanic contact with the ground of the sputtering machine because that breaks the feedback loop carried by the wire. In our approach, the connector was produced on the basis of the available vacuum port of the device. Inside the drilled hole, a large-diameter copper wire was insulated from the wall of the through hole with epoxy resin. The large (2.5 or 4 mm) diameter is required because the width of the wire reduces the negative effect of the parasitic capacitance of the cable, which would otherwise create a second, parallel feedback loop and introduce additional high-frequency oscillations that we cannot measure and that distort our measurement. The feedback loop from the crystal oscillator circuit is transmitted via this cable, and when these requirements are met, signal amplifiers are unnecessary in the system.
The signal generator is located on the printed circuit oscillator board. The feedback loop mentioned in the last paragraph connects to the pads marked X1 and X2 on the electronic diagram. These pads are part of the crystal driver, a Pierce oscillator working with crystals up to 10 MHz. To change the range of measured frequencies, the capacitors need to be replaced. In addition, the microcontroller unit should have a proper clock according to the Nyquist criterion. Our device is clocked by a 16 MHz quartz, and therefore the correct band is 8 MHz. It can be shifted from the 0-8 MHz range to 8-16 MHz using aliasing. The Y output signal of this circuit is passed on to the microcontroller, as is the temperature signal, measured with an MCP9808 digital temperature sensor [29]. The sensor communicates with the microcontroller unit (MCU) through the Inter-Integrated Circuit (I2C) interface. The output signal from the oscillator driver is connected directly to the Arduino module.
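The band shifting mentioned above can be illustrated with a small sketch (not part of the original firmware): a signal above the Nyquist frequency of the counter folds back into the measurable band, so an input in the 8-16 MHz range appears as its alias in 0-8 MHz.

```python
def alias(f_signal_hz, f_sample_hz):
    """Apparent frequency of a signal observed with sample rate f_sample_hz.

    Components above the Nyquist frequency (f_sample_hz / 2) fold back
    into the 0..f_sample_hz/2 band.
    """
    f = f_signal_hz % f_sample_hz
    return f if f <= f_sample_hz / 2 else f_sample_hz - f

# With a 16 MHz clock the directly measurable band is 0-8 MHz;
# a 12 MHz input therefore appears at 4 MHz.
print(alias(12e6, 16e6) / 1e6, "MHz")
```

Because the nominal crystal frequency is known in advance, the ambiguity is easy to resolve: a reading in the 0-8 MHz band can be mapped back to its true 8-16 MHz value.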
In our device, an Arduino Micro serves as the microcontroller unit, preprogrammed to count the frequency of the signal using the inbuilt analog-digital converter (ADC). Any Atmel AVR or ARM architecture processor can work properly, but to reduce costs and time, we recommend using Arduino or similar prototyping platforms. The module counts pulses from the resonating crystal and sends the data to a serial port along with the temperature. The frequency data are transferred via the Universal Asynchronous Receiver/Transmitter (UART) interface over a USB cable to a PC or Raspberry Pi board on which the GUI application is running. This application works as a controller: we start and finish the measurement and record the data with it. The block diagram of the entire system is shown in Figure 1a.
The software includes the Adafruit library to operate the MCP9808 temperature sensor [30]. The FreqCount library measures the frequency of a signal with the help of interrupts [31]. The library is stable and was created by Paul Stoffregen, the creator of the Teensy boards, which are used in more demanding applications. This code is also provided in the supplementary material. Transmission through the serial interface takes place at 115,200 baud. The temperature sensor object is created as a global variable: when the program initializes, a single instance of the object is created, the sensor is reset, and the measurement starts. In the main program loop, the inbuilt analog-digital converter processes the signal in each iteration, and the frequency data are buffered to the serial interface using electrically erasable programmable read-only memory (EEPROM).
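On the desktop side, parsing the serial stream can be sketched as follows. The exact line format of the openQCM firmware is not reproduced here; the parser assumes a hypothetical "frequency temperature" pair per line and should be adjusted to the actual firmware output.

```python
def parse_reading(line):
    """Parse one hypothetical 'frequency temperature' serial line.

    Returns (frequency_hz, temperature_c), or None for malformed lines.
    """
    parts = line.strip().split()
    if len(parts) != 2:
        return None
    try:
        return int(parts[0]), float(parts[1])
    except ValueError:
        return None

print(parse_reading("4997235 24.5"))  # (4997235, 24.5)
print(parse_reading("noise"))         # None
```

In practice the lines would be read with pyserial, e.g. `serial.Serial('/dev/ttyACM0', 115200)`, matching the 115,200 baud rate given above (the port name is an assumption and depends on the host system).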
Measurement of this frequency relative to the frequency before starting the sputtering process, with a correctly calibrated balance, gives a precise result for the average mass deposited on the crystal surface. To take a precise measurement of the sample placed inside the vacuum chamber, it is necessary to relate the mass evaporated or sputtered onto the crystal to the process on the prepared sample. This factor can be called the tooling factor; it is usually a percentage, and its determination is part of the calibration of the measurement system [32]. During sputtering, this parameter scales the value to estimate the thickness of the layer on the sample, based on the difference between the distance from the material source to the quartz crystal and the distance from the source to the prepared samples.
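The tooling-factor correction can be sketched in a few lines. The function name is illustrative, and the simple linear scaling assumed here follows the factor used later in this paper (1/50% = 200%) rather than a full emission-geometry model:

```python
def thickness_at_sample(qcm_thickness_nm, tooling_factor):
    """Estimate the layer thickness at the sample from the QCM reading.

    tooling_factor is determined during calibration; with the crystal at
    about half the target-sample distance, a linear factor of 0.5 applies.
    """
    return qcm_thickness_nm * tooling_factor

# Crystal reads 251 nm; sample at roughly twice the distance:
print(thickness_at_sample(251, 0.5), "nm")  # 125.5 nm vs. 127 +/- 14 nm measured
```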
The sequence of steps taken to make a proper measurement:
• Put the quartz crystal into the holder and read the initial stable frequency;
• Evaporate a thin layer of graphite carbon onto the crystal surface;
• Read the stable frequency value after the process from the device, allowing the frequency drop to be calculated;
• Calculate the mass and thickness of the carbon layer using the Sauerbrey equation [33].
After carbon deposition, the gold layer was sputtered following the same sequence of steps without changing the crystal. The described device can provide online measurements, thus calculating the rate of change in the sputtering process. Afterwards, the crystal was cross-sectioned using focused ion beam scanning electron microscopy (FIB-SEM); an FEI Helios NanoLab 600i with a Ga source was used to measure the real thickness of the gold layer. Independently, another experiment was performed: sputtering gold in stages at a constant rate onto the QCM and geological samples. The samples were placed at twice the distance of the QCM, so the thickness of the layer on them should reach about half that at the measuring device.
Results
To measure the thickness of the layer, we need to measure the initial and final frequency, which enter the Sauerbrey relation:

$$\Delta f = f - f_0 = -\frac{2 f_0^2}{A\sqrt{\rho\mu}}\,\Delta m \qquad (1)$$

where f = f₁ is the final frequency, f₀ is the initial frequency, A is the area between the electrodes, ρ = 2.65 g/cm³ is the density of quartz, µ = 44 GPa is the shear modulus of quartz, and ∆m is the mass change. Therefore, by transforming Formula (1), the mass change can be expressed by the following formula:

$$\Delta m = -\frac{A\sqrt{\rho\mu}}{2 f_0^2}\,\Delta f \qquad (2)$$

Knowing the mass change, we can finally calculate the thickness d of the measured layer:

$$d = \frac{\Delta m}{\rho_m A} \qquad (3)$$

if we know ρₘ, the density of the deposited material. All uncertainties in the physical quantities described above are further discussed in the "Discussion" section. The starting frequency for carbon deposition was 4,997,235 Hz, and the frequency change is −827 Hz. The area between the electrodes is a circle 6 mm in diameter. The resulting thickness of the graphite layer is 79 ± 2 nm. The layer in the photo (Figure 3e) is 33 px wide, which corresponds to 74.3 nm. The standard deviation of 30 samples is 2.9 px, which corresponds to 6.5 nm. With extended uncertainty for a 95% confidence interval, the result is 74.3 ± 11.0 nm, so the two measurements agree. The experiment was performed offline; therefore, we cannot show the plot versus time. After that, we performed the gold deposition on an Edwards Vacuum Coater Model 306, which we equipped with the thickness meter. Figure 4a shows the original unfiltered time series and Figure 4b shows the rate of change. The starting frequency for gold deposition is the ending frequency of carbon deposition, 4,998,062 Hz. The frequency change is −6136 Hz and, therefore, the thickness of the gold layer measured by QCM is 68 ± 2 nm, while that measured in the SEM image is 40.6 nm (Figure 3c). The standard deviation of 30 samples is 4.6 nm; therefore, for a 95% confidence interval, we interpret the result as 40.6 ± 9.2 nm. This is a significant discrepancy from the theoretical assumptions.
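The thickness calculation can be reproduced with a short script. The quartz constants are those quoted in the text; the film densities (graphite ≈ 2.26 g/cm³, gold ≈ 19.3 g/cm³) are standard handbook values assumed here, not stated in the text.

```python
import math

RHO_Q = 2.65        # quartz density, g/cm^3 (value quoted in the text)
MU_Q = 44e9 * 10    # quartz shear modulus: 44 GPa -> g/(cm*s^2)

def sauerbrey_thickness_nm(f0_hz, delta_f_hz, diameter_cm, rho_film):
    """Film thickness via the Sauerbrey relation (delta_f < 0 for loading)."""
    area = math.pi * (diameter_cm / 2) ** 2                               # cm^2
    dm = -area * math.sqrt(RHO_Q * MU_Q) * delta_f_hz / (2 * f0_hz ** 2)  # grams
    return dm / (rho_film * area) * 1e7                                   # cm -> nm

# Carbon: f0 = 4,997,235 Hz, delta_f = -827 Hz, 6 mm electrode
print(round(sauerbrey_thickness_nm(4_997_235, -827, 0.6, 2.26)))   # 79 (nm)
# Gold: f0 = 4,998,062 Hz, delta_f = -6136 Hz
print(round(sauerbrey_thickness_nm(4_998_062, -6136, 0.6, 19.3)))  # 69 (nm)
```

The carbon value matches the 79 nm quoted above; the gold value lands within the quoted 68 ± 2 nm, the small offset coming from rounding of the assumed density.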
However, as shown in Figure 3a, the instrument is very sensitive to the Penning-type vacuum gauge being turned off and on. It is powered by high voltage, and its switching induces a significant alternating magnetic field, temporarily affecting the meter reading. To ensure a reliable reading under in situ conditions, it is known practice to obscure the source prior to establishing stable coating conditions and, after the thickness gauge stabilizes, to reset it to zero while exposing the actual source. Then, while maintaining the same evaporation rate, it is possible to estimate the thickness of the layer more accurately.

The process with a more constant rate of change is presented in Figure 4 and is, moreover, an example of the real working conditions of the device. For this purpose, we placed the QCM at a distance of approximately 50.7% (no known uncertainty of the distance) of the target-sample distance, and the sputtering object was a geological cross section. The read value of the initial frequency was 4,998,005 Hz and that of the final frequency was 4,981,705 Hz; therefore, the gold layer on the crystal is 251 ± 2 nm. The value of the gold layer on the rock sample is 127 ± 14 nm and, taking into account the correction factor related to the different distance from the target (1/50% = 200%), the estimated value for the sample is 254 ± 29 nm.

The calculated value of uncertainty is not type A (the type A uncertainty is 1.2% in the experiment), but rather type B. Type A uncertainties will be described, but not discussed here, as they are not a feature of our system but rather experimental random error. If we ensure the exact target-sample distance, the problem will be eliminated.
Discussion
At this point, we would like to describe further the measurement uncertainties that have accompanied our discussions so far. To verify the work of QCM, we performed a physical measurement of the thickness of the sputtered layers using the FIB-SEM method. The results obtained correspond to cross sections; although, there are some inaccuracies to discuss. The problem with properly verifying this measurement is the electron-and ion-deposited platinum layer, which is necessary to provide a cross-sectional cut. The platinum used in the process forms a solid solution with gold, which can locally change the measured value. It is worth noting that the measurement value for the much lighter and more QCM-demanding carbon remained correct. However, the probable differences are not significant and this affordable system can be useful in estimating the value of various processes. The effect of ion-platinum alloying needs further examination or using an additional carbon layer before the FIB-SEM measure. The formulas used to calculate the thickness and uncertainty were constructed from the Formulas (1)-(3), written in the previous chapter.
First, to discuss these results, we need to estimate the uncertainty of the sensor area (symbol A in Formulas (1)-(4)), the part of the crystal surface that is not covered. The uncertainty of the area is based on measurements taken using a laboratory caliper (∆x = 0.02 mm). Because all the measured values of the nut hole diameter were 6.00 mm, we decided to use a type B uncertainty. The same principle applies to estimating the frequency uncertainty. We encounter noise during the experiments, so we treat it as statistical noise. For a few thousand samples we use the Gaussian distribution, not Student's t distribution; therefore, the expanded uncertainty is obtained by multiplying by a factor of k = 2. The resolution of the device, ∆f, is 1 Hz, and the standard deviation of the noise is 1.82 Hz. We see here that type A is a larger source of uncertainty than type B; however, there are still many possibilities to increase accuracy by reducing noise. The first term in the sum under the radical is 3.87 × 10⁻²² kg², the second is 1.39 × 10⁻²⁹ kg², and the third is 1.26 × 10⁻²² kg². The conclusion is that the uncertainty of the measurement of the initial frequency is negligible; the main sources are the geometry and the final frequency (frequency change). The mass uncertainty calculated in this way is 2.26 × 10⁻¹¹ kg. This value is almost 200 times lower than the original OpenQCM project mass sensitivity. The main inaccuracy comes from both the mass measurement and the geometrical dimensions of the crystal (area).
To sum up: the first term under the radical, the squared uncertainty of mass, is equal to 5.50 × 10⁻¹⁹ m², and the second term, the squared uncertainty of the electrode area, is 4.15 × 10⁻¹⁹ m², which implies that both elements should be taken into account in the estimation of the result. The final result is 0.982 nm, which is approximately 1 nm. Therefore, the maximum type B uncertainty from the device is under 1 nm, and the extended uncertainty is 2 nm. Knowing this, we can say that, when using our device, all errors will come from the operator, the method, and the statistics, not from the device itself.
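The combined uncertainties quoted above can be checked with a short calculation. This is a consistency check using the standard root-sum-of-squares combination and the rectangular-distribution rule for type B resolution uncertainty; the paper's own Formulas (1)-(3) are not reproduced here, only the numerical terms it quotes.

```python
import math

# Type B frequency uncertainty from the 1 Hz resolution
# (rectangular distribution: u = resolution / sqrt(3)).
u_f_type_b = 1.0 / math.sqrt(3)           # ~0.577 Hz

# Expanded type A from the 1.82 Hz noise standard deviation,
# coverage factor k = 2.
U_f_type_a = 2 * 1.82                     # 3.64 Hz

# Mass uncertainty: the three squared terms quoted in the text (kg^2),
# combined as a root sum of squares.
u_mass = math.sqrt(3.87e-22 + 1.39e-29 + 1.26e-22)   # ~2.26e-11 kg

# Thickness uncertainty: the two squared terms quoted in the text (m^2).
u_thickness = math.sqrt(5.50e-19 + 4.15e-19)          # ~9.82e-10 m = 0.982 nm
```

The recomputed values match the 2.26 × 10⁻¹¹ kg mass uncertainty and the 0.982 nm thickness uncertainty reported in the text, and confirm that the initial-frequency term (1.39 × 10⁻²⁹ kg²) is negligible.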
The measurement is thermally stable. Temperature does not affect accuracy over a short time, and the characteristic function describing the transducer is almost ideally linear in the thickness range tested. The temperature of the electronics is stable; therefore, there is no need to compensate for the temperature drift of electronic components. Although the device can count every pulse, and its type B uncertainty is equal to high-tech industry standards (about 1 nm accuracy), the final inaccuracy is much larger and comes from the following aspects.
The first is the lack of knowledge of the exact properties of the material; for example, the density of graphite is in the range of 2.09 to 2.23 g/cm³, with an average of 2.16 g/cm³ [34]. Some sources describe specimens with even higher density. The density theoretically depends on the temperature, which was 29 °C in our test, while most benchmark experiments try to achieve 20 °C; in this case, however, the impact of the temperature on the material is negligible. Polycrystalline graphite subjected to pyrolysis has a density of 2.2 g/cm³ [35]. This article is based on the average value of 2.16 g/cm³. The type B uncertainty of the measurement is 0.936 nm for the lower boundary value of the graphite density of 2.09 g/cm³ and 1.015 nm for 2.23 g/cm³. It decreases with the density, so the device should be more accurate for the denser gold. Assuming the pessimistic cases for the uncertainty sources, 1 nm is the unextended type B uncertainty, and 2 nm is the extended one.
During the process, as a result of the sputtered layer, the density and area of the electrode are constantly changing. This is important, especially for high-density materials. The proposed solution is to calculate the mass in each step recursively, changing the density value in internal memory using a weighted average. Furthermore, as we see in the microscope picture, the gold electrode of the quartz crystal affects the mean density of the sensor, which is no longer 2.65 g/cm³, so another uncertainty appears here. The changes are not large in scale, because the gold layer amounts to micrograms, but they impact the result. The longer the crystal is used, the more depositions are made, and the difference increases. The best way to address this problem is to verify the average density of the whole quartz crystal before the first measurement and to store the adjusted density value after the last measurement, when the device is turned off. After several dozen sputtering processes, the quartz should be replaced, because it would no longer oscillate properly; the damping factor comes into play with increased mass and a lower signal level. The solution to this problem is the monitoring of dissipation. Measurement of the dissipation factor makes the QCM an innovative biosensor and allows for tasks such as measuring flows. Therefore, it can measure lubrication in nanomachines and allow them to achieve better performance [17,18]. Knowing the exponential decay factor, we can measure the viscosity parameters [26]. Other ideas based on this phenomenon are biosensors for water contamination [20]. Every higher-class quartz crystal microbalance has this function implemented.
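The recursive, mass-weighted density update proposed above can be sketched as follows. This is an illustrative implementation of the idea only: the function name and the example masses and densities are our own, not values from the paper's firmware.

```python
def updated_density(m_old, rho_old, m_new, rho_new):
    """Mass-weighted average density after depositing a new layer.

    Illustrative sketch of the recursive update proposed in the text:
    each deposition step folds the new layer's mass and density into a
    running effective density stored in internal memory.
    """
    m_total = m_old + m_new
    return (m_old * rho_old + m_new * rho_new) / m_total

# Hypothetical example: a 1.0 g quartz sensor (2.65 g/cm^3) accumulating
# a 1 mg gold layer (19.3 g/cm^3) shifts the effective density slightly.
rho_eff = updated_density(1.0, 2.65, 0.001, 19.3)
```

The effective density stays between the two component densities and drifts toward gold as depositions accumulate, which is exactly the slow drift the text proposes to track and store between sessions.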
Another topic is the low sampling frequency and high acquisition time in the basic mode of work: the frequency measurement comes from counting the number of samples in a 1-s interval. Therefore, the system is not real-time, which is one of the reasons why some algorithms cannot overcome the problems of the QCM. We are thus limited in accuracy, and the system is prone to various interruptions. The shorter the acquisition time of the frequency data, the greater the amount of noise in the frequency. The noise value of down to 0.1 Hz given on the OpenQCM website is false and probably impossible to achieve using the standard configuration and the software provided, where the numbers of counted pulses are simply integers. Even with our assumption and a monitored noise of almost 4 Hz, the uncertainty remains small.
During the gold sputtering experiment, the typical stable rate of change was 35 Hz per second (Figure 4d), and due to the sampling frequency, such data could be lost. The sampling frequency could be increased by using a high-performance microcontroller unit, but this would collide with the intention of creating an affordable system. Creating a custom board with a high-performance external ADC, with control registers that can send more buffered data via the I2C interface to the MCU, could be an improvement here. However, the system still gives better results than commercial systems in the range of around EUR 5000, while it can be built for a fraction of that price from easily accessible elements. It is a notable result that such a resolution is achieved with little data (one sample per second). It is a compromise between accuracy and resolution, because one unit cannot simultaneously count pulses and register data; the reason is that the library uses interrupts to properly measure frequency. Using two MCU units or direct memory access (DMA) could also be an inexpensive improvement. However, with the growth of additional electronics, the system is no longer simple and easy to install. At this moment, replicating our system is a cheaper and faster way; it is also more friendly to students and doctors because it is built of simple elements they know.
The overall cost of all elements needed to build a full measurement system is less than EUR 200 for electronics (microcontroller, driver, and cables), assuming connection to a laptop PC. An additional Raspberry Pi microcomputer configured with a touch panel may replace a desktop computer for about EUR 200, making the solution a standalone system. Three-dimensional printing of the sensor head and casings should cost less than EUR 150. The sensor head requires metal inserts, but they can be made using hand tools; CNC machining is not required. The full project should cost around EUR 600 in total. Meanwhile, commercial instruments with such a range start at around EUR 5000. Devices with a range starting at 100 nm can cost around EUR 1000 but do not offer modularity and expansion possibilities.
The system is highly modular. Each module can be replaced with another solution, expanded, or tailored to special needs. It is possible to mix industrial-standard elements with those described in the article. Any sensor head with a frequency signal output can work within the system, as long as the oscillator circuit can work with this signal. A sensor head with a USB signal output can work with the provided software if it is connected directly and the signal frames are valid. The oscillator circuit can also be replaced to change the frequency range; only the Pierce oscillator working principle must remain. The same goes for microcontroller units and interfaces: it is possible to transfer data via Wi-Fi or Bluetooth by changing the microcontroller unit and making small program changes.
The software can also be replaced with another third-party or open-source solution that can read from the interface. Our system worked well with the Arduino IDE plots, the OpenQCM software (Figure 5a), and our own software (Figure 5b). Each of them can be used on various systems and architectures, including all x86 and all ARM systems supporting Java (e.g., Raspbian). It does not require LabVIEW or similarly expensive software, such as the commercial Prevac solutions [36]. The use of dedicated software running on an external microcomputer makes the described QCM a compact device that can store the characteristics of various materials and support the vaporization of multilayer coatings. It also allows for a live preview of the operating parameters of the resonating crystal, allowing for the identification of external measurement disturbances (Figures 3a and 4a). The great advantage of the system is that it does not require calibration. The only thing needed for a proper measurement is a stable initial frequency; the measurement is relative to it, and the steepness of the curve does not depend on conditions. Small noises are acceptable and can be resolved owing to the median filter included in the software. However, during laboratory tests, this was not necessary.
Higher-class devices have systems that monitor and compensate for disturbances, mainly caused by temperature and attenuation of the resonant system itself, which are described in mathematical models, usually with a damped harmonic oscillator. There is no need to compensate for this damping, because the attenuation increases only with a large mass.
Conclusions
The differences, including the price differences, between commercial film thickness monitor (FTM) devices lie mainly in the number of channels supported. However, the reliability of the OpenQCM platform allows for easy expansion to the user's needs. The device's goal of providing cheap and reliable measurements for sample preparation and microscope experts could make it suitable even for high-throughput control of industrial sputtering processes. The method of measuring precise layer thickness is particularly important, among others, when it comes to health and safety [8,13-15,20]. This paper shows how a simple QCM controller can be upgraded and well suited to maximize the quality of measurements and the user experience in typical deposition processes such as vacuum sputtering or thermal evaporation. Data Availability Statement: Data will be made available on request.
A New Finite-Time Observer for Nonlinear Systems: Applications to Synchronization of Lorenz-Like Systems
This paper proposes a synchronization methodology for two chaotic oscillators under the framework of identical synchronization and a master-slave configuration. The proposed methodology is based on state observer design within the frame of control theory; the observer structure provides finite-time synchronization convergence by cancelling the upper bounds of the main nonlinearities of the chaotic oscillator. The above is shown via an analysis of the dynamics of the so-called synchronization error. Numerical experiments corroborate the satisfactory results of the proposed scheme.
Introduction
Generally, nonlinear systems display complex dynamic behavior, such as steady-state multiplicity, instabilities, and complex oscillations, under different initial conditions, external disturbances, and time-varying parameters, leading to chaotic dynamic behaviors. However, besides the scientific interest in the study and analysis of nonlinear systems with exotic dynamic behaviors, applications for engineering purposes have been growing. Among these engineering applications, the employment of complex analysis for transport phenomena, chemical reacting systems, the electronics industry, and synchronization techniques for secure data transmission are currently very important [1][2][3][4].
In particular, the synchronization of chaotic oscillators is important for secure data transmission. Among several types of synchronization, one of the simplest and most frequently studied is so-called identical synchronization (IS). In this case the main purpose is to synchronize two or more chaotic oscillators with the same topology, which are coupled via an output injection of the measured signal from the master oscillator [5,6]. The above has been analyzed with control theory techniques under the framework of nonlinear observers, where asymptotic, sliding-mode, finite-time, and high-gain observers have been applied for synchronization purposes [7][8][9].
In this work an identical synchronization technique for a master-slave configuration is proposed, employing a class of nonlinear coupling of the measured signal to the slave system in order to generate finite-time synchronization. The finite-time synchronization convergence is analyzed via the dynamics of the so-called synchronization error, under the assumption that the upper bounds of the chaotic oscillators are known.
The rest of this work is organized as follows. In Section 2 the problem statement is described and the observer design is presented; the finite-time convergence is proved. In Section 3 the proposed methodology is applied to the synchronization of the hyperchaotic Lorenz-Stenflo system with success. Finally, in Section 4 the synchronization of the hyperchaotic Lorenz-Haken system is given.
Observer Design and Finite-Time Convergence
Let us consider the following general state space model: where x = [x₁, x₂, …, xₙ] ∈ Ωₓ ⊂ ℝⁿ is the state variable vector, y ∈ Ω_y ⊂ ℝᵐ is the corresponding measured output vector, and f : Ωₓ → Ωₓ is a nonlinear differentiable vector function. It is assumed that all trajectories of the state vector of system (1) are bounded, considering the set Ωₓ ⊂ ℝⁿ as the corresponding physically realizable domain, such that Ωₓ = {x : ‖x‖ ≤ x_max}. In most practical cases, Ωₓ will be an open, connected, relatively compact subset of ℝⁿ, and in the ideal case, Ωₓ will be invariant under the dynamics of system (1).
In the synchronization scheme, system (1) is considered as the master system. Now let us propose a dynamical system to be synchronized with master system (1), which will be the slave system (2): for i ∈ {1, 2, …, n}, where x̂ᵢ is the i-th state variable of the slave system, εᵢ = xᵢ − x̂ᵢ is defined as the synchronization error, p > 1 is considered an odd integer, and k₁ and k₂ are positive constants. Now we establish the analysis of the synchronization error and its finite-time convergence. Proposition 1. Let master system (1) be given, and consider slave system (2), where the following conditions are fulfilled: (A1) the nonlinearities are bounded, with upper bounds Fᵢ, for all x, x̂ ∈ Ωₓ ⊂ ℝⁿ.
(A2) The slave gains k₁ and k₂ are chosen to be sufficiently large with respect to these bounds. Then, dynamic system (2) acts as a finite-time state observer for system (1), with the corresponding finite-time convergence. Proof. The dynamics of the estimation error are developed employing (1) and (2). Applying the Cauchy-Schwarz inequality and (A1) to the error dynamics (6), and then considering assumption (A2), we obtain inequality (8). Notice that inequality (8) is a class of finite-time stabilization function, where the parameter p > 1 is an odd integer. Solving inequality (8) and evaluating at steady state (ε(t) = 0) then yields the finite-time convergence.
As the slave system, we consider the dynamical system given by (13). The numerical bounds for the trajectories of the Lorenz-Stenflo system (12) have been estimated in [10]. It was proved that system (12) has ultimate bounds and that its trajectories belong to an invariant set.
For the tuning of the slave gains of system (13), we include Table 1 in order to find the upper bounds F₁, F₂, F₃, and F₄ corresponding to assumption (A1).
According to the numerical results of Table 1, the values of F are approximated as F₁ ≅ 20, with the remaining values given in Table 1. Some numerical simulations are performed using the parameter values 1, 0.7, 26, and 1.5 for system (12) and fixing the slave system exponent as p = 3. We consider the initial conditions x₁(0) = 1, x₂(0) = 1, x₃(0) = 1, and x₄(0) = 1 for the master system, and x̂₁(0) = −1, x̂₂(0) = 5, x̂₃(0) = −2, and x̂₄(0) = −5 for the slave system. The synchronization between master system (12) and slave system (13) is shown in Figure 2, where the convergence of the state estimates to the real states is depicted. The subscripts m and s represent the variables of the master and slave systems (12) and (13), respectively. As can be noted in Figure 3, the synchronization results achieved with the finite-time observer are good, where each image represents the corresponding synchronization error.
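The observer structure can be illustrated with a short numerical sketch. To keep it self-contained we use the classic Lorenz system as a stand-in for the paper's Lorenz-like systems, and we apply the linear plus odd-power (p = 3) error injection of slave system (2) to each state; the gains, step size, and initial conditions below are our own illustrative choices, not the paper's.

```python
def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz vector field, used as a stand-in Lorenz-like system.
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

def simulate(k1=40.0, k2=5.0, p=3, dt=1e-3, steps=5000):
    x = [1.0, 1.0, 1.0]      # master state
    xh = [-1.0, 5.0, -2.0]   # slave (observer) state
    for _ in range(steps):
        fx, fxh = lorenz(x), lorenz(xh)
        for i in range(3):
            e = x[i] - xh[i]  # synchronization error, component i
            # Slave: copy of the dynamics plus linear and odd-power (p odd)
            # error injection, mirroring the structure of slave system (2).
            xh[i] += dt * (fxh[i] + k1 * e + k2 * e ** p)
            x[i] += dt * fx[i]
    # Return the final synchronization error norm.
    return sum((a - b) ** 2 for a, b in zip(x, xh)) ** 0.5
```

Running `simulate()` drives the error norm from about 5.4 at the start to essentially zero well within the simulated window, reproducing the rapid convergence behavior reported in Figures 2 and 3.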
Computer simulations have been carried out in order to test the effectiveness of the proposed synchronization strategy, using the same setup as above and fixing the slave system gains.
Ought-contextualism and reasoning
What does logic tell us about how we ought to reason? If P entails Q, and I believe P, should I believe Q? I will argue that we should embed the issue in an independently motivated contextualist semantics for ‘ought’, with parameters for a standard and a set of propositions. With the contextualist machinery in hand, we can defend a strong principle expressing how agents ought to reason while accommodating conflicting intuitions. I then show how our judgments about blame and guidance can be handled by this machinery.
Introduction
What does logic tell us about how we ought to reason? If P entails Q, and you believe P, should you believe Q? There seem to be cases where you should not, for example, if you have evidence against Q, or the inference is not worth making. So we need a theory telling us when an inference ought to be made, and when not. I will argue that we should embed the issue in an independently motivated contextualist semantics for 'ought'. With the contextualist machinery in hand we can give a theory of when inferences should be made and when not.
Section 2 explains the background and the main problems connecting logic with norms of reasoning. Section 3 explains the two parameters we need for contextualism about 'ought': a set of live possibilities and a standard. Section 4 discusses the objection from belief revision (this and the other problems will be explained in Sect. 2) and argues that it can be solved by using the set of live possibilities, as can the preface paradox (Sect. 5) and the problem of excessive demands (Sect. 6). Section 7 discusses the problem of clutter avoidance and argues that it can be solved by using the relevant standard. Section 8 discusses the implications for blame and guidance. Section 9 concludes. (Correspondence: Darren Bradley, Philosophy Department, Leeds University, Woodhouse Lane, Leeds LS2 9JT, UK.)
Background
What is the relation between logic and reasoning? For example, suppose an agent believes that P. Suppose also that Q is a logical consequence of P, but leave open whether the agent believes that Q is a logical consequence of P. 1 Should the agent infer that Q? (To fill out the example, P might be 'it's raining and if it's raining then it's wet' and Q might be 'it's wet'.) A useful starting point is that logic 'prescribe[s] universally how one ought to think' (Frege 1893/1903/2009). This suggests that the agent ought to believe Q. We might try to capture the idea with the following norm: 2
Strong Normativity Thesis
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S ought to believe Q (The reason for calling it 'strong' will emerge just below). 3 Various purported counter-examples have been given (Harman 1986, Field 2009, MacFarlane ms) based around four main problems: 4
Belief Revision
The agent might have strong evidence against Q. If so, they should surely revise their belief that P, rather than believe Q. 5

The Preface Paradox

Suppose S rationally believes each of the assertions in his book, P1, P2, …, Pn. Let Q stand for the conjunction P1 & P2 & … & Pn. Q is entailed by the author's beliefs. But surely, since the author regards himself as fallible, he should not believe the conjunction of all his assertions. 6

1 I follow the literature in assuming that logical consequence is not dependent on thought, reasoning, or minds (see Prawitz 2005). Other than that, any variety of logical consequence can be plugged in. See Steinberger (2019b) for discussion. Modus ponens is the standard example, but what I say is intended to generalize to other deductive principles, and, mutatis mutandis, to inductive principles. 2 I take conditionals like this to be material conditionals. It is a narrow-scope requirement. 3 Part of the reason is to distinguish it from Steinberger's (2019a) similar but not obviously identical 'Normativity Thesis'. 4 One problem I won't discuss is the explosion caused by inconsistent beliefs (Allo 2016; Steinberger 2016). I don't think contextualism can solve this problem. I suspect the solution is to limit the antecedent of the norm to mental states more basic than beliefs, such as experiences, but that is a topic for another paper. Also, the problems above are based on cases where the agent fails to make a valid inference. There are mirror-image cases where agents do make inferences which are invalid, e.g. Pascal's wager. I leave the extension of this framework to such cases for future work. 5 I take this to be the same as the 'bootstrapping problem' (see Broome 2013, Sect. 5.3; Gibbons 2013, p. 32), or at least one that has the same solution. 6 See Makinson (1965). This is similar to the lottery paradox (Kyburg 1961) but I leave a discussion of the lottery paradox for another occasion.
Excessive Demands
Some consequences of an agent's beliefs are too complicated for them to work out. For example, Fermat's Last Theorem follows from the rules of arithmetic. But surely most humans who know the rules of arithmetic have no obligation to believe Fermat's Last Theorem.
Clutter Avoidance

Some consequences of an agent's beliefs are too uninteresting to be worth working out. For example, an agent might be able to infer that either grass is green or Elvis lives on the moon, using disjunction introduction. But surely they have no obligation to make such an inference, and might be irrational for doing so.
One reaction to these problems is to weaken the Strong Normativity Thesis. To see how this might be done, it is helpful to review MacFarlane's taxonomy of three choice-points:

i. the consequent of the conditional (if S believes P then S ought to believe Q)
ii. both the antecedent and the consequent (if S ought to believe P then S ought to believe Q)
iii. the whole conditional (S ought to believe that if P then Q)
For each choice-point, the options described move roughly 7 from more demanding to less demanding. For example, a norm that says agents are required to believe Q is more demanding than a norm that says agents merely have permission to believe Q. The Strong Normativity Thesis takes the first, and most demanding, option on all three choice-points. It says that agents have a requirement rather than a permission or reason, that the requirement is to believe rather than merely not disbelieve (which includes suspension of belief), and that the requirement attaches to only the consequent. 8 The existing literature largely considers weakening the Strong Normativity Thesis by moving to these other choice-points. 9 But I don't think these choice-points are the right places to weaken the link between logic and reasoning. 7 I add 'roughly' because it's not obvious that having permission is stronger than having a reason. 8 See Schroeder (2004), Kolodny (2005) and Finlay (2014, pp. 52-53) for arguments against the wide-scope solution. See Titelbaum (2015) for references and discussion of the logical relations between wide- and narrow-scope norms. 9 See Steinberger (forthcoming a, c) for discussions based on these choice-points. I think we need a different way to weaken the Strong Normativity Thesis: the key is that 'ought' is context-sensitive, and the Strong Normativity Thesis is true only with a particular sense of 'ought'. I will explain this in the next section, then show how the counter-examples are avoided.
Eight quick clarifications (impatient readers can skip to the next section): First, I take reasoning to be a process of transitioning between beliefs. Beyond that I remain neutral on what reasoning is. 10 Second, I remain neutral on the existence and nature of other epistemic norms in the area e.g. norms of belief (Fassio 2019), norms that agents should collect more evidence (Friedman forthcoming) etc.
Third, I remain neutral on whether norms of reasoning are fundamental or derived from more fundamental synchronic norms (Hedden 2015).
Fourth, I will focus on deductive inferences rather than inductive inferences. I think my account can be extended to inductive inferences, but do not do so here.
Fifth, I take 'reasoning' to mean the same as 'inferring'. The latter is useful for talking about individual inferences, which is a more natural locution than 'an individual act of reasoning'.
Sixth, I will use 'correct', 'bad' and 'good' only as normative terms, and use 'valid' for logical relations.
Seventh, I will assume that there can be epistemic reason to believe, and to make (or not to make) an inference to a belief. 11 I will assume that an inference is a type of action, so there can also be practical reason to make (or not to make) an inference. I will remain neutral on whether there can be practical reasons to believe and on whether epistemic reasons are fundamental or are ultimately grounded in practical reason. 12 Finally, I take there to be a close connection between 'ought', 'should', 'good' and 'reasons'. Specifically, I assume that 'what one should do' is synonymous with 'what one ought to do', 'what one has most reason to do' and 'what is good' (Shafer-Landau 2005; Broome 2013; Berker 2018). I will focus on contextualism about 'ought', but I take this to have straight-forward implications for contextualism about other normative terms (Finlay 2014). 13 I remain neutral on which if any is fundamental.
Two parameters
Suppose Napoleon, an eighteenth-century general, and Heimson, a twentieth-century schizophrenic, utter the same sentence: 'I am Napoleon'. There is a sense in which 'I' 10 means the same thing in both utterances. This type of meaning can be thought of as a rule picking out whoever is speaking; this is the character (Perry 1979; Kaplan 1989). And there is a sense in which 'I' means different things in each utterance, Napoleon and Heimson respectively; this is the content. So the content of any utterance of 'I' depends on a parameter: the speaker. We can make the parameter explicit by adding to the text who 'I' is relative to, e.g. 'I-Napoleon' or 'I-Heimson'. 10 Valaris (2019) argues that 'reasoning' is ambiguous between cases where the agent believes the propositions and cases where the agent is working out the consequences of propositions under supposition. I follow the literature in talking about cases where the agent believes the propositions, but my arguments apply to cases where the propositions are merely supposed. 11 See Singer and Aronowitz (forthcoming) for the view that there can be epistemic reason to do pretty much anything. 12 See Cowie (2014), Woods and Maguire (forthcoming).
An analogous view regarding 'ought' has become increasingly popular. 14 In fact it is plausible that there are at least two 15 parameters needed to fix the content of a sentence including 'ought': a standard and a set of live possibilities. In this section I will explain the view, and also separate the core commitments from stronger positions we need not be committed to.
Propositions/possible worlds
The first parameter is a modal base which determines a proposition or set of live possible worlds. 16 The live worlds are those compatible with the modal base. If the modal base is empty then all worlds are live. As the modal base grows, the set of live worlds is restricted. On the standard theory of modals (Kratzer 1981), 'it must be that p' means, roughly, that in all the live worlds, p.
This parameter is often called the 'information set', but using information here is too restrictive, for two reasons. The first reason is that information is naturally taken to imply truth. However, it will be important that agents can make good inferences from false beliefs. 17 We can allow that the modal base consists of the beliefs of the subject of the sentence, or the speaker, or some third party, or the collective beliefs of a group, or the propositions known by any of the former, or any of these plus a number of fixed propositions, and endless further options.
The second reason is novel. I think information is too restrictive because the parameter needs to vary with the agent's abilities, not just their information. What you ought to do depends on what possibilities you can bring about in the future. 18 To motivate this, suppose you are on the beach and see someone struggling in the water. Whether you ought to dive in depends on whether you can swim. 'You ought to dive in given that you can swim' is true while 'you ought to dive in given that you cannot swim' is false. In a context in which you can swim, performing the rescue yourself is a live possibility.
With the assumption that making an inference is a type of action, 19 the value of the live possibilities parameter can depend in part on which inferences the agent can make (Fig. 1). We will also make the standard assumption that when 'ought' occurs in the consequent of a conditional, the antecedent of the conditional is added to the modal base. Consider 'if S believes P then S ought to believe Q'. The 'ought' has a modal base which includes 'S believes P'. We can make this explicit e.g. 'if S believes P then S ought-given-S-believes-P to believe Q'.
Standard
The second parameter is a standard or goal which determines an ordering of the live possible worlds. Plausibly, 'S ought to A' is true iff S A's in every live world at the top of the ranking. 20 The standard need not be one that the subject cares about. 21 If I say 'you ought to start with the cutlery on the outer edge', the standard might be the rules of etiquette. The more explicit sentence is 'by standards of etiquette, you ought to start with the cutlery on the outer edge'. This sentence can remain true even if you don't care about etiquette. This allows us to say to the psychopath 'you shouldn't kill people'; the full sentence is 'by standards of morality, you shouldn't kill people', and this is true even if the psychopath doesn't care about morality. 22 For our purposes we only need to distinguish two standards: those corresponding to the epistemic ought and the practical ought. 23 We can get a grip on the epistemic ought by thinking about contexts where the conversation concerns some epistemic standard such as having true beliefs. Typical sentences might be 'you ought to be uncertain' or 'we ought to expect defeat'.
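The truth conditions just sketched can be written out in rough Kratzer-style notation. This is only a sketch: the labels f for the modal base, g for the standard, and Live/Best for the derived sets are my abbreviations, not the author's.

```latex
% Sketch: ordering semantics for 'ought' (Kratzer-style).
% f(c): the modal base supplied by context c (a set of propositions)
% Live(c): the live worlds, i.e. those compatible with every member of f(c)
% \prec_{g(c)}: ranking of worlds by the contextually supplied standard g(c)
\[
  \mathrm{Live}(c) = \bigcap f(c), \qquad
  \mathrm{Best}(c) = \{\, w \in \mathrm{Live}(c) : \neg\exists w' \in \mathrm{Live}(c)\ (w' \prec_{g(c)} w) \,\}
\]
\[
  \text{`S ought to A' is true at } c \;\iff\; \forall w \in \mathrm{Best}(c),\ \text{S A's in } w
\]
```

On this sketch, growing the modal base shrinks Live(c), and changing the standard g(c) reorders the same live worlds; these are the two parameters the paper exploits.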
Again, the standard need not be one that the subject cares about, so we need not assume that agents care about any epistemic goals. For example, someone who is told how a film ends ought (in the epistemic sense) to believe what they are told, even if they do not care how it ends, and even if they don't want to know how it ends. 24 The full sentence might be 'by epistemic standards, you should believe that this is how the film ends'.
There is disagreement about what the epistemic standard is. Leading contenders for epistemic standards include having beliefs that are (a) true (b) justified (c) knowledge. 25, 26 The differences between these positions won't matter here, so I will remain neutral. And we can remain neutral on whether the standard (e.g. truth) is constitutive of belief or whether something can be a belief without having such a standard. 27 This brings us to the practical ought. 28 We can get a grip on the practical ought by thinking about normal contexts where the conversation concerns what is best to do. Typical sentences might be 'you ought to stay in school' or 'should I boil or steam the vegetables?'. Call the standard associated with the practical ought the practical standard.
There is disagreement about what the practical standard is. Humeans hold that the practical standard is a function of one's desires, e.g. the standard might be to maximize a weighted set of desires. Non-Humeans might hold that the practical standard is to maximize value. There are further debates about whether the practical standard is to maximize actual value or expected value, and whether expected value is determined by beliefs or evidence. 29 We can remain neutral on these issues. 30 We can also remain neutral on whether there are further parameters which determine the content beyond standards and propositions. For example, Carr (2015) argues that 'ought' must be relativized to a decision rule. This may be so, but it will not play a role below.
24 See Kelly (2003). 25 See Chignell (2018). 26 I set aside epistemic norm pluralism (Hughes 2017; Kopec 2018). 27 See Wedgwood (2002), Boghossian (2003). 28 See Williams (1965), Harman (1973) and Geach (1982), Broome (2013, pp. 12-24), and Schroeder (2011). 30 We can also remain neutral on whether there is a metaphysically privileged normative 'ought' which expresses genuine normative authority. Worsnip writes: "we should be careful to separate the question of whether (e.g.) the law …has genuine normative authority from whether there is a robustly normative usage of the legal 'ought'. The former requires the law to actually possess normative authority, whereas the latter only requires there to be speakers who take the law to possess normative authority. So even if only a handful of the above 'oughts' reflect a genuine source of normativity, many more of them might nevertheless be robustly normative usages of 'ought'." (Worsnip 2019a, page numbers not yet available; see also Worsnip 2020) The former question is metaphysical, the latter is semantic. We need only make the latter assumption that there are robustly normative usages of 'ought'.
Now we have this machinery on the table, I will argue that the problems regarding the norms of reasoning can be resolved. There are numerous precisifications of the Strong Normativity Thesis, some of which are true and some of which are false.
Objection from belief revision
Suppose S believes P, believes P entails Q, but S has strong evidence against Q. It seems that S should not come to believe Q. But this is difficult to accommodate if our principle has a claim in the consequent about what the agent should believe e.g.
Strong Normativity Thesis
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S ought to believe Q.
This is the problem of belief revision. 31 To solve this problem (and the next) we need to distinguish the normative status of a belief from the normative status of an inference. Crucially, an agent can make a good inference from a bad (e.g. unjustified) belief. 32 Our question concerns reasoning, so we want to bracket the question of whether the initial belief was justified and focus on the question of whether the inference was good. So the solution to the problem of belief revision is to say that the inference to Q is good but the belief that Q is not.
What role does contextualism play here? It helps specify a modal base relative to which the inference is good. So we modify the Strong Normativity Thesis in two ways-we replace 'believe' with 'infer' and we make explicit the modal base:
Modified Strong Normativity Thesis
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S ought-given-P-entails-Q-and-S-believes-P to infer Q.
This allows us to judge that the inference to Q is good qua inference, while remaining neutral on the epistemic status of the initial belief that P, and consequently remaining neutral on the epistemic status of a belief that Q.
Someone might object that it is overall justification for the belief that Q that we are really interested in. If so, an account that brackets the rest of an agent's epistemic states is unhelpful.
The first response is to flat-footedly reply that our question is about reasoning, not belief. But even if our concern were about overall justification of belief, our framework would be helpful: we would just have to identify some initial epistemic state (e.g. basic beliefs, or evidence 33 ), then plug in the agent's total initial epistemic state for P:
31 One response is to weaken the consequent from the obligation operator to the permissible operator (Broome 2013, p. 219). Another is to endorse wide scope norms (Broome 1999). 32 See Broome (1999, pp. 418-419): 'In your reasoning, you can take as premises beliefs and intentions you have no reason to have, and even beliefs and intentions you ought not to have. The nature of your reasoning is unaffected by whether or not you ought to have the beliefs and intentions it is premised on.' 33 For basic beliefs see BonJour (1985), for evidence see Feldman and Conee (1985).
Modified Strong Total Normativity Thesis
For all agents S, and propositions P and Q: If S's initial-epistemic-state entails Q, then S ought-given-S's-initial-epistemic-state to believe Q.
So this reasoning framework can be placed into a bigger story about rational belief. But rational belief raises numerous tricky issues such as internalism, defeaters and inductive reasoning which go beyond the scope of this paper.
Terminology: In future, rather than repeating the whole antecedent appended to 'ought', I will just write 'ought A '.
The preface paradox
In this section I will argue that the same response, that of relativizing 'ought' to a possibilities parameter, solves the preface paradox:
Preface Paradox
Suppose S rationally believes each of the assertions in his book, P1, P2…Pn. Let Q stand for the conjunction, P1 & P2…&Pn. Since the author regards himself as fallible, he should not believe the conjunction of all his assertions (Q).
Thus S believes P1, P2…Pn and that they entail Q, but S should not believe Q. The problem is usually taken to be that of explaining why the author should not make the inference to Q.
But there is a sense in which the author should make the inference. If we move from talk of belief to talk of inferences, and set the live possibilities parameter to the proposition that S rationally believes P1, P2…Pn, then we can hold that the inference to Q is correct after all:
Modified Strong Normativity Thesis
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S ought A to infer Q.
So there is a sense in which S ought to infer Q.
Someone might object that this misses the point, arguing that the Preface Paradox shows that the agent should not make the inference. But why not? The standard answer begins with the observation that each of P1, P2…Pn has partial justification, and then concludes that justification for Q might fall below some threshold. 34 But the level of justification of each of P1, P2…Pn is not at issue. We are assessing the deductive inference from P1, P2…Pn to Q (not the belief that Q) and thereby setting aside the justificatory status of P1, P2…Pn. It is irrelevant whether each of P1, P2…Pn has only partial justification, or is even completely unjustified. We are asking whether it is correct to infer Q from the set of beliefs that P1, P2…Pn, and indeed it is.
It might be useful to draw an analogy with Lewis's (1980)
Excessive demands
The problem of excessive demands is that we sometimes cannot work out the consequences of our beliefs. All the theorems of mathematics follow from our beliefs about arithmetic, but surely we are not required to infer them.
This conflict between logic and the norms of reasoning can be resolved by again invoking the live possibilities parameter. Above we used the belief part of the live possibilities; here we use the actions part of the live possibilities, invoking the assumption that an inference is a type of action.
From any belief there are an infinite number of valid inferences that could be made, of which some are simple and some are complicated. Let's first focus on the infinite set of valid inferences. There is a sense of 'ought' which includes all valid inferences in the possibilities parameter. (Bayesians will be familiar with this, as it is what 'rational' usually means in the Bayesian literature. 35 ) We can make this ideally rational 'ought' explicit by using 'ought-rationally'. And we can make explicit the sense of 'ought' which is limited to inferences some particular agent is able to make with 'ought-actually'. We get a false principle if we combine ought-actually with all the valid inferences:
False Requirement (FR)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S ought A -actually to infer Q.
This implies that S ought-actually to believe all theorems of mathematics. This is the root of the problem of excessive demands.
But the objection is side-stepped if we use ought-rationally:
Rational requirement (RR)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S ought A -rationally to infer Q. 36
The objection is also side-stepped if we use ought-actually and add to the antecedent that the inferences are those S is able to make, which we can call 'S-available inferences'.
Non-rational requirement (NR)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S ought A -actually to infer Q.
Thus the live possibilities parameter solves the problem of excessive demands by providing a reading of the Strong Normativity Thesis on which one is not required to infer all the theorems of mathematics (NR), and it also explains the intuition that there is a sense in which you should infer all the theorems of mathematics (RR).
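The contrast between FR, RR and NR can be displayed schematically. This is a rough sketch in my own notation, not the author's: B_S abbreviates S's belief, Avail_S(P, Q) abbreviates 'P supports Q via S-available inferences', and the superscripts mark the two senses of the relativized 'ought'.

```latex
% Sketch: the three precisifications differ only in the 'ought' parameters.
% B_S P: S believes P;  Avail_S(P, Q): P supports Q via S-available inferences
% O^{act}_A / O^{rat}_A: 'ought_A-actually' / 'ought_A-rationally'
\[
  \text{FR:}\ \ (P \vDash Q) \wedge B_S P \;\rightarrow\; O^{\mathrm{act}}_{A}(\text{S infers } Q)
  \quad \text{(false: demands every theorem)}
\]
\[
  \text{RR:}\ \ (P \vDash Q) \wedge B_S P \;\rightarrow\; O^{\mathrm{rat}}_{A}(\text{S infers } Q)
\]
\[
  \text{NR:}\ \ (P \vDash Q) \wedge B_S P \wedge \mathrm{Avail}_S(P, Q) \;\rightarrow\; O^{\mathrm{act}}_{A}(\text{S infers } Q)
\]
```

Read this way, FR fails because it pairs the demanding consequent with the unrestricted antecedent, while RR and NR each weaken exactly one slot.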
To fill this out, we can imagine three different sentences which can be inferred from P.
Q1: An obvious inference that any reasoner can make
Q2: A difficult inference that a logic student can make
Q3: A superhuman inference that no human can make
Let w3 be the world where S infers Q3, Q2 and Q1; let w2 be the world where S infers Q2 and Q1; and let w1 be the world where S infers only Q1. Worlds are ordered (vertically) by how well they achieve the epistemic goal (Fig. 2). Agents ought to make true the best live world. Relative to ought-rationally, S ought to make w3 true. But if w3 is not live then S ought-actually to make only w2 true. And if S is unable to do difficult logical reasoning then only w1 is live.
One complication concerns which inferences the agent is able to make, as there is some flexibility about what is held constant. Suppose the agent is tired, and this restricts the inferences they do make. What should we hold fixed when assessing what inferences they are able to make? If we hold fixed that the agent is tired, we will get one set of available inferences. If we allow them a nap we will get a bigger set of inferences. If we allow them to take a mathematics course, we will get an even bigger set of inferences. I don't want to take a stand on this, a topic which has been discussed in the literature on ought-implies-can. 37 I suspect that 'able' is also context-sensitive, which would make 'available inferences' context-sensitive. And any vagueness in 'available' will be matched by vagueness in 'ought'. The truth of NR just requires that the demands of ought-actually do not extend beyond the available inferences. 38 We have explained how excessive demandingness can be avoided by positing the relatively undemanding norm of NR. But we'll see that NR might still be too demanding.
Clutter avoidance
NR (and RR) still face the problem of clutter avoidance. They seem to imply that I am obligated to believe all of the infinitely many trivial logical consequences of my beliefs. This looks implausible. Steinberger (2019, p. 11) writes: Not only do I not care about, say, the disjunction 'I am wearing blue socks or Elvis Presley was an alien' entailed by my true belief that I am wearing blue socks, it would be positively irrational for me to squander my meagre cognitive resources on inferring trivial implications of my beliefs that are of no value to my goals.
Many philosophers have concluded that there must be a no-clutter norm, 39 but these cause serious problems. 40 I think the problem of clutter avoidance can be solved by invoking the parameter of the standard. I will argue that in normal contexts you ought not to make trivial inferences, yet we can identify contexts in which you should make trivial inferences.
The sense of 'ought' in which you ought to infer all the trivial logical consequences of your beliefs is the epistemic sense (e.g. the standard of having all and only true beliefs). 41 We can make this parameter value explicit with 'epistemically-ought':
Non-rational Epistemic-Requirement (NER)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S epistemically-ought A -actually to infer Q.
37 See Frankena (1958) and Schwarz (2020) for a contextualist account of 'can'. 38 Someone might object that this is trivial or analytically true. Yes. We are trying to find true principles which vindicate the intuitions behind plausible but overly strong principles. We should expect some of these true principles to be trivial or analytic. 39 See Harman (1986), Goldman (1986), Christensen (1994), Williamson (1998), Ryan (1999), Feldman (2000), Wallace (2001), Sainsbury (2002).
(I used available inferences and 'ought A -actually'. For completeness, note that the 'rationalized' version is also true, where we remove the restrictions to available inferences:
Rational Epistemic-Requirement (RER)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S epistemically-ought A -rationally to infer Q.)
If NER and RER seem implausible, it might be because for humans there is always some cost in time, energy or computing power to making an inference. But imagine a creature for whom there was no cost, e.g. angels with infinite computing power. If they are at all interested in truth, knowledge or justification, then they would instantaneously make all the inferences from their beliefs. And we could explain the rationality of their doing so in terms of the epistemic ought. Although humans are not like this, I think it is natural to invoke such ideals.
For further support, Christensen (2004, pp. 165-166) gives the following example: Efficiency seems to enter into the evaluation of car designs in a fairly simple way: the more efficient a car is, the better. Now suppose someone objected to this characterization as follows: "Your evaluative scheme imposes an unrealistic standard. Are you trying to tell me that the Toyota Prius hybrid, at 49 mpg, is an "inefficient" car? On your view, the very best car would use no energy at all! But this is technologically impossible…the very laws of physics forbid it!" Christensen points out that this objection fails to undermine our ideal of efficiency, concluding that there is room for unattainable ideals even in the most pragmatic endeavours, and that we can recognize the normative force of ideals whose realization is far beyond human capacities. 42 Moving on, the sense of 'ought' in which it is not the case that you ought to infer all the trivial logical consequences of your beliefs is the practical sense:
False Practical-requirement (FPR)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S practically-ought A -actually to infer Q. 43
As long as S has limited cognitive capacity and reasons to do things other than believe truths (e.g. to eat, to reproduce), as all known agents do, then FPR will have counterexamples. The problems of clutter avoidance involve such counter-examples. 44, 45
We can see in Fig. 3 that relative to the practical standard, the best world could be w1, where only obvious inferences are made and the agent can spend their resources doing something else.
42 This sentence borrows from Christensen's phrasing on p. 165 and p. 167. 43 This might be true using practically-ought-rationally, if ideally rational agents can make every inference instantly with no cost. But my point is that we can find a reading which vindicates the intuitions behind the no-clutter objection. 44 McHugh and Way (2018) makes a similar point in terms of attributive uses of 'good': 'Any 'intuition' we have that such beliefs are worthless is likely to concern some form of goodness [all things considered] rather than goodness qua [belief i.e. epistemic goodness]' p. 24. 45 It remains an open question what you practically-ought to believe: which inferences practically-should you make? It will depend on your views about the practical standard and the contingent facts about reaching
For practical ought claims to be true, the agent must have practical goals such that it is worth making the inferences. So the true norm is something like:
Non-rational Practical-requirement (NPR)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and P supports Q via-S-available-inferences, and it is worth S making the inferences, then S practically-ought A -actually to infer Q.
(For completeness, note that the 'rationalized' version is also true, where we remove the restrictions to available inferences:
Rational Practical-requirement (RPR)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and it is worth S making the inference, then S practically-ought A -rationally to infer Q.)
I suggest that NER, RER, NPR and RPR express the link between logic and reasoning.
Ideals, guidance, appraisal
Norms can be used for i) expressing ideals, ii) for guidance and iii) for making appraisals. 46 But different norms seem to be required for each role. I want to show that our intuitions about the divergence of norms for ideals, guidance and appraisal can be accounted for by the two parameters.
Footnote 45 (continued): this standard in your situation (e.g. fulfil desires for Humeans, maximize value for non-Humeans, etc.). We should not expect general answers to these questions.
46 This is the focus of Steinberger (2019a). Compare Kiesewetter (2017, p. 13), who distinguishes first-person guidance, second-person advice and third-person criticism. I suggest that talk of first-person, second-person and third-person perspectives are ways of making salient different values for the possibilities parameter.
Ideals
Let's start with ideals, which are closely related to standards. Think of the ideal norm as expressing the best way of achieving a given standard, making no allowance for any limitations of an agent or other standards the agent might have. We'll focus on the epistemic standard, e.g. believing all and only truths, so the relevant ought is epistemic-ought. As any limitations of the agent are irrelevant for the ideal norm, we need ought-rationally. Putting this together, the ideal norm is: 47
For all agents S, and propositions P and Q: If P entails Q, and S believes P, then S epistemically-ought A -rationally to infer Q.
For precedent, compare the utilitarian thesis that an act is right if and only if it maximizes happiness. Faced with the objection that this norm fails to provide guidance, utilitarians can maintain that their principle expresses the ideal norm relative to the moral standard, even if we cannot always follow it. 48 There is a controversy worth mentioning before we go further. What is ideal reasoning for an agent who falsely believes that the relevant rule is invalid? For example, suppose S has been told by a confused teaching assistant that modus ponens is invalid. This is misleading higher order evidence. Should agents reason in line with their false beliefs? Some say no, that misleading higher order evidence should be ignored in first-order reasoning (level-splitters and right reasons theorists 49 ). Others say yes (conciliationists). There is an analogous debate in ethics. Some hold that those with misleading higher order evidence about the ethical rules should (morally) ignore that misleading higher order evidence. 50, 51 I have my own views on this controversy (Bradley 2019), but this framework allows us to remain neutral. At the end of Sect. 4 I argued that we can bracket the rest of the agent's epistemic states, and in particular the question of whether P is justified, and focus on the inference from P to Q.
Similarly, we can bracket any of the agent's epistemic states that might defeat the inference, i.e. make the inference from P to Q incorrect. The advice of a confused teacher would thereby be bracketed. Thus, I leave open the question of how, if at all, contextualism interacts with the debate about higher level evidence.
47 One possible counter-example is an epistemic Pascal's wager, where the agent will be rewarded with lots of true beliefs if they fail to infer Q (Berker 2013). I set these cases aside here and address them in (ms). 48 See Bales (1971), Railton (1984), Jackson (1991). 49 See Horowitz (2014). 50 Compare Harman (2011). 51 Whatever line you take here will filter down to norms of guidance and appraisal. For example, if the ideal is that misleading higher order evidence is to be taken into account in first-order reasoning, then it is natural to judge agents who do not take it into account epistemically criticism-worthy. And if the ideal is that misleading higher order evidence is not to be taken into account in first-order reasoning, then it is natural to judge agents who do take it into account epistemically criticism-worthy. And there is room for mixed verdicts. One might hold that the ideal says that misleading higher order evidence is not to be taken into account in first-order reasoning, but agents who do are excused i.e. not criticism-worthy.
Whatever the ideal is, we can now ask how the norms of guidance and appraisal diverge from it.
Guidance
We expect that agents can be guided by norms, but ideal norms cannot always serve as norms of guidance. For example, a norm might say 'if the exam asks for the capital of Portugal, then write "Lisbon"'. This expresses the ideal, but cannot guide an agent who doesn't know it. (Perhaps better: doesn't believe it.) In ethics, utilitarians accept that their theory needs to say something about guidance, and they offer norms that can be used to guide, e.g. maximize expected utility. In both cases, the natural solution is to hold that norms which can guide agents are restricted to refer only to beliefs and abilities the agent has.
Let's again focus on the epistemic standard. S can only be guided by inferences available to S, so I suggest that the guidance norm is:
Non-rational Epistemic-Requirement (NER)
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and P supports Q via-S-available-inferences, then S epistemically-ought A -actually to infer Q.
Appraisal
What is it to be blameworthy for violating an epistemic norm? 52 Here is a useful principle adapted from Kauppinen (2018):
Epistemic Blameworthiness
S is blameworthy for violating an epistemic norm if and only if it is appropriate, other things being equal, to hold the subject accountable by reducing epistemic trust, insofar as she lacks an excuse.
I'm going to assume that Epistemic Blameworthiness is roughly correct. My aim in this section is to map our intuitions about what counts as an excuse onto the contextualist framework.
Distinguish two types of excuse for failing to make a valid inference. 53 Agents can be excused by being unable to make the inference, or by having no sufficiently good reason to make the inference. These excuses correspond to the two parameters. Let's go through them. (I leave open that there might be other types of excuses. I give sufficiency conditions for excuses. Blameworthiness requires no excuses, so I give necessary conditions on blameworthiness. 54 ) First, S might be unable 55 to make the inference because it is too complicated, and could thereby be excused. 56 In the contextualist framework, they still infer as they ought-actually to. So if the inference they fail to make is not one they ought-actually to make then they have an excuse.
A complication is that we might reduce our epistemic trust in an agent precisely because they are unable to make the valid inference. For example, it might be an inference that we can make, and which we expect others to make, so S's inability to make that inference reduces our epistemic trust in S. The effect of this complication is to expand the live possible worlds to an intermediate level. For example, consider an agent who can only infer Q1, producing w1. Although they cannot infer Q2 and thereby produce w2, we expect them to be able to, while we do not expect them to infer Q3 and produce w3. So the best live world is w2, and S ought to produce it. Call this middling sense 'ought-competently'. S might fail to infer Q3, but if S infers as they ought-competently then they have an excuse (Fig. 4). 57
Second, S might have no sufficiently good reason to make the inference (because S has non-epistemic goals), and would thereby be excused. In the contextualist framework, they still infer as they practically-ought to. Once non-epistemic goals are added, the ordering of worlds can change, and the best world might be one in which the agent does not make the inference, e.g. when the inference is trivial. In Fig. 5, S would be excused for failing to arrive at w2, as the best world is w1; in failing to arrive at w2 or w3, S infers as she practically-ought to. So if the inference they fail to make is not one they practically-ought to make then they have an excuse.
53 Can there be an epistemic excuse for making an invalid inference? This raises epistemic Pascal's Wager issues which I set aside here. 54 A plausible further necessary condition is that the speaker endorses the standard (Worsnip 2019a). 55 Mapping to the debate about epistemic conditions on responsibility, this might be equivalent to the 'control condition' (Rudy-Hiller 2018). 56 Perhaps some inferences could always be made.
If all agents are able to infer according to modus ponens then we get what Broome (2000) calls 'strict liability' for simple logical relations: "The relation between believing p and believing q [a logical consequence of p] is strict. If you believe p but not q, you are definitely not entirely as you ought to be" (85). With a more complicated inference which an agent cannot make, perhaps they ought to have been able to make the inference, e.g. perhaps they should have taken a logic course. (This is the 'tracing condition'; see Vargas 2005.) Then they might be blameworthy for not taking the logic course, and they might be blameworthy for the downstream consequences, e.g. being unable to make the inference. It seems to me that consideration of such facts creates a context in which it is true to say that they could have made the inference. 57 Perhaps a similar point applies to forgetting. If S forgets p then they are unable to believe/infer that p. We would reduce our trust in them only if we expect them to remember p.
Putting these together, if the inference they fail to make is either not one they practically-ought to make or not one they ought-competently to make, then they have an excuse. Contrapositively, if agents are blameworthy for failing to make a valid inference then they fail to infer as they practically-ought-competently. So the norm of blame for reasoning is: 58
For all agents S, and propositions P and Q: If P entails Q, and S believes P, and P supports Q via-S-competent-inferences, and it is worth S making the inferences, then S practically-ought A -competently to infer Q
[Fig. 6, A Way to be Epistemically Blameworthy: a diagram relating all valid inferences from a set of beliefs; inferences the agent is able (or expected to be able) to make; inferences the agent has sufficient practical reason to make; and inferences the agent is blameworthy for not making, i.e. inferences the agent practically-ought-competently to make.]
Let's try a case:
Melted Ice-cream. Alessandra has gone to pick up her children at their elementary school. It is hot, but she leaves the ice-cream she has brought for her children in the car. Although able to infer that the ice-cream will melt, she does not do so. By the time they return the ice-cream has melted. 59
Intuitively, Alessandra is epistemically blameworthy. We would reduce our epistemic trust in Alessandra if we learnt that she failed to realize that the ice-cream would melt. Our framework delivers this verdict if the inference to the belief that the ice-cream would melt is one she is both able to make and has sufficient practical reason to make. And indeed both conditions are satisfied. Alessandra has enough inferential competence to be able to work out that the ice-cream would melt, and has sufficient practical interest in the ice-cream not melting. 60 She practically-ought-competently to have inferred that the ice-cream would melt, but she does not, so is epistemically blameworthy (Fig. 6).
Alessandra is excused if we make either of two modifications to the story. If we modify the story to one in which her full attention on something other than the ice-cream is a matter of life and death, then Alessandra is not epistemically blameworthy. For example, suppose she is a doctor and as she parks she sees that there has been an accident and only her full attention for several hours will save the life of a child. In such a context, a melting ice-cream is trivial in the same sense that it is trivial to infer that I am wearing blue socks or Elvis is alive. She does not have practical reason to make the inference, so is not blameworthy. 61
Alessandra is also excused if the inference to the belief that the ice-cream would melt is not one she is able to make, nor one we would expect her to make. This requires a bit more imagination, but we could imagine that it is a typically cold day in the Arctic Circle where the ice-cream would normally not melt, but the car is parked in a place where heat from concave neighbouring buildings is focussed. Alessandra knows the contingent facts, but does not have the mathematical abilities necessary to work out that the ice-cream would melt. She ought-rationally to make the inference, but we would not expect her to be able to make the inference, so she is not blameworthy.
59 Adapted from Sher (2009, p. 24). I've changed the dog to an ice-cream to take morality out of it. 60 Compare Lillehammer (2019). 61 See Schroeder (2012) for related examples where there is good reason not to deliberate. Similar issues arise for the question of when it is rational to reconsider a belief (see Paul 2015). Fragmented agents (Stalnaker 1984) can perhaps be thought of as agents who do not make the inference which would connect the fragments.
Conclusion
I have argued that many controversies about the norms of reasoning can be resolved by an independently motivated contextualist semantics for 'ought'. The problems of belief revision and the preface paradox can be solved by relativizing to a set of propositions, the problem of excessive demands can be solved by relativizing to a set of available inferences, and the problem of clutter avoidance can be solved by relativizing to a standard. These parameters can also illuminate questions about which norms are relevant to ideals, guidance, and blame.
Comparative Study of the Grid Side Converter’s Control during a Voltage Dip
The modeling and control of a wind energy conversion system based on the Doubly Fed Induction Generator (DFIG) is the theme discussed in this paper. The purpose of this system is to control the active and reactive power converted; this control is ensured by the control of the two converters. The proposed control strategies use PI regulators and the sliding mode technique. In the present work a comparison of the robustness of the two controls of the grid side converter (GSC) during a voltage dip is shown. The simulation is carried out using the Matlab/Simulink software with a 300 kW generator.
Introduction
In recent years, wind energy has become the fastest growing renewable energy source in the world. This is mainly because it has received thorough attention and has been considered a way of fighting climate change. Control of the speed of the wind turbine is generally used to improve the energy production [1].
Several structures are used to control speed: structures based on the asynchronous machine, the synchronous machine, and the Doubly Fed Induction Generator, known as the DFIG. The DFIG structure is the most used, thanks to the advantages it offers. It is composed of a wound rotor induction generator whose stator is directly connected to the grid and whose rotor is connected to the grid through two power converters [2]. Several lines of research in the literature show the classical control of the power converters: the first one, the rotor side converter (RSC), controls the DFIG, and the second one, the grid side converter (GSC), controls the DC link's voltage. The control can be ensured using different techniques such as PI regulators, the backstepping technique, direct power control, direct torque control, and control by sliding mode, which will be the object of this work [3,4].
A PI control strategy has been investigated; the synthesis of this technique is purely algebraic and uses pole compensation based on a numerical method. In [1], a polynomial RST controller is investigated; this method is sophisticated and based on the pole placement technique. Sliding Mode Control (SMC) has been implemented in many areas because of its excellent properties, such as insensitivity to external perturbation and parameter variation [1]. Wind generators, like most decentralized generators, are very sensitive to grid disturbances and tend to disconnect quickly. Indeed, faults in the power system, even very far from the generator, can result in short-term voltage disturbances, called voltage dips, which can lead to the disconnection of the wind system. The need to ensure the continuity of service of the WECS in the event of voltage dips is all the stronger as the penetration rate in the network is high [5][6][7]. The aim of this paper is to compare the GSC's controls with PI and with the SM technique during a voltage dip.
Wind generators, like most decentralized generators, are very sensitive to network disturbances and tend to disconnect quickly during a voltage dip or when the frequency changes.
These disconnections lead to production losses that can aggravate the situation on a network already weakened by the incident and thus have negative consequences. It is therefore necessary to avoid this instability in the production of wind energy to ensure continuity of service [8].
The challenge is to satisfy the continuity of service during a voltage dip. This paper is structured as follows: first, the topology of the system studied is presented in the second section. Then, the modeling of the turbine, the doubly fed induction generator, the power converters and the filter is shown in the third, fourth and fifth sections, respectively. The sixth, seventh and eighth sections present the controllers of the power converters using the PI regulators and the sliding mode techniques. The ninth section shows the voltage dip types. The last section is dedicated to the simulation results carried out using the Matlab/Simulink software, followed by a conclusion.
The Topology of the Studied System
The most suitable technology is the one based on the doubly fed asynchronous machine with wound rotor, whose speed variation is achieved by means of the power converters located at the rotor circuit, while the stator is connected directly to the grid (Figure 1).
Aerodynamic Conversion.
The power of the wind that passes through a surface $S$ is expressed as follows:
$$P_v = \frac{1}{2}\,\rho\, S\, v^3,$$
where $\rho$ is the air density, $v$ the wind speed, and $S$ the wind-swept turbine surface, whose expression is $S = \pi R^2$, with $R$ the blade radius. The turbine power according to the Betz theory is given by:
$$P_t = \frac{1}{2}\, C_p(\beta,\lambda)\,\rho\, S\, v^3,$$
where $C_p(\beta,\lambda)$ is the aerodynamic efficiency of the turbine, often referred to as the power coefficient. It is a coefficient specific to each wind turbine; it depends on the tip-speed ratio $\lambda$ and the orientation angle of the blades $\beta$. The turbine torque is defined by:
$$C_t = \frac{P_t}{\Omega_t}.$$
The role of the gear box is to adapt the rotation speed of the turbine to the rotation speed of the generator. Its gain is given by:
$$G = \frac{\Omega_{mec}}{\Omega_t}.$$
Applying the fundamental relation of dynamics, the generator shaft is modeled by the following equation:
$$J\,\frac{d\Omega_{mec}}{dt} = C_g - C_{em} - f\,\Omega_{mec},$$
with $J$ the total inertia given by:
$$J = \frac{J_{turbine}}{G^2} + J_g.$$
The DFIG Modelling Used in a WECS
The schematic representation of a DFIG in the three-phase reference frame is given in Figure 3.
The DFIG is represented in the Park frame by the following equations. The electrical equations are:
$$v_{ds} = R_s i_{ds} + \frac{d\varphi_{ds}}{dt} - \omega_s \varphi_{qs}, \qquad v_{qs} = R_s i_{qs} + \frac{d\varphi_{qs}}{dt} + \omega_s \varphi_{ds},$$
$$v_{dr} = R_r i_{dr} + \frac{d\varphi_{dr}}{dt} - \omega_r \varphi_{qr}, \qquad v_{qr} = R_r i_{qr} + \frac{d\varphi_{qr}}{dt} + \omega_r \varphi_{dr}.$$
The magnetic equations are:
$$\varphi_{ds} = L_s i_{ds} + M i_{dr}, \quad \varphi_{qs} = L_s i_{qs} + M i_{qr}, \quad \varphi_{dr} = L_r i_{dr} + M i_{ds}, \quad \varphi_{qr} = L_r i_{qr} + M i_{qs}.$$
The active and reactive stator powers are:
$$P_s = v_{ds} i_{ds} + v_{qs} i_{qs}, \qquad Q_s = v_{qs} i_{ds} - v_{ds} i_{qs}.$$
For vector control of a DFIG connected to a reliable grid (balanced three-phase system), the Park reference frame linked to the rotating field is chosen, adopting the hypothesis that the stator resistance is negligible (given the power of the DFIG) and that the stator flux is constant (as long as the grid voltage is constant) and oriented along the d axis [10].
Figure 3: Representation of the DFIG in the three-phase plane [9].
Journal of Energy 4
The RSC's Control Using the PI Regulator
Three methods exist using the PI regulator:
(i) Direct control: this technique consists of directly and independently regulating the active and reactive stator powers produced against their references, using a single regulator on each axis. The control is obtained by correcting the difference between the measured and the reference power; the regulator used is a PI controller.
(ii) Indirect control without power loop: this control does not directly regulate the powers as the previous control does, but is based on the indirect regulation of the measured rotor currents, which are controlled against the reference currents, themselves expressed as a function of the reference stator powers imposed on the machine.
(iii) Indirect vector control with power loop: this control consists of regulating the stator powers and the rotor currents in cascade; for this we set up two control loops on each axis, with a proportional-integral regulator for each, one regulating the power and the other the current.
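The cascaded power/current loops above each rely on a discrete PI regulator. A minimal sketch of such a regulator follows; the class name, gains, and sample values are illustrative assumptions, not taken from the paper:

```python
class PIController:
    """Discrete PI regulator: u = Kp*e + Ki*integral(e)."""

    def __init__(self, kp, ki, dt, u_min=None, u_max=None):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0
        self.u_min, self.u_max = u_min, u_max

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp the output (simple anti-windup by conditional integration).
        if self.u_max is not None and u > self.u_max:
            self.integral -= error * self.dt
            u = self.u_max
        if self.u_min is not None and u < self.u_min:
            self.integral -= error * self.dt
            u = self.u_min
        return u


# Cascade: the outer loop regulates power, the inner loop regulates rotor current.
outer = PIController(kp=0.5, ki=20.0, dt=1e-4)
inner = PIController(kp=2.0, ki=100.0, dt=1e-4)
i_ref = outer.step(reference=1.0, measurement=0.8)    # power loop -> current reference
v_rot = inner.step(reference=i_ref, measurement=0.0)  # current loop -> rotor voltage
```

The cascade mirrors option (iii): the outer regulator's output becomes the inner regulator's reference, one such pair per axis.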
In this paper the last one was chosen and its principle scheme is illustrated in the corresponding figure. The sliding mode (SM) technique is developed from variable structure control in order to overcome the disadvantages of other control system designs, notably the PI controller. Sliding mode is a technique that consists of initially defining a surface; the system being controlled is forced onto that surface, and the system behavior is then said to slide towards the desired balance point [7,11-13].
The SM technique is mainly carried out in three complementary steps: the choice of the sliding surface, the establishment of the existence conditions, and the determination of the control law (21). The control function satisfies the reaching conditions in the following form:
$$u = u_{eq} + u_s,$$
where $u_{eq}$ is the equivalent or nominal control, determined by the system model, and $u_s$ is the sliding control; it consists of the sign function of the sliding surface $S(x)$ multiplied by a constant $K$, i.e., $u_s = K\,\mathrm{sign}(S(x))$. The main feature of this control, as mentioned before, is to drive the error to a "switching surface" $S(x)$. When the system is in "sliding mode", the system behavior is not affected by any modeling uncertainties and/or disturbances [14].
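The insensitivity to disturbances of the control law u = u_eq + K·sign(S(x)) can be illustrated numerically. The sketch below applies sliding mode control to a simple first-order plant with an unknown bounded disturbance; the plant, gains, and time steps are illustrative assumptions, not the paper's 300 kW model:

```python
import math

def simulate_smc(x0=1.0, x_ref=0.0, K=2.0, dt=1e-3, steps=5000):
    """First-order plant dx/dt = a*x + u + d(t) under sliding mode control.

    Sliding surface: s = e = x - x_ref (relative degree 1).
    Control: u = u_eq + u_s, with u_eq = -a*x (cancels the known model part)
    and u_s = -K*sign(s) (drives the error to the surface despite d(t)).
    """
    a = -0.5                       # known plant parameter
    x = x0
    for k in range(steps):
        d = 0.5 * math.sin(2 * math.pi * k * dt)  # bounded disturbance, |d| <= 0.5
        s = x - x_ref              # sliding surface
        u_eq = -a * x              # equivalent control
        u_s = -K * math.copysign(1.0, s) if s != 0 else 0.0
        u = u_eq + u_s
        x += (a * x + u + d) * dt  # explicit Euler step
    return x

# The error reaches the surface s = 0 and stays near it despite the
# disturbance, up to chattering of order (K + |d|) * dt.
final = simulate_smc()
```

Because K exceeds the disturbance bound, s·ds/dt < 0 holds away from the surface, which is exactly the reaching condition discussed below.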
The Switching Surface Choice.
The sliding surface $S(x)$ can in general be chosen as a hyperplane passing through the origin of the state space for stabilization reasons; the sliding surface, a scalar function, should be chosen such that the variable to be adjusted slides on this surface. Its expression is as follows:
$$S(x) = \left(\lambda + \frac{d}{dt}\right)^{r-1} e(x),$$
where:
(i) $\lambda$ is a positive gain that sets the bandwidth of the desired control.
(ii) $e(x)$ is the error of the variable to be regulated.
(iii) $r$ is the relative degree; it is the smallest positive integer representing the number of times $e(x)$ must be differentiated in order for the control to appear.
The Control's Conditions for Existence.
The conditions of existence and convergence are the points that allow the different dynamics of the system to converge towards the sliding surface and remain there independently of the perturbation. Two approaches exist. The direct approach consists of requiring
$$S(x)\,\dot{S}(x) < 0.$$
Lyapunov's approach consists of choosing a Lyapunov candidate function $v(x) > 0$ (a scalar positive function) and a control vector that makes its derivative negative.
We have to redo the same calculation to find the control vector of the reactive power.
After all these calculations, the RSC controller's principle scheme using the SM is illustrated in Figure 6, and detailed further in (43). The GSC control objectives are: (i) maintain the DC link voltage constant; (ii) control the reference reactive power to zero to ensure a unit power factor.
In fact, controlling the GSC is like controlling the active power by keeping the DC link voltage constant, and setting the reference reactive power to zero so as not to impair the quality of the grid (unit grid power factor).
The GSC's Control Using PI Regulators.
This method is little used because of the disadvantages it brings; its principle is illustrated in Figure 8. It consists of synthesizing PI regulators.
The GSC's Control Using the SM Technique.
This technique consists of developing a control law based on the sliding mode, following the steps explained previously. The principle scheme, the control scheme of the GSC using the SM controller, is illustrated in Figure 9.
To elaborate the control laws of the two SMDC Link and SM Line current blocks, Equations (25) and (26) are used, and the same steps already explained before are followed.
Increasing currents can require over-sizing the rotor side converter to withstand this extra current, while the decrease of the DC-bus voltage can cause a disconnection of the wind turbine.
The Simulation Results
The simulation was carried out with MATLAB/Simulink in order to validate the control strategies studied in this work. Simulation tests are realized with a 300 kW generator coupled to a 398 V/50 Hz grid and for a fixed wind speed, because it is assumed that the duration of the fault is so short that the speed remains constant. The machine's parameters are given in Tables 1 and 2. The different quantities are expressed in per unit (p.u.); for example, the power in per unit is expressed as shown in (50).
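Per-unit normalization as used for the plots can be sketched as follows; the base values chosen here are illustrative, with only the 300 kW rating and 398 V grid voltage coming from the paper:

```python
def to_per_unit(value, base):
    """Express a physical quantity in per unit (p.u.): value / base."""
    return value / base

P_BASE = 300e3   # rated power of the generator, W (from the paper)
V_BASE = 398.0   # grid line voltage, V (from the paper)

p_pu = to_per_unit(150e3, P_BASE)  # 150 kW -> 0.5 p.u.
v_pu = to_per_unit(318.4, V_BASE)  # a 20% voltage dip leaves 0.8 p.u.
```

Working in per unit makes curves for machines of different ratings directly comparable, which is why the paper's figures are all normalized.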
The Voltage Dips
Physically, a grid fault is a short circuit occurring somewhere in the network, a voltage dip being the repercussion of this fault on the voltage. A voltage dip is a sudden decrease of the supply voltage to a value below a threshold value, followed by its recovery after a short time [6].
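The definition above, a voltage falling below a threshold and then recovering, can be sketched as a simple dip detector on a sampled per-unit voltage signal; the threshold and the test signal are illustrative assumptions:

```python
def detect_dips(v_pu, threshold=0.9):
    """Return (start, end, depth) tuples for intervals where the per-unit
    voltage stays below the threshold; depth = 1 - min voltage in the dip."""
    dips, start = [], None
    for i, v in enumerate(v_pu):
        if v < threshold and start is None:
            start = i
        elif v >= threshold and start is not None:
            depth = 1.0 - min(v_pu[start:i])
            dips.append((start, i, depth))
            start = None
    if start is not None:  # dip still in progress at the end of the record
        dips.append((start, len(v_pu), 1.0 - min(v_pu[start:])))
    return dips

# A 40% dip between samples 500 and 1000, mimicking the simulated scenario:
signal = [1.0] * 500 + [0.6] * 500 + [1.0] * 500
print(detect_dips(signal))  # -> [(500, 1000, 0.4)]
```

With millisecond sampling this test signal reproduces the paper's scenario of a 40% dip applied from t = 500 ms to t = 1000 ms.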
There are different types of voltage dips, as shown in the corresponding figure. The voltage dip is applied from t = 500 ms and lasts up to t = 1000 ms, for different depths: 20% and 40%. It can be observed that before the arrival of the fault the voltage exactly follows its reference for both types of control; once the fault arrives the bus voltage is disturbed in both cases, except that for the control with PI the voltage shows strong oscillations for a very deep fault, which last as long as the fault is present, while for the control with SM the voltage is shifted from its reference, and this shift grows with the depth of the fault. Figure 11 shows the stator voltages with the voltage dip at t = 500 ms. Figure 12 represents the stator currents, which increase at the onset of the fault (explained in Section 8). Figure 13 shows the rotor currents for different depths: 20% for Figure 13(a) and 40% for Figure 13(b). It can be clearly noticed that the deeper the fault is, the higher the current increases.
$$P_{p.u.} = \frac{P}{P_{base}} \qquad (50)$$
Figures 16 and 17 show the active power developed by the DFIG. It is assumed that the power set point changes at time t = 600 ms and remains so until the end of the simulation. It can be seen that for the control with PI the power is disturbed at the moment a dip of significant depth is applied, and remains so until the latter disappears; for the control with SM, even with a very deep dip, we only notice spikes at the appearance and the disappearance of the dip, while the power perfectly follows its reference.
A summary of the comparison results is presented in Tables 3 and 4.
Conclusion
The purpose of this paper is to develop the control law using the sliding mode for both converters (GSC and RSC). The study is based on a comparison between a system whose GSC is based on a conventional PI controller and a second one controlled by sliding mode, taking into account the voltage dips to highlight the performance.
Finally, the simulation results showed that the control of the GSC using sliding mode, and during a voltage dip, is more efficient than the control using PI regulator.
Nomenclature
Ω_t: turbine speed
v_ds, v_qs: the dq axis stator voltages
i_ds, i_qs: the dq axis stator currents
φ_ds, φ_qs: the stator d and q axis fluxes
v_dr, v_qr: the dq axis rotor voltages
i_dr, i_qr: the dq axis rotor currents
φ_dr, φ_qr: the rotor d and q axis fluxes
R_s, R_r: stator and rotor resistances
ω_s, ω_r: the supply and rotor angular frequencies
v_a, v_b, v_c: the single voltages from the converter
S_1, S_2, S_3: the MLI commands applied to the switches of the converter
U_dc: the DC voltage that comes from the DC link
v_t1, v_t2, v_t3: the three-phase voltages of the source (the grid)
i_t1, i_t2, i_t3: the line currents coming from the source.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Data-driven Evolutions of Critical Points
In this paper we are concerned with the learnability of energies from data obtained by observing time evolutions of their critical points starting at random initial equilibria. As a byproduct of our theoretical framework we introduce the novel concept of the mean-field limit of critical point evolutions and of their energy balance as a new form of transport. We formulate the energy learning as a variational problem, minimizing the discrepancy of energy competitors from fulfilling the equilibrium condition along any trajectory of critical points originating at random initial equilibria. By Gamma-convergence arguments we prove the convergence of minimal solutions obtained from a finite number of observations to the exact energy in a suitable sense. The abstract framework is actually fully constructive and numerically implementable. Hence, the approximation of the energy from a finite number of observations of past evolutions allows us to simulate further evolutions, which are fully data-driven. As we aim at a precise quantitative analysis, and to provide concrete examples of tractable solutions, we present analytic and numerical results on the reconstruction of an elastic energy for a one-dimensional model of a thin nonlinear-elastic rod.
Evolutions of critical points
Many time-dependent phenomena in physics, biology, social, and economical sciences, as well as iterative algorithms in machine learning, can be modelled by a function x : [0, T] → H, where H represents the space of states of the physical, biological, social system, or digital data, which evolves from an initial configuration x(0) = x_0 towards a more convenient state or a new equilibrium. The space H can be a conveniently chosen Hilbert space. This often implicitly assumes that x evolves driven by a minimization process of a potential energy E : [0, T] × H → R. In this preliminary introduction we consciously avoid specific assumptions on E, as we wish to keep a rather general view. Inspired by physics, for which conservative forces are the derivatives of the potential energies, one can often describe the evolution as satisfying a gradient flow equation of the type
$$\dot{x}(t) = -\nabla_x E(t, x(t)), \qquad (1.1)$$
where ∇_x E(t, x) is some notion of differential of E (in the simplest case ∇_x may represent the Fréchet derivative of the energy E; in other cases it might already take into consideration additional constraints which bind the states to certain sets, i.e., x(t) ∈ K(t) ⊂ H). Physical systems naturally tend to minimize the potential energy. For this fundamental reason the study of steady states in physical systems, or critical points of the energy, is of utmost relevance, given the expected frequency for such states to occur. However, once a critical point x* is reached, i.e., ∇_x E(t*, x*) = 0, the dynamics is not supposed to progress further, unless some of the constraining conditions change, leading to a modified energetic profile. In this case, the evolution would restart and tend again by gradient flow to another critical point satisfying the new constraints. In view of the relevance of critical points, it is often of interest to exclusively observe their dynamics, rather than record the transitions between them.
If we imagine now collapsing to an instant the time of realization of the (microscopically in time) gradient descent evolution, we could interpret the dynamics (macroscopically in time) as the instantaneous hopping from one critical point to another. This time reparametrization can be rather conveniently realized as the limit for ε → 0 of a singularly perturbed version of (1.1),
$$\varepsilon\,\dot{x}_\varepsilon(t) = -\nabla_x E(t, x_\varepsilon(t)), \qquad (1.2)$$
for a rescaling parameter ε > 0 and a choice of x_0 fulfilling the criticality condition ∇_x E(0, x_0) = 0. In view of the vanishing parameter ε, the trajectories would tend in the limit to have unbounded velocity (in the rescaled time) and therefore classical compactness arguments, such as the Ascoli–Arzelà theorem, would fail to characterize limit trajectories for ε → 0. Luckily, recent works [1,27,28,30] explored ad hoc compactness methods along solution trajectories x_ε under suitable smoothness assumptions on the energy E and certain generic conditions, so-called transversality conditions [20,2,30], on the sets of critical points C(t) = {x : ∇_x E(t, x) = 0} (compare assumptions (E1)-(E4) below). The most restrictive assumption of all is perhaps the requirement that the state space H be of finite dimension, i.e., H = R^d.
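The singular perturbation picture can be illustrated numerically: for small ε the trajectory of ε ẋ = −∇_x E(t, x) stays glued to a critical point of E(t, ·) until that critical point disappears, then hops to another one. A minimal sketch with a tilted double-well energy follows; the energy, parameters, and discretization are illustrative assumptions, not taken from the paper:

```python
def grad_E(t, x):
    """Gradient of the tilted double-well energy E(t,x) = x^4/4 - x^2/2 - t*x."""
    return x**3 - x - t

def evolve(eps=1e-3, dt=1e-5, T=1.0, x0=-1.0):
    """Explicit Euler for the singularly perturbed flow eps * dx/dt = -grad_E.

    x0 = -1 is a critical point of E(0, .), so the criticality condition
    grad_E(0, x0) = 0 holds at the start.
    """
    x, n = x0, int(T / dt)
    path = []
    for k in range(n):
        t = k * dt
        x -= (dt / eps) * grad_E(t, x)
        path.append((t, x))
    return x, path

x_final, path = evolve()
# For small eps the trajectory tracks the left critical branch of E(t, .)
# until it disappears (around t ~ 0.385), then hops to the right branch;
# at t = 1 it sits near the critical point of x^4/4 - x^2/2 - x.
```

In the macroscopic time of the plot, the transition looks like an instantaneous jump between critical points, which is exactly the limit behaviour described above.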
We are informed of work in progress [3], which will relax this latter requirement to arbitrary Hilbert spaces, but in the present paper we will restrict ourselves to the available compactness results in [1]. Therefore, we will assume throughout this paper that indeed H = R^d, which is enough for most numerical applications. In fact, any problem with an infinite dimensional state space would eventually need to be discretized and reduced to finite dimensions in order to be numerically computable. The main compactness result in [1] may be summarized as follows.
Theorem 1.1 (Agostiniani and Rossi, 2017). Let ε_n → 0, let x^n_0 → x_0 in R^d, and let x_{ε_n} be the solution of (1.2) associated to ε_n and to the initial condition x^n_0. Then, for all 1 ≤ p < ∞, there exists a trajectory x ∈ L^p((0, T), R^d) and a positive Radon measure ν ∈ M^+_b(0, T) such that the following properties hold: (a) up to a subsequence, x_{ε_n} → x in L^p((0, T), R^d) for every p ∈ [1, +∞), and pointwise for all t ∈ [0, T]; (b) for every t ∈ [0, T] the pointwise limit function x(·) constructed in (a) admits left and right limits x^-(t) and x^+(t), respectively, and x^±(t) ∈ C(t); (c) the set J := {t ∈ [0, T] : ν({t}) > 0} is at most countable and coincides with the set of discontinuity points of x(·); (d) for every s, t ∈ [0, T] it holds
Learning the energy from observation of the dynamics
The evolution of critical points t → x(t) obtained by Theorem 1.1 fulfills in particular the energy conservation principle (1.3) and is fully driven and explained by the energy E itself. In some relevant cases the energy that governs a system can be derived theoretically or accurately measured experimentally, as happens in first-principles physics; in most cases, the energy needs to be approximated by solving the inverse problem of fitting the data. Model selection and parameter estimation methods are employed to determine the form of the governing energy. Data-driven estimations are needed, for instance, in training algorithms in machine learning [9,14,15,17], and in data assimilation for models in continuum mechanics [21,10], computational sociology [7,23,22], or economics [4,11,16]. However, even the problem of determining whether time snapshots of a linear dynamical system fulfill physically meaningful models, in particular have Markovian dynamics, is computationally intractable [12]. For nonlinear models, the intractability of learning the system corresponds to the complexity of determining the set of appropriate candidate functions to fit the data. In order to break the curse of dimensionality of learning dynamical systems, one requires prior knowledge on the system and the potential structure of the governing equations. For instance, in the sequence of recent papers [29,25,26] the authors assume that the governing equations are of first order and can be written as sparse polynomials, i.e., linear combinations of few monomial terms.
In this work we aim at bridging, in the specific setting of deterministic evolutions of critical points, the well-developed theory of mean-field equations with modern approaches of approximation theory and machine learning. We provide a mathematical framework for the reliable identification of the governing energy from data obtained by direct observations of corresponding time-dependent evolutions. We would like to obtain results which ensure the learning of energies without the need for more restrictive assumptions than (E1)-(E4). Moreover, the approximation of the energy from a finite number of observations of past evolutions allows us to simulate further evolutions, which are then fully data-driven.
First of all, we need to formalize what we mean by observations of time evolutions. In this paper we will assume that we are allowed to observe multiple realizations of evolutions of critical points for the same energy function E, starting from different critical points. For that, we need to further modify the model by a suitable correction (1.4) so that, whatever the choice of x_0, the system starts from an equilibrium. This simply means assuming that an additional force is added at the beginning to allow the state x_0 to be an equilibrium. Then we allow ourselves to draw at random independently several instances of the initial conditions x^1_0, ..., x^N_0, ... according to a fixed probability distribution μ_0 ∈ P_c(R^d) with compact support. For each of the picked initial conditions we can finally observe corresponding evolutions of critical points t → x^i(t), for i = 1, ..., N, ... . We then need to devise a constructive method to infer the energy E from the observed trajectories. Our approach goes through five fundamental theoretical results: 1. Compactness of controlled evolutions of critical points. In equation (1.4) a correction force has been added to ensure that an arbitrary initial datum x_0 is an equilibrium. As two trajectories x(t), x̃(t) originating from distinct initial equilibria x_0, x̃_0, respectively, may intersect at any t ∈ [0, T], and to promote a unique flow along characteristics (see Remark 2.5 and Remark 3.2), we modify the model further to take the form of an augmented controlled system (1.5). The particular choice of f ≡ 0 and u_0 = ∇_x E(0, x_0) yields back (1.4). Our first result, Theorem 2.8, is the generalization of the compactness Theorem 1.1 to the controlled system (1.5). Although the system is no longer in the form of a gradient flow and Theorem 1.1 cannot be directly applied, we show that the techniques in [1] can be adapted to (1.5) without introducing significant technical issues. 2.
Mean-field limit of evolutions of critical points. While the initial condition x_0 is distributed as μ_0, we need to clarify how the trajectories of critical points x(t) are distributed at any time t ∈ [0, T]. Informally, we should explain how the initial probability distribution μ_0 gets transported along trajectories of critical points to the probability distribution μ(t) at any time t ∈ [0, T], so that x(t) ∼ μ(t). We approach this issue under the modeling assumption that the evolutions of critical points are the result of the singularly perturbed limit of systems of the type (1.5). In fact, for ε > 0, established results in gradient flow theory [6] allow us to describe the evolution of any system of the type (1.5) (assume here for simplicity f ≡ 0 and u_0 = ∇_x E(0, x_0)) by considering solutions η_ε ∈ AC([0, T], P_c(R^d × R^d)) of mean-field equations of the type (3.5). In Section 3.2 we take advantage of the newly established compactness argument Theorem 2.8 and the superposition principle introduced in [6, Theorem 8.2.1] to derive Theorem 3.6 and Proposition 3.7, describing the probability-valued trajectory t → η(t) (below we use equivalently also the notation η_t = η(t)) representing the time-dependent distribution of evolutions of critical points as a suitable form of limit of t → η_ε(t) for ε → 0. The main characterization of the limit is given by (1.6), which shows that the first marginal of η(t) is supported on critical-type points. 3. Mean-field limit of the energy balance. We further show that the evolution t → η_t also fulfills, in a suitable sense, a generalization of the energy balance (1.3), Theorem 3.10, which explains how the energy E(t, x(t)) is actually distributed at the time t ∈ [0, T] for the initial condition x_0 ∼ μ_0. The result is obtained by a simple, but also thoughtful, reformulation of the energy balance using a Lebesgue characterization of left and right limits and the use again of the compactness argument Theorem 2.8.
To our knowledge, Theorem 3.6, Proposition 3.7, and Theorem 3.10 are the first form of mean-field limit of evolutions of critical points available in the literature. 4. A variational model for the energy learning. Inspired by the characterization (1.6), we formulate the problem of learning the true energy E responsible for driving the dynamics from observations of evolutions of critical points as the minimization of a functional over a suitable compact class in W^{2,∞} of competitor energies Ê. To make our approach fully constructive we actually assume to observe only a finite number N of evolutions of critical points and use a finite dimensional set V_N ⊂ W^{2,∞} of competitors Ê. By a Γ-convergence argument for N → ∞, we derive in Theorem 4.11 an approximation result for the true energy E, which was driving the observed dynamics. 5. Data-driven evolutions of critical points. Once the energy is learned, it is then possible with the estimated energy Ê to simulate further evolutions. Corollary 4.14 guarantees that the simulated evolutions, which are fully data-driven, approximate "true" evolutions that would have been generated by using the original energy E.
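The variational idea behind step 4 above, penalizing competitor energies Ê for violating the equilibrium condition ∇_x Ê(x) = u at observed states, can be sketched with a polynomial dictionary and least squares. The dictionary, synthetic data, and solver are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Synthetic observations: states x_i at equilibrium under control u_i,
# i.e. grad E(x_i) = u_i, generated here from a known "true" energy
# E(x) = x^4/4 - x^2/2 (a stand-in for observed critical-point trajectories).
rng = np.random.default_rng(0)
xs = rng.uniform(-1.5, 1.5, size=200)
us = xs**3 - xs                      # grad E at the observed states

# Competitor energies: grad E_hat(x) = sum_j c_j * x^(j-1), j = 1..5.
# Minimize the discrepancy sum_i |grad E_hat(x_i) - u_i|^2 over c.
degree = 5
A = np.vander(xs, degree, increasing=True)   # columns x^0 .. x^4
c, *_ = np.linalg.lstsq(A, us, rcond=None)

# The recovered gradient matches x^3 - x, i.e. c ~ [0, -1, 0, 1, 0],
# so the energy is identified up to an (irrelevant) additive constant.
print(np.round(c, 3))
```

The Γ-convergence result of the paper concerns exactly the limit of such finite-sample minimizers as the number of observed evolutions grows.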
The abstract framework described by the steps 1.-5. is actually fully constructive and numerically implementable. As we aim at a precise quantitative analysis, and to provide an example of tractable solutions, we approach the learning of the governing energy of the evolution for specific models inspired by continuum mechanics. In particular, in Section 5 we present analytic and numerical results on the reconstruction of the nonlinear elastic energy for a one-dimensional model of thin elastic rod.
Let us stress that this particular example is by no means the unique possible application of our general framework and we envisage many other possible applications for data-driven models in physics, biology, social, and economical sciences as well as training algorithms in machine learning.
Let (X, d) be a separable metric space. We denote with M b (X) the set of bounded Radon measures on X and with M + b (X) the subset of positive bounded Radon measures. The symbol P(X) stands for the set of probability measures on X , P c (X) indicates the set of probability measures with compact support in X , and P 1 (X) denotes the set of probability measures with bounded first moment, i.e., measures µ ∈ P(X) such that Let (Y, d ) be another separable metric space, r : X → Y a Borel map, and µ ∈ M b (X) . We define the push-forward r # µ ∈ M b (Y ) of µ through r by the relation r # µ(B) := µ(r −1 (B)) for every B Borel subset of Y . For every µ, ν ∈ P 1 (X) , the 1-Wasserstein distance W 1 (µ, ν) is defined by (see, e.g., [6, Section 7.1]) where Γ(µ, ν):= {γ ∈ P(X × X) : (π 1 ) # γ = µ and (π 2 ) # γ = ν} and π i : X × X → X , π i (x 1 , x 2 ) = x i for i = 1, 2 . We also recall that if µ, ν ∈ P c (X) it holds Finally, given an interval I ⊆ R , we denote with BV (I) the space of functions of bounded variations in I , that is, the space of L 1 loc functions v : I → R whose distributional derivative Dv belongs to M b (I) . We refer, for instance, to [5,Section 3.2] for further details.
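For empirical measures on the real line, the 1-Wasserstein distance defined above reduces to matching sorted samples; this is a standard fact about the monotone optimal coupling in one dimension (not specific to this paper) and gives a quick sanity check for numerical experiments:

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size empirical measures on R.

    In one dimension the optimal coupling is monotone, so W1 is the
    mean absolute difference of the sorted samples.
    """
    if len(xs) != len(ys):
        raise ValueError("expected equal-size samples")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting every sample by c shifts the measure by c, so W1 equals |c|:
print(wasserstein_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # -> 0.5
```

In higher dimensions no such sorting shortcut exists and W1 must be computed via the coupling formulation above.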
Main assumptions and motivations
We start by fixing the main assumptions, which will hold for the rest of the paper. Let T > 0 and let us consider an energy functional E : [0, T] × R^d → R satisfying conditions (E1)-(E4); the last of these requires, in particular, that the relevant set of critical points contains only isolated points.
Remark 2.1. Let us briefly comment on the above assumptions. Hypothesis (E2) is typical in the framework of evolutions of critical points or rate-independent systems, and it is useful for proving the boundedness of trajectories via Gronwall-type arguments. Condition (E3) implies the more common compactness of sublevels of the driving energy E. The need for an explicit bound from below will be clarified below (see in particular (2.1), Lemma 2.4, and Proposition 2.6). Finally, assumption (E4) has been considered, e.g., in [1,27,28,30] in the uncontrolled case u = 0 of equation (1.2). At the current state of the art, this kind of hypothesis has proven quite useful for showing compactness of trajectories of (1.2) in the limit as ε → 0. In this paper, we focus on the perturbed system (2.1), where a control u ∈ R^d is added. In order to again show compactness of trajectories of (2.1) as ε → 0, we need the stronger requirement (E4). We refer to Section 2.2 for a discussion of the compactness issue. Roughly speaking, we need the energy E(t, ·) to have no affine regions, for every t ∈ [0, T]. We remark that (E4) and the corresponding assumption (E3) of [1] are both technical and likely equally "artificial". It is indeed clear that (E4) implies the assumption (E3) of [1]. On the other hand, if E does not satisfy (E4) for some u, then the linear perturbation E(t, x) − u · x does not satisfy the assumption (E3) of [1].
Here we are interested in studying the system (1.4). As already mentioned in the introduction, the reason for adding the term ∇ x E(0, x 0 ) to the usual gradient flow system (1.2) is twofold. On the one hand, in the limit as ε → 0 we want to avoid jump discontinuities at time t = 0 and ensure that x 0 is an equilibrium from the very beginning. Since the limits of trajectories of (1.4) are expected to satisfy ∇ x E(t, x(t)) = ∇ x E(0, x 0 ) , jumps at t = 0 will not appear. On the other hand, the drift ∇ x E(0, x 0 ) can be exploited to add randomness to (1.2). This can be done simply by assuming that the initial data are distributed according to a certain probability measure µ 0 ∈ P(R d ) . In fact, in what follows, we aim first at obtaining the mean-field description of (1.4) for fixed ε > 0 (Section 3, standard), and then pass to the limit in the mean-field (or continuity) equation as ε → 0.
Remark 2.2. On the one hand, we notice that for every initial datum x_0 ∈ R^d there exists a unique solution to (1.4). On the other hand, one can easily exhibit examples of energies E satisfying (E1)–(E4) such that, for two different initial data x_0, x̄_0 ∈ R^d, the corresponding solutions of (1.4) cross each other at some time t ∈ (0, T). For this reason, it is more convenient to study (1.4) and its mean-field limit in terms of pairs curve–initial datum (x(·), x_0), in order to ensure uniqueness of transport along characteristics; see Remark 2.5 below.
In view of the above comments, we consider the more general system (2.1), in which the control u evolves according to an autonomous equation u̇ = f(u). We further assume a condition (E5) on the control dynamics; assumption (E5) ensures well-posedness of (2.1) and continuous dependence of its solutions on the initial data. In the following lemma, we collect the properties of E.
Lemma 2.4. The following facts hold: for every x ∈ R d and every u ∈ K ; Proof. Property (b) follows from the smoothness of E . Statement (a) follows from (E2) and (E3). Indeed, By construction of E , we have that Choosing δ > 0 so small that δ/C 3 < 1/2, we get that Therefore, we deduce (2.2) as soon as u ∈ K .
Proposition 2.6. Let (x_0, u_0) ∈ R^d × R^d and let ε > 0. Let (x_ε(·), u_ε(·)) be the solution of (2.1). Then, the following facts hold: It is useful to observe that, thanks to this bound, the map t ↦ E(t, x_ε(t)) is differentiable. The energy balance in (a) then follows by the chain rule, recalling the time derivatives in (2.1). We report the explicit computation below, as it will be used several times in what follows. First of all, we notice from (2.1) that By integration we obtain (a). We now address (b). Since u_ε(t) is uniformly bounded in terms of the initial datum u_0, applying Lemma 2.4 and arguing as in (2.3) we get that for some C_1, C_2 > 0 independent of ε. By the Gronwall lemma, In view of (E3) and of the boundedness of u_ε, we get that x_ε(t) is bounded in R^d uniformly with respect to ε and t ∈ [0, T]. Hence (b) is also proved. Finally, property (c) is a consequence of (a) and (b).
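To make the singularly perturbed dynamics concrete, the following Python sketch integrates a system of the form (2.1) by forward Euler, with an illustrative double-well energy E(t, x) = (x² − 1)²/4 and a frozen control (f ≡ 0); both choices are ours, purely for illustration, and are not taken from the paper. It shows the trajectory relaxing, on the fast time scale ε, onto the critical set C(t, u) = {x : ∇_x E(t, x) = u}:

```python
def grad_E(t, x):
    # Illustrative double-well energy E(t, x) = (x^2 - 1)^2 / 4,
    # so grad_x E(t, x) = x^3 - x (our choice, not from the paper).
    return x**3 - x

def f(u):
    # Control dynamics u' = f(u); frozen here (f == 0) for simplicity.
    return 0.0

def solve(x0, u0, eps=1e-2, T=1.0, dt=1e-4):
    """Forward Euler for the singularly perturbed system (2.1):
       eps * x' = -(grad_x E(t, x) - u),   u' = f(u)."""
    x, u, t = x0, u0, 0.0
    while t < T:
        x += dt * (u - grad_E(t, x)) / eps
        u += dt * f(u)
        t += dt
    return x, u

x, u = solve(x0=0.4, u0=0.0)
# The trajectory relaxes quickly toward the critical set
# C(t, u) = {x : grad_E(t, x) = u}; starting at 0.4 it approaches 1.
print(x, abs(grad_E(0.0, x) - u))  # residual should be close to 0
```

Note that the explicit scheme is stable only when dt/ε is small compared to the local curvature of E; here dt/ε = 0.01.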
Remark 2.7. The boundedness of the trajectories (x ε (·), u ε (·)) in Proposition 2.6 can be made independent of the specific initial datum if we consider initial data (x 0 , u 0 ) in a fixed compact subset K • of R d × R d . This assumption will be tacitly applied from now on.
Compactness of trajectories
In this section we prove compactness, as ε → 0, of the trajectories (x_ε(·), u_ε(·)) fulfilling (2.1). In particular, we show how to adapt the arguments of [1] in order to take the control u into account as well. Indeed, the second equation in (2.1) is not a gradient flow, hence a direct application of Theorem 1.1 is not possible. For the sake of simplicity, from now on we set Λ := C([0, T]; R^d) and, for p ∈ [1, +∞), In order to describe the energetic behavior of a limit of (x_ε(·), u_ε(·)) as ε → 0, for every t ∈ [0, T], every x_1, x_2 ∈ R^d, and every u ∈ R^d we define the cost function c_t(x_1, x_2; u) as otherwise, where A^t_{x_1,x_2,u} is the set of admissible transitions from x_1 to x_2 at time t, where C(t, u) is as in assumption (E4). Notice that the cost function (2.6) and the class of admissible transitions (2.7) are modified versions of the corresponding cost and admissible transitions in [1, Definition 2.2]. As stated in the following theorem, the cost (2.6) describes the energy dissipated by the limits of (x_ε(·), u_ε(·)) at jump points.
With the notation introduced above, we can now state the main compactness result of this section. It generalizes Theorem 1.1 (see [1,Theorem 1]) for controlled systems of the type (2.1).
and let (x_{ε_n}, u_{ε_n}) be the solution of (2.1) associated to ε_n and to the initial condition (x^n_0, u^n_0). Then, there exist a pair (x, u) ∈ Γ^∞_T × Λ and a positive Radon measure ν ∈ M^+_b(0, T) such that the following properties hold: (a) up to a subsequence, x_{ε_n} → x in Γ^p_T for every p ∈ [1, +∞) and pointwise for all t ∈ [0, T], and u_{ε_n} → u uniformly in Λ; (c) for every t ∈ [0, T] the pointwise limit function x(·) constructed in (a) admits left and right limits x^−(t) and x^+(t), respectively, and x^±(t) ∈ C(t, u(t)); (e) for every s, t ∈ [0, T] it holds To prove Theorem 2.8 we follow the steps of [1]. For the reader's convenience, we show the main changes in the proofs and refer to [1] for the remaining details, which require no modification. We start with the analysis of some useful properties of the cost function c_t (see Proposition 2.10 and Remarks 2.11 and 2.13 below). Lemma 2.9. Let K be a compact subset of R^d, u ∈ R^d, and let t ∈ (0, T) be such that Then, there exists α > 0 such that Proof. It follows from the continuity of the function We now state and prove a result which generalizes [1, Proposition 4.1].
Then, the following facts hold: Proof. Let us show (a). First, we notice that, by (2.2) and by an application of the chain rule, θ_n(s) is bounded. Indeed, the initial and final points θ_n(t^i_n) converge to x_i, and are thus bounded. Moreover, by the chain rule, we have that for every s In view of Lemma 2.4 and of assumption (2.9), we deduce that θ_n(s) is uniformly bounded in R^d. Let us denote by Θ the compact subset of R^d containing θ_n(s), s ∈ [t^1_n, t^2_n]. Let us assume by contradiction that x_1 ≠ x_2. Thanks to condition (E4), the set C(t, u) ∩ Θ is finite. Indeed, if this were not the case and (x_i)_{i∈I} were an infinite family of critical points, then we could extract from it a converging subsequence x_k → x ∈ C(t, u) ∩ Θ, in view of the continuity of ∇_x E, and x would not be isolated, violating (E4). Hence, there exists δ > 0 such that Applying Lemma 2.9, we may assume, up to taking a smaller δ > 0, that For n sufficiently large we have that t^i_n ∈ [t − δ, t + δ] and u_n(s) ∈ B(u, δ) for every s ∈ [t^1_n, t^2_n]. By the definition of K_δ and by the previous properties, the set {s ∈ [t^1_n, t^2_n] : θ_n(s) ∈ K_δ} is nonempty, and there exist s_1, s_2 ∈ {s ∈ [t^1_n, t^2_n] : θ_n(s) ∈ K_δ} such that s_1 ≠ s_2 and θ_n(s_i) ∈ ∂B(x_i, δ) for i = 1, 2. Therefore, by (2.13) we have This contradicts hypothesis (2.9). Let us now prove (b). Let δ, K_δ, and e_δ be as in (2.11)–(2.13). Up to extracting suitable subsequences, we can set We reparametrize the time interval in the following way: we first define the strictly increasing function s_n, where s^1_n := s_n(t^1_n) and s^2_n := s_n(t^2_n). We set θ̃_n(s) := θ_n(r_n(s)) and ũ_n(s) := u_n(r_n(s)). In particular, θ̃_n(s^i_n) → x_i and ũ_n(σ) → u uniformly for σ ∈ [s^1_n, s^2_n]. Moreover, by a change of variables, (2.14) Notice now that ṙ_n(s) = (1 + |∇_x E(r_n(s), θ̃_n(s)) − ũ_n(s)| |θ̇_n(r_n(s))|)^{−1},
hence |∇_x E(r_n(s), θ̃_n(s)) − ũ_n(s)| |θ̃'_n(s)| = |∇_x E(r_n(s), θ̃_n(s)) − ũ_n(s)| |θ̇_n(r_n(s))| |ṙ_n(s)| = |∇_x E(r_n(s), θ̃_n(s)) − ũ_n(s)| |θ̇_n(r_n(s))| / (1 + |∇_x E(r_n(s), θ̃_n(s)) − ũ_n(s)| |θ̇_n(r_n(s))|) ≤ 1, and θ̃_n has finite speed on A_δ. From this point on, the construction of the limiting function works exactly as in the proof of [1, Proposition 4.1], taking into account that ũ_n(s) is uniformly close to u. Let us explain informally how the construction works, referring to the above-mentioned reference for more details: the sequence θ̃_n is equibounded and, in view of its finite speed, it is also equicontinuous. Up to a further linear time reparametrization, by the Ascoli–Arzelà theorem it admits a limit θ ∈ C([0, 1]; R^d) such that θ(0) = x_1 and θ(1) = x_2. Moreover, again in view of the finite speed, this limit cannot visit infinitely many critical points at mutual distance at least 2δ. Hence, it visits at most finitely many of them, and θ ∈ A^t_{x_1,x_2,u}.
Remark 2.11. From Proposition 2.10 it follows that We now state an autonomous modification of Proposition 2.10, in which the time parameter t is fixed. The result corresponds to [1, Proposition 4.5].
n ,u . Then, the following facts hold: Proof. The proof can be carried out as in [1,Proposition 4.5] working with the energy E(t, ·, u) with fixed parameters t and u.
Remark 2.13. As a consequence of (b) of Proposition 2.12, whenever c_t(x_1, We now show two results which are useful for describing the energetic behavior of the limits of the sequences (x_{ε_n}, u_{ε_n}).
Proposition 2.14. Let us set Then, the following holds true: (a) there exists a positive Radon measure ν ∈ M^+_b(0, T) such that, up to a subsequence, ν_n ⇀ ν weakly* in M^+_b(0, T); (d) the set J is at most countable and coincides with the jump set of E.
Proof. In view of Proposition 2.6 and of estimate (2.5), ν_n is bounded in mass uniformly with respect to n, and, up to a subsequence, ν_n ⇀ ν weakly* in M^+_b(0, T). If we consider the function F_n(t) given by then F_n is a sequence of bounded non-increasing functions. Hence, by the Helly theorem, it admits, up to a subsequence, a pointwise limit F ∈ BV(0, T). From (E2) and Proposition 2.6 (b), the function t ↦ ∂_t E(t, x_{ε_n}(t)) is uniformly bounded with respect to t ∈ [0, T] and admits a weak* limit G ∈ L^∞(0, T). Therefore, the function E(t): The rest of the proof goes precisely as in [1, Proposition 5.2]. In particular, Lemma 2.15. Under the assumptions and notation of Proposition 2.14, let t^i_n → t, i = 1, 2, let u̇_n(s) = f(u_n(s)) with u_n ⇒ u uniformly for s ∈ [t^1_n, t^2_n], and let where J is as in Proposition 2.14 (d).
Proof. For every τ > 0 The thesis follows from Proposition 2.10.
We are now ready to prove Theorem 2.8.
Proof of Theorem 2.8. Let us denote In view of (c) of Proposition 2.6, Since I is at most countable, we may fix a suitable subsequence such that x_{ε_n}(t) → x(t) for every t ∈ I, for some limit x(t) ∈ R^d. Clearly, we already have that u_{ε_n} ⇒ u uniformly in [0, T], due to the continuous dependence on the initial data for the equation u̇ = f(u).
For t ∈ [0, T] \ I we define x̃(t) by (2.18). Let us show that x̃(t) is well defined for every t ∈ [0, T] \ I. It is clear that, at least along a subsequence, the limit in (2.18) exists for every sequence s_k in A converging to t. We have to prove that it is unique.
for every k, so that we may find a suitable subsequence ε_{n_k} such that x_{ε_{n_k}}(t^i_k) → x_i as k → ∞. Applying Lemma 2.15 and recalling that t ∉ J, we get that We notice that, by construction and by continuity of . It suffices to show it for t ∈ [0, T] \ I, since, by construction, the convergence is already satisfied on I. Assume that for some t ∉ I we have x_{ε_n}(t) → x̄ ∈ R^d. Fix a sequence t_k ∈ A converging to t. In particular, x(t_k) → x(t) by definition. Again, we may fix a subsequence ε_{n_k} such that x_{ε_{n_k}}(t_k) → x(t). Applying Lemma 2.15 and recalling that t ∉ J, we get that x̄ = x(t). Hence, . Moreover, this implies that the function G determined in (c) of Proposition 2.14 actually coincides with Let us now show that x admits left and right limits at every t ∈ [0, T]. Let us focus on the existence of x^+(t). Let t^1_k, t^2_k ↘ t, and let us assume, without loss of generality, that t^1_k < t^2_k. Up to a subsequence, we may further assume that . Furthermore, by the convergence of x_{ε_n}(t^i_k) to x(t^i_k) and of u_{ε_n}(t^i_k) to u(t^i_k) as n → ∞, for every k ∈ N and for i = 1, 2, we can construct a suitable subsequence ε_{n_k} such that Hence, rewriting the energy balance (2.4) for every k and passing to the limit as k → ∞, we get that This implies that x_1 = x_2, and the right limit It is now straightforward to see from (c) of Proposition 2.14 that the energy balance . For the opposite inequality, in view of Proposition 2.12 and Remark 2.13, we denote by θ ∈ A^t_{x^−(t), x^+(t), u(t)} the optimal transition between x^−(t) and x^+(t). By the chain rule we have that This concludes the proof of the theorem.
3 Mean-field limit of evolutions of critical points 3.1 Mean-field limit for ε > 0 In this section we deduce the mean-field limit of the ODE system (2.1). Although this is by now a standard procedure, see, e.g., [6,Chapter 8], we show here the details of the passage to the mean-field limit in order to stress the dependence on the auxiliary control variable u introduced in (2.1). Let Remark 3.1. In agreement with the initial value problem (1.4), we could, for instance, imagine that the measure η 0 ∈ P c (R d ×R d ) takes the form η 0 := (id, ∇ x E(0, ·)) # µ 0 for some µ 0 ∈ P c (R d ) . This represents exactly the case where the initial control u 0 is ∇ x E(0, x 0 ) .
which corresponds to the weak form of (3.2). Remark 3.2. Continuing the discussion of Remark 2.2, we want here to further justify the choice of working in the space P(R^d × R^d), where the control parameter u also has its own distribution, rather than in P(R^d), where only the space variable x would be described by a probability measure. Let us indeed consider the simpler setting of (1.4) with the initial data In order to obtain an integral formula as in (3.1), one could try to define a flow X_{ε,t}(x_0) that associates to each x_0 the value at time t of the solution of (1.4) starting at x_0 at time t = 0, and plug its inverse into the last term in (3.3). However, as already noticed in Remark 2.2, this is not possible, since for distinct initial data x_0, x̄_0 the two trajectories could cross each other at time t. Hence, we cannot deduce a continuity equation for the distribution of the x's alone.
We want to pass to the limit in (3.2) as N → ∞. We notice that, in view of (b) of Proposition 2.6, the support of η^N_{ε,t} is bounded in R^d × R^d uniformly with respect to t ∈ [0, T], N ∈ N, and ε > 0. In order to identify a limit η_ε ∈ C([0, T]; P(R^d × R^d)) of η^N_ε which is continuous in time, in the following lemma we estimate the 1-Wasserstein distance between η^N_{ε,t_1} and η^N_{ε,t_2} for t_1, t_2 ∈ [0, T] (equicontinuity).
for some positive constant C ε depending on ε > 0 but not on t 1 , t 2 , and N .
Proof. Since η N ε is an empirical measure, we simply have that where we have used the system (2.1), the boundedness of x i ε , u i ε , and the hypotheses (E1)-(E5) on E and on f .
in the sense of distributions.
Remark 3.5. According to [6, Section 8.1], we can pick the measure Therefore, η ε solves (3.5) in the sense of distributions. By [6, Section 8.1], the solution of (3.5) is unique. Thus, the whole sequence η N ε converges to η ε uniformly with respect to W 1 .
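The role of the empirical measure η^N_{ε,t} and of the equicontinuity estimate above can be illustrated numerically. The sketch below is our own construction (the double-well gradient and the frozen control f ≡ 0 are illustrative assumptions): it evolves N particles by forward Euler and checks, at each step, the elementary Wasserstein bound W_1(η^N_{ε,t}, η^N_{ε,t+dt}) ≤ (1/N) Σ_i |x_i(t+dt) − x_i(t)|, obtained by coupling each particle with itself; since each particle has speed at most sup|∇_x E − u|/ε, this bound is of order dt/ε, matching the Lipschitz-in-time estimate of the lemma:

```python
import random

def grad_E(x):
    # Illustrative double-well energy E(x) = (x**2 - 1)**2 / 4 (our choice).
    return x**3 - x

random.seed(0)
N, eps, dt, u = 200, 0.05, 1e-3, 0.0   # frozen control: f == 0
xs = [random.uniform(-0.5, 0.5) for _ in range(N)]   # initial data ~ mu_0

worst = 0.0
for _ in range(1000):  # integrate up to T = 1
    new = [x + dt * (u - grad_E(x)) / eps for x in xs]
    # Coupling particle i at time t with particle i at time t + dt:
    #   W1(eta_t, eta_{t+dt}) <= (1/N) * sum_i |x_i(t+dt) - x_i(t)|
    #                         <= (dt/eps) * sup_i |grad_E(x_i) - u|.
    w1_bound = sum(abs(a - b) for a, b in zip(new, xs)) / N
    worst = max(worst, w1_bound)
    xs = new

print("largest one-step W1 bound:", worst)
```

With these parameters dt/ε = 0.02 and |∇_x E| ≤ 2/(3√3) on [−1, 1], so every one-step bound stays below 0.01, uniformly in N.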
Mean-field limit for ε → 0
In order to derive a mean-field limit of evolutions of critical points, we wish to take advantage of the superposition principle introduced in [6, Theorem 8.2.1]. Accordingly, let us define the probability where we consider the flow Y ε also as a function of time. By definition of Π ε , for every For every bounded continuous function ϕ : [0, T ] × R d × R d → R , we may consider (3.6) also integrated in time. In particular, by Fubini, x(t), u(t)) dt dΠ ε (x 0 , u 0 , x(·), u(·)) . (3.7) We notice in particular that the function from Γ p T × Λ to R (x(·), u(·)) → T 0 ϕ(t, x(t), u(t)) dt is continuous.
Theorem 3.6. There exists Π ∈ P(R d × R d × Γ p T × Λ) such that, up to a subsequence, Π ε narrowly converges to Π . Moreover, every accumulation point Π of Π ε satisfies Proof. By definition, for every ε ∈ (0, 1] the support of Π ε is contained in the subset of where we denote with cl(A) the closure of A . In view of Theorem 2.8 the above set is compact in Γ p T × Λ with respect to the strong topology of L p and the uniform convergence in Λ . Therefore, by Prokhorov theorem there exists a measure Π ∈ P(R d × R d × Γ p T × Λ) such that, up to a subsequence, Π ε narrowly converges to Π. Moreover, for every Taking ϕ(t, x, u) any bounded continuous extension of |∇ x E(t, x) − u| 2 outside K • (see Remark 2.7) as a test function and recalling Proposition 2.6 (c), we get where Y 1 ε and Y 2 denote the two components of the flow Y ε associated to system (2.1). Notice that Y 2 does in fact not depend on ε . Combining (3.9) and (3.10) we deduce (3.8).
Proof. The measure η_{ε,t} ⊗ L^1|_{[0,T]} is (up to a rescaling) a probability measure with compact support in R^d × R^d × [0, T], independent of ε > 0. In view of the structure of η_{ε,t} ⊗ L^1|_{[0,T]}, there exists a Borel family {η_t : t ∈ [0, T]} of measures in P(R^d × R^d) such that, up to subsequences, η_{ε,t} ⊗ L^1|_{[0,T]} narrowly converges to η_t ⊗ L^1|_{[0,T]}. From equality (3.6) and from Theorem 3.6 we deduce formula (3.11). In a similar way we get (3.12).
As for (3.13), given φ and ϕ as in the statement of the theorem, we simply test the continuity equation (3.5) Passing to the limit as ε → 0 in the previous equality we obtain (3.13).
Mean-field energy balance
While Theorem 3.6 explains that evolutions of critical points are distributed essentially as η_t, in this section we would like to clarify how the energy E(t, x(t)) is distributed at time t ∈ [0, T], given that t ↦ x(t) is an evolution of critical points as derived in Theorem 2.8 with random initial data (x_0, u_0) distributed according to η_0. Ideally, we would expect E(t, x(t), u(t)) to be distributed as E(t, ·, ·)_# η_t. Unfortunately, the lack of smoothness of the trajectories t ↦ x(t) does not allow us to obtain such a mean-field description of the energy, but we derive below a slightly weaker form of it, which again leverages the superposition principle and the Lebesgue-point description of left and right limits.
15)
where x̄(·) is a representative of x(·) with left and right limits x̄^±(t) at any t ∈ [0, T], and ν_{(x(·),u(·))} is the positive measure of Proposition 2.14 (a). Proof. As mentioned in the proof of Lemma 3.9, from Theorem 3.6, for any (x(·), u(·)) ∈ supp(Π*) there exists a sequence ((x_{ε_n}(·), u_{ε_n}(·)))_n of solutions of (2.1) converging to (x(·), u(·)) in Γ^p_T × Λ for any vanishing sequence (ε_n)_n. In particular, x_{ε_n}(t) converges, up to subsequences, to x(t) for almost every t in [0, T]. However, by Theorem 2.8 (a)–(c) there exists yet one more subsequence (not relabeled) which converges pointwise to a trajectory x̄ ∈ Γ^p_T possessing left and right limits x̄^±(t) at any t ∈ [0, T]. Hence, x̄ and x coincide almost everywhere. In particular, the following integrals must coincide Therefore, by Theorem 2.8 (d), we have where ν_{(x(·),u(·))} is the positive measure of Proposition 2.14 (a). By Lemma 3.9 we are allowed to integrate these identities with respect to Π* (or Π), eventually obtaining (3.15). We conclude by noticing that, by the Carathéodory extension theorem, the integrated measures define a positive Radon measure, which we denote by V. Remark 3.11. It would be very tempting to write but, unfortunately, the function t ↦ E(t, x̄^±(t), u(t)) is only measurable, and it would not be possible to obtain such a pointwise identity; a further integration in time may be needed in order to express the identities in terms of integrations with respect to η. Nevertheless, the identities (3.15) hold true pointwise for all 0 ≤ s ≤ t ≤ T; this is perhaps a more abstract energy balance principle than one may have expected, but it is also a quite concise description of the distribution of the energy.
Learning of energies and data-driven evolutions
In this section we focus on the problem of reconstructing the energy function E, assuming that we have observed a certain large number N of evolutions x_i : [0, T] → R^d, i = 1, ..., N, obtained as limits of solutions x^i_ε of the singularly perturbed gradient flow (2.1) as ε → 0. The energy reconstruction will be recast as a minimum problem for a suitable discrepancy functional J_η, very much like the left-hand side of (3.12). The functional depends explicitly on a measure η_t ⊗ L^1|_{[0,T]}, which is the limit, along a subsequence, of the measures η^N_{ε,t} ⊗ L^1|_{[0,T]} as N → ∞ and ε → 0, where η^N_{ε,t} ∈ P(R^d × R^d) is the empirical measure centered at the ε-evolutions. In what follows, we propose and analyze a constructive and numerically implementable procedure which allows us to approximate E in a finite-dimensional setting up to an arbitrarily small error, in a suitable sense.
Learning as a variational problem
Following the lines of [7], we fix two constants M, R > 0 and we consider the function class Our particular choice of R, M is the following: , and every ε > 0. In view of hypothesis (E1) of Section 2, we have that E ∈ C^{1,1}_{loc}([0, T] × R^d), so that to the given R there corresponds M = M(R) such that E ∈ X_{M,R}. We notice that R and M do not depend on ε.
With the numerical implementation in mind, we are interested in computing good approximations Ê_N of E belonging to suitable finite-dimensional subsets V_N of X_{M,R}, N ∈ N. In particular, V_N is a suitable ball of a finite-dimensional subspace of W^{2,∞}_{loc}, e.g., a suitable finite element subspace. For the sequence (V_N)_{N∈N} we make the following uniform approximation assumption. We say that (V_N)_{N∈N} has the uniform approximation property with respect to η if for every Ê ∈ X_{M,R} there exists a sequence Ê_N ∈ V_N such that Ê_N → Ê in W^{1,∞}(supp(η)) as N → ∞.
Remark 4.2. The role played by the measure η will be clarified in the following discussions. We anticipate here that, roughly speaking, the support of η̄ represents the region of R^d × R^d that has been explored by the observed evolutions. For this reason, it is natural to assume the above uniform approximation property only on supp(η). In the numerical simulations we will make extensive use of this property, since we will employ suitable space refinements only on the regions of R^d × R^d visited by some evolution. We refer to Section 5.2 for further details.
We now introduce the key functionals for our reconstruction procedure. For every N ∈ N and every ε > 0 , let us fix N pairs (x i 0 , u i 0 ) ∈ supp(η 0 ) distributed according to η 0 and let us consider the corresponding solutions (x i ε , u i ε ) : [0, T ] → R d × R d of the ODE system (2.1). As in Section 3, we define the empirical measure η In Proposition 3.4 we have shown that in the limit as N → ∞ the sequence η N ε converges uniformly with respect to W 1 to a curve η ε ∈ C([0, T ]; P(R d × R d )) solution of the continuity equation (3.5).
Accordingly, we define the functional We notice that, with our choice of R and M , supp(η N ε,t ) ∪ supp(η ε,t ) ⊆ B R × B R for every ε , every N , and every t .
Finally, for a Borel family Notice that this functional is simply designed to naturally measure the discrepancy occurring in equation (3.12).
Let x^i_ε, i = 1, . . . , N, be the solutions of the system (2.1). Assume that, for i = 1, . . . , N, x^i_ε converges to a quasi-static evolution x_i in Γ^p_T for every 1 ≤ p < ∞ and u^i_ε converges to u_i in Λ. Let us consider the empirical measure η^N_t := (1/N) Σ^N_{i=1} δ_{(x_i(t),u_i(t))} ∈ P(R^d × R^d) and set The numerically implementable algorithm to approximate the energy E is based on the following finite-dimensional optimization problem: Ê_N := arg min_{Ê ∈ V_N} J_N(Ê). (4.8) As (4.8) defines a sequence of variational problems, we wish to show that Ê_N → E in a suitable sense by using variational convergence arguments, such as Γ-convergence. Since this is a standard notion, we refer to [8,13] for more details.
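In the simplest setting, the minimization (4.8) can be approached as a least-squares problem: along the observed limit evolutions one has ∇_x E(t, x_i(t)) = u_i(t), so the discrepancy functional penalizes the mismatch between ∇_x Ê and the observed controls on the support of the empirical measure. The Python sketch below is entirely our own illustration (the double-well ground truth, the monomial basis playing the role of V_N, and all function names are assumptions): it recovers the gradient of a one-dimensional energy from exact samples by solving the normal equations:

```python
import random

def true_grad(x):            # assumed ground-truth grad_x E (double well)
    return x**3 - x

def fit_gradient(samples, degree=3):
    """Least-squares proxy for the discrepancy functional J_N:
    find coefficients c of grad_x Ehat(x) = sum_k c_k * x**k minimizing
    sum_i |grad_x Ehat(x_i) - u_i|**2 over observed pairs (x_i, u_i),
    via the normal equations and Gaussian elimination."""
    n = degree + 1
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for x, u in samples:
        phi = [x**k for k in range(n)]
        for i in range(n):
            b[i] += phi[i] * u
            for j in range(n):
                A[i][j] += phi[i] * phi[j]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= m * A[col][j]
            b[r] -= m * b[col]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

random.seed(1)
# Observed pairs (x_i(t), u_i(t)) sample the graph of the gradient
# on the explored region (here, exact samples of the ground truth).
data = [(x, true_grad(x)) for x in (random.uniform(-1.2, 1.2) for _ in range(50))]
c = fit_gradient(data)       # expect roughly [0, -1, 0, 1]
print([round(ci, 6) for ci in c])
```

Since the ground truth lies in the span of the basis and the data are noise-free, the least-squares fit here is exact up to floating-point error; with noisy data the same code returns the projection onto the finite-dimensional class, mirroring the role of V_N.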
Approximation by Γ-convergence
Our construction is guided by the following (essentially commutative) diagram of limits: The following results clarify the limits appearing in the diagram. We start from the bottom of the diagram, showing the uniform convergence of J ηε to J η for ε → 0.
Let us now continue by describing the approximation provided by the upper limits of the diagram (4.9).
Proposition 4.5. Let δ > 0 and N ∈ N be given. Let (x^i_ε, u^i_ε) ∈ Γ^p_T × Λ, for i = 1, . . . , N, be the solutions of the system (2.1). Assume that, for i = 1, . . . , N, x^i_ε converges to a quasi-static evolution x_i in Γ^p_T for every 1 ≤ p < ∞ and u^i_ε converges to u_i in Λ. Let us consider the empirical measure η^N. Then, there exist ε_N > 0 and two positive constants C_1, C_2 (independent of δ and N) such that for every Ê ∈ X_{M,R} and every 0 < ε ≤ ε_N it holds J_{N,ε}(Ê) ≤ C_1 (J_N(Ê) + δ + ε) and J_N(Ê) ≤ C_2 (J_{N,ε}(Ê) + δ + ε). (4.11) Remark 4.6. We notice that the hypothesis on the strong convergence of x^i_ε to x_i is not too restrictive in view of the compactness result of Theorem 2.8. In fact, as a modeling assumption, we prescribe here that the observed quasi-static evolutions x_i are limits of the singularly perturbed dynamics described by (2.1).
Proof of Proposition 4.5. In view of the convergence hypothesis on x i ε , there exists ε N > 0 such that In view of (c) of Proposition 2.6, we have for some positive constant C independent of N and ε . Thus, for everyÊ ∈ X M,R we have that Thanks to our choice of R , we have that Hence, for every t ∈ [0, T ] and every i = 1, . . . , N , In view of the previous estimates, assuming that 0 < ε ≤ ε N we continue in (4.12) with In a similar way we can show the second inequality in (4.11).
In the next proposition we show a uniform estimate of the distance between J_{N,ε} and J_η, which explains the central diagonal Γ-limit of (4.9). Proposition 4.7. Let {η_t : t ∈ [0, T]} be a Borel family in P(R^d × R^d) with uniformly compact support and such that (4.6) is satisfied. Let η := η_t ⊗ L^1|_{[0,T]}. (4.14) For I_1, we have that As for I_2, we write where we have used the inequality η̄(K) ≤ T. Proof. The Γ-liminf inequality follows directly from Proposition 4.7. In a similar way, the Γ-limsup inequality is a consequence of Proposition 4.7 and of Definition 4.1, which ensures that for every Ê ∈ X_{M,R} there exists a sequence Ê_N ∈ V_N such that Ê_N → Ê in W^{1,∞}(supp(η)), where η̄ is as in (4.2).
In the next two propositions we discuss the convergence of minimizers of the functionals J_{N,ε} and J_N to minimizers of J_η. Proof. By the definition of X_{M,R}, a sequence of minimizers Ê_N of J_{N,ε_N} is weakly* compact in X_{M,R}, and we denote by Ê the weak* limit of a suitable subsequence of Ê_N. In view of Proposition 4.7, J_{N,ε_N}(Ê_N) converges to J_η(Ê). From the minimality of Ê_N and the uniform approximation property satisfied by the subspaces V_N, we easily deduce that Ê is a minimizer of J_η in X_{M,R}.
Theorem 4.11. Let δ > 0, ε_N > 0, and η^N_{ε_N,t}, η^N_t ∈ P(R^d × R^d) be as in Proposition 4.5. Assume that there exists a Borel family Then, (Ê_N)_N converges, up to a subsequence, to some Ê_δ ∈ X_{M,R} satisfying for a positive constant C independent of δ. Moreover, there exist a further Borel family {η_t : t ∈ [0, T]} ⊆ P(R^d × R^d) and a further Ê ∈ X_{M,R} such that, up to a subsequence, η^δ converges narrowly to η: Proof. Let Ê_{N,ε_N} be a solution of min Pairing the inequalities (4.11) of Proposition 4.5 for J_{N,ε_N} and J_N, we get that (4.21) In particular, the constants C_1 and C_2 do not depend on δ and N. By the definition of X_{M,R}, the sequence Ê_N converges, up to a not relabeled subsequence, to some Ê ∈ W^{2,∞}([0, T] × B_R) in the W^{1,∞} norm. Up to an extension, we may assume Ê ∈ X_{M,R}. By Proposition 4.7, J_{N,ε_N}(Ê_N) converges to J_{η^δ}(Ê) as N → ∞. In view of Proposition 4.10 we have that, up to a further subsequence, Ê_{N,ε_N} converges in W^{1,∞}([0, T] × B_R) to a minimizer of J_{η^δ}. Hence, applying Proposition 4.7 again, we deduce that J_{N,ε_N}(Ê_{N,ε_N}) → 0. Thus, applying Corollary 4.8 to (4.21) and taking into account the convergence of Ê_N to Ê, we deduce (4.19). The second part of the proposition follows immediately from (4.19), noticing that the measures η^δ have uniform compact support in [0, T] × R^d × R^d and are therefore compact with respect to narrow convergence.
Remark 4.12. Let us briefly comment on the result of Theorem 4.11. As noticed in (4.5), by Proposition 3.7 the measure η := η_t ⊗ L^1|_{[0,T]} is concentrated on the set Therefore, for every Ê ∈ X_{M,R} we can write Hence, equality (4.20) in Theorem 4.11 can be reformulated as This of course implies that ∇_x Ê(t, x) = ∇_x E(t, x) η-a.e. in [0, T] × R^d; that is, we are able to reconstruct the spatial gradient of the energy function E in the region of [0, T] × R^d that has been explored by the quasi-static evolutions x_i(·), a region which, when the number N of observed evolutions is very large, is well approximated by the support of the measure η̄. This is indeed a natural constraint, since we have no information about the region of [0, T] × R^d that has not been explored by a quasi-static evolution. Even though the measure η̄ results from a rather abstract construction, since it has been obtained by applying a compactness argument to the sequence of measures η^δ constructed in the first part of Theorem 4.11, we nevertheless claim that our approach to energy reconstruction is entirely constructive and numerically implementable. Let us briefly explain why. In the last part of Theorem 4.11 we have shown that η is the limit as δ → 0 of the sequence η^δ := η^δ_t ⊗ L^1|_{[0,T]}. The measure η^δ satisfies This implies that ∇_x Ê_δ is itself a good approximation of ∇_x E in the space L^2([0, T] × R^d, η̄^δ). Moreover, the first part of Theorem 4.11 gives us another important piece of information: η^δ and Ê_δ can be approximated in a "finite dimensional-finite number of evolutions" setting in which, indeed, we work only with a finite number N of observed evolutions, which completely determine the empirical measure η^N_{ε_N,t}, and we solve the minimum problem (4.18) on a suitable finite-dimensional subspace V_N of X_{M,R}. Remark 4.13. We note that we could also obtain a result similar in nature to Theorem 4.11, in which the roles of J_{N,ε_N} and J_N are exchanged.
Let δ > 0 and ε_N be as in Proposition 4.5, and assume that the sequence of measures η^N := η^N_t ⊗ L^1|_{[0,T]} converges narrowly to η^δ := η^δ_t ⊗ L^1|_{[0,T]} and that (V_N)_{N∈N} satisfies the uniform approximation property with respect to η^δ. Then, denoting by Ê_N ∈ V_N a solution of min we have that, up to a subsequence, Ê_N converges to some Ê_δ ∈ X_{M,R} satisfying for a positive constant C independent of δ. Moreover, in the limit as δ → 0, η^δ converges narrowly to some η: The proof of this result is still based on the arguments of Propositions 4.5 and 4.7.
Such an approximation result will be used as a practical proxy for (4.18) in the numerical experiments in Section 5.2.
Data-driven evolutions of critical points
In the previous section we obtained compactness results, which explain the approximation of the energy E by data-driven energies Ê ∈ W 2,∞ ([0, T ] × B R ) and, for δ > 0, Ê δ ∈ W 2,∞ ([0, T ] × B R ) constructed in Theorem 4.11. In this section we show pointwise error estimates on the singularly perturbed evolutions generated using the data-driven energies Ê N , Ê δ , and Ê , with respect to evolutions of critical points of the original energy E .
Corollary 4.14. Let Ê , Ê N , and η be as in Theorem 4.11, and let η̂ ∈ M + b ([0, T ] × R d ) be defined as in (4.2). Then, for every ε > 0 there exists N = N (ε) ∈ N large enough such that the solution (x N ε , u ε ) of the system (2.1) with initial condition (x 0 , u 0 ) and energy Ê N fulfills the error estimate where (x ε , u ε ) denotes the solution of (2.1) with initial condition (x 0 , u 0 ) and energy E and dist(·, K) is the usual distance from a set K ⊆ R d .
Remark 4.15. Let us comment on formula (4.22). Although the error estimate does not a priori guarantee the data-driven evolution x̂ N ε to be close to x ε , we anyway expect it to happen in most of the applications. Indeed, as shown in the numerical experiments in Section 5.2, increasing the number of observed evolutions results in the enlargement of supp(η). This means that even if, according to Theorem 2.8, the distance of x ε from a quasi-static evolution of critical points t → x(t) has no clear rate of convergence, the distance of x ε from the whole supp(η) , union of the orbits of all the observed quasi-static evolutions, can be expected to satisfy the condition ‖dist(x ε (·), supp(η))‖ L 1 (0,T ) = 0 . In particular, we refer to Section 5.2 for some numerical examples of data-driven evolutions. Here, we conclude by noticing that (4.22)-(4.23) imply that x̂ ε − x ε tends to zero uniformly in [0, T ]. Therefore, we deduce from Theorem 2.8 that, along a suitable subsequence ε n → 0, x̂ Nn εn → x in L p (0, T ) for p < +∞ , where N n = N (ε n ) . Finally, in the particular case ‖dist(x ε (·), supp(η))‖ L 1 (0,T ) = 0 we even have that Proof of Corollary 4.14. Let Ê δ ∈ W 1,∞ ([0, T ] × B R ) be as in (4.19) of Theorem 4.11, so that Ê δ → Ê as δ → 0 and Ê N → Ê δ for N → ∞ in W 1,∞ ([0, T ] × B R ). In particular, for every ε > 0 there exist δ > 0 small enough and N large enough such that where y : [0, 1] → R is a displacement map and E 0 is a suitable nonlinear elastic energy, fully determined by a potential function a : R → R . We consider here quasi-static evolutions y : [0, T ] × [0, 1] → R of critical points y(t) of E 0 subjected to time-dependent boundary conditions y(t, i) = f i (t) , for i ∈ {0, 1} , their mean-field descriptions, and the learning of the potential function a . As the theory we developed in this paper applies to finite dimensional states, see Section 1.1, we approach the problem in a space-discrete setting.
To simplify the notation, we introduce the discrete gradient operator D : R d+2 → R d+1 defined by With this notation at hand, we can rewrite E(t, x) as We also write explicitly the expression of ∇ x E(t, x): {i=1,...,d+1} . We notice that ker(D T ) = span{(1, . . . , 1)} ⊆ R d+1 . We finally point out that for the control parameter u in (2.1) we will not consider any dynamics, that is, we fix f ≡ 0 in (1.5) and u(t) will be constantly equal to its initial value u 0 := ∇ x E(0, x 0 ) .
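The difference operator and the kernel claim can be sanity-checked numerically. The sketch below is an illustrative assumption: it builds D as the forward-difference matrix on a chain of d + 2 values and restricts D^T to the interior rows, mirroring the fact that the boundary values are fixed by f_1 and f_2.

```python
import numpy as np

def discrete_gradient(d):
    """Forward-difference operator D mapping R^(d+2) -> R^(d+1):
    (Dy)_i = y_i - y_{i-1} for a chain (y_0, ..., y_{d+1})."""
    D = np.zeros((d + 1, d + 2))
    for i in range(d + 1):
        D[i, i] = -1.0
        D[i, i + 1] = 1.0
    return D

d = 5
D = discrete_gradient(d)
# Differences of a constant chain vanish: constants lie in ker(D).
assert np.allclose(D @ np.ones(d + 2), 0.0)
# The adjoint restricted to the interior components (the ones entering
# the gradient in x, since the boundary values are prescribed)
# annihilates the constant vectors of R^(d+1).
DT_int = D.T[1:-1, :]
assert np.allclose(DT_int @ np.ones(d + 1), 0.0)
# Its kernel is one-dimensional, i.e. ker = span{(1, ..., 1)}.
assert np.linalg.matrix_rank(DT_int) == d
```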
In order to apply the abstract scheme developed in the previous sections, we first have to check that the energy function (5.3) satisfies properties (E1)-(E4). In the following lemma we rigorously show that E fulfills (E1)-(E3), while in Remark 5.2 we discuss the generic validity of condition (E4). Proof. Property (E1) is clearly satisfied in view of (a1) and of the regularity of f 1 and f 2 .
By (a2) and by regularity of f 1 and f 2 , we have that Since a ≥ 0 , we can continue the previous estimate with for some positive constant C independent of t and x . Thus, (E2) holds.
As for (E3), by (a3) we have that By convexity of the function s → |s| p for p > 1 and by Young's inequality we get A similar inequality holds for |f 2 (t) − x d | p . Hence, (5.5) becomes At this point, it is easy to see that there exists a positive constant c such that In fact, for j = 1 and j = d the inequality is obvious. For 1 < j < d we notice that This concludes the proof of (E3).
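For the reader's convenience, the convexity and Young steps invoked above take the following generic form. This is a hedged reconstruction with generic constants, not the exact display of (5.5):

```latex
% By convexity of s \mapsto |s|^p for p > 1,
% |u + v|^p \le 2^{p-1}\left(|u|^p + |v|^p\right),
% so that, for the first difference in the chain,
\[
  |f_1(t) - x_1|^p \;\le\; 2^{p-1}\bigl(|f_1(t)|^p + |x_1|^p\bigr),
\]
% while Young's inequality with exponents p and p/(p-1) gives,
% for every \delta > 0,
\[
  |f_1(t)|\,|x_1|^{p-1} \;\le\;
  \frac{1}{p\,\delta^{p}}\,|f_1(t)|^{p}
  \;+\; \frac{(p-1)\,\delta^{p/(p-1)}}{p}\,|x_1|^{p}.
\]
```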
Remark 5.2. Let us comment on the validity of property (E4). In the framework described above, as we are assuming f = 0 in (2.1), it is actually enough to have that C(t, u) contains only isolated points for every u ∈ R d with u = ∇ x E(0, x 0 ), The validity of (5.6) is related to the so called transversality conditions (see, e.g., [2,30]). Indeed, in [2] the authors first show that the transversality conditions for an energy E imply that the set of critical points C(t):= {x ∈ R d : ∇ x E(t, x) = 0} contains only isolated points. In [2, Theorem 1.3] (see also [2,Corollary 3.7]) they also prove the genericity of the transversality conditions. In our setting, the latter result states that, assuming a ∈ C 4 (R) and f 1 , f 2 ∈ C 4 (0, T ) , there exists a set N ⊆ R d × R d×d of Lebesgue measure zero such that for every Ax · x satisfies the transversality conditions, so that the set {x ∈ R d : ∇ x E(t, x) = 0} contains only isolated points.
In the present work we have been considering the energy E(t, x, u) : = E(t, x) − u · x , which already modifies E by the additive linear term −u · x , where u: = ∇ x E(0, x 0 ) , x 0 being the initial condition of x(·) . Hence, assuming that the distribution of x 0 has a non-degenerate (say, for instance, of positive Lebesgue measure) support and ∇ x E(0, ·) is non-degenerate, we deduce that condition (5.6) is in general satisfied, up to a further generic quadratic perturbation of E .
In view of Lemma 5.1 and of Remark 5.2, from now on we will assume that E in (5.3) satisfies properties (E1)-(E4). Hence, we can apply the theoretical results of Section 4 to our energy E in (5.3). Since E is completely determined by the monovariate function a, we slightly modify the notation of Section 4 to this new setting, rewriting the identification problem in terms of a . In fact, while approximating a high-dimensional (multivariate) function E directly incurs the curse of dimensionality [24] in general, our model is actually parametrized by a lower dimensional function a , making the learnability/approximation problem computationally tractable. Of course, this imposes a further modeling constraint. Accordingly, for fixed M, R > 0 , instead of the space X M,R in (4.1), we consider The choice of M, R > 0 can be performed similarly to Section 4, simply noticing that the boundedness of x implies the boundedness of D e t (x) , with a bound that depends on the boundary data f 1 (t) and f 2 (t).
We consider a sequence (V N ) N ∈N of finite dimensional subspaces of A M,R , for which the uniform approximation property of Definition 4.1 reads now as follows. and let where π i : R d+1 → R stands for the projection on the i -th component. We say that (V N ) N ∈N has the uniform approximation property with respect to η if for every â ∈ A M,R there exists a sequence â We now rewrite the functionals (4.3)-(4.5) in terms of â, a ∈ A M,R making use of formula (5.4). As already mentioned, here we consider time independent controls u = ∇ x E(0, x 0 ) . Hence, given a distribution µ 0 ∈ P c (R d ) of initial conditions x 0 , the corresponding distribution of (x 0 , u 0 ) reads as η 0 := (id× ∇ x E(0, ·)) # µ 0 ∈ P c (R d ×R d ) . For every N ∈ N and every ε > 0 we fix N pairs (x i 0 , u i 0 ) ∈ supp(η 0 ) distributed according to η 0 and we consider the corresponding solutions (x i ε , u i ε ) : [0, T ] → R d × R d of the ODE system (2.1). Given the empirical measure η N ε,t : = 1 In the limit as N → ∞ the sequence η N ε converges uniformly with respect to W 1 to a curve η ε ∈ C([0, T ]; P(R d × R d )). Therefore, for every â ∈ A M,R we set For a Borel family {η t : t ∈ [0, T ]} ⊆ P(R d × R d ) and for every â ∈ A M,R we set then we can also express J η in the equivalent form We now adapt the main results of Section 4, namely, Proposition 4.7 and Theorem 4.11.
for some positive constants D 1 , D 2 depending on D , d , M , T , f 1 , and f 2 .
Following the lines of the proof of Proposition 4.7 and using (5.4), we get where D 1 = D 1 ( D , d, M, T ) > 0 .
As for I 2 we write This concludes the proof of the proposition.
Proof. It is enough to follow step by step the proof of Theorem 4.11 taking into account that the results of Proposition 4.5 still hold in the present framework.
Numerical results
In this section we present numerical experiments, which show the practical efficiency of the optimization (5.11) in recovering the potential function a from observations of a finite number of evolutions of critical points. In particular, we highlight some practical issues and the impact of various parameters of the problem on the reconstructions. First of all, we recast the problem in a discrete and numerically efficient implementation. Afterwards, we focus on how the available information - corresponding to the number of experiments or measurements per experiment - impacts the quality of reconstructions. We then show that the choice of the constant M as in A M,R is in a sense generic, as sufficiently large M (for other parameters fixed) allows for appropriate reconstructions. Finally, we compare simulations of data-driven evolutions generated by the empirical â with those generated by the true potential a, and show the remarkable agreement of the results.
Efficient numerically implementable formulation
The following experiments are realized by a common numerical implementation and are applied to the toy mechanical example of Section 5. As the space of competitors, we consider V Λ := {â ∈ A M,R : â is piecewise quadratic on a given grid Λ} .
(5.16) We observe measurements at times 0 = t 0 < · · · < t Ne = T with stepsizes ∆ m = (t m+1 − t m−1 )/2 and gridpoints p 1 < · · · < p K of Λ with stepsizes ∆̃ k = p k+1 − p k . For an appropriate increasing sequence of grids Λ := Λ(N ), the corresponding sequence of spaces V N := V Λ(N ) has the uniform approximation property on compact sets. We consider an initial data distribution µ N 0 drawn from a d-dimensional normal distribution with uniform standard deviation. For any initial data x i 0 in the support of µ N 0 , we solve the system (1.4) for trajectories x i ε for fixed ε > 0. As time-discrete approximation of the energy functional J N,ε in (5.9) we consider obtained by replacing the integral in time with a sum of point evaluations, which would correspond to assuming solutions, control, and boundary conditions to be piecewise constant in time. We assume that the arguments of â as in (5.17) are distributed according to a discrete version of η̂ in Definition 5.3, which encodes the available information to recover a . As previously stated, V N needs to be designed in order to approximate A M,R . In order to provide additionally a form of numerical stabilization and preconditioning, we choose the grid Λ adaptively with respect to the distribution η̂ . In particular we consider denser meshes in regions of the support where η̂ has large density and coarser grids in regions of low density, thereby exploring the entire support of η̂ .
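The generation of observed trajectories can be sketched as follows. All concrete choices here (quadratic potential a(s) = s²/2, constant boundary values, zero control, the specific ε) are illustrative assumptions rather than the paper's exact experiment; the point is the singularly perturbed gradient flow ε ẋ = −∇_x E(t, x) on the chain energy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy generation of one observed trajectory for the chain energy
# E(t, x) = sum_i a((Dy)_i) with a(s) = s^2 / 2, boundary values
# f1, f2 held constant, and control u = 0 (all assumed choices).
d, eps, f1, f2 = 8, 1e-2, 0.0, 1.0

def grad_E(x):
    y = np.concatenate(([f1], x, [f2]))   # chain with boundary data
    s = np.diff(y)                        # (Dy)_i
    ap = s                                # a'(s) = s for a(s) = s^2/2
    return ap[:-1] - ap[1:]               # interior part of D^T a'(Dy)

rng = np.random.default_rng(0)
x0 = rng.normal(size=d)                   # initial datum from a Gaussian
sol = solve_ivp(lambda t, x: -grad_E(x) / eps, (0.0, 1.0), x0,
                rtol=1e-8, atol=1e-10)
x_final = sol.y[:, -1]
# For the quadratic potential the critical point is the linear
# interpolation of the boundary values, giving a cheap check.
x_star = np.linspace(f1, f2, d + 2)[1:-1]
print(np.max(np.abs(x_final - x_star)))   # small after the transient
```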
As the functional defined in (5.17) solely depends on the derivative â′, we seek â′ ∈ V′ N , which consists of piecewise linear functions, such that â ∈ V N . In particular we consider the expansion where {φ λ : λ = 1, . . . , D(N )} is a set of suitable basis functions of V′ N , and a′ := (â′ 1 , . . . , â′ D(N ) ) denotes the corresponding coefficient vector. From this information it is immediate by integration to identify a up to additive constants on connected components of the support of η̂ . Note, however, that it is not possible to relate additive constants at different connected components of the support of η̂ . In case the initial distribution has connected support, since the forward processes are continuous, also the support of η̂ consists of connected components (at most one for each time t m ). Thus, one can assume that for a sufficient amount of experiments and connected support of the initial distribution there are only few connected components. It should be noted that J̃ N,ε (â) with â′ written as above can be written as a quadratic functional Here the data vector Y corresponds to Therefore the assembly of M corresponds to the formulation of the interpolation matrix B i,m and the "componentwise" application of D T , and can be done iteratively. In particular, the system is sparse with at most 4 entries per row, and thus the approach can be applied even with a large number of measurements and experiments.
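A minimal sketch of the sparse-assembly idea, under the assumption that the basis functions are the standard piecewise-linear hat functions on the grid Λ. Each sample row of the interpolation matrix then has at most two nonzero entries; composing with the "componentwise" D^T as in the text is what raises this to the at most 4 entries per row mentioned above. The function name and data are illustrative.

```python
import numpy as np
from scipy import sparse

def hat_interpolation_matrix(grid, samples):
    """Sparse B with B[m, k] = phi_k(s_m), phi_k the piecewise-linear
    hat functions on `grid`; a hedged stand-in for B_{i,m} above."""
    samples = np.clip(samples, grid[0], grid[-1])
    idx = np.clip(np.searchsorted(grid, samples, side="right") - 1,
                  0, len(grid) - 2)
    h = grid[idx + 1] - grid[idx]
    w = (samples - grid[idx]) / h          # local barycentric weight
    rows = np.repeat(np.arange(len(samples)), 2)
    cols = np.stack([idx, idx + 1], axis=1).ravel()
    vals = np.stack([1.0 - w, w], axis=1).ravel()
    return sparse.csr_matrix((vals, (rows, cols)),
                             shape=(len(samples), len(grid)))

grid = np.linspace(-1.0, 1.0, 11)
samples = np.array([-0.95, 0.0, 0.33, 0.99])
B = hat_interpolation_matrix(grid, samples)
# Partition of unity: each row sums to one.
print(np.asarray(B.sum(axis=1)).ravel())   # [1. 1. 1. 1.]
# Exactness on linear functions: B applied to the grid values
# reproduces the sample locations.
print(B @ grid)
```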
Minimizing the function J̃ N,ε (â) over V N is a quadratic optimization problem. However, we require the ansatz to be conforming, i.e., we must ensure the inclusion V N ⊂ A M,R . This constraint requires ‖â‖ W 2,∞ (I R ) ≤ M for â ∈ V N . To enforce this constraint numerically, note that â′ and â″ are bounded by the maximal and minimal values of the corresponding coefficient vectors a′ and a″ as follows: One considers the gradient operator corresponding to the grid Λ , so that a″ = D Λ a′ is the coefficient vector of the piecewise constant function corresponding to â″ . Combining (5.19) and (5.21), we can consider as a discrete version of the reconstruction problem, Note that allowing two different bounds M 1 and M 2 offers more flexibility, while serving the same purpose in the theoretical setting of creating compactness. In particular this allows one to target a more specifically by stricter bounds in order to avoid oscillatory behavior. Moreover, note that due to the structure of M, a′ can only be reconstructed up to a constant vector, meaning â′ can only be determined up to a constant, analogously to the considerations leading to (5.15).
For the sake of simplicity we further restrict the optimization to competitors with â′(0) = 0 and we assume that a′(0) = 0 as well.
Moreover, we notice that, because of the discretizations in time in J̃ N,ε and the use of non-equivalent constraints, the minimizer of (5.22) does not precisely coincide with the minimizer of the original minimization problem. It is however reasonable to think that it indeed approximates the true solution of J N,ε , which in turn approximates the true energy due to Γ-approximation.
Since (5.22) is a least squares problem with norm constraints, a variety of optimization algorithms are applicable. For the results presented in this work we used the CVX toolbox [19,18], which is well suited, as all functions and operations can be written as convex functions and constraints.
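As a hedged Python analogue of the CVX-based solve, the sketch below enforces only the elementwise coefficient bound via a bound-constrained least squares routine; the additional curvature constraint through D_Λ would require a general convex-programming solver (such as CVX, which the authors use, or its Python counterpart cvxpy). All data here are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Stand-in for (5.22): minimize ||B a' - Y||_2 subject to the
# elementwise bound |a'| <= M1.  (The constraint |D_Lambda a'| <= M2
# is omitted; it needs a general convex solver.)
rng = np.random.default_rng(1)
n_obs, n_coef, M1 = 200, 30, 0.8
B = rng.normal(size=(n_obs, n_coef))
a_true = np.clip(rng.normal(size=n_coef), -M1, M1)
Y = B @ a_true + 0.01 * rng.normal(size=n_obs)   # noisy synthetic data

res = lsq_linear(B, Y, bounds=(-M1, M1))
a_hat = res.x
print(np.max(np.abs(a_hat)) <= M1 + 1e-8)   # True: bound respected
print(np.max(np.abs(a_hat - a_true)) < 0.1) # True: close to the truth
```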
Linear elasticity -a trivial example
We start with the standard potential a(y) = y²/2, which is considered in the context of linear elasticity. As this potential is uniformly convex and contained in V N , one expects the reconstruction to work better compared to more complex potentials. Figure 1 depicts the approximation of a and a′ for a quadratic potential, and shows very accurate approximation. The approximation of a′ appears almost exact everywhere, while the approximation of a loses accuracy at the boundaries of the observed interval, due to summation of minor (systematic) errors. Nonetheless a is overall best approximated in regions where η̂ has higher density. Thus elastic potentials can be identified very well using this approach. However, for more complex potentials one may not expect to obtain always such a good reconstruction, and in the following sections we consider the impact of various parameters on the reconstruction of non-quadratic potentials.
Impact of the amount of information on the reconstruction quality
From a numerical perspective, solving (5.22) is a least squares problem, and with more available information, one would expect to increase the reliability of reconstructions. This amount of information in our setting mainly depends on two factors -the amount of measurements N e made for every experiment and the number N of observed experiments. Thus, we want to demonstrate the effect of increasing amount of information, in particular verifying that for sufficient amount of information one can accurately recover solutions, while too little information yields unstable recovery.
In Figure 2 all parameters of the reconstruction are fixed except for the amount of measurements N e . One observes that results get more reliable and noise is reduced with an increasing number N e of measurements, and for a sufficient amount of information one can precisely reconstruct a on the support of η̂ , whose density is approximately depicted below by the histogram of observed information. In the reconstructions with little information, the solutions are oscillating, and, although following the overall trend of the true energy functional, do not perfectly capture its behavior. One can also see that in regions without information, and correspondingly with no or few nodes, the approximation is crude and cannot be trusted, but in regions with much information a rather accurate reconstruction can be found. Of course, in many practical applications the amount of sampling in time might be limited by technical limitations. The resulting issue of lack of information can be offset by considering a larger number of experiments. In fact the main result of the provided theory is that for N → ∞ - considering ever more experiments - one can reconstruct a increasingly well. In order to approximate with V N the space A M,R in the sense of the uniform approximation property, it is necessary to increase adaptively the amount of nodes D(N ) . A trivial way to do this is considering a linear relation between N and the amount of nodes D(N ), i.e., D(N ) ∼ N . Figure 3 shows that an increased number of experiments and nodes improves significantly the quality of the reconstruction.
In comparison, we show that the improved reconstruction is not solely the result of a finer grid, but rather a consequence of more available information. Therefore, we considered in Figure 4 the same experiment as in Figure 3, but with the number of nodes D(N ) = 300 independent of N . While the reconstruction even for a single experiment is not particularly bad, one can see that it is not very smooth, representing a smaller degree of confidence in the solutions. For an increasing number N of experiments, the results become smoother. Moreover, regions not significantly visited by a single experiment, and therefore not well supported by the mesh (e.g., left side of the plot), get better represented for a larger number of experiments as they might get explored more thoroughly. However, note that there appear to be regions which do not get visited - or get visited only very rarely - independently of the number N of experiments, as the density of η̂ is zero or very small there, and therefore no reasonable reconstruction can be obtained at such locations.
Suitable W 2,∞ constraints
The theory in this work does not yet provide a method for choosing M (or M 1 , M 2 in our numerical model). It is clear that M 1 and M 2 too small will significantly limit the available class of competitors, and therefore one cannot expect to capture the true a if M 1 and M 2 are much smaller than ‖a′‖ ∞ and ‖a″‖ ∞ , respectively. On the other hand, keeping M 1 and M 2 finite is necessary to ensure compactness from a theoretical perspective, so it is not obvious what the impact of too large M 1 and M 2 is. However, for suitable data, one would expect sufficiently large M 1 , M 2 to have no real impact on the reconstruction. Figure 5 depicts the effect of different constraints M 2 on ‖a″‖ ∞ . One can see that for too small M 2 the reconstruction follows the overall trend, but cannot replicate local fluctuations, with more detail captured by increasing M 2 . In particular, note that there is no difference between solving with M 2 = 20 and M 2 = 1000 since ‖a″‖ ∞ ≤ 20 , where a″ is the second derivative of the true solution of the minimization problem (5.22) without constraints, and therefore the constraint has no effect. We further stress that this does not imply that the constraint M 2 is irrelevant, as for poor or incomplete data the least squares problem can become highly unstable (e.g., due to overfitting), and constraints can limit this effect.
On the other hand, the constraint M 1 bounds the overall values of a′ . For too small M 1 the reconstruction corresponds to a projection of the true energy function to the corresponding bound. This effect can be observed in Figure 5.
Data-driven evolutions
Given a, ε, x 0 , u 0 , and f , the system (1.5) can be solved to generate the evolution of x ε . While we created or observed evolutions generated by the true a and used these trajectories in the previous sections to identify a, a practical reason for determining â ≈ a is that it can in turn be used for simulations of system (1.5), e.g., instead of performing further real-life experiments. This section discusses the quality of such numerical simulations, showing that indeed suitable evolutions can be replicated, as theoretically analyzed in Section 4.3.
We start by considering the situation with linear elastic potential a(y) = y²/2 discussed in Section 5.2.2, where high fidelity approximation of a by â is achieved. The left side of Figure 7 depicts the corresponding trajectories x̂ ε and x ε generated by â and a, respectively, with an initial datum (x 0 , u 0 ) taken from the distribution η 0 . These trajectories are basically identical, which is to be expected in view of the good reconstruction â of a , and since we chose the initial data from µ 0 , and the corresponding η̂ is supported on a sufficiently large domain.
When considering the more challenging example with highly nonlinear potentials of Section 5.2.3 for N e = 55 , N = 60 and D(N ) = 4N , the recovery is slightly less precise, in particular since the support of η̂ is no longer connected in this case. On the right of Figure 7 we show that indeed in this situation we cannot perfectly replicate the evolutions; nonetheless the overall behavior of the trajectory remains intelligible.
Parts of the trajectories are distant from the support of η̂ ; there â is not reliable, creating further errors. However, the resulting trajectories appear quite acceptable and in particular they show no extreme outliers where values may diverge or act too wildly.
In summary, the presented simulations confirm the theoretical findings about the robust recovery of various potentials a or a′ from observations of evolutions of critical points. The reconstructions â are such that further simulations of trajectories are faithful.
Positive Geometries for all Scalar Theories from Twisted Intersection Theory
We show that accordiohedra furnish polytopes which encode amplitudes for all massive scalar field theories with generic interactions. This is done by deriving integral formulae for the Feynman diagrams at tree level and integrands at one loop level in the planar limit using the twisted intersection theory of convex realizations of the accordiohedron polytopes.
I. INTRODUCTION
Over the last few years, the study of scattering amplitudes has revealed a number of surprising connections with mathematics. Crucially, deep ties to geometry, topology and combinatorics have been established, which have led to the discovery of new ways of computing these quantities.
In this work, we focus on building upon the seminal developments of the last few years, namely the positive geometry program due to Arkani-Hamed et al. [7], and the twisted intersection theory of Mizera [2]. In these works, it was seen that for a wide class of theories built out of trivalent vertices, the planar Feynman diagrams are encoded by the geometry of a polytope known as the associahedron. This was extended to massless scalar theories with generic interactions in [17], in which a polytope known as the accordiohedron was introduced. In this article, we propose a broad generalization of this line of research by applying the technology of intersection theory to the accordiohedron polytopes.
We seek to address two open questions in the literature. These are as follows. So far, attention has been restricted to the handling of massless interacting particles. The reason for this is the specific realization of the associahedra as convex polytopes, which puts severe restrictions on the masses of the interacting particles. Here, we extend the positive geometry program to all scalar theories while utilizing a convex realization of accordiohedra that removes this restriction on the mass, and are thus able to treat without any difficulty the interactions between particles of arbitrary mass.
As far as the positive geometry program is concerned, loop effects have been difficult to incorporate. Technical restrictions have forced us to only deal with φ 3 interactions among massless particles at one loop level. We rectify this by proposing a class of accordiohedra which describe interactions between particles in any scalar theory at one loop, in the planar limit. Our construction also allows us to handle different kinds of Feynman diagrams separately, for example, allowing us to treat tadpoles and bubble diagrams distinctly.
Let us briefly discuss what has been done in the paper and the organization of the text. What has been accomplished is a generalization of the positive geometry framework to take care of massive particles as well. This has been done in section II. Following this, in section III we have also described a simple example indicating that the story can be pushed to at least one loop order in arbitrary theories, and pointed out the problems involved in higher loop cases. In doing so, we rectify a problem that has been ignored in the literature, namely the handling of symmetry factors in Feynman diagrams.
In this section, we describe how the twisted intersection theory of accordiohedra can be used to compute scattering amplitudes for generic scalar theories involving massive particles.
Much of the work on positive geometries for scalar theories beyond φ 3 has been done quite recently. For the case of φ 4 and φ p interactions, the relevant papers are [12] and [16] respectively. The formalism for studying generic theories was worked out in [17]. Conspicuously, the analysis in these papers worked specifically for massless particles.
In this section, we illustrate how the positive geometry formalism can accommodate massive particles through a development of the intersection theory governing amplitudes in massive scalar theories with φ 3 + φ 4 interactions. It will turn out that this is the right arena to generalize the study of polytopes controlling these amplitudes for massless particles to massive ones. To do this, we make use of the accordiohedron data first presented in [17] and the method of realizing these as convex polytopes reviewed in [37]. To keep the discussion simple, let us restrict ourselves to the case of six particle scattering. This particular process gives rise to two classes of accordiohedra, namely squares and pentagons. Let us begin with the square, which is obtained from the dissection (13, 46). The accordiohedron vertices are labelled by {(13, 46), (24, 46), (26, 35), (13, 35)} 1 . Accordingly, the codimension one boundaries are labelled by the partial dissections {(13), (46), (26), (35)}. This is illustrated in Figure (1).
The next task is to find a suitable convex embedding of this polytope as a hyperplane arrangement in CP 2 , which is rendered possible due to the generic form of the polytopal realization reviewed in [37]. The hyperplanes for an accordiohedron are obtained by comparing the diagonals labelling the facets with the reference dissection. Starting with the facet (13), we have to compare it to the reference dissection (see Figure (2)).
Figure 1. Two-dimensional accordiohedron for the reference dissection (13, 46). The reference dissection is on the upper right.
We see from Figure (2) that the dissection (13) intersects the reference (13) and forms an inverted Z (see Figure 1 of [38]) and does not intersect (46) at all. Using the rules reviewed in [37], we can write down the facet (13) (denoted by f 1 ) as: (x ê 13 + y ê 46 ) · (ê 13 + 0 ê 46 ) ≤ 1 =⇒ x ≤ 1. (1) Here, we have used a basis for CP 2 with basis vectors ê 13 and ê 46 ; x and y are the respective values of the inhomogeneous coordinates. Using the same rules, we can now write down the facets (46), (35), and (26) (denoted by f 2 , f 3 , and f 4 ) as: Clearly, these hyperplanes bound a square. Now, we can shift our interest to the configuration space, which is the reference manifold with the four hyperplanes above and the hyperplane at infinity removed. (We have not explicitly indicated the hyperplane at infinity, which is formally present. The residue at infinity can be computed by a simple change of variables; it does not however affect our computation of the intersection numbers.) On this space X , we define the twist, We have used the standard notation to describe generalised Mandelstam variables, i.e., X ij is equal to (p i + p i+1 + ... + p j−1 )². These can be visualized as chords of an n-gon for an n-particle scattering process. Consequently, the dissection (13, 46) will translate into the diagram having poles as X 13 and X 46 go on-shell. It can be seen from this picture that there are n(n−1)/2 − n such variables, which is precisely the dimensionality of the space of Mandelstam variables for an n-particle process.
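The counting of planar variables can be checked with a short enumeration. The labelling convention below, chords (i, j) of the n-gon with non-adjacent endpoints, is an assumption matching X ij = (p i + ... + p j−1)²:

```python
from itertools import combinations

def planar_variables(n):
    """Chords (i, j) of an n-gon labelling planar Mandelstam
    variables X_ij; adjacent labels (and the pair (1, n)) correspond
    to on-shell momenta rather than propagators."""
    return [(i, j) for i, j in combinations(range(1, n + 1), 2)
            if j - i >= 2 and not (i == 1 and j == n)]

for n in (4, 5, 6):
    X = planar_variables(n)
    # The count matches n(n-1)/2 - n = n(n-3)/2, i.e. the number
    # of diagonals of the n-gon.
    assert len(X) == n * (n - 1) // 2 - n
print(planar_variables(6))
# [(1, 3), (1, 4), (1, 5), (2, 4), (2, 5), (2, 6), (3, 5), (3, 6), (4, 6)]
```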
Let us also note the meaning of the notation m 2 ij . For purposes of maximal generality, we assume that each channel of the scattering process has a different massive pole: m 2 ij is the squared mass of the particle propagating along the channel (ij). Now, for the case of a theory with a single kind of particle, all the m 2 ij will be equal to m 2 , where m is the mass of the particle. Since we can work out the intersection theory for arbitrary masses, we note that this formalism can be applied for amplitudes such as those in thermal field theories as well, where the m 2 ij can be identified with Matsubara frequencies. Thus, we can deal with a fairly wide class of theories using this framework 3 . With this laid out, we can compute the contribution to the scattering amplitude from this polytope by computing the self-intersection number of the following form, This can be seen from the formula for intersection numbers, which was first used in the context of scattering amplitudes in [2]. In our case, we are interested in the self-intersection number of ϕ (13,46) , for which it is sufficient to note that the intersection number is localized on the vertices of the accordiohedron. Schematically, for a given accordiohedron of dimension n, if the vertices are labelled by V I , the self-intersection number of the corresponding form would be given by, where α i is the weight attached to f i . In our case, an application of this formula to ϕ (13,46) gives, A similar approach can be taken for the pentagon arising from the six particle amplitude in this theory shown in Figure (3). We will focus on the reference dissection (13, 14), which gives rise to a pentagon. The accordiohedron of this reference is labelled by the vertices {(13, 14), (24, 14), (24, 26), (26, 36), (13, 36)}. The facets may be read off from the set of vertices; they are {(13), (36), (26), (24), (14)}. Using the rules for finding the embedding, we have the following facets (denoted by These constraints give rise to the
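The localization formula can be turned into a tiny computation. The vertex/facet incidences and the numerical weights below are illustrative placeholders, not the actual data of the square accordiohedron in Figure (1); the sketch only demonstrates how formula (6) assembles the self-intersection number from the facet weights α i.

```python
from fractions import Fraction
from math import prod

def self_intersection(vertices, weights):
    """Schematic version of formula (6): the self-intersection number
    localizes on the vertices V_I, each contributing the product of
    1/alpha_i over the facets f_i meeting at that vertex."""
    return sum(
        prod((Fraction(1, weights[f]) for f in facets), start=Fraction(1))
        for facets in vertices
    )

# Placeholder data: a square with facets A, B, C, D and integer
# weights alpha; each vertex is the intersection of two adjacent
# facets.  (NOT the actual incidences/weights of the accordiohedron.)
weights = {"A": 2, "B": 3, "C": 5, "D": 7}
square = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
print(self_intersection(square, weights))
# 1/6 + 1/15 + 1/35 + 1/14 = 1/3
```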
shaded convex polygon in Figure (3). The kinematical data associated to the amplitude is carried by the twist, which we choose as, to compute the amplitude, which becomes, (10) We have indicated that the twist and form are defined for the particular accordiohedron in question by using the subscript (13, 46) to denote the reference dissection.
These calculations show that arbitrary mass choices can be made perfectly consistent in the polytopes formalism, even though this aspect is not manifest in the conventional embedding in the kinematical space. It is then obvious that the natural arena for massive scalar theories is twisted intersection theory with a careful convex realization of accordiohedra, which allows us to study scalar theories with arbitrary masses.
We note here that for theories in which a number of massive states can be exchanged in the Mandelstam channels, the amplitude will be given by a sum of intersection numbers; no single intersection number can yield the full amplitude. The two-index masses simply provide a general scheme to accommodate any massive pole structure.
III. INCORPORATING LOOP EFFECTS
A proper discussion of loops in scattering amplitudes from the positive geometry viewpoint has been met with some hurdles. For one thing, it has been difficult to make swift progress beyond one-loop Feynman diagrams, due to the technical difficulties in dealing with moduli spaces of genus-two surfaces. To be precise, these surfaces are not known to be tiled by any regular polytope, making the analysis somewhat tricky. Some progress has been reported at genus one, for which the reader can consult [13, 14, 39].
In addition to the general technical issue of looking at moduli spaces, there is a more mundane issue with including loop interactions.Generically, the integrands appearing in Feynman loop diagrams come with symmetry factors, which encode various degeneracies arising from the large number of ways in which contractions can be performed.
Due to these reasons, it may be more efficient to look at specific classes of Feynman diagrams depending on the nature of renormalization and see if these classes can be described in the polytope framework. To be more concrete, let us consider the case of four-particle scattering (in the planar limit) in φ^4 theory. Here, we receive contributions from two classes of diagrams, namely diagrams which cause mass renormalization and diagrams giving rise to coupling constant renormalization.
In order to recast these as intersection numbers, we follow the algorithm that we will now describe. In the field theory limit, which is what we are interested in for the time being, loop interactions are encoded by the complete nodal degeneration of the moduli space M_{g,n}, which is M_{0,n+2g}. Given the 2g auxiliary insertions, denoted by σ_{±,i} with i running from 1 through g, each pair can be sandwiched between a pair of the original insertions as shown in Figure 4. All possible ways of doing this constitute the tiling of the moduli space.
Let us specialize to the case of the four-particle scattering described earlier. Specifically, let the auxiliary points be placed between particles 1 and 2; furthermore, these two insertions are associated with momenta ℓ and −ℓ. If we now look at only the terms giving rise to mass renormalization, we have the two diagrams shown in Figure 4, obtained from the dissections (12) and (+3), where (+3) indicates a diagonal between the vertex σ+ and 3. Using these dissections, the technology of accordiohedra and intersection theory may be applied to obtain the stripped integrand, namely the integrand with the loop momentum stripped.
We first find the accordiohedra for the two dissections. For (12), the only compatible dissection is itself.
This gives an open accordiohedron, in which the second boundary is pushed to infinity. However, for (+3), the accordiohedron is {(12), (+3)}; thus, the weights are 0 and 1, respectively. We can realize this as CP^1 − {0, 1, ∞} with a suitable twist, where m and µ are the masses of the virtual particles flowing through the respective channels and the hyperplane at infinity has been indicated. Now, the self intersection number of ϕ_(+3) = d ln(x/(x−1)) gives the stripped integrand; once the loop momentum is reintroduced, we obtain the correct loop integrand. Indeed, this can be absorbed as a renormalization of the mass after all channels are taken into account; for this, of course, we have to analytically continue past the mass shell, which intersection theory does not preclude. Extending this beyond one loop faces a technical issue, namely that stripping away all the loop momenta ℓ_i is not generically possible, due in large part to the fact that Riemann surfaces of genus g ≥ 2 can degenerate in very complicated ways to give rise to nodal Riemann spheres. The extension of the results obtained here to higher loop order remains an interesting open problem.
IV. GENERIC INTERACTIONS
In this section, we briefly describe how the procedure developed above may be applied to generic theories. Let us first note that the main object of importance is the so-called accordiohedron, constructed out of a given set of dissections, which label a particular scattering process. Most importantly, these scattering processes can be arbitrarily complicated, so long as the dissections are properly classified and treated appropriately.
Consider for example the rather complicated kinds of polytopes considered in [16], in which the accordiohedra for arbitrary φ^p interactions were obtained. Here, dissections of (p + n(p−2))-gons into p-gons label the collection of all planar Feynman diagrams in an n-particle scattering process. Accordingly, the collection of these dissections may be used to obtain the corresponding accordiohedra, which may then be realized as convex polytopes using the methods employed here, as reviewed in [37].
At the same time, we must also bear in mind that there is a practical hurdle to all of this. Leaving aside the computationally intensive aspect, we also remind ourselves that accordiohedra are not generically unique, and a number of distinct accordiohedra usually need to be appropriately weighted and resummed in order to obtain the final amplitude. In our case, this entails appropriately weighting the corresponding twisted intersection numbers.
From this discussion, the takeaway is simply that the formalism itself can be applied rather straightforwardly, even if it is cumbersome, so the real roadblock is to ensure that a self-consistent collection of weights can be obtained. Indeed, determining whether or not these weights can be found consistently was an important aspect of the work that led to [17], with further progress discussed in [16]. In all the cases considered so far, the weights can be determined consistently. Furthermore, in [17], it was found that there are at most as many equations determining the weights as there are weights, implying that at least one self-consistent solution may be found.
To conclude this section, we remark that the previous points indicate that the procedure outlined in this paper can be carried out for arbitrarily complicated interactions, which although technically challenging at higher points, will always be possible in principle.
V. DISCUSSION
In this article, we have developed a framework to handle interactions among scalars in the planar limit, which may be arbitrarily complicated, from the point of view of twisted intersection theory. Furthermore, we have noted that the formalism presented circumvents some of the restrictions placed on more traditional amplituhedron methods, chief among which is the restriction to massless particles. The convex embedding allows for arbitrary choices of mass as well as moving off the mass shell. Among other things, this allows us to treat tadpoles and bubble diagrams with relative ease. Furthermore, we have been able to bring loop amplitudes, at least up to the one-loop level, into the discussion as well, while taking care of symmetry factors.
It seems that some aspects of this work can be readily extended. Firstly, in order to keep track of symmetry factors at the loop level, we have by hand restricted to specific subsets of dissections giving rise to loop diagrams according to the nature of renormalization (e.g., mass renormalization and coupling constant renormalization in φ^4 are treated separately). It remains to be seen whether the symmetry factors and all loop diagrams can be consistently reconciled with one another in the polytopes picture. This seems unlikely, but will surely constitute an interesting future investigation.
Secondly, it may be interesting to extend our analysis past the realm of scalar theories into richer domains, such as effective field theories (EFT).Historically, the CHY formalism has provided ample insights into EFTs which can be obtained by dimensional reduction of gravity and Yang-Mills.Now, the technology developed here to understand more generic vertices might give us room to look at more exotic EFTs.This is a long-term goal that we hope to pursue in the future.
Figure 2. The comparison of the dissection (13) (denoted with a dashed line) with the reference (13, 46) (denoted with bold red lines).
Figure 4. The propagator corrections in the φ^4 theory at one loop.
Design of Improved BP Decoders and Corresponding LT Code Degree Distribution for AWGN Channels
This paper presents the performance of a hard decision belief propagation (HDBP) decoder used for Luby transform (LT) codes over additive white Gaussian noise channels; subsequently, three improved HDBP decoders are proposed. We first analyze the performance improvement of the sorted ripple and the delayed decoding process in a HDBP decoder; subsequently, we propose ripple-sorted belief propagation (RSBP) as well as ripple-sorted and delayed belief propagation (RSDBP) decoders to improve the bit error rate (BER). Based on the analysis of the distribution of error encoded symbols, we propose a ripple-sorted and threshold-based belief propagation (RSTBP) decoder, which deletes low-reliability encoded symbols, to further improve the BER. Degree distribution significantly affects the performance of LT codes; therefore, we propose a method for designing optimal degree distributions for the proposed decoders. Through simulation results, we demonstrate that the proposed RSBP and RSDBP decoders provide significantly better BER performance than the HDBP decoder. RSDBP and RSTBP combined with the proposed degree distributions outperformed state-of-the-art degree distributions in terms of the number of encoded symbols required to recover an input symbol correctly (NERRIC) and the frame error rate (FER). For a hybrid decoder formulated by combining RSDBP with a soft decision belief propagation decoder, the proposed degree distribution outperforms the other degree distributions in terms of decoding complexity.
Introduction
The Luby transform (LT) codes proposed in [1] are the first practical fountain codes that perform well for reliable communications over a binary erasure channel (BEC). Successful hard decision belief propagation (HDBP) decoding is possible when (1 + ε)k encoded symbols are available, where ε is the decoding overhead. With the advantage of being rateless, LT codes have been introduced in broadcast services and noisy channels [2]. The performance of LT codes over additive white Gaussian noise (AWGN) channels has been investigated in [3]. To improve decoding performance, soft information is used in a soft decision belief propagation (SDBP) decoder, which is used as the decoding algorithm over noisy channels [4].
Different strategies have been proposed to improve the performance of LT codes over AWGN channels. A Gauss-Jordan-elimination-assisted belief propagation (BP) decoder was proposed to address the premature termination of BP decoding [5]; however, it is only practical for short LT codes. Generally, an SDBP decoder begins when all encoded symbols are available; therefore, in greedy spreading serial decoding, encoded symbols are processed at once, and messages are propagated greedily to improve the convergence speed [6]. However, the increase in decoding complexity was demonstrated in [5, 6]. A cross-level decoding scheme that combines LT codes with low-density parity check (LDPC) codes was proposed [7]; although this method provided an effective decoding scheme, it required additional bit decoding from the LDPC code, thereby increasing the decoding complexity. The piggybacking BP decoding algorithm, which decreases the decoding overhead and decoding delay, was proposed for repeated accumulated (RA) rateless codes [8]; however, it is only useful for RA rateless codes. A parallel soft iterative decoding algorithm was proposed for satellite systems [9]; similar to the study in [7], it is only effective when combining LDPC codes with LT codes in the physical layer. A low-complexity BP decoder was proposed to improve performance by deleting low-reliability symbols at the cost of a slight transmission-efficiency loss [10]. The BP-based algorithm was combined with the log likelihood ratio- (LLR-) based adaptive demodulation (ADM) algorithm to further reduce the decoding complexity [11]. The maximum a posteriori probability-based ADM algorithm was proposed to improve performance by discarding incorrect bits [12]. An adaptive decoding algorithm was proposed to reduce the decoding complexity by reducing the number of active check nodes [13], which degraded the performance of LT codes.
In [10-13], the decoding complexity was reduced at the expense of increased overhead because unreliable symbols were deleted. The trade-off between performance and decoding complexity was analyzed in [14]. Reducing the decoding complexity is important for the practicability of LT codes over noisy channels; however, the decoding complexity of the SDBP decoder remains high.
Several degree distributions have been proposed for LT codes over AWGN channels. An optimization process was formulated to design a new degree distribution, which improves the performance of LT codes over AWGN channels [15]. Three types of check-node degree distributions were proposed to improve the performance of systematic LT codes over AWGN channels [16]. A novel optimization model was proposed to design degree distributions over AWGN multiple access channels [17]. A ripple-based design of the degree distribution for AWGN channels was proposed in [18]. However, designing a good degree distribution and improving the performance of HDBP decoding over noisy channels remain open problems.
Compared with SDBP decoding, HDBP decoding significantly reduces the decoding complexity, which is extremely important for battery-powered equipment. The use of HDBP decoding can effectively reduce the decoding complexity of a hybrid decoding scheme in which SDBP decoding is invoked only when HDBP decoding fails. Herein, the performance of HDBP decoding is analyzed, and improved HDBP decoders and their corresponding degree distributions are proposed. First, we investigate the ripple size throughout the decoding process and argue that sorting encoded symbols in the ripple improves decoding performance; subsequently, we propose a ripple-sorted BP (RSBP) decoder. Based on the RSBP decoder, we discovered that the decoding performance improves when more encoded symbols are available before decoding starts; hence, we propose an improved BP decoder known as a ripple-sorted and delayed BP (RSDBP) decoder. Based on the analysis of the distribution of error encoded symbols, we argue that low-reliability encoded symbols should be deleted to improve decoding performance and propose a ripple-sorted and threshold-based BP (RSTBP) decoder. Second, by analyzing the random walk model, we propose a method to generate a set of candidate ripple-size evolutions. A ripple-based design of the degree distribution known as the generalised degree distribution algorithm (GDDA) is used to generate the degree distribution [19]. Based on the Monte Carlo method, the optimal degree distribution for a specific BP decoder is achieved. Simulation results demonstrated that our proposed RSBP and RSDBP decoders outperformed the BP decoder in terms of bit error rate (BER) performance. Additionally, RSDBP and RSTBP combined with the proposed degree distributions outperformed state-of-the-art degree distributions in terms of the number of encoded symbols required to recover an input symbol correctly (NERRIC) and the frame error rate (FER).
Algorithm 1: Pseudocode of LT encoding
Input: input symbols X = (x_1, x_2, ..., x_k), degree distribution Ω(d)
Output: an encoded symbol c
1: initialize an encoded symbol c = 0
2: select a degree d from [1, k] according to Ω(d)
3: select d different input symbols from X and add them to a neighbor set V
4: for each input symbol v in V do
5:   c = c XOR v
6: end for
7: return c

Algorithm 2: Pseudocode of hard decision BP decoding
Input: encoded symbols received from channels
Output: recovered input symbols X̂
1: initialize ripple R as an empty queue
2: initialize recovered input symbols X̂ as an array
3: initialize waiting encoded symbols Y as an array
4: while sizeof(X̂) < k do
5:   receive an encoded symbol y from channels
6:   XORs(X̂, y)
7:   degree(y) == 1 ? push(R, y) : push(Y, y)
8:   while sizeof(R) > 0 do
9:     dequeue an input symbol x from R
10:    push(X̂, x)
11:    for each encoded symbol y in Y with x as a neighbor do
12:      y = y XOR x; remove x from the neighbors of y
13:      if degree(y) == 1 then move y from Y to R
14:    end for
15:  end while
16: end while
17: return X̂

(Wireless Communications and Mobile Computing)

For the hybrid decoder formulated by combining RSBP with an SDBP decoder, the proposed degree distribution outperformed state-of-the-art degree distributions in terms of decoding complexity. The remainder of this paper is organised as follows. In Section 2, a review of the system model and the encoding and decoding of LT codes is provided. In Section 3, the performance of HDBP decoding is analyzed. In Section 4, our RSBP, RSDBP, and RSTBP decoders are presented. In Section 5, the performance of the proposed decoders is analyzed. In Section 6, a method to generate the optimal degree distribution for a specific BP decoder is proposed. In Section 7, our experimental design is outlined, and the efficiency of the proposed decoders and degree distributions is demonstrated by experimental results. Finally, our conclusions are summarised.
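The LT encoding procedure described above (Algorithm 1) can be sketched in a few lines of Python. This is an illustrative implementation rather than the authors' code, and the degree-distribution format (a list of degree/probability pairs) is our assumption:

```python
import random

def lt_encode(X, degree_dist, rng=random):
    """Generate one LT encoded symbol from input bits X (Algorithm 1).

    degree_dist: list of (degree, probability) pairs summing to 1.
    Returns (c, neighbors): the XOR of d distinct input symbols and the
    chosen indices, so a decoder can rebuild the Tanner graph.
    """
    degrees, probs = zip(*degree_dist)
    d = rng.choices(degrees, weights=probs, k=1)[0]   # sample degree from Ω(d)
    neighbors = rng.sample(range(len(X)), d)          # d distinct input symbols
    c = 0
    for v in neighbors:
        c ^= X[v]                                     # c = c XOR x_v
    return c, neighbors
```

In practice, `degree_dist` would be one of the distributions designed in Section 6 (e.g., a toy distribution such as `[(1, 0.2), (2, 0.5), (3, 0.3)]` for testing).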
Background
2.1. System Model. Information messages must be transmitted from the source to the destination over AWGN channels.
Messages are partitioned into blocks, and each block is partitioned into symbols. The input symbols of the LT codes are denoted as X = (x_1, x_2, ..., x_k), which is a combination of original symbols and a cyclic redundancy check (CRC). Typically, a single input symbol can be one bit or even a packet; for simplicity, one bit is regarded as an input symbol in this study. At the source, a stream of encoded symbols C = (c_1, c_2, ..., c_N, ...) is generated from the k input symbols. The encoded symbol c_j is modulated by binary phase shift keying and transmitted to the destination independently as s_j. At the destination, the output of the AWGN channel for each symbol s_j is r_j = s_j + n_j, where n_j ~ N(0, N_0/2), with N(·) being the normal distribution. At the destination, (1 + ε)k encoded symbols are received to recover the k input symbols. Generally, a soft decision decoder is used over noisy channels. In this study, HDBP decoding was concatenated with SDBP decoding, which can reduce decoding complexity. The encoded symbols with LLR were passed to HDBP decoding, and the output of the decoding was verified by a CRC. Decoding is successful if the check passes; otherwise, SDBP decoding is invoked.
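As a sketch of the channel model just described, the following snippet BPSK-modulates a bit stream, adds Gaussian noise of variance N_0/2, and returns the per-symbol LLRs consumed by the decoders; the mapping 0 → +1 and unit symbol energy are illustrative assumptions:

```python
import math
import random

def awgn_llr(bits, snr_db, rng=random):
    """BPSK over AWGN: return per-symbol LLRs L(y) = 2r/sigma^2.

    Assumes Es = 1 and snr_db = Es/N0 in dB, so the noise variance
    is sigma^2 = N0/2 = 1/(2*SNR).
    """
    snr = 10 ** (snr_db / 10)
    sigma2 = 1 / (2 * snr)
    llrs = []
    for b in bits:
        s = 1.0 if b == 0 else -1.0          # BPSK: bit 0 -> +1, bit 1 -> -1
        r = s + rng.gauss(0, math.sqrt(sigma2))
        llrs.append(2 * r / sigma2)          # channel LLR for Gaussian noise
    return llrs
```

A positive LLR then corresponds to a hard decision of bit 0, and |L(y)| serves as the reliability used throughout the ripple-sorting discussion below.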
2.2. Hard Decision BP Decoding. BP decoding is widely used for LT codes and is implemented in different variants for different channels. HDBP decoding is used over the BEC, whereas SDBP decoding is used over noisy channels. We discovered that HDBP decoding concatenated with SDBP decoding can also be used over noisy channels, which is analyzed herein. In HDBP decoding, the encoded symbols participating in the decoding process are treated as correct symbols; hence, simple inverse XOR operations are performed. The pseudocode of HDBP decoding is shown in Algorithm 2, where decoding is performed at once. Decoding is completed when sufficient encoded symbols are received.
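A minimal executable version of the peeling process in Algorithm 2 is given below; it operates on hard decisions only, with the Tanner graph supplied explicitly, and the variable names are ours rather than the paper's:

```python
from collections import deque

def hdbp_decode(k, encoded):
    """Hard decision BP (peeling) decoding over an explicit Tanner graph.

    encoded: list of [value, neighbor_indices] pairs whose hard decisions
    are trusted. Returns the list of recovered input symbols; an entry is
    None if that symbol could not be recovered.
    """
    X = [None] * k
    ripple = deque()          # (value, input index) from degree-one symbols
    waiting = []              # higher-degree encoded symbols
    for val, nbrs in encoded:
        nbrs = set(nbrs)
        if len(nbrs) == 1:
            ripple.append((val, nbrs.pop()))
        else:
            waiting.append([val, nbrs])
    while ripple:
        val, i = ripple.popleft()
        if X[i] is not None:
            continue          # already recovered; this symbol is redundant
        X[i] = val
        for sym in waiting:   # peel x_i out of every waiting symbol
            if i in sym[1]:
                sym[0] ^= val
                sym[1].discard(i)
                if len(sym[1]) == 1:
                    j = next(iter(sym[1]))
                    ripple.append((sym[0], j))
                    sym[1].clear()   # mark the symbol as consumed
    return X
```

The FIFO ripple here corresponds to the plain `push(R, y)` of Algorithm 2; the RSBP decoder of Section 4 replaces it with a reliability-ordered queue.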
Analysis of Hard Decision BP Decoding
For HDBP decoding, suppose n encoded symbols y_1, y_2, ..., y_n are sufficient to recover k input symbols x_1, x_2, ..., x_k. The relationship between the input symbols and encoded symbols can be expressed by a Tanner graph. For example, the Tanner graph of four input symbols and five encoded symbols is shown in Figure 1.
Generally, ρ(y_1) < ρ(y_2) if |L(y_1)| > |L(y_2)|, and vice versa; that is, a decreasing function f exists such that ρ(y) = f(|L(y)|). Let ρ(x_i) denote the error probability of the input symbol x_i if it is recovered by the encoded symbol y_j, which is shown in formula (4), where N(y_j, x_i) denotes the neighbors of y_j except x_i. In HDBP decoding, input symbols are recovered individually in sequence. In Figure 1, (x_1, x_2, x_3, x_4) is a reasonable sequence of input symbols recovered in decoding, and it is not the only one. For a sequence, we define Q(x_i) as the set of encoded symbols that are the only neighbors of the input symbol x_i at the end of decoding. For the sequence (x_1, x_2, x_3, x_4), we have Q(x_1) = {y_1}, Q(x_2) = {y_2, y_3}, Q(x_3) = {y_4}, and Q(x_4) = {y_5}. We discovered that x_2 can be recovered by both y_2 and y_3. Let Q′(x_i) denote the set of encoded symbols supporting the decoding of x_i. We have Q′(x_2) = {y_1, y_2} if it is recovered by y_2; otherwise, Q′(x_2) = {y_1, y_4, y_3}; therefore, we have formula (5). Let ℚ = {Q′(x_1), Q′(x_2), ..., Q′(x_k)}; the error probability of HDBP decoding is shown in formula (6).
For a given Tanner graph, several different ℚ exist. Our aim is to optimize the supporting set of each input symbol to reduce the error probability of decoding; for example, x_2 should be recovered by a supporting set with a lower error probability. The Tanner graph with LLR values is shown in Figure 2, where the LLRs of y_2 and y_3 are set as -0.1 and 0.2, respectively. As shown in Figure 2(a), the input symbol is incorrect if it is recovered by y_2; it is correct if it is recovered by y_3, as shown in Figure 2(b).

3.2. Improvement in Error Probability. In HDBP decoding, each input symbol is recovered by 1 + ε encoded symbols on average; in other words, a fraction ε of the input symbols will be recovered by two encoded symbols. This is a valid assumption because the probability of an input symbol being recovered by more than two encoded symbols is small. Consider the case in which both encoded symbols y_1 and y_2 have only the input symbol x_i as a neighbor at the end of decoding. If x_i is recovered by y_1 or y_2 at random, its error probability is given by formula (7), ρ(x_i) = (ρ(y_1) + ρ(y_2))/2. Otherwise, if x_i is recovered by the encoded symbol with the lower error probability, its error probability is given by formula (8), ρ(x_i) = min(ρ(y_1), ρ(y_2)).
Therefore, the error probability of decoding is reduced if the encoded symbol with a lower error probability is selected to recover the corresponding input symbol.
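The gain from choosing the more reliable of the two candidate symbols can be checked numerically. In the sketch below the two error probabilities are drawn uniformly at random, which is an illustrative assumption rather than the paper's channel-derived distribution:

```python
import random

def compare_selection(trials=100000, rng=None):
    """Monte Carlo comparison of the two recovery rules:
    random pick (average of the two error probabilities) versus
    picking the symbol with the lower error probability (their minimum)."""
    rng = rng or random.Random(0)
    err_random = 0.0
    err_best = 0.0
    for _ in range(trials):
        p1, p2 = rng.random(), rng.random()
        err_random += (p1 + p2) / 2     # expected error under random choice
        err_best += min(p1, p2)         # error under best-symbol choice
    return err_random / trials, err_best / trials
```

For uniform draws, the random rule averages to 1/2 while the best-symbol rule averages to 1/3, illustrating the strict reduction in error probability.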
Improved Hard Decision BP Decoders
As shown, the error probability of an input symbol can be reduced by selecting the supporting set with a lower error probability. In this section, we propose three improved HDBP decoders to reduce the error probability of decoding.
4.1. Ripple-Sorted BP Decoder. The structure of the HDBP decoder is shown in Figure 3. First, degree-one encoded symbols are added to the ripple to start the decoding. The symbols in the ripple are processed individually until the ripple is empty. Two methods can be used to reduce the error probability of recovered symbols in the decoding process. The first one is to sort symbols in the ripple. The second one is to sort symbols in the waiting array.
Lemma 1. Sorting the symbols in the waiting array can be replaced by sorting the symbols in the ripple, and both the RSBP decoder and the waiting-array-sorted BP (WSBP) decoder reduce the error probability of decoding.

Proof. The symbols released in each step depend only on the symbol being processed and the symbols in the waiting array; they are irrelevant to the order of the waiting array. The released symbols are sorted in the WSBP decoder, whereas in the RSBP decoder the released symbols are sorted after being added to the ripple. Assume that two symbols y_1 and y_2 are released simultaneously and, without loss of generality, |L(y_1)| > |L(y_2)|. If the remaining neighbor of both y_1 and y_2 is x_1, then x_1 is recovered by y_1, the more reliable of the two, under either ordering. Hence, Lemma 1 is proven.

Lemma 2. The error probability of the RSBP decoder is less than or equal to that of the WSBP decoder.

Proof. We assume that two symbols y_1 and y_2 exist in the ripple and |L(y_1)| > |L(y_2)|. The remaining neighbors of y_1 and y_2 are x_1 and x_2, respectively. Clearly, the error probability of the symbol released in this step is equal to or greater than the error probability of y_1; therefore, the symbol with minimal error probability should recover the corresponding input symbol in each decoding step. However, the error probability of a symbol released in a later step may be less than the error probability of y_2: if y_3 with |L(y_3)| > |L(y_2)| is released and added to the ripple when y_1 is processed, and the remaining neighbor of y_3 is x_2, then the input symbol x_2 should be recovered by y_3. In this case, the performance of the RSBP decoder is better than that of the WSBP decoder. Hence, Lemma 2 is proven.
The design of the ripple size evolution assumes that the ripple size should remain greater than one throughout the decoding process. Therefore, in theory, the performance of the RSBP decoder is better than that of the HDBP decoder. To analyze the performance improvement, the ripple size and waiting array size were analyzed by Monte Carlo simulations. The result is shown in Figure 4 with k = 500 and the degree distribution in [18], where the average ripple size and average waiting array size in each decoding step are calculated over 100000 simulations. The percentage of ripple sizes greater than one exceeds 80%, which means that symbols in the ripple can be sorted based on the absolute LLR value. As shown in Figure 4, the waiting array size is large at the beginning of decoding, which means that the probability of y_1 and y_2 being released in the same decoding step is high. Additionally, we discovered that the number of symbols in the waiting array is larger than that in the ripple except when n_r ≥ 499. Therefore, sorting symbols in the ripple is more efficient than sorting symbols in the waiting array. The proposed RSBP decoder can be implemented by replacing push(R, y) with pushAndSort(R, y) in Algorithm 2.
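The pushAndSort operation can be realized with a priority queue keyed on the negated absolute LLR, so that the most reliable degree-one symbol is always dequeued first. This is our sketch of the data structure, not the authors' implementation:

```python
import heapq

def push_and_sort(ripple, llr, index):
    """Insert a released symbol into the ripple ordered by reliability.

    A max-heap on |LLR| (stored as a min-heap on -|LLR|) replaces the
    plain FIFO queue of Algorithm 2, realizing pushAndSort(R, y)."""
    heapq.heappush(ripple, (-abs(llr), index))

def pop_most_reliable(ripple):
    """Dequeue the symbol with the largest |LLR| from the ripple.
    Returns (|LLR|, input symbol index)."""
    neg_abs_llr, index = heapq.heappop(ripple)
    return -neg_abs_llr, index
```

Each push and pop costs O(log |R|), matching the sorting complexity discussed in Section 5.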
4.2. Ripple-Sorted and Delayed BP Decoder. For HDBP decoding, the number of symbols released in each step increased with the size of the waiting array; therefore, the performance increased with the size of the waiting array. For example, as shown in Figure 5(a), the input symbol x_2 recovered by y_2 is incorrect because y_2 is incorrect; as a result of error propagation, x_3 and x_4 are also incorrect. In Figure 5(b), the decoding process is delayed until sufficient encoded symbols are available. The input symbol x_4 recovered by y_5 is correct because y_5 is correct; therefore, x_3 and x_4 are correct as well. Consequently, the encoded symbol y_2 with a high error probability is redundant.

Figure 12: σ as a function of SNR for k = 500.

Lemma 3. The more encoded symbols are available before decoding starts, the better the BER performance of decoding.
Proof. We assume that the input symbol x can be recovered by one of y_1, y_2 with |L(y_1)| < |L(y_2)|. If y_1 is processed before y_2 is available, then the error probability of x is reduced if decoding is delayed until y_2 is available. If more encoded symbols are available before decoding starts, the error probability of more input symbols will be reduced. Hence, Lemma 3 is proven.
Based on Lemma 3, we propose our RSDBP decoder, which delays the start of decoding until k(1 + ε) encoded symbols are received. The parameter ε depends on k and the degree distribution; for example, ε is set as 0.16 for k = 256 with the degree distribution in [20]. The proposed RSDBP decoder can be implemented by starting the RSBP decoding process only after sufficient encoded symbols have been added to the waiting array.
4.3. Ripple-Sorted and Threshold-Based BP Decoder. Let P denote the ratio of error symbols, which increases as the SNR decreases. The BER performance of the RSDBP decoder decreases as P increases. To reduce the probability of incorrect encoded symbols participating in decoding, the encoded symbols with a high error probability should be deleted. The distribution of error symbols can be analyzed using Monte Carlo simulations. For example, 2k (k = 500) encoded symbols are generated and sorted by error probability, denoted as y_1, y_2, ..., y_2k with |L(y_1)| ≥ |L(y_2)| ≥ ... ≥ |L(y_2k)|. We segmented y_1, y_2, ..., y_2k into 100 segments. The ratio of error encoded symbols in each segment is shown in Figure 6. As shown, only a small number of error encoded symbols exist, and the ratio of error encoded symbols increases with the segment sequence. Therefore, most error encoded symbols can be deleted from decoding if the tails of the sorted encoded symbols are deleted.
For HDBP decoding, the received encoded symbol y_j will be deleted if |L(y_j)| < t, where t denotes the threshold; otherwise, it will participate in decoding. Let δ denote the probability that an error symbol will be deleted. For a given deletion probability δ, the threshold t can be calculated by Monte Carlo simulations, as shown in Algorithm 3.
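Since Algorithm 3 itself is not reproduced above, the following is our Monte Carlo sketch of how such a threshold can be obtained: simulate error symbols over the BPSK/AWGN model of Section 2 and take the δ-quantile of their absolute LLRs. The sample size and the all-zeros transmission are illustrative assumptions:

```python
import math
import random

def deletion_threshold(delta, snr_db, n_errors=20000, rng=None):
    """Estimate the threshold t such that a fraction delta of the *error*
    encoded symbols satisfy |L(y)| < t (and would therefore be deleted).

    Sketch: transmit +1 (bit 0) over BPSK/AWGN with Es = 1 and keep only
    the symbols whose hard decision is wrong."""
    rng = rng or random.Random(0)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))
    error_llrs = []
    while len(error_llrs) < n_errors:
        r = 1.0 + rng.gauss(0, sigma)
        if r < 0:                              # hard decision flips the bit
            error_llrs.append(abs(2 * r / sigma ** 2))
    error_llrs.sort()
    idx = max(int(delta * n_errors) - 1, 0)    # delta-quantile of |LLR|
    return error_llrs[idx]
```

A larger δ yields a larger t, deleting more symbols; this is exactly the trade-off between BER and overhead adjusted by δ in the text.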
Let ω denote the ratio of encoded symbols deleted in decoding, which depends on δ. Figure 7 shows the ratio of deletion ω as a function of δ. As shown, the ratio of deletion decreases as the SNR increases, whereas it increases with δ.
Therefore, the trade-off between the BER performance and overhead can be adjusted by δ.
Based on the analysis of symbol deletion, we propose a new decoder named the RSTBP decoder, in which encoded symbols with higher error probabilities are deleted from decoding. The proposed RSTBP decoder can be implemented by deleting encoded symbols that exceed the threshold.
Analysis of the Improved BP Decoder
Lemma 4. The additional sorting complexity of the RSBP decoder is at most O((1 + ε)k log R(k)), where R(k) is the initial ripple size.

Proof. The ripple size decreases as the decoding process proceeds. Initially, R(k) symbols are released and sorted, and the complexity is O(R(k) log R(k)). The remaining (1 + ε)k − R(k) symbols will be inserted into the ripple, and the complexity of these insertions is less than O(((1 + ε)k − R(k)) log R(k)). Hence, Lemma 4 is proven.
The computational complexities of the four decoders are shown in Table 1, where d̄ = (1/k) Σ_{d=1}^{k} d Ω(d) and R_max = max(R_k, R_{k−1}, ..., R_1), and the numerical results obtained by simulations are shown in Table 2. It is noteworthy that the number of XOR operations depends only on the average degree, and the number of SORT operations in RSDBP is the same as that in RSTBP. The number of SORT operations in RSBP is slightly less than that in RSDBP because a small number of input symbols have already been recovered before (1 + ε)k encoded symbols are available.
Lemma 5. For given P and δ, the number of error encoded symbols participating in decoding is N_e = NP(1 − δ), where N is the number of encoded symbols received.

Proof. Let N denote the number of encoded symbols received; we have N(1 − δP) = (1 + ε)k. The number of error encoded symbols participating in decoding is N_e = NP(1 − δ). Hence, Lemma 5 is proven.
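The bookkeeping in Lemma 5 is easy to verify numerically; the parameter values in the usage example below are illustrative:

```python
def symbols_needed(k, eps, P, delta):
    """Worked check of Lemma 5.

    With deletion probability delta applied to the fraction P of error
    symbols, N received symbols leave N*(1 - delta*P) survivors, so
    N = (1 + eps)*k / (1 - delta*P) symbols must be received, and
    N_e = N*P*(1 - delta) error symbols still enter the decoder."""
    N = (1 + eps) * k / (1 - delta * P)
    N_e = N * P * (1 - delta)
    return N, N_e
```

For example, with k = 500, ε = 0.16, P = 0.05, and δ = 0.8, only a fifth of the error symbols survive deletion, at the cost of receiving slightly more than (1 + ε)k = 580 symbols.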
Lemma 6. More than εN_e²/(2(1 + ε)k) error encoded symbols will be treated as redundant symbols.

Proof. There exist ε pairs of encoded symbols. Since N_e is small compared with (1 + ε)k, the probability of an error encoded symbol pairing with a correct encoded symbol is εN_e/((1 + ε)k). Let (y_1, y_2) with |y_1| ≤ |y_2| denote a pair of encoded symbols, without loss of generality. If one of y_1 and y_2 is an error encoded symbol, the probability that y_1 is the error encoded symbol exceeds 0.5. Therefore, more than εN_e²/(2(1 + ε)k) error encoded symbols will be considered redundant. Hence, Lemma 6 is proven.
Definition 7 (error propagation probability). The neighbors of an encoded symbol are selected randomly; therefore, the probability ρ(d) that an encoded symbol with degree d is affected by an error input symbol satisfies the stated constraint. We observed that the error propagation probability decreased with the average degree; for example, no error propagation was observed when the average degree was one. Hence, a trade-off occurs between the error propagation probability and overhead.
Definition 8 (number of affected encoded symbols). Let d_1, d_2, ⋯, d_L denote the degrees of the L encoding symbols that will recover L input symbols. The number of encoded symbols directly affected by an error symbol in step L satisfies the corresponding constraint.

Lemma 9 (total number of affected encoded symbols). Let l_1, l_2, ⋯, l_{N(L)} denote the steps affected by the error symbol in steps k − L. The total number of encoded symbols affected by an error symbol satisfies the corresponding constraint.

Proof. Compared with k, the average degree of the encoded symbols is small; hence, ρ(d) is relatively small, and the number of encoded symbols affected by an error symbol, directly or indirectly, is small. Therefore, the double-counting problem can be disregarded, and the lemma is proven.
To validate the performance of LT codes in AWGN channels, we propose a new indicator known as NERRIC, defined in terms of τ, the BER of decoding.
The Optimal Degree Distribution for a Specific BP Decoder
Studies regarding the design of an optimal degree distribution for a specific BP decoder over AWGN channels are limited, as previously discussed. Herein, a method for designing a degree distribution for a specific goal is proposed. The ripple size evolution is important for the design of a degree distribution. A random walk was used to model the number of encoded symbols released in each step, and we assumed that this number follows a Poisson distribution.
Lemma 10 (symbol release). Let φ(m) denote the probability that m encoded symbols are released in a single step; φ(m) satisfies the corresponding constraint.

Proof. The number of encoded symbols released follows a Poisson distribution with expectation one. Hence, Lemma 10 is proven.
Let m_max denote the maximum number of encoded symbols released in a single decoding step. For a fixed m_max, Monte Carlo simulations can be used to generate plenty of ripple size evolutions: each ripple addition is modeled as a random walk with probability distribution φ(m). The ripple size evolution is modeled in terms of R_L and σ_L, the average ripple size and the variance of the simulation results in decoding step L, respectively, and a parameter c that adjusts the ripple size evolution. For m_max = 3, the ripple size as a function of the decoding step for different c is shown in Figure 8. It is clear that the expected ripple size evolution can be generated by carefully adjusting the parameter c. Given the ripple size evolution, the degree distribution can be calculated using the GDDA. The degree distribution is obtained based on formula (18),
where RSE(m_max, c) denotes the ripple size evolution determined by the parameters m_max and c.
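The Monte Carlo procedure described above can be sketched as follows. This is a simplified reconstruction under stated assumptions: the initial ripple size, the truncation of the Poisson(1) release distribution at m_max, and the handling of an empty ripple are our choices, not taken from the paper.

```python
import math
import random

def trunc_poisson_pmf(m_max, lam=1.0):
    """Poisson(lam) pmf truncated to {0, ..., m_max} and renormalized."""
    w = [math.exp(-lam) * lam**m / math.factorial(m) for m in range(m_max + 1)]
    s = sum(w)
    return [x / s for x in w]

def ripple_size_evolution(k, eps, m_max, c, trials=500, seed=7):
    """Return the modeled ripple size R_L + c*sigma_L for each decoding step.

    Each step consumes one ripple symbol and releases a truncated-Poisson(1)
    number of new symbols; trajectory statistics over many trials give the
    per-step mean R_L and spread sigma_L (standard deviation here).
    """
    pmf = trunc_poisson_pmf(m_max)
    support = list(range(m_max + 1))
    rng = random.Random(seed)
    traj = [[] for _ in range(k)]
    for _ in range(trials):
        r = max(int(eps * k), 1)                  # assumed initial ripple size
        for step in range(k):
            r = max(r + rng.choices(support, weights=pmf)[0] - 1, 0)
            traj[step].append(r)                  # a real decoder aborts at r = 0
    means = [sum(t) / trials for t in traj]
    stds = [(sum((x - mu) ** 2 for x in t) / trials) ** 0.5
            for t, mu in zip(traj, means)]
    return [mu + c * s for mu, s in zip(means, stds)]
```

Sweeping c then traces out the family of evolutions shown in Figure 8, from which the GDDA can derive a degree distribution.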
Let Ω denote the degree distribution designed to minimize the average overhead, and let Ω′ denote another well-designed degree distribution that decreases the average degree at the expense of increasing the average overhead. The average overhead and the average degree as a function of the parameter c are shown in Figure 9. The BER decreased as the average degree decreased for two reasons. First, the more encoded symbols participate in decoding, the more encoded symbols recover the same input symbol, which decreases the error probability of decoding. Second, the error propagation decreases with the average degree. Therefore, the BER is in conflict with the average overhead. Additionally, the average degree directly determines the number of operations during the encoding and decoding processes.
Let G(Ω) denote the objective function of the degree distribution Ω. Finding the optimal parameters (m_max, c) can then be cast as a pure optimization problem, with the variable c ranging over [−1, 1] and m_max over [3, √k]. Generally, for a fixed m_max, a lower value of c is desirable for decreasing both the average degree and the BER, at the expense of increasing the average overhead.
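A plain grid search over the stated ranges is enough for this small optimization problem. In the sketch below the objective G is supplied by the caller; in practice it would run the GDDA and a decoder simulation, which we do not reproduce here.

```python
import math

def optimize_parameters(G, k, c_points=21):
    """Grid search for (m_max, c) minimizing the objective G(m_max, c),
    with m_max an integer in [3, sqrt(k)] and c on a uniform grid in [-1, 1]."""
    best_val, best_params = float("inf"), None
    for m_max in range(3, math.isqrt(k) + 1):
        for i in range(c_points):
            c = -1.0 + 2.0 * i / (c_points - 1)   # uniform grid on [-1, 1]
            val = G(m_max, c)
            if val < best_val:
                best_val, best_params = val, (m_max, c)
    return best_params, best_val
```

For example, with a toy objective G(m, c) = (m − 4)² + (c − 0.5)², the search returns (4, 0.5).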
Numerical Results
In this section, simulation results are provided to validate our study. The decoding algorithms were implemented in C++ and executed on a computer with a Xeon E3-1505M CPU and 16 GB of RAM under Windows 10. The degree distributions proposed in [18, 20] were used in our simulations and are denoted as Φ and Ψ, respectively; our proposed degree distribution is denoted as Ω. The BER as a function of N_b/N_0 is shown in Figure 10. The BERs of the RSBP and RSDBP decoders were better than that of the BP decoder, consistent with our analyses. For example, with k = 500 and N_b/N_0 = 4.0, the BER of the BP decoder was 0.115, whereas the BER of the RSDBP decoder was 0.082. The computational times of BP, RSBP, and RSDBP are shown in Table 3. As shown, the computational times of the three decoders were similar.
The RSDBP decoder combined with the proposed degree distribution Ω was compared with the other decoders. The degree distribution Ω was designed to optimize σ by selecting appropriate m_max and c; the optimal parameters and average degree of Ω are shown in Table 4, whereas the average degrees of Φ and Ψ are shown in Table 5. As shown, the average degree of Ω is smaller than those of the others. Figures 11 and 12 illustrate the NERRIC σ achieved by different decoders and different degree distributions with k = 256 and k = 500, respectively. It is clear that the RSBP and RSDBP decoders outperformed the BP decoder, which is consistent with the theoretical analysis. The improvement decreased as the SNR increased because barely any error encoded symbols were discovered in channels with higher SNRs. Furthermore, RSDBP combined with the proposed degree distribution outperformed the other methods, and the improvement increased with the SNR. For example, with k = 500 and N_b/N_0 = 4.0, the σ of the RSDBP decoder combined with Ψ was 1.241, whereas the σ of RSDBP combined with the proposed degree distribution was 1.217. This is because the optimization goal was to minimize σ, and the probability of error propagation decreased with the average degree. The RSTBP decoder combined with the degree distribution Ω was compared with the other decoders, and the optimal parameters of Ω for different δ and k are listed in Table 6. Figures 13 and 14 show σ as a function of the SNR for k = 256 and k = 500, respectively. As shown in the figures, with σ as the optimization goal, the proposed degree distribution Ω yielded better results than the others for both δ = 0.01 and δ = 0.90. As the SNR increased, the performance of Ω relative to the others improved, because the average degree of Ω was smaller. Furthermore, as δ increased, σ decreased more slowly.
This is because the number of error encoded symbols decreased as the SNR increased, and the number of encoded symbols deleted at δ = 0.01 approached that at δ = 0.90.
In hybrid decoding, the decoding complexity decreased as the FER increased. For the RSDBP decoder, the degree distribution Ω can be tuned to achieve a lower FER at a fixed overhead. The optimal parameters of the degree distribution Ω for different ε and k values are shown in Table 7. Figures 15 and 16 show the FER as a function of the SNR for k = 256 and k = 500, respectively. The proposed optimal degree distribution outperformed the others for different fixed overheads. For instance, with k = 500, ε = 0.2, and N_b/N_0 = 4.0, the FERs of Ψ and Ω were 0.0232 and 0.0138, respectively. This is because a better trade-off between the average overhead and the average degree was achieved, reducing the effect of error propagation.
A hybrid decoder can be formulated by combining the RSDBP and SDBP decoders. Figures 17 and 18 show the decoding time as a function of the SNR for k = 256 and k = 500, respectively. It was observed that Ω outperformed the others in terms of the decoding complexity, as Ω was better than the others in the HDBP decoding stage.
Conclusions
Herein, we first analyzed the improvement of BP decoding obtained by introducing a sorting ripple, delaying the decoding process, and deleting low-reliability symbols. Subsequently, we proposed three improved HDBP decoders, namely, the RSBP, RSDBP, and RSTBP decoders. We demonstrated that both RSBP and RSDBP outperformed BP decoding in terms of NERRIC, although the decoding complexity increased slightly. Compared with the RSDBP decoder, the RSTBP decoder further increased the NERRIC, but the average overhead increased. Furthermore, a ripple size evolution-based design of the optimal degree distribution was proposed. Numerical simulations demonstrated that the proposed degree distribution outperformed the others in terms of both the NERRIC and the FER. The proposed scheme is not limited to AWGN channels and LT codes; it can be readily extended to other noisy channels and to Raptor codes. In future work, the energy consumption of LT codes will be investigated to identify a balance among the FER, the average overhead, and the average degree.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Carbon ion acceleration from thin foil targets irradiated by ultrahigh-contrast, ultraintense laser pulses
In this study, ion acceleration from thin planar target foils irradiated by ultrahigh-contrast (10^10), ultrashort (50 fs) laser pulses focused to intensities of 7 × 10^20 W cm^−2 is investigated experimentally. Target normal sheath acceleration (TNSA) is found to be the dominant ion acceleration mechanism when the target thickness is ⩾50 nm and the laser pulses are linearly polarized. Under these conditions, irradiation at normal incidence is found to produce higher energy ions than oblique incidence at 35° with respect to the target normal. Simulations using one-dimensional (1D) boosted and 2D particle-in-cell codes support the result, showing increased energy coupling efficiency to fast electrons for normal incidence. The effects of target composition and thickness on the acceleration of carbon ions are reported and compared to calculations using analytical models of ion acceleration.
In this paper, we report on an experimental investigation of carbon ion acceleration by the TNSA mechanism using one of the currently available highest power (115 TW), ultrahigh-contrast (10^10), ultrashort pulse (50 fs) laser systems, operating at average intensities of 7 × 10^20 W cm^−2 (on target). This is an order of magnitude higher intensity than typically achieved in previous ion acceleration experiments with similar laser pulse durations and is comparable to intensities achieved using large-scale picosecond-duration laser systems [30]. We chose to investigate acceleration of carbon ions because of the interest in using laser-plasma acceleration schemes as potential future compact sources for carbon ion therapy. We compared ion acceleration for normal and oblique laser incidence angles and measured the scaling of the maximum and total ion energies with target thickness and composition. The results are discussed with reference to one-dimensional (1D) boosted and 2D particle-in-cell (PIC) simulations, and calculations using the analytical models introduced by (i) Schreiber et al [31] and (ii) Andreev et al [32, 33] for ultrahigh-contrast, ultrashort laser irradiation of thin target foils.
The experiment
The experiment was performed using the Astra-Gemini laser at the Rutherford Appleton Laboratory. The laser delivered pulses with duration, τ_L, equal to 50 fs (full-width at half-maximum (FWHM)), with energy, E_L, up to 12 J, at a central wavelength, λ_L, equal to 800 nm. A double plasma mirror system was employed, in which one off-axis parabola (OAP) was used to focus the laser pulses onto the plasma mirrors and a second identical OAP was used to re-collimate the expanding beam, as illustrated schematically in figure 1. Use of the double plasma mirror arrangement enhanced the contrast ratio between the pulse peak intensity and the ASE pedestal intensity by a factor of ∼1000. The inherent intensity contrast at 20 ps prior to the peak, for example, was measured, using a third-order scanning autocorrelator, to be ∼10^7; use of the double plasma mirror system increased this to ∼10^10. The overall energy throughput efficiency of the plasma mirrors was 48%, resulting in energies up to 5.8 J on the target. The pulses were focused with an f/2 OAP onto the target at one of two fixed incidence angles, θ_L: 0° (along the target normal) and 35° with respect to the target normal. For θ_L = 0°, the radius, r_L, of the laser focal spot was 1.25 µm (the diameter at FWHM was 2.5 µm, containing 35% of the laser energy). The calculated intensity on target was up to 7 × 10^20 W cm^−2. A λ/4 waveplate was placed after the plasma mirror system on a limited number of laser shots to enable target irradiation with circularly polarized laser pulses (for θ_L = 0° only). Unless otherwise stated, the laser pulses were linearly polarized with the electric field vector in the plane of the laser and target normal axes, i.e. p-polarization for θ_L = 35°.
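The quoted on-target intensity can be sanity-checked from the parameters above. This back-of-envelope estimate assumes a top-hat focal spot containing the stated 35% of the pulse energy, so it only reproduces the order of magnitude of the quoted value:

```python
import math

# Parameters from the text
E_target = 5.8      # J, energy on target after the plasma mirrors
tau = 50e-15        # s, FWHM pulse duration
r_spot = 1.25e-4    # cm, focal-spot radius at FWHM (1.25 um)
f_enc = 0.35        # fraction of the laser energy inside the FWHM spot

# Top-hat estimate: average power in the spot divided by the spot area
power = f_enc * E_target / tau               # W
intensity = power / (math.pi * r_spot ** 2)  # W cm^-2
print(f"I ~ {intensity:.1e} W cm^-2")        # same order as the quoted 7e20
```

The precise figure in the paper would come from the measured spot profile rather than this idealized geometry.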
A range of target materials and thicknesses were irradiated to determine the optimum targets for carbon ion acceleration. These include 'uniform' targets of C, C3H6 (polypropylene, hereafter referred to as CH), C10H10O4 (mylar, hereafter referred to as CHO), Al and Au (with carbon as a surface contamination layer) and 'layered' Au-CH targets. The target thickness, L, was varied from 10 nm to 10 µm. The target foils were mounted on a rotating wheel to enable a range of different target types to be simultaneously loaded into the target chamber for each parameter scan.

Figure 1. Schematic of the experiment arrangement. A double plasma mirror system was used to enhance the contrast of pulses from the Astra-Gemini laser. The diagnostics included a proton beam spatial intensity profile monitor consisting of plastic scintillators, image relay optics and gated CCD cameras, and an identical set of Thomson parabola ion spectrometers with MCP detectors in the dispersion plane and EMCCD cameras.
The charge-to-mass ratio and energy distributions of the accelerated ions were measured using two Thomson parabola ion spectrometers, positioned along the target normal direction for each incident angle, as shown in figure 1. They had a line of sight to the laser focal spot in the plane of the laser beam axis and the target normal axis. The dispersed ions were detected using micro-channel plate (MCP) detectors positioned in the dispersion plane of the spectrometers. The output signal from each MCP was measured using an intensified CCD Andor camera (iXon EM + EMCCD 888). The arrangement was absolutely calibrated on a number of laser shots using a CR-39 nuclear track detector, which is sensitive to ions, but insensitive to electrons and x-rays. Slots machined into the CR-39, which was positioned directly in front of the MCP, enabled a direct calibration of the MCP-CCD detector with respect to the CR-39 for the same laser shots. The spatial intensity distribution of the lower half (just below the plane of the spectrometers and the target normal axis) of the proton beam was measured, for protons with energy above a lower detection limit of 5 MeV (defined by the thickness of a light-shield aluminium filter), using a plastic scintillator and gated CCD imaging system. The aluminium filter stops heavier ions from reaching the scintillator and protects it from the target debris.
Ion acceleration mechanisms
As introduced above, for ion acceleration driven by ultrashort laser pulses in the intensity regime from 10^18 to 10^20 W cm^−2, the TNSA mechanism dominates for targets greater than ∼50 nm in thickness [16]. In this scheme, fast electrons ponderomotively accelerated by the laser pulse at the front irradiated surface of the target propagate through the target and exit the rear, setting up a large electrostatic field (of the order of TV m^−1) due to the charge separation between the escaping electrons and the ions at the rear surface. The maximum ion energy scales as I^1/2 and the ion beam is directed along the target normal axis [4]. For higher laser intensities or ultrathin targets, other ion acceleration mechanisms become feasible. If the target thickness is reduced to the order of the relativistic plasma skin depth, then the laser field can penetrate the target to the rear surface, enhancing the TNSA mechanism; this is termed the 'laser break-out afterburner' [16, 34]. The 'Coulomb explosion' (or 'directed Coulomb explosion') mechanism, in which the laser field expels all electrons from the foil, giving rise to an explosion of the ions due to the repulsive Coulomb force between them, becomes important if the target is thin enough (e.g. <100 nm) [35]. The most effective mechanism for coupling laser energy to ions is predicted to be RPA, for which the momentum of the laser is efficiently imparted to the ions [17], [22-28]. This mechanism, which can work both for ultrathin targets in 'light-sail' mode [25] and thicker targets in 'hole-boring' mode [26], is predicted to be particularly effective for circularly polarized laser pulses, which give rise to a non-oscillating electrostatic field and therefore a smooth pushing effect by the radiation pressure of the laser pulse. The RPA mechanism is also feasible in the intensity regime accessed in this experiment, but is likely to become more important at higher intensities.
A peaked spectrum of ion energies is predicted, with energy scaling approximately linearly with laser intensity, which is much more favourable than TNSA. The beam would be centred on the direction of propagation of the laser pulse.
To identify the dominant ion acceleration mechanisms in this experiment, we begin by considering the energy spectra of the accelerated ions. Figure 2 shows representative example carbon ion energy spectra from Al targets irradiated with linearly polarized laser pulses focused to an average intensity of 6-7 × 10^20 W cm^−2. The spectra were measured along the target normal axis of 100 nm-thick Al targets irradiated at θ_L = 35° and 0°. Figure 2 is typical of the energy spectral shape obtained for targets with thickness greater than or equal to 100 nm and is consistent with previous measurements of ions accelerated by the TNSA mechanism [3, 36]. The energy distribution shifts to higher energy with increasing charge state, and higher charge state ions exhibit more plateau-like distributions. By contrast, when targets thinner than 50 nm are irradiated, changes to the shape of the spectra at high energy, including the onset of peaks and the detection of ion species with the same maximum velocity, are measured. These spectral changes indicate that TNSA is not the sole mechanism responsible for the ion acceleration in ultrathin (<50 nm) targets and that RPA may start to become important under these conditions. These observations, which occur primarily with circularly polarized laser pulses and for θ_L = 0°, will be reported in detail in a separate article. In the remainder of the present paper, we focus our attention on carbon ion acceleration by TNSA at ultrahigh laser intensities, using linearly polarized laser pulses.

Figure 2. Example carbon ion energy spectra measured along the target normal axis at the rear side of 100 nm-thick Al targets for laser incidence angles of 0° and 35° with respect to the target normal axis. The example spectra measured for θ_L = 35° illustrate the changes with ion charge state. A typical C^6+ energy spectrum measured for θ_L = 0° is included to demonstrate the enhancement of ion maximum energy and flux achieved compared to the corresponding spectrum with θ_L = 35°. In both cases, the laser pulses were linearly polarized, with energy 5 J (on target), duration 50 fs and intensity 6-7 × 10^20 W cm^−2.
The effect of laser incidence angle
One of the most striking results of this study is the effect of the laser incidence angle on ion acceleration. As shown in the example spectra in figure 2, the maximum carbon ion energy observed for θ_L = 0° (equivalent to s-polarized normal incidence) is significantly larger than for 35° (p-polarized) irradiation for otherwise identical laser pulse energy and duration and very similar intensity. As will be shown in figure 4, this observation holds independent of target thickness. We note that measurements of the spatial intensity distribution of the beam of protons (with energy above a lower detection limit of 5 MeV) show no evidence of changes to the uniformity or beam pointing for the different angles of laser irradiation. Likewise, we do not measure significant changes to the shape of the ion energy spectra, as illustrated in figure 2.
This result contrasts sharply with measurements reported by Ceccotti et al [15] on the effect of laser polarization on proton acceleration using ultrahigh-contrast (10^10), ultrashort (65 fs) laser pulses focused to 5 × 10^18 W cm^−2. Those results show that at this lower intensity range the ion energy is enhanced by the p-component of the laser electric field (via, for example, the Brunel effect [37] or the collisionless absorption process suggested by Gibbon [38]). For θ_L = 0°, because the laser electric field is in the plane of the target, the Brunel effect and other p-polarization-dependent absorption mechanisms cannot take place effectively. The fact that much more efficient coupling of laser energy to ions is observed for normal incidence compared to oblique incidence at the higher intensities accessed in the present experiment, which cannot be accounted for by the small difference in the ponderomotive potential for the two angles investigated, suggests that new angle-dependent absorption processes may be accessed in the ultrahigh intensity regime for ultrashort laser pulses.
To investigate the possible reasons for the higher energies achieved with θ_L = 0° compared to θ_L = 35°, 1D boosted PIC simulations using a modified version of the code employed in [23, 26], [39-41] are performed to investigate any changes in the fast electron generation. The simulations are performed with 25 000 cells, with an individual cell size of 2 nm and 200 particles per cell. The target consists of a heavier ion substrate (Z = 1; mass = 3m_p, where m_p is the proton mass) at a number density of 90n_c, where n_c is the critical density, with a 20 nm proton layer (also at 90n_c) on the rear surface. The acceleration of both ion species is simulated. The laser pulse has a 'sin^2' profile with a pulse duration of 50 fs, a wavelength of 0.8 µm and an a_0 (dimensionless amplitude) of 18.0 (equivalent to 7 × 10^20 W cm^−2). Two simulations are performed, one for a laser incidence angle equal to 35° and the other for normal incidence. The simulations reveal that higher-energy ions are produced at normal incidence, as shown in figure 3. The enhancement factors are similar to those measured experimentally for carbon ions (see figure 2). On investigating the simulation output in more detail, it was found that this difference is due to higher energy fast electrons being produced in the normal incidence case than in the oblique incidence case. 2D PIC simulations using the EPOCH code also show a similar dependence of the laser-to-fast-electron momentum transfer on the incidence angle. We anticipate that a theoretical description of this new mechanism will be required to fully explain both simulation and experiment. This will be the subject of a detailed investigation in the future.
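The quoted a_0 is consistent with the standard conversion from intensity for linear polarization, a_0 ≈ 0.85 (Iλ²/10^18 W cm^−2 µm²)^1/2. This is a sketch of that textbook relation; the paper does not state which convention it uses.

```python
import math

def a0_linear(intensity_w_cm2, wavelength_um):
    """Dimensionless laser amplitude for linear polarization:
    a0 ~ 0.85 * sqrt(I[10^18 W/cm^2] * lambda[um]^2)."""
    return 0.85 * math.sqrt(intensity_w_cm2 / 1e18 * wavelength_um ** 2)

# I = 7e20 W/cm^2 at 0.8 um, as in the simulations above
print(round(a0_linear(7e20, 0.8), 1))   # -> 18.0
```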
Ion acceleration as a function of target thickness
In this section, we discuss the scaling of the maximum C^6+ ion energy (the highest energy carbon ion detected) with target thickness. A summary of the measurements using Al, C and CHO target foils, for both angles of incidence, is presented in figure 4. Each point corresponds to an average of the maximum C^6+ ion energies obtained from a number of laser shots (typically 3 or 4) on each target type and thickness, and the error bars correspond to the standard deviation of the maximum energies. While we have chosen here to plot averages of short series of shots for nominally the same conditions, a discussion of the highest ion energies observed for each target thickness is also of interest and will be included in a separate publication.

Figure 4. Average of the maximum C^6+ ion energies measured as a function of target thickness for given target materials and angles of incidence (symbols). Lines correspond to predictions using the analytical models of Andreev et al [32] (black line) and Schreiber et al [31] (green line), as described in the main text.
As shown in figure 4, independent of target material, a similar scaling of the maximum ion energy with target thickness is measured. The rate of increase in the maximum carbon ion energy with decreasing target thickness is very similar to that reported by Neely et al [14] for protons (for Al target thickness above 100 nm) at an order of magnitude lower intensity. The maximum ion energy increases by approximately a factor of 2 when the target thickness is decreased by two orders of magnitude from 10 to 0.1 µm in both cases. As the target thickness is decreased from 10 to 0.1 µm, the ratio L/r_L changes from 8 to 0.08. In the former case, transverse spreading of the fast electron population as it propagates within the target from the front to the rear surface is important in defining the maximum energy of TNSA ions, whereas in the latter case the electrons do not spread over an area much larger than the laser focal spot during their first transit of the target. In the case of targets for which L < cτ_L/2 (i.e. 7.5 µm), refluxing or recirculation [42] of electrons reflected in the sheath fields formed on both sides of the target is believed to occur, and Mackinnon et al [43] report that transient enhancements of the fast electron density due to recirculating electrons can increase the maximum energy of ions accelerated by TNSA.
To investigate the expected scaling of the maximum ion energy with the target thickness, L, we apply two analytical models: (i) the model presented by Schreiber et al [31] and (ii) the model described by Andreev et al for ultrahigh-contrast laser pulse interaction with thin target foils [32]. The model presented by Schreiber et al [31] is based on the surface charge set up by laser-accelerated electrons on the target rear surface and includes consideration of the radial extent of the charge cloud. The electron density is calculated assuming that the electrons uniformly fill a circular region at the target rear surface with radius R. This radius is calculated assuming that the electrons are accelerated from the laser focal spot, with radius r_L, and traverse the target of thickness L in an angular cone with fixed half-angle θ_e (set equal to 20°), such that R = r_L + L tan θ_e. The maximum observable ion energy, E_max, is determined from an implicit expression in τ_L/τ_0, where P_L is the laser power, P_R = 8.71 GW is the relativistic power unit, q_i is the charge of the ion and η is the laser-to-fast-electron energy conversion efficiency. Measurements of the laser energy reflected from the target (in the specular direction) indicate a total laser energy absorption of ∼30%. We therefore choose a realistic value of η = 0.2 for the total laser energy conversion to fast electrons in the calculation using the Schreiber et al model. The Andreev et al [32] model, by contrast, is based on a self-consistent solution of the Poisson equation for the electric field responsible for ion acceleration and the equation of motion for the ion front. The model presented in [32] uses a rectangular density profile of the target, which is made up of two layers: the bulk of the target and a thin contaminant layer. The fast electron temperature and laser absorption are assumed to be dependent on the target thickness.
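The geometric part of the Schreiber et al model is easy to reproduce. The sketch below (function name is ours) evaluates the sheath radius R = r_L + L tan θ_e and the resulting dilution of the sheath electron density for the two thickness regimes discussed above.

```python
import math

def sheath_radius(r_l_um, L_um, theta_e_deg=20.0):
    """Rear-surface sheath radius R = r_L + L*tan(theta_e): electrons leave
    the focal spot (radius r_L) and cross a target of thickness L inside a
    cone of half-angle theta_e."""
    return r_l_um + L_um * math.tan(math.radians(theta_e_deg))

# Geometric dilution of the sheath electron density, (R/r_L)^2, for the
# two thickness regimes discussed in the text (r_L = 1.25 um):
for L in (10.0, 0.1):   # um
    R = sheath_radius(1.25, L)
    print(f"L = {L:5.1f} um: R = {R:.2f} um, (R/r_L)^2 = {(R / 1.25) ** 2:.2f}")
```

For L = 10 µm the sheath area is over an order of magnitude larger than the focal spot, while for L = 0.1 µm it is essentially unchanged, consistent with the thickness scaling discussed above.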
The model has been calibrated against numerical simulations for an angle of incidence equal to 45° [32]. The maximum observed ion energy, E_max, is determined (in cgs units) by an expression involving the Debye radius of the hot electrons, r_De = (T_e0/4πe²n_e0)^1/2, where Z is the atomic number of the target bulk or contaminant layer, e is the electronic charge and n_e0 is the fast electron number density. T_e0(L, η) is the fast electron temperature and is a function of the target thickness and the fraction of laser energy absorbed into the fast electrons. The electric fields associated with the ion front are given for the target bulk and contaminant layers (denoted by subscripts b and c, respectively), where n_i denotes the ion density. The model is described in detail by Andreev et al [32]. For the calculations shown in figure 4, the target is assumed to be carbon with a 5 nm contaminant layer of hydrogen; the thickness of the carbon is varied.
Generally both models predict increasing ion energy with decreasing target thickness down to 100 nm, in qualitative agreement with the experimental measurements. An important difference in the models is that whilst the Schreiber et al model results in a saturation of the maximum ion energy as the sheath radius approaches the size of the laser focal spot, the model by Andreev et al predicts an optimum target thickness of ∼80 nm for the parameters of the experiment. This is higher than that predicted by the scaling laws presented by Esirkepov et al [20], which suggest an optimum thickness of ∼20 nm for the laser parameters used. Experimentally, we observe a slight decrease in the average of the measured maximum C 6+ ion energies obtained with 50 nm Al and C targets compared to the corresponding results for the 100 nm-thick targets (for θ L = 0 • ), as shown in figure 4. However, as discussed above, due to the fact that we observe changes to the ion energy spectra with targets thinner than 50 nm, indicative of a transition away from the purely TNSA mechanism, we cannot conclusively state whether there is an optimum thickness for TNSA for the laser pulse parameters investigated. The results suggest that in terms of optimizing the acceleration of carbon ions, the thinnest foils enabled by the laser prepulse conditions down to ∼100 nm should be used.
The effects of target composition
To investigate the extent to which the material properties and composition of the target influence the laser energy transfer to carbon ions, we irradiate a range of targets containing carbon, either in the bulk material of the target (i.e. C, CH and CHO), as an uncontrolled surface contamination layer (on Al and Au metallic foils) or as part of a controlled deposited layer on the target rear surface (Au-CH). This target composition scan is performed at θ_L = 35°. The results for the maximum proton and C^6+ energies (averaged over 3-4 shots typically) are shown in figure 5 for L = 0.1 µm and L ∼ 1 µm (0.8-1.1 µm). We start by comparing proton and C^6+ ion acceleration from the relatively low-density 'uniform' targets C, CH and CHO. The maximum proton energy does not differ significantly for these targets for a given L. However, the C^6+ energy is highest for C targets. The presence of hydrogen in the composition of the target (in addition to the contamination layer) clearly produces a screening effect on the C^6+ ion acceleration [36].

Figure 6. The experimental data points are averages over several laser shots. Lines correspond to predictions using the analytical models of Schreiber et al [31] (green line) and Andreev et al [32] (black line). A simple calculation of the maximum ion energy obtained in an electric field of magnitude 8 TV m^−1 for 50 fs (blue line), as described in the main text, is also included.
For the higher-density Al and Au targets, for which the TNSA protons and carbon ions are sourced only from hydrocarbon contamination layers on the target rear surface, a decrease in the maximum energies of both ion species is measured, particularly for the Au target. This is probably caused by the smaller number of those ions in the region of the acceleration field. The effect of adding a controlled 'source' layer (CH) of carbon and hydrogen to the target rear surface is also shown in figure 5. The availability of more hydrogen atoms in the region of the field enhances the acceleration of protons, producing a significant increase (from 4 to 7 MeV) in the averages of the measured maximum proton energies. A small decrease, however, is measured in the maximum carbon energy. These results further demonstrate that the presence of protons limits the maximum energies achievable for heavier ion species due to the screening of the acceleration field.
We conclude that for a given target thickness, carbon ion energies are maximized using a uniform C target and that proton energies are maximized by the use of a hydrogen-containing source layer on the rear surface of a high-Z target. However, this target produces the lowest-energy carbon ions due to proton screening of the acceleration field. Figure 6 shows the measured maximum and total energies, averaged over several shots, of each ion species as a function of ion charge-to-mass ratio (q/m) for the targets discussed in the above section. Despite the differing target thicknesses, materials and compositions, clear trends are observed. An increase of about 2 orders of magnitude in both the maximum ion energy and total ion energy is measured over the range q/m = 0.1-0.5. These two parameters are clearly strongly correlated. In most cases, increased energy coupling efficiency into a particular ion species results in an increase in ion number across the full energy spectrum, with a corresponding increase in the maximum ion energy detected.
Ion charge state distributions
Also shown in figure 6 are calculations of the maximum C^q+ (q = 1-6) and proton energy scaling with q/m using the Schreiber et al [31] and Andreev et al [32] models as discussed above. The model calculations assume that all ions are created at the target rear surface. The initial charge state population, which has been shown to affect the energy scaling with ion q/m [36], is not considered. Nonetheless, the Schreiber et al model is found to reproduce the measured scaling very well, although the predicted energies are higher than measured for irradiation at a 35° angle of incidence. The energies predicted by the Andreev et al model are closer to the experimental measurements. Both models predict higher energies for low-charge carbon ions than measured, and this is likely caused by additional shielding effects that are not accounted for in the models.
Finally, for comparison, the results of a simple calculation, in which it is assumed that all ion species are subjected to the same electrostatic field of magnitude 8 TV m^-1 for 50 fs (= τ_L), are also plotted. In this calculation, the magnitude of the electric field is a free parameter that is chosen by fitting to the measurements. At the rear surface of the target, ions can be produced by either collisional ionization or field ionization by barrier suppression mechanisms. Assuming field ionization to be the dominant ionization mechanism [3,44], the minimum threshold field E_q^Thres for the production of an ion of charge q is calculated using the barrier-suppression estimate E_q^Thres = U_q^2/(4q) (in atomic units), where U_q is the ionization potential. E_q^Thres = 7 TV m^-1 for C6+ ions, which are efficiently produced in all of the laser shots considered. An electric field magnitude equal to 8 TV m^-1 is therefore consistent with the ionization states measured. Despite the simplicity of this model, it reproduces the measured q/m scaling over most of the range, as shown in figure 6(a). The departure observed for low q/m ions is likely to result from a screening of the acceleration field acting on these ions by the acceleration of ions with larger q/m, as discussed in detail in [36]. We note that charge transfer can occur as the ions propagate from the source to the detector and that this can influence the charge state distribution measured. However, we do not expect this to have a strong influence, as it would result in ions with different charge states having the same maximum energy, which is not observed in the experiment.
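The two quoted numbers (the roughly 7 TV m^-1 barrier-suppression threshold for C6+ and the ion energies gained in a constant 8 TV m^-1 field) can be reproduced with a short script. This is a sketch, not the authors' analysis code: the C5+ to C6+ ionization potential (about 490 eV) and the standard barrier-suppression formula E = U^2/(4q) in atomic units are assumptions on our part.

```python
# Reproduce the two back-of-envelope estimates in the text, under the stated
# assumptions of a uniform 8 TV/m field lasting 50 fs and barrier-suppression
# ionization (BSI).

E_FIELD = 8e12        # accelerating field, V/m (free parameter fitted in the text)
TAU = 50e-15          # field duration, s (= laser pulse duration)
E_CHARGE = 1.602e-19  # elementary charge, C
AMU = 1.6605e-27      # atomic mass unit, kg
E_AU = 5.142e11       # atomic unit of electric field, V/m
HARTREE_EV = 27.2114  # Hartree energy, eV

def bsi_threshold(U_eV, q):
    """BSI threshold field (V/m) to produce charge state q; U_eV = ionization potential."""
    U_au = U_eV / HARTREE_EV
    return U_au**2 / (4 * q) * E_AU

def max_energy_MeV(q, mass_amu):
    """Non-relativistic energy gained by charge q*e in E_FIELD over TAU seconds."""
    p = q * E_CHARGE * E_FIELD * TAU            # impulse = final momentum
    return p**2 / (2 * mass_amu * AMU) / 1.602e-13

print(bsi_threshold(489.99, 6))   # ~7e12 V/m, matching E_q^Thres for C6+
print(max_energy_MeV(6, 12.0))    # C6+: ~23 MeV (~1.9 MeV/nucleon)
print(max_energy_MeV(1, 1.0073))  # proton: ~7.7 MeV, close to the measured 7 MeV
```

The proton value landing near the measured 7 MeV maximum is consistent with the field magnitude chosen by fitting in the text.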
Summary
In summary, we report on an experimental investigation of the optimization of carbon ion acceleration driven by ultrahigh-contrast (10^10), ultrashort (50 fs) laser pulses focused to an average intensity of 7 × 10^20 W cm^-2, about an order of magnitude higher than in previous ion acceleration experiments using laser pulses of tens of femtoseconds duration.
A number of conclusions are derived from our investigations of the TNSA-dominated regime.
1. Significantly higher (a factor of between 1.5 and 2) laser energy transfer to ions is obtained for irradiation at normal incidence compared to oblique incidence at 35° (with respect to the target normal). This result is supported by 1D-boosted PIC simulations, which show similar enhancement factors in the maximum ion energies. The simulations reveal that the difference is due to higher-energy fast electrons produced at normal incidence. This result indicates that at ultrahigh intensities the p-component of the laser electric field has a reduced role in energy absorption, contrasting sharply with measurements made at lower intensities (5 × 10^18 W cm^-2) [15], and that new absorption processes may be accessed in the ultrahigh-intensity, ultrashort-pulse regime explored.
2. The maximum energy of ions accelerated by TNSA increases with decreasing target thickness down to a thickness of ∼100 nm for Al and C targets. For thinner targets, changes to the ion energy spectra suggest that the ion acceleration mechanism is not purely TNSA for the laser pulse parameters of the experiment.
3. The highest-energy carbon ions at θ_L = 35° are obtained with uniform carbon targets, and the presence of hydrogen, either distributed throughout the target or as a layer on the rear surface, reduces the energy coupling efficiency to carbon ions. We note that removing the hydrogen-containing contamination layer from the target rear surface has been shown previously to increase the energy coupling efficiency to ions heavier than protons [3], [45-47]. By contrast, a high-Z target with a hydrogen source layer on the rear surface is best for optimizing proton acceleration.
4. There is a strong correlation between the measured maximum and total integrated ion energies, and the scaling with q/m can be approximated to first order by assuming all ions are subjected to a constant electric field for the duration of the laser pulse.
Departures from the model for low q/m ions suggest partial screening of the electric field acting on these ions by higher q/m species.
Constructing a Software Tool for Detecting Face Mask-wearing by Machine Learning
In the COVID-19 pandemic era, software engineering and artificial intelligence tools have played a major role in monitoring, managing, and predicting the spread of the virus. According to reports released by the World Health Organization, all measures to prevent any form of infection are highly recommended. One way to avoid infection is to require people to wear face masks. The problem is that some people are not inclined to wear a face mask, and enforcing this manually through police is not easy, especially in large or public areas. A further problem arises if the software tool used for this purpose is inaccurate. The purpose of this paper is to construct a software tool called Face Mask Detection (FMD) to detect any face not wearing a mask in a specific public area using CCTV (closed-circuit television). The approach uses a large set of face images, some wearing masks and others not. The methodology uses machine learning, characterized by a histogram of oriented gradients (HOG) for feature extraction followed by a support vector machine (SVM) for classification, which can contribute to the literature and enhance mask detection accuracy. Several public datasets of masked and unmasked face images have been used in the experiments. The accuracies obtained are 97.00%, 100.0%, 97.50%, and 95.00% for the RWMFD (Real-World Masked Face Dataset) & GENKI-4k, SMFDB (Simulated Masked Face Dataset), MFRD (Masked Face Recognition Dataset), and MAFA (MAsked FAces) & GENKI-4k databases, respectively. The results are promising in comparison with the state of the art. For real-time testing, the research workstation used a webcam programmed in Matlab.
Introduction:
As reported publicly, an outbreak of deadly pneumonia occurred in Wuhan City, Hubei Province, China, in December 2019. This form of pneumonia is caused by SARS-CoV-2, a coronavirus 1. The World Health Organization (WHO) subsequently named the disease COVID-19 2. Since, to date, there is no exact curative drug or vaccine for COVID-19, medical professionals have advised that people avoid any potential infection through a variety of means and methods, such as avoiding travel to high-risk areas, avoiding contact with symptomatic individuals, cleaning all surroundings, including regular hand washing, and using face masks to prevent the intake of droplets 2. A face mask is useful both for limiting transmission from asymptomatic carriers and for protecting healthy people from infection. In other words, the use of face masks by a healthy population in the community yields a high percentage reduction in the risk of transmission of respiratory viruses. Besides, facial masks are considered a form of personal protective equipment that prevents the spread of respiratory infections and is effective against the transmission of respiratory viruses and bacteria 3. When mask-wearing is taken seriously, it can contribute to the control of COVID-19 by reducing the emission of infected droplets from individuals 4. To demonstrate the impact of mask-wearing, study 5 explains that "very weak masks (20% effective) can still be useful if the transmission rate is relatively low or decreasing." This study also shows that in "Washington, where baseline transmission is much less severe, 80% of these masks could reduce mortality by 24-65% (and peak deaths by 5-15%)."
While it is clear that wearing a face mask is necessary, variations across general public and community settings have been identified. For example, the U.S. Surgeon General advised against the purchase of masks by healthy people. The rationale was to prevent widespread use of face masks from depleting scarce resources needed for clinical use in health care settings. Another view held that universal use of face masks in the community has often been discouraged by the claim that face masks do not provide adequate protection against coronavirus infection 6. However, as has been noted in recent publications, it is reasonable to suggest wearing masks, particularly in crowded and public areas. Generally, most countries during the pandemic have suggested that their citizens wear masks, as described in 6. For instance, Japan advises people as follows: "The effectiveness of wearing a face mask to protect yourself from virus contraction is thought to be limited. If you wear a face mask in close vicinity, it helps avoid catching droplets coming from others, but if you are in an open-air environment, you don't need to use a face mask" 6. Mask-wearing may be relevant in a variety of applications, such as community access control at airports or railway stations. As described above, wearing a face mask has a significant impact on reducing the percentage of infections. At the same time, certain people do not comply with or respect the safety regulations, and it is very difficult to track people manually in such areas. For this purpose, it is necessary to propose an automated face mask detection (FMD) tool to automatically identify anyone who does not wear a mask 7.
The idea is to read a real-time video stream via CCTV and process it frame by frame, examining each face object. A reference model, trained on masked and unmasked faces, is then used to predict whether each detected face is masked. Figure 1 displays face samples of the two classes: wearing masks and not wearing masks.
The main purpose of this paper is to detect face masks and increase the detection rate using machine-learning techniques. The technique combines a histogram of oriented gradients (HOG) with binary classification using a support vector machine (SVM), as used in reference 8, which addresses a different application (biometric handwritten signature recognition). The feature extraction and classification are similar, but the design and configuration differ, as does the pre-processing that the proposed work requires in order to meet the challenge addressed here. In addition, this article shows that this machine-learning technique can accomplish the task of detecting face masks professionally and accurately. It should be noted that the proposed pre-processing, HOG, and SVM configuration constitute the contribution of this article, enhancing detection accuracy compared to state-of-the-art work. This paper is arranged in six sections as follows: Section Two is devoted to a literature review on the identification of face masks. The design of the research tool methodology is elaborated in Section Three. The experiments are then described in Section Four. The outcomes and discussion are presented in Section Five. Finally, the conclusion is outlined in Section Six, followed by the acknowledgment and a list of references.
Literature Review
Previous work related to face mask detection is critically reviewed in this section. Technically, identifying whether or not a face is wearing a mask is a process that lies within the field of artificial intelligence, more specifically machine learning or deep learning. The most common stages of machine learning are as follows: input dataset, pre-processing, feature extraction, and decision classification. Accordingly, the existing work on the detection of face masks will be discussed mainly based on the above-mentioned stages. For example, a hybrid approach consisting of locally linear embedding (LLE) with a convolutional neural network (CNN) 9 has been used to detect faces wearing masks. This work consists of three main modules. First, it combines two pre-trained CNNs to extract candidate facial regions from the input image and represent them with high-dimensional descriptors. After that, the embedding module transforms such descriptors into a similarity-based descriptor using a locally linear embedding (LLE) algorithm and dictionaries trained on a wide pool of synthesized normal faces, masked faces, and non-faces. Here, the experiment is conducted using the MAFA dataset with up to 76.4% accuracy 10. Another face mask detection work, as detailed in 11, used the Simulated Masked Face Dataset (SMFD) to train and test the model. The classification method used here is transfer learning from InceptionV3 to classify people who do not wear masks. This approach achieved an accuracy of up to 99.9% during training and 100% during testing.
Other recent research on the identification of face masks for the pandemic defense of COVID-19 is clarified in 12. This research consists of two components. The first performs feature extraction using ResNet50, and the second is designed for the classification process, using methods such as decision trees, support vector machines (SVM), and ensemble algorithms. Here, three masked-face datasets were used for training and testing: the Real-World Masked Face Dataset (RMFD), the Simulated Masked Face Dataset (SMFD), and Labeled Faces in the Wild (LFW). The best result was recorded using the SVM classifier, which achieved 99.64% test accuracy on RMFD, 99.49% test accuracy on SMFD, and 100% test accuracy on LFW. Another approach, a face mask detector called RetinaFaceMask, is explained in 13. Here, the extraction and classification function uses a feature pyramid network to fuse high-level semantic information with multiple feature maps. The accuracy obtained is up to 94.5% for recall and 93.4% for precision. Another interesting work on face mask detection is based on the HGL approach for head pose classification, which considers color texture analysis of photographs and line portraits. The HGL method adds the H-channel of the HSV color space to the face portrait and the grayscale image, and then trains a CNN to build the reference model for classification. Here, the MAFA dataset was used, demonstrating accuracies of up to 93.64% and up to 87.17% 14.
Another deep-learning face mask detector was described in 15. The dataset used for the experiment is the Real-World Masked Face Dataset (RWMFD), with an accuracy of up to 95%.
As noted in the literature review, several datasets have been created for training and testing models. The methodology proposed for the FMD tool in the current paper has not previously been implemented in the literature. Moreover, it can compete with existing techniques in terms of detection accuracy.
Tool Methodology
The proposed FMD tool, depicted in Fig. 2, consists of four key stages: pre-processing, the Viola-Jones face detector 16, feature extraction, and classification. These four stages serve two phases: registration (enrollment) and authentication. The former consists of a training activity using the SVM, i.e., the enrollment procedure shown in Fig. 2 as the SVM reference model. The latter, called the authentication process (or sometimes the testing process), captures the queried face picture from the device. The same operations performed during enrollment are also applied to the queried face picture. In the classification stage, the binary SVM model is compared against the feature vector of the queried face image. Finally, the decision-making process, based on the configured threshold, determines whether or not the face is wearing a mask. The solid arrow in Fig. 2 denotes the training (enrollment) path, while the dotted line denotes the authentication (testing) path.
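As a concrete illustration of the enrollment/authentication flow, the sketch below wires the stages together in Python. The paper's implementation is in Matlab; all function names here are ours, and a simple class-mean separator stands in for the trained SVM reference model.

```python
import numpy as np

class LinearModel:
    """Stand-in for the binary-SVM reference model: f(x) = x.beta + b."""
    def __init__(self, beta, b):
        self.beta, self.b = beta, b
    def decision(self, x):
        return float(x @ self.beta + self.b)

def fit_stub(X, y):
    # class-mean separator as a placeholder for SVM training (SMO/ISDA in the paper)
    X, y = np.asarray(X, float), np.asarray(y)
    mu_pos, mu_neg = X[y == 1].mean(0), X[y == -1].mean(0)
    beta = mu_pos - mu_neg
    b = -0.5 * (mu_pos + mu_neg) @ beta
    return LinearModel(beta, b)

def enroll(images, labels, preprocess, features, fit=fit_stub):
    """Enrollment path: preprocess + extract features, then train the model once."""
    return fit([features(preprocess(im)) for im in images], labels)

def authenticate(image, model, preprocess, features, threshold=0.0):
    """Authentication path: same operations, then threshold the decision score."""
    score = model.decision(features(preprocess(image)))
    return +1 if score > threshold else -1     # +1 = no mask, -1 = masked

# toy usage: identity preprocessing, flattening as the "feature extractor"
rng = np.random.default_rng(1)
imgs = [rng.normal(m, 0.5, (4, 4)) for m in [1] * 20 + [-1] * 20]
labels = [1] * 20 + [-1] * 20
model = enroll(imgs, labels, lambda im: im, lambda im: im.ravel())
print(authenticate(imgs[0], model, lambda im: im, lambda im: im.ravel()))
```

The key design point mirrored here is that the authentication path must reuse exactly the same preprocessing and feature extraction as enrollment.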
Pre-processing
Several image processing techniques are applied before the face detector and the feature extraction stage. The reason is that better image contrast and noise reduction have a positive effect on the recognition rate. The first procedure is to transform the RGB image into a grayscale image; then a median filter with a [3 × 3] kernel window is used to eliminate noise 17. After that, the intensity values of the grayscale image are remapped so that the bottom 1% and top 1% of all pixel values are saturated, enhancing contrast. Also, a resize operation is performed to unify all image sizes to [128 × 128] rows and columns. Some randomly selected database samples of faces wearing masks and not wearing masks are shown in Fig. 3, which visualizes the impact of the pre-processing operations: the original images are depicted in the first column of Fig. 3, grayscale images in the second column, median-filtered images in the third column, and, finally, the fourth column contains the images after contrast enhancement.
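A minimal NumPy sketch of this chain (our own helper names; the paper works in Matlab) applies grayscale conversion, a [3 × 3] median filter, a 1% contrast stretch, and a resize to [128 × 128]:

```python
import numpy as np

def to_gray(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])      # ITU-R BT.601 luminance

def median3x3(img):
    # 3x3 median filter via nine shifted views of an edge-padded image
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def stretch(img, low_pct=1, high_pct=99):
    # saturate the bottom/top 1% of pixel values, then rescale to [0, 1]
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0, 1)

def resize_nn(img, size=(128, 128)):
    # nearest-neighbour resize (Matlab's imresize uses fancier interpolation)
    r = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    c = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    return img[np.ix_(r, c)]

def preprocess(rgb):
    return resize_nn(stretch(median3x3(to_gray(rgb))))

out = preprocess(np.random.rand(96, 80, 3))
print(out.shape)  # (128, 128)
```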
Viola-Jones Face Detector
The Viola-Jones face detection system is a face detection technique introduced in 2001 by Paul Viola and Michael Jones. This technique requires a complete view of an upright, frontal face to function properly. This detector was chosen for the following characteristics of the algorithm: it is robust, with a high true-positive rate and a low false-positive rate, and it works in real time, which is adequate for the purpose of this paper of differentiating faces from non-faces (part of this paper's objective). In terms of methodology, the Viola-Jones algorithm has four stages. The first is Haar feature selection: all human faces share some similar properties, such as the eye region being darker than the upper cheeks and the nose bridge region being brighter than the eyes, so these properties can be matched using Haar features. The second stage creates an integral image. The third stage is AdaBoost training, and the fourth stage is cascading classifiers. More details are explained in 16.
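The integral image of the second stage is what makes Haar features cheap: any rectangular pixel sum, and hence any Haar-like feature, reduces to four array lookups. A small illustrative sketch (not from the paper):

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[:r, :c]; zero-padded so rectangle sums need no edge cases
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect_vertical(ii, r0, c0, h, w):
    """Two-rectangle feature: bright region below minus dark region above
    (e.g. upper cheeks vs the darker eye region)."""
    top = rect_sum(ii, r0, c0, r0 + h, c0 + w)
    bottom = rect_sum(ii, r0 + h, c0, r0 + 2 * h, c0 + w)
    return bottom - top

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 0 + 1 + 4 + 5 = 10.0
```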
Feature Extraction (HOG)
Extracting features is the method of selecting the most effective details that can be used to represent the samples for classification. In this paper, the Histogram of Oriented Gradients (HOG) algorithm 18 was selected because of its high ability to represent image samples as a feature vector. HOG extracts local shape information from blocks within an image to support operations such as tracking, detection and classification. The effect of HOG is depicted in Fig. 4. In this work, HOG was implemented with the following configuration: the cell size is [8 × 8] pixels and the block size is [2 × 2] cells, which for a [128 × 128] image yields a feature vector of length 8100.
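With this configuration, the descriptor dimensionality follows directly: a [128 × 128] image contains 16 × 16 cells of [8 × 8] pixels; with [2 × 2]-cell blocks at the usual one-cell stride there are 15 × 15 block positions, each contributing 2 × 2 × 9 = 36 values (assuming the customary 9 orientation bins), giving the 8100 features seen in the training-matrix sizes later in the paper:

```python
# Dimensionality of the HOG descriptor under the stated configuration.
# The 9-bin and one-cell-stride values are conventional HOG defaults, assumed here.

def hog_length(img=128, cell=8, block=2, bins=9, stride=1):
    cells = img // cell                      # 16 cells per image side
    blocks = (cells - block) // stride + 1   # 15 block positions per side
    return blocks * blocks * block * block * bins

print(hog_length())  # 15 * 15 * 2 * 2 * 9 = 8100
```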
Decision Classifier (SVM)
Since the number of classes to be determined is two, the Support Vector Machine (SVM) is a natural choice because it handles binary classification well. The SVM classifies the feature vector by finding the best hyperplane that can distinguish all the features of one class from those of the other class. In other words, the optimal SVM hyperplane is the one with the maximum margin between the two groups, where the margin is the maximal width of the slab parallel to the hyperplane that contains no interior data points. More details on SVM classification of HOG features are explained in 8. As shown in Fig. 5, the support vectors are the data points closest to the separating hyperplane. Figure 5 also demonstrates these concepts, with + indicating data points of type +1 and - indicating data points of type -1, separated by a hyperplane with a margin 19.
Figure 5. Support vector machine (SVM) graphic representation of two classes and two dimensions.
The data for training is a set of points (vectors) x_j along with their labels y_j. For some dimension d, x_j ∈ R^d and y_j = ±1; accordingly, the hyperplane is given in Eq. (3).
f(x) = x′β + b = 0 (3)
where β ∈ R^d and b is a real number. There are two classes in this paper: the face without a mask, labeled y = +1, and the face wearing a mask, labeled y = -1. In terms of training optimization, Sequential Minimal Optimization (SMO) 20 is employed.
Testing experiment: Several experiments have been conducted on public databases to test the proposed FMD tool for face mask detection. In this paper, five separate datasets were used to determine the accuracy of the proposed FMD tool, each with a different number of observations. The databases are listed with their specifications in Table 1. Every database was used to train the model and then test it. Some databases contain only non-masked faces, such as GENKI-4k; conversely, others contain only masked face pictures, such as MAFA. Thus, in some experiments we combine them, training a model with one database providing the masked faces for one class and another providing the unmasked faces for the other class. In the experimental setup, a reference model is trained to predict two classes: a non-masked face labeled +1 and a masked face labeled -1. The decision threshold used here is 0, to avoid bias between the -1 and +1 expected scores. The well-known method of estimating machine-learning output is the confusion matrix, as set out in Table 2. In the case of this article, the confusion matrix has two classes, non-masked and masked. The confusion matrix parameters are then extracted based on the proposed FMD tool. In each experiment, the result is based on the confusion matrix to measure the accuracy metric described in Eq. (4), which expresses the success percentage of the method: Accuracy = (TP + TN) / (TP + TN + FP + FN) (4). The description of the confusion matrix for machine learning is clarified in 22,23.
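The paper trains the SVM with SMO (and, later, ISDA) in Matlab. As a rough stand-in, a linear SVM can be fitted by subgradient descent on the hinge loss, with the same ±1 labels and zero decision threshold. This is a sketch, not the paper's code:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """y in {-1, +1}; returns (beta, b) of the hyperplane f(x) = x.beta + b,
    minimizing lam/2*||beta||^2 + mean hinge loss by subgradient descent
    (a simple stand-in for the SMO/ISDA solvers used in the paper)."""
    n, d = X.shape
    beta, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ beta + b)
        mask = margins < 1                      # points violating the margin
        g_beta = lam * beta - (y[mask, None] * X[mask]).sum(0) / n
        g_b = -y[mask].sum() / n
        beta -= lr * g_beta
        b -= lr * g_b
    return beta, b

def predict(x, beta, b, threshold=0.0):
    """+1 = no mask, -1 = masked; zero threshold as described above."""
    return 1 if x @ beta + b > threshold else -1

# toy separable data standing in for HOG feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2, 1, (50, 8)), rng.normal(-2, 1, (50, 8))])
y = np.array([1] * 50 + [-1] * 50)
beta, b = train_linear_svm(X, y)
acc = np.mean([predict(x, beta, b) == t for x, t in zip(X, y)])
print(acc)  # accuracy on the well-separated toy training set
```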
TP counts unmasked test samples correctly predicted as unmasked by the proposed FMD tool, and TN counts masked samples correctly predicted as masked. Meanwhile, FP counts masked samples falsely predicted as unmasked, and the fourth parameter, FN, counts unmasked samples wrongly predicted as masked by the proposed FMD tool (Table 2). The target is to raise the TP and TN parameters as high as possible to achieve better accuracy; these parameters are computed in the Results section. There are also other classification and prediction metrics, namely the False Accept Rate (FAR) and False Reject Rate (FRR), as explained and used in 29. However, the accuracy metric defined in Eq. (4) is sufficient for the proposed detection method.
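Eq. (4) is the standard ratio of correct predictions to all predictions. Using the figures reported later for experiment 7 (360 tested samples, 9 errors; the exact split of the 9 errors between FP and FN is our assumption for illustration):

```python
# Accuracy from confusion-matrix counts, Eq. (4): (TP + TN) / (TP + TN + FP + FN)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# 351 correct out of 360 tested samples; the 5/4 FP/FN split is hypothetical
print(round(accuracy(tp=176, tn=175, fp=5, fn=4) * 100, 2))  # 97.5
```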
Results and Discussion:
The outcomes of the paper are presented in this section in two parts. The first visualizes some known samples processed by the proposed method. The second reports the recognition rate of the proposed system under several different configurations of the HOG feature extraction and the SVM classifier. As can be seen in the tests, face mask identification is invariant to skin color, mask color, face pose, and faces with or without hair; the detection is also stable. Figure 6 shows some face samples framed by a rectangular label colored red or green: a red label indicates that the face is not wearing a mask, while a green label around the face shows that the face is wearing a mask. In Fig. 7, another person was captured during execution of the proposed video detection system. The aim is to show that the rectangular object around the face changes accordingly as the mask is put on and taken off. Thus, Fig. 7 illustrates the transition states from the unmasked face to the masked face. The real-time run is performed by first loading a trained SVM model; a prediction is then made by applying the loaded SVM model to the tested frame, a snapshot from the CCTV video. As regards the second type of result reported in this paper, the accuracy of the experiments conducted with several configurations of the SVM classifier is reported. Eight experiments were performed using the ISDA 21 training optimization; each is presented in Table 3. In addition, Table 3 includes the size of the test matrix, the kernel function (either linear or 3rd-order polynomial), and the corresponding accuracy measured according to Eq. (4), with the confusion matrix parameters for each experiment. The sizes of the training and testing matrices are shown in Tables 3 and 4.
These matrices are arranged so that each of the two groups (non-masked and masked face) includes 50% non-masked face samples and 50% masked faces. For example, the training matrix of experiment 1 in Table 3 has size [700 × 8100], indicating 350 non-masked samples and another 350 masked samples; the same holds for experiment 3 of the associated MFRD database in Table 3. Similarly, for the testing matrix, experiment 3 in Table 3 has a [360 × 8100] testing matrix, meaning that 180 samples must be predicted as masked faces while the remaining 180 must be non-masked faces, and so on for the other experiments. The numbers of samples selected for training and testing are kept equal to avoid any bias between the unmasked and masked faces; the resulting training matrix sizes are shown in Table 3. The same experimental specifications as in Table 3 were applied using another training optimization, SMO; all details of these eight experiments are given in Table 4. As elaborated for experiment 7 in Table 4, the best accuracy, up to 97.5%, is obtained when the MFRD dataset is used with a polynomial kernel function, with only 9 samples incorrectly predicted out of 360 tested samples, as shown in the confusion matrix of experiment 7 in Table 4. Overall, SMO training is better than ISDA training due to the lower error rates in the experiments; for example, comparing experiment 7 in Tables 3 and 4, the accuracies for ISDA and SMO are 96.94% and 97.5%, respectively. In addition, the polynomial kernel function is stronger than the linear kernel function, as seen in Tables 3 and 4.
Open Access
Baghdad Science Journal
A comparison of the results between the state of the art and the accuracy of the proposed FMD tool is carried out to validate the proposed FMD tool. As explained in Table 5, three datasets have been used in the comparison. For the RWMFD dataset, our accuracy is up to 97%, higher than the reference in No. 1 of Table 5. Next, for the SMFDB dataset, the proposed accuracy is up to 100%, matching the first and exceeding the second reference in the literature in No. 2. Lastly, the proposed accuracy of 95% on MAFA is better than the two existing works in No. 4. It is worth mentioning that the proposed FMD tool relies on a face detection algorithm, namely Viola-Jones, to work properly; in other words, if the face is not detected, the masked/unmasked classification cannot proceed. In addition, illumination and brightness adjustment are very critical for the detection process in a real-time implementation. As shown in Fig. 8, the face mask detection used is robust against face variation in direction and scale; faces wearing masks were identified by the proposed FMD tool in the same way, as shown in Fig. 8.
Conclusion:
A Face Mask Detection (FMD) tool is proposed in this paper, addressing a prominent line of research in the era of the COVID-19 pandemic: an attempt to reduce and restrict outbreaks of the disease. Technically, the proposed FMD tool consists of pre-processing, a Viola-Jones face detector, and then HOG feature extraction, configured in this research with block size [2 × 2] and cell size [8 × 8] on a [128 × 128] digital image to create a feature vector; the length of the feature vector is 8100. After that, a binary SVM was used for training and testing. Experiments have been conducted to evaluate the proposed FMD tool. The accuracies are as follows: 97.00%, 100.0%, 97.50%, and 95.00% for the RWMFD & GENKI-4k, SMFDB, MFRD, and MAFA & GENKI-4k databases, respectively. In future work, combining another feature-extraction method with the HOG could enrich the feature vector and improve the results.
EXISTENCE OF ALMOST AUTOMORPHIC SOLUTIONS TO SOME CLASSES OF NONAUTONOMOUS HIGHER-ORDER DIFFERENTIAL EQUATIONS
In this paper, we obtain the existence of almost automorphic solutions to some classes of nonautonomous higher order abstract differential equations with Stepanov almost automorphic forcing terms. A few illustrative examples are discussed at the very end of the paper.
Introduction
The main motivation of this paper comes from the work of Andres, Bersani, and Radová [8], in which the existence (and uniqueness) of almost periodic solutions was established for a class of nth-order autonomous differential equations (Eq. (1.1)), where f, p : R → R are (Stepanov) almost periodic, f is Lipschitz, and a_k ∈ R for k = 1, ..., n are given real constants such that the real part of each root of the characteristic polynomial associated with the (linear) differential operator on the left-hand side of Eq. (1.1) is negative. The method utilized in [8] makes extensive use of a rather complicated representation formula for solutions to Eq. (1.1). For details on that representation formula, we refer the reader to [9] and [10] and the references therein.
Let H be a Hilbert space. In this paper, we study a more general equation than Eq. (1.1). Namely, using similar techniques as in [14, 27], we study and obtain some reasonable sufficient conditions which guarantee the existence of almost automorphic solutions to the class of nonautonomous nth-order differential equations (Eq. (1.2)), where A : D(A) ⊂ H → H is a (possibly unbounded) self-adjoint linear operator on H whose spectrum consists of isolated eigenvalues 0 < λ_1 < λ_2 < ... < λ_l → ∞ as l → ∞, with each eigenvalue having a finite multiplicity γ_j equal to the dimension of the corresponding eigenspace; the functions a_k : R → R (k = 0, 1, ..., n − 1) are almost automorphic with inf_{t∈R} a_0(t) = γ_0 > 0; and the function f : R × H → H is Stepanov almost automorphic in the first variable uniformly in the second variable.
1991 Mathematics Subject Classification: 43A60; 34B05; 34C27; 42A75; 47D06; 35L90. Key words and phrases: exponential dichotomy; Acquistapace and Terreni conditions; evolution families; almost automorphic; Stepanov almost automorphic; nonautonomous higher-order differential equation. [EJQTDE, 2010, No. 22, p. 1]
Indeed, assuming that u is n times differentiable and setting z to be the vector built from u and its first n − 1 derivatives, Eq. (1.2) can be rewritten in the Hilbert space X^n in the following form:

(1.4) z′(t) = A(t)z(t) + F(t, z(t)), t ∈ R,

where A(t) is the family of n × n operator matrices defined in (1.5), whose domains D(A(t)) are constant in t ∈ R. Moreover, the semilinear term F appearing in Eq. (1.4) is defined on R × X^n_α for some α ∈ (0, 1), where X^n_α is the real interpolation space between X^n and D(A(t)). Under some reasonable assumptions, it will be shown that the linear operator matrices A(t) satisfy the well-known Acquistapace–Terreni conditions [3], which guarantee the existence of an evolution family U(t, s) associated with them. Moreover, it will be shown that U(t, s) is exponentially stable under those assumptions.
The existence of almost automorphic solutions to higher-order differential equations is important due to their (possible) applications. For instance, when n = 2, we have thermoelastic plate equations [14, 27], the telegraph equation [31], or Sine-Gordon equations [26]. Let us also mention that when n = 2, some contributions on the maximal regularity and on bounded, almost periodic, and asymptotically almost periodic solutions to abstract second-order differential and partial differential equations have recently been made, among them [11], [12], [44], [45], [46], and [47]. In [8], the existence of almost periodic solutions to higher-order differential equations with constant coefficients in the form of Eq. (1.1) was obtained, in particular in the case when the forcing term is almost periodic. However, to the best of our knowledge, the existence of almost automorphic solutions to higher-order nonautonomous equations in the form of Eq. (1.2) in the case when the forcing term is Stepanov almost automorphic is an untreated original question, which in fact constitutes the main motivation of the present paper.
The paper is organized as follows: Section 2 is devoted to preliminary facts needed in the sequel. In particular, facts related to the existence of evolution families as well as preliminary results on intermediate spaces will be stated there. In addition, basic definitions and classical results on (Stepanov) almost automorphic functions are also given. In Sections 3 and 4, we prove the main results. In Section 5, we provide the reader with examples to illustrate our main result.
Preliminaries
Let H be a Hilbert space equipped with the norm ‖·‖ and the inner product ⟨·, ·⟩. In this paper, A : D(A) ⊂ H → H stands for a self-adjoint (possibly unbounded) linear operator on H whose spectrum consists of isolated eigenvalues, with each eigenvalue having a finite multiplicity γ_j equal to the dimension of the corresponding eigenspace. Let {e_j^k} be a (complete) orthonormal sequence of eigenvectors associated with the eigenvalues {λ_j}_{j≥1}.
Clearly, for each u ∈ D(A), Au = Σ_{j≥1} λ_j E_j u, where E_j u = Σ_k ⟨u, e_j^k⟩ e_j^k.
Note that {E_j}_{j≥1} is a sequence of orthogonal projections on H. Moreover, each u ∈ H can be written as u = Σ_{j≥1} E_j u. It should also be mentioned that the operator −A is the infinitesimal generator of an analytic semigroup {T(t)}_{t≥0}, which is explicitly expressed in terms of those orthogonal projections E_j by T(t)u = Σ_{j≥1} e^{−λ_j t} E_j u for all u ∈ H. In addition, the fractional powers A^r (r ≥ 0) of A exist and are given by A^r u = Σ_{j≥1} λ_j^r E_j u. Let (X, ‖·‖) be a Banach space. If L is a linear operator on the Banach space X, then: D(L) stands for its domain; ρ(L) stands for its resolvent set; σ(L) stands for its spectrum; N(L) stands for its null-space or kernel; and R(L) stands for its range. We set Q = I − P for a projection P. If Y, Z are Banach spaces, then the space B(Y, Z) denotes the collection of all bounded linear operators from Y into Z equipped with its natural topology. This is simply denoted by B(Y) when Y = Z.
2.1. Evolution Families. Hypothesis (H.1). The family of closed linear operators A(t) for t ∈ R on X with domain D(A(t)) (possibly not densely defined) satisfies the so-called Acquistapace–Terreni conditions; that is, there exist constants ω ∈ R, θ ∈ (π/2, π), K, L ≥ 0 and μ, ν ∈ (0, 1] with μ + ν > 1 such that the estimates (2.1) and (2.2) hold. Note that in the particular case when A(t) has a constant domain D = D(A(t)), it is well known [6, 38] that Eq. (2.2) can be replaced with the following: there exist constants L and 0 < μ ≤ 1 such that ‖(A(t) − A(s))A(r)^{−1}‖ ≤ L|t − s|^μ for t, s, r ∈ R. It should be mentioned that (H.1) was introduced in the literature by Acquistapace and Terreni in [2, 3] for ω = 0. Among other things, it ensures that there exists a unique evolution family U = U(t, s) on X associated with A(t) satisfying (a) U(t, s)U(s, r) = U(t, r); further smoothing and continuity properties with a constant C depending only on the constants appearing in (H.1); and (e) ∂_s^+ U(t, s)x = −U(t, s)A(s)x for t > s and x ∈ D(A(s)) with A(s)x ∈ D(A(s)). It should also be mentioned that the above-mentioned properties were mainly established in [1, Theorem 2.3] and [49, Theorem 2.1]; see also [3, 48]. In that case we say that A(·) generates the evolution family U(·, ·).
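For the reader's convenience, the two estimates referred to as (2.1) and (2.2) can be written out. The following is our rendering of the standard formulation of the Acquistapace–Terreni conditions (with the constants of (H.1)), not a verbatim copy of the original display:

```latex
\begin{equation}
\Sigma_{\theta}\cup\{0\}\subset\rho\bigl(A(t)-\omega\bigr),\qquad
\bigl\|R\bigl(\lambda,A(t)-\omega\bigr)\bigr\|\le\frac{K}{1+|\lambda|}
\quad\text{for }\lambda\in\Sigma_{\theta}\cup\{0\},\ t\in\mathbb{R},
\end{equation}
\begin{equation}
\bigl\|\bigl(A(t)-\omega\bigr)R\bigl(\lambda,A(t)-\omega\bigr)
\bigl[R\bigl(\omega,A(t)\bigr)-R\bigl(\omega,A(s)\bigr)\bigr]\bigr\|
\le L\,|t-s|^{\mu}\,|\lambda|^{-\nu}
\quad\text{for }t,s\in\mathbb{R},\ \lambda\in\Sigma_{\theta},
\end{equation}
```

where Σ_θ := {λ ∈ C \ {0} : |arg λ| ≤ θ}.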
One says that an evolution family U has an exponential dichotomy (or is hyperbolic) if there are projections P(t) (t ∈ R) that are uniformly bounded and strongly continuous in t, and constants δ > 0 and N ≥ 1, such that (f) U(t, s)P(s) = P(t)U(t, s), and (g) ‖U(t, s)P(s)‖ ≤ N e^{−δ(t−s)} for t ≥ s and t, s ∈ R. According to [40], the following conditions are sufficient for A(t) to have exponential dichotomy.
(i) Let (A(t), D(t))_{t∈R} be generators of analytic semigroups on X of the same type, and suppose that the semigroups (e^{τA(t)})_{τ≥0}, t ∈ R, are hyperbolic with projection P_t and constants N, δ > 0. This setting requires some estimates related to U(t, s). For that, we introduce the interpolation spaces for A(t). We refer the reader to the excellent books [6], [23], and [29] for proofs and further information on these interpolation spaces.
Let A be a sectorial operator on X (for that, in assumption (H.1), replace A(t) with A) and let α ∈ (0, 1). Define the real interpolation space X_α^A, which is a Banach space when endowed with its natural norm. For convenience, we further write X_0^A := X and X_1^A := D(A). Moreover, let X̂^A denote the closure of D(A) in X. In particular, the continuous embedding (2.4) X_β^A ⊂ X_α^A holds for all 0 < α < β < 1, where the fractional powers are defined in the usual way.
In general, D(A) is not dense in the spaces X_α^A and X; however, the continuous injection (2.5) holds. Given the family of linear operators A(t) for t ∈ R satisfying (H.1), we set X_α^t := X_α^{A(t)} and X̂^t := X̂^{A(t)} for 0 ≤ α ≤ 1 and t ∈ R, with the corresponding norms. Then the embedding in Eq. (2.4) holds with constants independent of t ∈ R. These interpolation spaces are of class J_α ([29, Definition 1.1.1]) and hence there is a constant c(α) such that ‖y‖_α^t ≤ c(α) ‖y‖^{1−α} ‖A(t)y‖^α for y ∈ D(A(t)). We have the following fundamental estimates for the evolution family U.
[14] For x ∈ X, 0 ≤ α ≤ 1 and t > s, the following hold: (ii) there is a constant m(α) such that the corresponding smoothing estimate for ‖U(t, s)P(s)x‖_α^t holds. In addition to the above, we also need the following assumption. Hypothesis (H.2). The evolution family U generated by A(·) has an exponential dichotomy with constants N, δ > 0 and dichotomy projections P(t) for t ∈ R.
2.2. Stepanov Almost Automorphic Functions. Let (X, ‖·‖) and (Y, ‖·‖_Y) be two Banach spaces. Let BC(R, X) (respectively, BC(R × Y, X)) denote the collection of all X-valued bounded continuous functions (respectively, the class of jointly bounded continuous functions F : R × Y → X). The space BC(R, X) is a Banach space when equipped with the sup norm.
Definition 2.5. Let p ∈ [1, ∞). The space BS^p(X) of all Stepanov bounded functions, with exponent p, consists of all measurable functions f : R → X such that the Bochner transform f^b belongs to L^∞(R; L^p((0, 1), X)). This is a Banach space with the norm ‖f‖_{S^p} := ‖f^b‖_{L^∞(R; L^p)}. The collection of all almost automorphic functions from R to X will be denoted by AA(X).
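The Bochner transform f^b and the Stepanov norm used in Definition 2.5 are the classical ones; written out (our rendering of the standard definitions), they read:

```latex
\begin{equation}
f^{b}(t,s) := f(t+s), \qquad t\in\mathbb{R},\ s\in(0,1),
\end{equation}
\begin{equation}
\|f\|_{S^{p}} := \sup_{t\in\mathbb{R}}
\left(\int_{t}^{t+1}\|f(\tau)\|^{p}\,d\tau\right)^{1/p}.
\end{equation}
```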
Similarly, the collection of all almost automorphic functions from R × Y to X will be denoted by AA(R × Y).
We have the following composition result: under the relevant hypotheses, the function defined by Γ(·) := F(·, u(·)) is almost automorphic. We also have the following composition result, which is a straightforward consequence of the composition of pseudo almost automorphic functions obtained in [43].
Theorem 2.9. [43] We will denote by AA_u(X) the closed subspace of all functions f ∈ AA(X) with g ∈ C(R, X). Equivalently, f ∈ AA_u(X) if and only if f is almost automorphic and the convergence in Definition 2.7 is uniform on compact intervals, i.e., in the Fréchet space C(R, X). Indeed, if f is almost automorphic, then its range is relatively compact. Obviously, the following inclusions hold: AP(X) ⊂ AA_u(X) ⊂ AA(X) ⊂ BC(R, X), where AP(X) is the Banach space of almost periodic functions from R to X.

Definition 2.10. [36] The space AS^p(X) of Stepanov almost automorphic (or S^p-almost automorphic) functions consists of all f ∈ BS^p(X) such that f^b ∈ AA(L^p(0, 1; X)). That is, a function f ∈ L^p_loc(R; X) is said to be S^p-almost automorphic if its Bochner transform f^b : R → L^p(0, 1; X) is almost automorphic in the sense that for every sequence of real numbers (s′_n)_{n∈N}, there exist a subsequence (s_n)_{n∈N} and a function g ∈ L^p_loc(R; X) such that

(∫_t^{t+1} ‖f(s_n + s) − g(s)‖^p ds)^{1/p} → 0 and (∫_t^{t+1} ‖g(s − s_n) − f(s)‖^p ds)^{1/p} → 0

as n → ∞, pointwise on R. The collection of those S^p-almost automorphic functions F : R × Y → X will be denoted by AS^p(R × Y).
We have the following straightforward composition theorems, which generalize Theorem 2.8 and Theorem 2.9, respectively. Theorem 2.13. Let F : R × Y → X be an S^p-almost automorphic function. Suppose that u → F(t, u) is Lipschitz in the sense that there exists L ≥ 0 such that ‖F(t, u) − F(t, v)‖ ≤ L‖u − v‖_Y for all u, v ∈ Y and t ∈ R. If g ∈ AS^p(Y), then Γ : R → X defined by Γ(·) := F(·, g(·)) belongs to AS^p(X). Theorem 2.14. Let F : R × Y → X be an S^p-almost automorphic function. Suppose that F(t, u) is uniformly continuous on every bounded subset K ⊂ Y uniformly for t ∈ R. If g ∈ AS^p(Y), then Γ : R → X defined by Γ(·) := F(·, g(·)) belongs to AS^p(X).
Main results
Consider the nonautonomous differential equation (3.1), where F : R × X_α → X is S^p-almost automorphic. Definition 3.1. A function u : R → X_α is said to be a bounded solution to Eq. (3.1) provided that it satisfies the corresponding integral identity for all t ∈ R. Throughout the rest of the paper, we set S_1 u(t) := S_{11} u(t) − S_{12} u(t) for all t ∈ R.
To study Eq. (3.1), in addition to the previous assumptions, we require that p > 1, 1/p + 1/q = 1, and that the following assumptions hold. Moreover, F is Lipschitz in the following sense: there exists L > 0 for which ‖F(t, u) − F(t, v)‖ ≤ L‖u − v‖_α for all u, v ∈ X_α and t ∈ R. Under these assumptions, the integral operator S_1 defined above maps AA(X_α) into itself.
Define, for all n = 1, 2, ..., the sequence of integral operators Φ_n; from Hölder's inequality and the estimate in Eq. (2.7), the required bound follows. Using Eq. (3.3), we then deduce from the Weierstrass test that the corresponding series is uniformly convergent on R. Moreover, D ∈ C(R, X_α) for all t ∈ R.
Let us show that Φ_n ∈ AA(X_α) for each n = 1, 2, 3, .... Indeed, since ϕ ∈ AS^p(X_β) ⊂ AS^p(X_α), for every sequence of real numbers (τ′_n)_{n∈N} there exist a subsequence (τ_{n_k})_{k∈N} and a limit function with the required properties. Define, for all n = 1, 2, 3, ..., the corresponding sequence of integral operators. Using the Lebesgue Dominated Convergence Theorem, one can easily see that the desired convergence holds; a similar argument, using [15], applies to the remaining terms. In view of the above, it follows that S_1 u ∈ AA(X_α).
Lemma 3.3. The integral operator S_1 defined above is a contraction whenever L is small enough.
Proof. Let v, w ∈ AA(X_α). Estimating S_{11}v − S_{11}w and, similarly, S_{12}v − S_{12}w, one obtains a bound proportional to L‖v − w‖; consequently, S_1 is a contraction whenever L is small enough. The proof of the main result then makes use of Lemma 3.2, Lemma 3.3, and the Banach fixed-point principle.
Almost Automorphic Solutions to Some Higher-Order Differential Equations
We have previously seen that each u ∈ H can be written in terms of the sequence of orthogonal projections E_n. Write A_l(t)P_l z for the blocks of A(t). From Eq. (1.3) it easily follows that there exists ω ∈ (π/2, π) such that the required sector condition holds. On the other hand, one can show without difficulty that A_l(t) = K_l^{−1}(t) J_l(t) K_l(t), where J_l(t) and K_l(t) are given explicitly. For λ ∈ S_ω and z ∈ X, the resolvent can then be estimated accordingly. Moreover, there exists C_1 > 0 such that ‖K_l(t)P_l z‖ ≤ C_1 d_l^n(t) ‖z‖ for all l ≥ 1 and t ∈ R.
Using induction, one can compute K_l^{−1}(t) and establish an analogous bound for all l ≥ 1 and t ∈ R.
Now, for z ∈ X, let λ_0 > 0 and define the function η. It is clear that η is continuous and bounded on the closed set {λ ∈ S_ω : |λ| ≤ λ_0}; on the other hand, η is bounded for |λ| > λ_0. Thus η is bounded on S_ω. Therefore, a uniform resolvent estimate holds for all t ∈ R. Hence, for t, s, r ∈ R, computing (A(t) − A(s))A(r)^{−1} and assuming that there exist L_k ≥ 0 (k = 0, 1, 2, ..., n − 1) and μ ∈ (0, 1] such that |a_k(t) − a_k(s)| ≤ L_k |t − s|^μ, it easily follows that there exists C > 0 such that ‖(A(t) − A(s))A(r)^{−1} z‖ ≤ C|t − s|^μ ‖z‖.
In summary, the family of operators (A(t))_{t∈R} satisfies the Acquistapace–Terreni conditions. Consequently, there exists an evolution family U(t, s) associated with it. Let us now check that U(t, s) has exponential dichotomy; for that, we will have to check that (i)–(j) hold. First of all, note that for every t ∈ R, the family of linear operators A(t) generates an analytic semigroup (e^{τA(t)})_{τ≥0} on X. On the other hand, using the continuity of a_k (k = 0, ..., n − 1) and the resolvent identity, it follows that the mapping J ∋ t → R(λ, A(t)) is strongly continuous for λ ∈ S_ω, where J ⊂ R is an arbitrary compact interval.
For each t ∈ R, one can easily see that the required limits hold in the topology of B(X), and hence t → A^{−1}(t) is almost automorphic with respect to the operator topology.
It is now clear that if f satisfies (H.5) and if L is small enough, then the higher-order differential equation Eq. (1.4) has an almost automorphic solution. Therefore, if f = f_1 + f_2 satisfies (H.5) and if the Lipschitz constant of f_1 is small enough, then Eq. (1.2) has at least one almost automorphic solution u ∈ H_α.
Examples of Second-Order Boundary Value Problems
In this section, we provide the reader with a few illustrative examples. Precisely, we study the existence of almost automorphic solutions to modified versions of the so-called (nonautonomous) Sine-Gordon equations (see [26]).
In this section, we take n = 2 and suppose that a_0 and a_1, in addition to being almost automorphic, satisfy the other previous assumptions; moreover, we fix α ∈ (0, 1) as before. Precisely, we are interested in the following system of second-order partial differential equations, where a_1, a_0 : R × J → R are almost automorphic positive functions and Q : R × J × L^2(J) → L^2(J) is S^p-almost automorphic for p > 1.
Let us take Av = −v′′ for all v ∈ D(A) = H^1_0(J) ∩ H^2(J), and suppose that Q : R × J × L^2(J) → H^β_0(J) is S^p-almost automorphic in t ∈ R uniformly in x ∈ J and u ∈ L^2(J). Moreover, Q is Lipschitz in the following sense: there exists L′′ > 0 for which ‖Q(t, x, u) − Q(t, x, v)‖ ≤ L′′‖u − v‖ for all u, v ∈ L^2(J), x ∈ J, and t ∈ R. Consequently, the system Eq. (5.3)–Eq. (5.4) has a unique solution u ∈ AA(H^1_0(J)) when L′′ is small enough.
A Slightly Modified Version of the Nonautonomous Sine-Gordon Equations
Let Ω ⊂ R^N (N ≥ 1) be an open bounded subset with C^2 boundary Γ = ∂Ω, and let H = L^2(Ω) be equipped with its natural topology ‖·‖_{L^2(Ω)}. Here, we are interested in a slightly modified version of the nonautonomous Sine-Gordon equation studied in the previous example, that is, the system of second-order partial differential equations (5.5)–(5.6). Therefore, the system Eq. (5.5)–Eq. (5.6) has a unique solution u ∈ AA(H^1_0(Ω)) when L′′′ is small enough.
5.1. Nonautonomous Sine-Gordon Equations. Let L > 0 and let J = (0, L). Let H = L^2(J) be equipped with its natural topology. Our main objective here is to study the existence of almost automorphic solutions to a slightly modified version of the so-called Sine-Gordon equation with Dirichlet boundary conditions, which had been studied in the literature especially by Leiva [26], in the following form:

∂²u/∂t² + c ∂u/∂t − d ∂²u/∂x² + k sin u = p(t, x), t ∈ R, x ∈ J (5.1)
u(t, 0) = u(t, L) = 0, t ∈ R (5.2)

where c, d, k are positive constants and p : R × J → R is continuous and bounded.
Definition 2.7. (Bochner) A function F ∈ C(R × Y, X) is said to be almost automorphic if for every sequence of real numbers (s′_n)_{n∈N}, there exists a subsequence (s_n)_{n∈N} such that G(t, u) := lim_{n→∞} F(t + s_n, u) is well defined for each t ∈ R and u ∈ Y, and lim_{n→∞} G(t − s_n, u) = F(t, u) for each t ∈ R and u ∈ Y. Therefore the sequence Φ_n belongs to AA(X_α) for each n = 1, 2, ..., and hence D ∈ AA(X_α). Consequently, t → S_{11}(t) belongs to AA(X_α). The proof for t → S_{12}(t) is similar to that of t → S_{11}(t) and is hence omitted.
"Mathematics"
] |
Spatial Niche Facilitates Clonal Reproduction in Seed Plants under Temporal Disturbance
The evolutionary origins and advantages of clonal reproduction relative to sexual reproduction have been discussed for several taxonomic groups. In particular, organisms with a sessile lifestyle are often exposed to spatial and temporal environmental fluctuations. Thus, clonal propagation may be advantageous in such fluctuating environments, for sessile species that can reproduce both sexually and clonally. Here we introduce the concept of niche to a lattice space that changes spatially and temporally, by incorporating the compatibility between the characteristics of a sessile clonal plant with its habitat into a spatially explicit individual-based model. We evaluate the impact of spatially and temporally heterogeneous environments on the evolution of reproductive strategies: the optimal balance between seed and clonal reproduction of a clonal plant. The spatial niche case with local habitats led to avoidance of specialization in reproductive strategy, whereas stable environments or intensive environmental change tended to result in specialization in either clonal or seed reproduction under neutral conditions. Furthermore, an increase in spatial niches made clonal reproduction advantageous, as a consequence of competition among several genets under disturbed conditions, because a ramet reached a favorable habitat through a rare long-distance dispersal event via seed production. Thus, the existence of spatial niches could explain the advantages of clonal propagation.
Introduction
Clonal reproduction is a universal mode of reproduction used by a broad range of terrestrial organisms [1][2][3]. This reproductive mode is described as the asexual way of propagating, and is often compared with sexual reproduction. Both sexual and asexual modes of reproduction have their respective benefits: the former produces genetically diverse individuals via genomic recombination, while the latter produces offspring without the need for a mating partner [2,4]. The evolution and maintenance of sexuality has long been the subject of debate about its relative costs and benefits [5,6]. Several hypotheses have been proposed, such as Muller's ratchet [7] and the deterministic mutation hypothesis [8], which suggest that sexuality can remove harmful genes, and the Red Queen hypothesis [9], which suggests that sexuality enables species to escape from infectious diseases by virtue of their genetic diversity. Despite the importance of the question, there have been few studies testing these hypotheses that use experimental approaches [10][11][12], so these hypotheses are still competing with one another.
Many taxonomic groups include species that reproduce both sexually and asexually, and their modes of propagation are tightly connected to the dispersal of their offspring. For example, several seed plants (spermatophytes) produce not only seeds but also clonal offspring from vegetative organs. New colonies of corals such as Plexaura kuna and Montastraea annularis are founded either clonally by fragments of colonies, or by offspring from egg spawning (inseminated gametes) [13,14]. In the case of ant species such as Wasmannia auropunctata, Vollenhovia emeryi, and Paratrechina longicornis, colonies expand to neighboring areas by means of asexually produced queens and nest budding, while workers are sexually produced and are therefore genetically diverse [15][16][17]. Asexually produced clonal offspring generally disperse to closer places than sexually produced ones. Despite the absence of genetic variation and the limited migration distance, clonal reproduction has continued successfully in combination with sexual reproduction in many species of sessile organisms.
Here we focus on clonal reproduction in seed plants. Clonality has evolved independently several times and has remained a dominant trait in various phylogenetic lineages [18,19]. Actually, 70-80% of herbaceous plants in the temperate zone have multiple reproductive modes [18,20]. On account of their rooted lifestyles, clonal offspring grow around their parent plants [4,21,22]. Consequently, genetically identical but phenotypically independent individuals (called ramets) of various ages are clustered and live together (this unit is called a genet) in the same space for a long time in a population. It is therefore natural that they experience not only various environmental changes and/or attacks by herbivores and pathogens [23][24][25] but also demographic changes of the species during their lives [26,27]. Sexual reproduction works well against unpredictable environmental fluctuation by providing long dispersal distances and genetic diversity [28,29]. It is thus still an unanswered question why clonal plants have evolved and what mechanisms work to maintain clonal reproduction under such conditions. Competition among generations can be understood as an issue affecting the evolution of dispersal strategies: seed reproduction is the long-distance dispersal strategy, and clonal reproduction is the short-distance one. Hamilton and May [30] demonstrated that the long-distance dispersal of newborn offspring at a certain rate was an evolutionarily stable strategy (ESS) even if there was no competition between a parent and its offspring. Furthermore, a small disturbance of habitat makes short-distance dispersal advantageous, whereas a large disturbance makes long-distance dispersal advantageous, if resource allocation to each dispersal strategy is fixed [31]. Nakamaru et al. 
[32] demonstrated, using the colony-based lattice model, that a disturbance affecting a large area of habitat and occurring at high frequency favored a long-distance dispersal strategy, whereas a disturbance causing damage within a small area at low frequency made short-distance dispersal more advantageous. Regarding the dispersal of offspring, their model framework is applicable to seed plants because the mode of offspring dispersal is similar to that in ant colonies; the long- and short-distance dispersal strategies correspond to clonal and sexual reproduction of plants [31,33], and colony size correlates with plant size. On the other hand, the impact of spatial heterogeneity of habitat on dispersal strategy is completely different, because seeds of plants do not choose the place where they germinate, but land there by chance, unlike animals, who can choose their habitat by moving. In fact, while animals can move to favorable habitats, the movement of sessile organisms is restricted within a limited distance and depends on other mediators. Thus, both spatially and temporally, environmental heterogeneity should be an important key to the evolutionary processes behind the development of the reproductive strategies of seed plants.
To investigate the direction of selective pressures on the reproductive strategy of sessile organisms, we have developed a lattice model that takes into account the spatial niche effect and temporal disturbance. In particular, we examine whether clonal reproduction is as effective as that via seeds in seed plants, without considering physiological integration and division of labor. Clonal reproduction should be a reasonable strategy if the habitat is constant, because genets would be spared the cost of unifying the connected organs. On the other hand, clonal reproduction causes intra-genet competition if each ramet interferes with the other ramets for resources [26,33], which also influences inter-genet competition [34,35]. We define ''spatial niche'' as spatial habitat heterogeneity, and environmental change of a habitat as equivalent to temporal heterogeneity. Then, because we suppose the phenotype is genetically fixed in each individual, an individual plant with the optimal genotype colonizes a certain niche (accordingly, we refer to the case with no niche concept linking habitat heterogeneity and plant phenotype as the ''neutral case''). We evaluate the effect of the spatial niche itself on the evolution of reproductive strategies by including and excluding this effect and comparing the results.
Simulation Framework
The model is a spatially explicit individual-based (SEIB) model in which each individual grows on a lattice space arranged in a torus form. Each lattice site is empty or occupied by a single plant. We model growth, reproduction, dispersal, competition, and death as life history events, and disturbance as a stochastic one. The plant species in our model are assumed to be perennials that perform clonal and seed reproduction, and all events occur in an annual step, as illustrated in Fig. 1. Several life history traits and environmental characters are described by model parameters, which are summarized in Table 1. Plants can propagate after they reach the age of maturity, and they produce offspring by clonal reproduction with probability P, or via seeds with probability (1 − P). If a plant chooses clonal reproduction, an offspring can occupy one of the eight neighboring lattice sites around its parent (Moore neighborhood), with probability P/8, contingent upon the cell being empty. If a plant chooses seed reproduction, a parent plant produces N seeds, and all seeds from all plants in the lattice collect in the same seed pool. Occupation of a vacant patch by a clonal offspring, next to its parent ramet, occurs first, after which the residual empty sites are available to seeds, which can reach every vacant site. It takes M_c years and M_s years for the clonal offspring and seeds, respectively, to mature. In accordance with the hypothesis that abundant resource translocation is an important advantage for clonal offspring in the initial growth stage, we assume that clonal offspring reach maturity faster than seed offspring (M_s > M_c). We assume the number of seeds (N) to be constant per individual and, for simplicity, ignore the gradient in seed density related to dispersal distance from the parent. Several seeds can settle into the same lattice site, and then the competition among them selects the fittest one (the way in which competition operates will be described later).
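The annual reproduction step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data layout, helper names, and random draws are our own assumptions, maturity ages (M_c, M_s) are ignored, and per-site seed competition is simplified to a random draw from the pool.

```python
import random

SIZE = 10  # lattice side length (toy value; the paper's simulations use a 100 x 100 torus)

def moore_neighbors(i, j, size):
    """The eight neighboring sites of (i, j) on a torus-shaped lattice."""
    return [((i + di) % size, (j + dj) % size)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

def reproduce(grid, plants, n_seeds, rng):
    """One annual reproduction step. With probability P a parent reproduces
    clonally into a random empty Moore neighbor; otherwise it contributes
    n_seeds seeds to a common pool. Clonal colonization happens first, then
    seeds from the pool may land on any residual vacant site."""
    seed_pool = []
    for (i, j), parent in plants.items():
        if rng.random() < parent["P"]:                      # clonal reproduction
            empties = [(a, b) for (a, b) in moore_neighbors(i, j, SIZE)
                       if grid[a][b] is None]
            if empties:
                a, b = rng.choice(empties)
                grid[a][b] = {"P": parent["P"], "Q": parent["Q"], "age": 0}
        else:                                               # seed reproduction
            seed_pool += [{"P": parent["P"], "Q": parent["Q"]}
                          for _ in range(n_seeds)]
    # seeds can reach every vacant site left after clonal colonization
    for a in range(SIZE):
        for b in range(SIZE):
            if grid[a][b] is None and seed_pool:
                grid[a][b] = dict(rng.choice(seed_pool), age=0)
```

A design note on the simplification: the paper assigns probability P/8 to each neighboring cell independently, whereas this sketch draws one clonal event with probability P and then picks an empty neighbor uniformly; the two differ slightly when several neighbors are occupied.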
Environmental heterogeneity as spatial niche
The predicted death rate in our model consists of two components: one is the basal death rate determined for the species, and the other is the additive probability of death depending on the compatibility with the growth environment (niche). Here we generate several habitat environments by dividing the lattice space into k areas. As shown in Fig. 2 (a), each area is assigned environmental conditions associated with a particular habitat. The boundary of each habitat is contiguous with its neighboring habitats, so that clonal offspring of a mature plant inhabiting the edge of a certain habitat can colonize the edge of another adjacent habitat. The disturbance in our model changes aspects of the habitat environment, such as soil moisture and/or light intensity. This is represented by the value of the environmental condition of that habitat changing from E_t to E_{t+1} with an associated probability of p. If a habitat is disturbed at time t, the value E_{t+1} is taken from the Gaussian distribution with mean E_t and variance q, where l represents a certain habitat (1 ≤ l ≤ k) and the value of E lies between zero and one.

Table 1. Parameters in the model.
P_ij: the reproductive strategy of a plant at site (i, j) (100% clonal reproduction if P = 1, and 100% seed reproduction if P = 0).
Q_ij: the trait suitable for the habitat of a plant at site (i, j).
m: the mutation rate for the genotypes P and Q.
N: the number of seeds produced by a parent at every opportunity for seed production.
E_ij(t): the habitat characteristics at site (i, j) at time t.
doi:10.1371/journal.pone.0116111.t001

A change in the value of a habitat affects the plant death rate indirectly via the change in habitat condition, so that the magnitude of environmental change q influences the plant death rate. The environmental condition of a site (E) and the genotype of an individual (Q) inhabiting that site determine the death probability of the individual, as shown in Fig. 2 (b). Both variables are continuous numerical values between zero and one, and the difference between a plant and its habitat results in an additive probability of death, where i and j represent the position on the lattice, m_ij is the death probability of a plant living at site (i, j), and d_min represents the basal death rate. Each plant survives every year with probability 1 − m_ij. We conduct the simulation under several changes in environmental conditions, altering the frequency (p) and the magnitude (q) of environmental change, with several levels of environmental heterogeneity (number of different habitats, k).
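A sketch of the disturbance update and the resulting death probability. The text does not reproduce the exact functional form linking the plant-habitat mismatch to the additive death term, so the absolute difference |E − Q| used below is an assumed, illustrative choice; the clipping of E to [0, 1] follows the stated range of the environmental condition.

```python
import random

def disturb(E, p, q, rng):
    """With probability p, redraw the habitat condition from a Gaussian with
    mean E and variance q (standard deviation sqrt(q)), keeping E in [0, 1]."""
    if rng.random() < p:
        E = min(1.0, max(0.0, rng.gauss(E, q ** 0.5)))
    return E

def death_probability(E, Q, d_min):
    """Basal death rate d_min plus an additive mismatch term. The |E - Q|
    form is an assumption for illustration; the result is capped at 1 so it
    remains a valid probability."""
    return min(1.0, d_min + abs(E - Q))
```

With this form, a perfectly matched plant (Q = E) dies at the basal rate d_min, and the death probability grows linearly with the mismatch.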
Mutation of plant traits and reproductive strategy
A plant has two heritable traits: one is the reproductive strategy P, and the other is the trait of suitability for the habitat, Q. Each of these traits is represented as a numerical value between zero and one. Mutation of the traits is expressed as changes in these values in seeds. Here, genetic recombination via sexual reproduction is simply expressed as mutation, in order to focus on the difference from clonal reproduction, which produces no genetic variation. In the same way as for E, the genetic traits of the next generation produced via seed reproduction are taken from the Gaussian distribution, X′_ij ~ g(X_ij, m), where g denotes the Gaussian distribution and X_ij, X′_ij, and m represent the traits of the parent generation (X ∈ {P, Q}), those of the next generation, and the mutation rate, respectively. Each trait undergoes mutation independently. Depending on the difference between E_i′j′ and Q′_i′j′, the best-fit seed for a habitat can be determined if several seeds drop into the same site (i′, j′). When this occurs, the offspring with the lowest death probability will survive.
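The mutation rule and the per-site seed competition can be sketched as follows. Interpreting the mutation rate m as the variance of the Gaussian perturbation, and ranking competing seeds by the mismatch |E − Q|, are our readings of the description above; the clipping keeps traits in the stated [0, 1] range.

```python
import random

def mutate(x, m, rng):
    """Offspring trait drawn from a Gaussian centered on the parental value x
    with variance m (the mutation rate), clipped to the valid range [0, 1].
    P and Q are mutated independently by separate calls."""
    return min(1.0, max(0.0, rng.gauss(x, m ** 0.5)))

def best_fit_seed(seeds, E):
    """Among several seeds landing on the same site, keep the one whose trait
    Q best matches the local condition E, i.e. the lowest death probability
    under a mismatch term that grows with |E - Q| (an assumed form)."""
    return min(seeds, key=lambda s: abs(E - s["Q"]))
```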
Neutral environment as control
We also model the case in which the habitat has no spatial heterogeneity. Our object in this study is to reveal the impact of considering the effect of the spatial niche on the evolution of clonal reproduction. Comparing the spatial niche and neutral cases highlights the effect of inter-genet competition, because in the neutral case the habitat compatibility phenotype is meaningless, i.e. inter-genet competition can be ignored. The neutral situation, in which all genotypes have an equal ability to grow, reproduce and survive, is important, together with the number of niches, for evaluating the effect of habitat heterogeneity. Thus, we remove the additional probability of death caused by the difference between habitat and plant traits. Therefore, the baseline death rate is held as d min for all plants. On the other hand, the effect of environmental change on the reproductive strategy should still be acting. An increase in death rate caused by environmental change occurs according to the age of each plant. A recent arrival suffers no increase in probability of death, but an old individual in the same habitat does have an increased probability of death, derived from the change in compatibility between the habitat and the plant's traits following environmental change.
Simulation Settings
We ran the simulation under several spatially and temporally heterogeneous conditions. The variables used in the simulations are described in Table 2. We generated different environments by dividing the total lattice space (100 × 100 square sites) into k habitat areas. The number of seeds produced by a parent was set as 100 (N = 100). The initial plant traits were randomly chosen from a uniform distribution independently of habitat condition, and the initial population covered 90% of the total sites on the lattice. One hundred simulations were conducted, and each simulation was run for 10,000 years. After running the simulation, the reproductive strategy (P) was collected from the remaining plants, and then its frequency distribution was calculated. The mutation rate m was fixed at 0.01 throughout this study.
In this simulation setting, the habitat space was finite, so an increase in the number of habitats, i.e. an increase in environmental heterogeneity within the total lattice space, resulted in a decrease in the space occupied by each habitat. In contrast, the number of potential seed reproduction events increased as the heterogeneity increased. Thus, we also examined the case in which the habitat size was fixed at 20×20 square sites but the heterogeneity (k) differed, which meant that the total lattice space became larger as the habitat heterogeneity increased. Concretely, the total lattice is 40×40 square sites when k = 4, and 100×100 square sites when k = 25.
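The relation between k and the total lattice size in the fixed-habitat-size runs is simple arithmetic; a short sketch (ours, not the authors') reproduces the figures quoted above.

```python
import math

HABITAT_SIDE = 20  # each habitat is a 20 x 20 block of sites

def total_lattice_side(k, habitat_side=HABITAT_SIDE):
    """Side length of the total square lattice when k equally sized square
    habitats tile it: the side grows with the square root of k."""
    root = math.isqrt(k)
    assert root * root == k, "k must be a perfect square to tile a square lattice"
    return habitat_side * root

# k = 4 gives a 40 x 40 lattice; k = 25 gives 100 x 100, matching the text.
```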
Effects of temporal heterogeneity of environment
First, we demonstrated how temporal heterogeneity affected the evolution of the reproductive strategy by varying the frequency and magnitude of environmental change while spatial heterogeneity was fixed at k = 16. Fig. 3 shows the change in the frequency distribution of the reproductive strategy (P) depending on the values of p and q. As suggested in previous studies, environmental change favored seed reproduction. In contrast, clonal reproduction became advantageous if the habitat environment was stable but empty spaces remained available for long-distance dispersal, as Hamilton and May [30] indicated. The width of the frequency distribution differed depending on the environmental condition of a habitat, and it increased as environmental change occurred more intensively. The balance between the advantage gained by rapid spread through seed dispersal into new habitats following environmental change and the advantage of strong clonal propagation with a trait suited to its growth habitat determined the shape of the distribution. In an intensively disturbed environment, both modes of reproduction had beneficial effects on the spread of the population. We also checked the effect of changing N (number of seeds) and M_c (age of maturity for clonal offspring). An increase in N shifted the frequency distribution towards seed reproduction on the whole, while an increase in M_c shifted it towards clonal reproduction on the whole.

Table 2. Variables in the model.
p: The frequency of environmental change.
q: The magnitude of environmental change.
k: The number of different habitats (environmental heterogeneity) within the total lattice space.
Effect of spatial niche compared with the neutral condition
Fig. 4 shows how the reproductive strategies responded to environmental change of a habitat with the same settings as in Fig. 3 (k = 16) in the spatial niche case (white) and the neutral case (grey). When the frequency of environmental change was low (p = 0.01, Fig. 4 (b)), reproductive strategies shifted from clonal reproduction toward seed reproduction as the magnitude of the change increased in both cases. The neutral environment favored clonal reproduction more than the environment with habitat heterogeneity under all environmental change conditions. When the frequency of environmental change was high (p = 0.1, Fig. 4 (a)), the strategy became extreme: clonal reproduction became more advantageous in the absence of environmental change (q = 0.0), whereas seed reproduction was more favorable with intense change (q = 0.1). The reason for this is that environmental heterogeneity worked as a barrier that restricted a genet from moving into its suitable habitat. A high frequency of environmental change caused many empty sites throughout the lattice, so that long-distance dispersers were more successful than short-distance dispersers in both cases. On the other hand, a high magnitude of disturbance killed many individuals at the same time and provided a good opportunity for a clonal offspring if its parent had survived, so that clonal reproduction had a greater effect in spreading the population after environmental change.
Effect of spatial heterogeneity of environment
We demonstrated the effect of spatial heterogeneity (k) on reproductive strategy (Fig. 5). Here we show the case in which the magnitude of environmental change was large (q = 0.1), because in this case the effect of environmental change was clear, as shown in Fig. 4. Habitat heterogeneity made seed production advantageous in the absence of environmental change (p = 0.0, q = 0.0, Fig. 5 (c)), and environmental change made seed production advantageous regardless of habitat heterogeneity, but in the spatial niche case the impact differed depending on the degree of environmental change. Increasing habitat heterogeneity tended to make clonal reproduction more advantageous with intermediate environmental change (p = 0.01, q = 0.1, Fig. 5 (b)), whereas intensive environmental change made the difference between heterogeneities unclear (p = 0.1, q = 0.1, Fig. 5 (a)). In contrast to the spatial niche case, there was no great distinction among habitat heterogeneities in the neutral case (right column in Fig. 5). Next, we demonstrated the effect of spatial heterogeneity (k) on reproductive strategy excluding the effect of differences in lattice size within a habitat. The effect of intra-genet competition was identical among habitats, but the opportunity for seed colonization was lower than when total lattice space was fixed. There was no clear difference in reproductive strategy under a stable environment (Fig. 6 (c)), but lower habitat diversity favored seed reproduction in the case of intermediate environmental change (Fig. 6 (b)). Also, there was no clear difference in reproductive strategies among habitat conditions with intensive environmental change (Fig. 6 (a)) or in the case that the total lattice size was fixed (Fig. 5 (a)). Environmental heterogeneity in the neutral case did not cause any difference in reproductive strategy, whether the habitat size was fixed (results not shown) or the total lattice was fixed (Fig. 5 (a-c)).
It was quite natural that clonal propagation should be unfavorable when the size of each habitat was small. The number of empty sites should increase if the total habitat area is enlarged, so the possibility for seeds to settle into vacant sites will also increase. It should be noted that an increase in habitat heterogeneity drove the reproductive strategy toward seed reproduction in both the case of fixed habitat size and the case of fixed total lattice size.
Comparison between spatial niche and neutral models
This study demonstrates that the presence of spatial niches alters the impact of environmental change on the habitat condition, relative to the neutral case (Fig. 4). It shows that the effect of environmental change on reproductive strategy is almost the same in both the spatial niche and the neutral cases, meaning that long-distance seed dispersal is effective under a highly changed environment (Fig. 3), as previous studies have concluded [31,36,37]. In the spatial niche case, however, a high frequency of environmental change makes clonal reproduction more advantageous than in the neutral case. In other words, compatibility with the habitat makes selective pressure favor clonal reproduction (Fig. 4). Furthermore, regarding the direction of selective pressure, the long-distance dispersal strategy is more advantageous under low frequency and large environmental change in the spatially heterogeneous condition than in the neutral condition (Fig. 4). This implies that inter-genet competition tends to favor the long-distance dispersal strategy under spatial heterogeneity.
Natural habitats are never homogeneous spatially or temporally [38,39], although the results of previous studies are consistent with the neutral-case results of this study. On the forest floor, for example, light intensity and soil moisture change with time due to the regeneration of trees [40,41]. The advantages of clonal reproduction under conditions of disturbance become apparent when habitat heterogeneity, and the match between it and the phenotype of each individual, are considered. As this result suggests, habitat heterogeneity as it relates to the fitness of an individual has a great impact on life history strategy and/or biodiversity. However, there are almost no studies that incorporate the spatial niche effect, except that of Tubay et al. [42], who investigated the biodiversity of phytoplankton in an aquatic ecosystem. It would therefore be useful to investigate biodiversity in ecosystems with spatial heterogeneity, as opposed to uniform (neutral) ecosystems [43].
Effects of spatial and temporal variation on reproductive strategy
The results obtained here under various spatial niche environments (Figs. 5, 6) reveal the evolutionary effects of intra-genet competition. When no environmental change occurs, seed (i.e., sexual) reproduction becomes more advantageous, because it avoids intra-genet competition within the same habitat (Fig. 5). This also indicates that genetic diversity maintained by sexual reproduction can deal with variable habitats [44,45], especially when seeds escape outward from already-filled niches in this model. On the other hand, when environmental change occurs in a spatially heterogeneous habitat, seed reproduction is less advantageous and clonal reproduction becomes beneficial (Figs. 5 (b) and 6). Spatial heterogeneity provides similar environments within the total habitat, and consequently the clonal reproducer can spread its population into new areas when rare opportunities for long-distance dispersal occur. This pattern is inconsistent both with the theory that genetic diversity gives seed reproduction an advantage and with previous studies showing that seed reproduction is advantageous under changed environments [24,46]. However, several clonal plants have adapted and been favored at the early successional stage [47], with dynamics similar to those found in our simulation. Generally, maintaining genetic diversity via seed reproduction tends to become an effective strategy in a fluctuating environment, such as one subject to disturbance [31,32,36,37]. However, once an individual is rooted in a suitable patch, it can spread circumferentially by vegetative propagation under a relatively stable environment [33], because environmental conditions are generally relatively similar in neighboring habitats. In other words, if habitat conditions are suitable, clonal reproduction is more effective because of the rapid propagation that is possible during the early stages of the young plants' lives. 
Indeed, several pioneer or invasive plant species (for example, Miscanthus sinensis and Fallopia japonica) that rapidly colonize open spaces have clonal propagation abilities, which indicates an adaptive response to good patches that appear after environmental change occurs. In branching scleractinian and gorgonian corals exposed to a wave-disturbed environment, new colonies are founded predominantly by fragments of broken colony branches, not by inseminated gametes that can emigrate long distances [13,14].
Since the probability of seed establishment decreases according to the distance of dispersal from the parents, density effects in the same (homogeneous, similar) habitat become larger in practice [48,49]. Furthermore, the situation in which it is difficult for clonal offspring to migrate to different patches because of the difference in environment is similar to the situation of habitat fragmentation. Travis and Dytham [50] considered the effect of habitat fragmentation on dispersal strategies, and showed that long-distance dispersal was more advantageous as habitat size became smaller in the SEIB model. Heibeler [51] examined unfavorable places to live on the lattice space, and demonstrated that habitat fragmentation favored long-distance dispersal, whereas a clustered habitat favored short-distance dispersal. Long-distance dispersal can be advantageous within a clustered habitat in some cases, but the opposite has never been demonstrated. Nevertheless, clonality would become advantageous as long as there is diversity in the habitat environments that an individual reaches.
Unraveling the Biosynthesis of Quinolizidine Alkaloids Using the Genetic and Chemical Diversity of Mexican Lupins
Quinolizidine alkaloids (QAs) are synthesized by the genus Lupinus as a defense against herbivores. Synthesis of QAs in lupins is species- and organ-specific. Knowledge about their biosynthesis and the corresponding pathways is still fragmentary, in part because mainly lupins of commercial importance have been investigated, representing a small sample of the chemodiversity of the genus. Here, we explore the use of three Mexican lupins, Lupinus aschenbornii, Lupinus montanus, and Lupinus bilineatus, as a model to study the physiology of QA biosynthesis. The corresponding QA patterns cover widely and narrowly distributed tetracyclic QAs. Quinolizidine alkaloid patterns of seeds and plantlets at different developmental stages were determined by GLC-MS and compared to identify the onset of de novo QA synthesis and to gain insight into specific and common biosynthesis trends. Onset of de novo QA biosynthesis occurred after the metabolization of seed QAs during germination and was species-specific, as expected. A common QA pattern, from which the diversity of QAs observed in these species is generated, was not found; however, lupanine and 3β-lupanine were found in all three species, while sparteine was not found in Lupinus bilineatus, suggesting that this simplest tetracyclic QA is not the precursor of more complex QAs. Similar patterns of metabolization and biosynthesis of structurally related QAs were observed, suggesting a common regulation.
Biosynthesis of QAs is developmentally regulated and under environmental control [16][17][18]. QAs are synthesized from L-lysine, mainly in the chloroplasts of leaves [19]; biosynthesis in hypocotyls, stems, and pods also occurs, albeit to a lower extent [18,20,21]. QAs are transported from their place of synthesis to the whole plant via the phloem and stored in epidermal tissues and seeds, the latter serving both as a defense mechanism and as a source of nitrogen for the growth of the nascent plant [22]. During seed germination, QAs are metabolized and mobilized from cotyledons to the roots [22,23]. De novo biosynthesis of QAs activates during the early development of plantlets in a species-specific manner [22]. The first step of QA biosynthesis involves the action of lysine decarboxylase (LDC), which decarboxylates lysine to the diamine cadaverine (Figure 1) [24].
Cadaverine is then converted to 5-aminopentanal, putatively by a copper amine oxidase (CAO) or an aminotransferase [20,25]. The consensus is that 5-aminopentanal cyclizes spontaneously to produce Δ1-piperideine, the intermediate from which bicyclic (lupinine) and tetracyclic QAs (sparteine, lupanine, and multiflorine) are formed; these are then converted to a vast diversity of related QAs through tailoring reactions, including oxidation, dehydrogenation, hydroxylation, acylation, and methylation. Tigloyl-CoA:13α-hydroxymultiflorine/13α-hydroxylupanine O-tigloyltransferase (HMT/HLT) catalyzes the acylation of 13α-hydroxymultiflorine and 13α-hydroxylupanine to form 13α-tigloyloxymultiflorine and 13α-tigloyloxylupanine, using tigloyl-CoA as the acyl donor [26]. The fact that little is known about the rest of the enzymatic machinery involved in the pathway and its genetic regulation might soon change, considering the significant progress in identifying candidate biosynthetic and regulatory genes achieved using transcriptomics and genomics in the last 4 years [17,25,27]. This advancement has been driven by research on high- and low-QA-producing varieties of Lupinus angustifolius, owing to the strong commercial interest in generating lupin varieties with stable low or null QA content in seeds, which could safely be used for food and feed purposes. However, L. angustifolius represents a small sample of the chemodiversity of the genus. Studies of QA biosynthesis involving more than one Lupinus species are a stepping stone towards a better understanding of the poorly known mechanisms that generate the diversity of QA patterns in nature, enabling us to answer questions such as: is there a precursor QA pattern for the diversification of QAs? If so, which QAs compose this starting pool?
Mexico is one of the three centers of diversification of the genus Lupinus on the American continent; the Rocky Mountains and the Andes are the other two [1,28]. There are approximately 65 lupin species in Mexico [29], distributed across the country from Baja California and Tamaulipas to Chiapas, along the Sierra Madre Occidental, the Sierra Madre Oriental, and particularly the Trans-Mexican Volcanic Belt [30]. Mexican lupins constitute a vast genetic pool and a source of QAs of commercial importance [31,32]. Moreover, from a biological point of view, they represent an unexplored species diversity for studying the biosynthesis of QAs and the possible genetic changes that may have played a role in the adaptation of lupin species to American habitats.
Our research group has characterized the QA patterns of wild Mexican lupins for over 20 years [31,33,34]. Results from this work have led us to the conclusion that Lupinus aschenbornii S. Schauer, Lupinus bilineatus Benth, and Lupinus montanus Kunth (Figure 2) constitute interesting models for the study of QA biosynthesis and its genetic regulation, due to the similarities and differences in the QA patterns they produce. Quinolizidine alkaloids from these species span a broad range of related molecules, including the main QAs produced by most lupin plants (sparteine, lupanine, and multiflorine) and those with restricted distribution, such as aphylline and aphyllidine, found only in some American species [2].
Lupinus aschenbornii is a perennial species native to the Trans-Mexican Volcanic Belt; it is found in the high mountains of the states of Mexico, Michoacán, and Puebla, at altitudes ranging from 2800 to 4300 m above sea level (a.s.l.). This species produces the most diverse QA pattern among the Mexican species chemically characterized so far [30]. Up to 24 QAs have been identified in the leaves of L. aschenbornii, including sparteine, lupanine, 13α-hydroxylupanine, angustifoline, N-formylangustifoline, multiflorine, and 13α-tigloyloxylupanine, among other esters, which are the main QAs produced by this species [33,35]. Lupinus bilineatus is an annual, biennial, or short-lived perennial species that grows in the states of Aguascalientes, Morelos, Michoacán, and Mexico at 2780 to 2945 m a.s.l. [30]. It produces a different (and less diverse) QA pattern compared to L. aschenbornii. The main QAs synthesized by this species are aphylline, aphyllidine, lupanine, hydroxyaphylline, and hydroxyaphyllidine [Bermúdez-Torres, personal communication]. Lupinus montanus is a perennial polymorphic species [36]. It is the most widely distributed of the Mexican lupins, as it grows in pine, oak, and alpine meadow forests at 2500 to 4100 m a.s.l. from Chihuahua to Guatemala [30]. Sparteine, lupanine, and aphylline are the main alkaloids produced by this species [31].
As we were aware of the potential of using L. aschenbornii, L. bilineatus, and L. montanus as a model to unravel QA biosynthesis, we embarked on the characterization of their QA patterns during germination and early plantlet development. Our aim was to identify the developmental stage at which de novo biosynthesis starts in each species and the common pool of QAs from which the diversity of QAs produced by these three species could be explained.
Seed Harvest
Seeds of L. aschenbornii, L. bilineatus, and L. montanus were collected from populations growing at Iztaccihuatl-Popocatepetl National Park between April and September 2009. Geographical coordinates are indicated in Table 1. Herbarium voucher material was collected from three flowering individuals per population and deposited at the MEXU herbarium. Seeds were stored in paper bags inside foil bags with silica gel to control humidity, at 4 °C, until use.
Disinfestation and Scarification of Seeds
Seeds were disinfested following the protocol of [37], with slight modifications. Seeds were washed in detergent solution (0.5% w/v) for 5 min, then transferred to 70% (v/v) ethanol for 5 min, immersed in 1.0% (v/v) sodium hypochlorite for 20 min, and finally placed in a solution of benzylpenicillin and nystatin (0.5% w/v each) for 60 min. Seeds were dried on filter paper under sterile conditions. All disinfestation steps were carried out with continuous mechanical agitation; three washes with sterile distilled water were conducted between each disinfestation step, except after the treatment with antibiotics. Disinfested seeds were mechanically scarified using a cylinder made of no. 60 wood sandpaper; seeds were placed inside the cylinder and shaken manually in a rotatory and longitudinal manner for 40 min [38]. Scarified seeds were germinated immediately.
Seed Germination and Plant Growth
Scarified seeds were placed onto wet germination strips (Sartorius grade 190) contained in aluminum trays closed with self-adhesive plastic. Seeds were germinated in a growth chamber at 20 °C, 60% relative humidity (RH), and a photoperiod of 16 h light/8 h darkness, and were monitored daily. Germinated seeds with a radicle of at least 5 mm in length were transplanted to trays of 98 pots containing 20 mL of perlite per pot and returned to the growth chamber at 20 °C, 60% RH, and a photoperiod of 16 h light/8 h darkness. Plantlets at different stages of development, i.e., germination (5 mm radicle), elongation of the hypocotyl, and emergence of the first up to the sixth leaf (in some cases), were harvested and dried at 30-34 °C for 24 h. Dry samples were reduced to powder using a mortar and pestle and stored in a desiccator at room temperature until subjected to QA extraction.
Quinolizidine Alkaloids Extraction
QA extraction was performed according to [33]: 300 mg of dried and ground plant material was resuspended in 20 mL of 1 M HCl and incubated at room temperature with continuous agitation for 24 h. The extraction mixture was centrifuged at 8500 rpm for 10 min, and the supernatant was recovered and alkalized (pH 12) with 3 M NH4OH. The supernatant was loaded onto an Isolute® HM-N column (IST, Biotage, Uppsala, Sweden); alkaloids were eluted with 30 mL of CH2Cl2 (3×) and collected in a round flask. The eluate was concentrated to dryness in a rotary evaporator (40 °C, without vacuum). Alkaloids were resuspended in 1 mL of methanol and stored in an amber vial at 4 °C in darkness until gas liquid chromatography-mass spectrometry (GLC-MS; Agilent, Santa Clara, CA, US) analysis. All materials used for QA extraction were pre-washed with CH2Cl2.
Identification and Quantification of Quinolizidine Alkaloids by GLC-MS
Separation and identification of QAs was performed by GLC-MS following the protocol reported by [33], with some modifications. A gas chromatograph (Agilent 7890A GC) fitted with an HP-5MS capillary column (30 m length, 0.25 mm internal diameter, 0.25 µm film thickness; Agilent, Palo Alto, CA, US), coupled to a mass spectrometer (Agilent 5975C MSD, Santa Clara, CA, US) with an electron impact (EI) detector, was used. Hydrogen was used as the carrier gas at a flow rate of 1.5 mL/min (split mode), with an injector temperature of 280 °C. The oven temperature was held at 120 °C for 2 min, ramped linearly to 300 °C at 8 °C/min, and finally held at 300 °C for 10 min. Samples (1 µL) were injected automatically. Each peak in the chromatogram was integrated and identified using the NIST spectrum library (accessed on 17 May 2018) [39] and literature data [2]. The Kovats retention index was calculated by comparing the retention time of each QA with those of the respective alkanes (from a 1 mg/mL alkane standard). The abundance of each QA was expressed in mg/mL, using the peak area of 1 mg/mL of sparteine as reference. The content of total and individual QAs in the plant material was calculated and expressed in mg/g dried weight (DW). Three biological samples were evaluated for each point, and descriptive and inferential statistical analyses, including an analysis of variance (ANOVA), were carried out.
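The retention-index and single-point quantification calculations in this paragraph can be made explicit. This is our sketch, not code from the paper: the linear temperature-programmed form of the Kovats index and all function and parameter names are assumptions made for illustration.

```python
def kovats_index(rt, rt_alkane_lo, n_carbons_lo, rt_alkane_hi):
    """Linear (temperature-programmed) retention index: interpolate the
    analyte's retention time between the two bracketing n-alkanes, where
    n_carbons_lo is the carbon number of the earlier-eluting alkane."""
    frac = (rt - rt_alkane_lo) / (rt_alkane_hi - rt_alkane_lo)
    return 100 * (n_carbons_lo + frac)

def qa_concentration_mg_ml(peak_area, sparteine_area, sparteine_mg_ml=1.0):
    """Single-point quantification against the 1 mg/mL sparteine reference."""
    return sparteine_mg_ml * peak_area / sparteine_area

def qa_content_mg_g_dw(conc_mg_ml, extract_volume_ml=1.0, sample_mass_g=0.3):
    """Convert extract concentration to mg QA per g dried plant material
    (300 mg sample, final resuspension in 1 mL methanol, as in the protocol)."""
    return conc_mg_ml * extract_volume_ml / sample_mass_g
```

For example, an analyte eluting exactly halfway between the C14 and C15 alkanes would receive an index of 1450.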
QA Patterns of Lupinus aschenbornii Seeds and Plantlets
Seeds of L. aschenbornii had a total QA content of 68.7 mg/g DW; this content decreased to 11.7 mg/g DW during germination. As plantlet development progressed, an increase in QA content was observed, reaching 16.9 mg/g DW at the hypocotyl elongation stage, suggesting de novo QA biosynthesis (Figure 3A). Total QA content then increased to 17.8 and decreased to 10.7 mg/g DW at the emergence of the first and second leaf, respectively (Figure 3A). The main QAs present in L. aschenbornii seeds were 13α-tigloyloxylupanine (3.4%), lupanine (3.0%), sparteine (2.7%), angustifoline (1.7%), 13α-hydroxylupanine (1.1%), 13α-valeroyloxylupanine (0.8%), 3β-hydroxylupanine (0.3%), and two unidentified QA esters, n.i.IKunknown2 (31.6%) and n.i.IKunknown4 (53.4%) (Table 2). Tetrahydrorhombifoline, 11,12-seco-12,13-didehydromultiflorine, and multiflorine were found as traces in L. aschenbornii seeds; however, these QAs were not considered in further analyses since they were not synthesized by the plantlets. Nascent plantlets quickly metabolized all of the main QAs during germination, suggesting their use as a source of nitrogen (Figure 3B-D), and the synthesis of all of them, except 3β-hydroxylupanine, n.i.IKunknown2, and n.i.IKunknown4, was initiated after germination and before hypocotyl elongation (Figure 3B-D). The maximum content of sparteine was observed at the hypocotyl elongation stage, while those of angustifoline, lupanine (and derivatives), n.i.IKunknown2, and n.i.IKunknown4 occurred at the emergence of the first leaf. Moreover, 13α-tigloyloxylupanine (4.6 mg/g DW at the first-leaf emergence stage) was the main QA produced by L. aschenbornii plantlets (Figure 3B,D).
QA Patterns of Lupinus bilineatus Seeds and Plantlets
QA content characterization revealed cycles of metabolization and de novo synthesis during L. bilineatus germination and plantlet development (Figure 4A-C). The total content of QAs in L. bilineatus seeds was 37.3 mg/g DW, which decreased slightly to 34.6 mg/g DW during germination. QA content further decreased to 11.9 mg/g DW at the hypocotyl elongation stage, indicating that plantlets were metabolizing these secondary metabolites; it then increased to 19.7 mg/g DW as the first leaf emerged, suggesting de novo biosynthesis (Figure 4A). This biosynthesis was not sustained, as revealed by the decrease in QA content as plantlet development continued, reaching a minimum of 6.2 mg/g DW at the emergence of the third leaf. An increase in QA content was then observed as the fourth and fifth leaves emerged, caused mainly by the increase in aphylline; QA content then dropped again to 5.5 mg/g DW in plantlets in which the sixth leaf was emerging. The main QAs present in seeds of L. bilineatus were lupanine (16.5%), aphylline (14.5%), anagyrine (8.2%), aphyllidine (5.8%), 3β-hydroxylupanine (5.3%), sparteine (4.0%), and four unidentified QAs, n.i.KIunknown5 (31.3%), n.i.2204 (5.9%), n.i.2281 (4.2%), and n.i.2441 (1.2%); however, only n.i.2204 was actively synthesized by the plantlets.
QA Patterns of Lupinus montanus Seeds and Plantlets
Seeds of L. montanus contained 20.9 mg/g DW of QAs; a reduction in QA content was observed during germination (14.7 mg/g DW) and elongation of the hypocotyl (14.0 mg/g DW), a likely consequence of the use of QAs as a source of nitrogen (Figure 5A). The content of QAs in plantlets increased as their development progressed and was maintained until the emergence of the third leaf, where QAs reached a maximum of 23.6 mg/g DW (Figure 5A), suggesting de novo QA biosynthesis. A steep decrease in QA content (2.0 mg/g DW) was observed at the emergence of the fourth leaf, and then a slight increase was detected again at the emergence of the sixth leaf (5.6 mg/g DW), again suggesting biosynthesis of these compounds. QA content then decreased during the transition to the emergence of the seventh leaf (0.3 mg/g DW) (Figure 5A). Lupinus montanus seeds contained sparteine (70.0%) and lupanine (29.9%) as the main alkaloids, with traces of α-isosparteine, 17-oxosparteine, 5,6-dehydrolupanine, 3β-hydroxylupanine, multiflorine, and 17-oxolupanine. Lupanine, 3β-hydroxylupanine, 13α-hydroxylupanine, sparteine, aphyllidine, aphylline, and multiflorine were the main alkaloids produced by L. montanus plantlets (Figure 5B,C). Not all of these QAs followed a similar pattern of metabolization and synthesis. Sparteine was the main QA in seeds and was actively metabolized during germination and the elongation of the hypocotyl, showing a 91.5% reduction in content (from 14.6 to 1.3 mg/g DW, Figure 5C). Sparteine and lupanine showed a similar trend of synthesis along the developmental stages characterized (Figure 5B,C). Moreover, 3β-hydroxylupanine, 13α-hydroxylupanine, and multiflorine displayed similar dynamics from germination to the emergence of the seventh leaf (Figure 5B,C). Lupanine (13.5 mg/g DW at the emergence of the first leaf) and 3β-hydroxylupanine (12.2 mg/g DW at the emergence of the third leaf) were the most abundant QAs in L. montanus plantlets (Figure 5B). De novo biosynthesis of the various QAs was initiated at different developmental stages (Figure 5B,C).
Discussion
QAs are mostly stored in seeds; they protect the seed from potential eaters and serve as a source of nitrogen [2,5]. Lupinus aschenbornii, L. bilineatus, and L. montanus seeds contained 68.7, 37.5, and 20.9 mg/g DW of QAs, respectively. The authors of [33] noted that seeds of L. aschenbornii contained 3.3 mg QA/g DW, which is about 20 times less than the content (68.7 mg/g DW) determined in this study; however, both reports are within the range of QA content (up to 8.0%) detected in lupin seeds [40]. Differences in seed QA content within the same lupin species have been commonly reported, which can be explained by the fact that QA biosynthesis is affected by environmental conditions [17]. Two unidentified QA esters were the main QAs in the L. aschenbornii seeds, n.i.IKunknown2 (31.6%) and n.i.IKunknown4 (53.4%). This is in line with previous studies indicating that seeds of L. aschenbornii are rich in QA esters [31,35]; however, it contrasts with reports of N-formylangustifoline or 13α-hydroxylupanine as the most abundant QAs [33,35]. These discrepancies in QA patterns were also observed for L. montanus. In this work, L. montanus seeds had sparteine (70.0%) and lupanine (29.9%) as the main alkaloids, whereas sparteine (89.0%), an unidentified QA n.i.1940 (6.5%), aphylline (2.4%), and lupanine (1.3%) were the most abundant QAs reported by [31]. We hypothesize that the observed differences in the QA patterns of L. aschenbornii and L. montanus seeds may be due to (1) the effect of environmental factors; (2) differing chemotypes; or (3) the recent origin of the Mexican species, which would imply that their chemical characters might not yet be fully established.
QA content decreased during germination of L. aschenbornii, L. montanus, and L. bilineatus seeds; for the latter, the decrease extended to the hypocotyl elongation stage. This phenomenon may be explained by the fact that QAs are used as sources of nitrogen by the nascent plant [22,23]. QAs are an example of the promiscuous use of secondary metabolites by plants for their survival, which maximizes the gains from producing molecules of high energetic cost. Metabolization of QAs was faster in L. aschenbornii than in L. montanus and L. bilineatus, since a reduction of 85.0% was observed during germination in the former, compared to 29.5% and 7.6% in L. montanus and L. bilineatus, respectively. It is logical to think that the more diverse the enzymatic machinery for QA catabolism present in the seeds, the better the plant can maximize the use of QAs during these early stages of development. This was clearly observed as L. aschenbornii degraded all of the main QAs present in its seeds, i.e., n.i.IKunknown4, n.i.IKunknown2, 13α-tigloyloxylupanine, lupanine, sparteine, angustifoline, and 13α-hydroxylupanine. Interestingly, a much higher degradation rate was observed for n.i.IKunknown4, the most abundant QA. A similar phenomenon was documented in L. montanus seeds, which degraded sparteine and lupanine during germination, with a higher degradation rate for sparteine, the most abundant QA; similar behavior was reported by [22] for L. polyphyllus. The metabolization of QAs during germination of L. bilineatus differed from that observed in L. aschenbornii and L. montanus and was similar to the metabolization of total QAs by L. albus and L. angustifolius reported by [22]. Unexpectedly, an increase in the main QAs identified in the seeds of this species (lupanine, aphylline, aphyllidine, 3β-hydroxylupanine, and sparteine) was observed during germination, even though a decrease in the total QA content was detected at this developmental stage due to the degradation of unidentified QAs. Were the main QAs the products of this degradation? Or were they newly synthesized? We hypothesize that they may be the products of the degradation of other QAs, since QAs are synthesized in photosynthetic tissues, which are obviously absent in germinating seeds. On the other hand, active degradation of these main QAs was observed during the hypocotyl elongation stage, in which a steep decrease (68.2%) in QA content was determined; similarly to L. aschenbornii and L. montanus, the most abundant QA, lupanine in this case, was degraded most rapidly.
The developmental stage at which de novo biosynthesis of QAs occurred was species specific, as has been reported for other Lupinus species [22]. Lupinus aschenbornii initiated de novo biosynthesis after germination and before hypocotyl elongation. As mentioned before, this species metabolized the QAs present in its seeds more rapidly than L. montanus and L. bilineatus, which interestingly started to biosynthesize QAs (mostly) after the elongation of the hypocotyl. The onset of QA biosynthesis in the three species was triggered only after QA content had decreased. The QA content was in the range of 11 to 14 mg/g DW before biosynthesis started in all species, raising the question of whether QA concentration is a regulatory mechanism in their de novo biosynthesis.
Clear differences in the QA biosynthetic capacity of L. aschenbornii, L. bilineatus, and L. montanus were evident, as revealed by their QA patterns. Lupinus bilineatus and L. montanus (mostly) synthesize QA molecules derived from the modification of rings A and B of a tetracyclic precursor (Figure 1), these modifications being mainly hydroxylation, oxidation, and dehydrogenation. These species do not accumulate esters derived from 13α-hydroxylupanine, in contrast to L. aschenbornii, which is rich in these compounds and whose most abundant QAs are the product of tailoring reactions on ring D (Figure 1), i.e., hydroxylation, esterification, and ring cleavage. Are these differences in QA patterns the result of the transcriptional turn-on and turn-off status of the genes coding for the enzymes that synthesize 13α-hydroxylupanine esters in each species? In this regard, it has been reported that in L. angustifolius the transcriptional regulation of HMT/HLT seems to be under separate genetic control from the genes LDC and CAO [17]. Lupinus aschenbornii plantlets seemed to share similarities in their QA biosynthetic machinery with the leaves of L. angustifolius [18].
Similar patterns of metabolization were observed for structurally related QAs within each species, suggesting a common regulation of their synthesis. For instance, lupanine and its derivatives (except 3β-hydroxylupanine) and angustifoline showed identical trends from germination to the emergence of the second leaf in L. aschenbornii, whereas lupanine, 3β-hydroxylupanine, and anagyrine did so in L. bilineatus from germination to the emergence of the sixth leaf. The dynamics of metabolization and synthesis of aphylline and aphyllidine in L. bilineatus were also alike; however, a clear increase in aphyllidine content was observed after the emergence of the fifth leaf in the plantlets, which coincides with a decrease in the content of aphylline. Interestingly, in L. montanus plantlets, lupanine, sparteine, and aphylline displayed similar trends along the developmental stages characterized, as did the hydroxylated forms of lupanine and multiflorine. Regarding similarities among the three species, a peak in the content of most QAs was observed at the emergence of the first leaf, suggesting high biosynthetic activity accompanying the development of the principal photosynthetic tissues.
Lupinus bilineatus did not accumulate sparteine, the simplest tetracyclic QA structure, at least during the developmental stages characterized, which suggests that sparteine is not the precursor of more complex QAs. This is in line with the consensus that tetracyclic QAs are most likely synthesized independently from a common precursor, the diiminium cation (Figure 1) [41]. Aphylline is a QA found in some American Lupinus species; it is highly accumulated in L. bilineatus and present in L. montanus. When did this biosynthetic capacity first appear? Is it a novel enzyme activity, or was it already present in a common ancestor of the American lupin species? The presence of aphylline in Sophora alopecuroides L. [42] suggests that the enzyme(s) involved in the synthesis of aphylline were already present in a common ancestor of the Lupinus and Sophora genera, implying a reversal of the turning off of the aphylline biosynthetic genes in some American lupins, as postulated for other QAs [43].
Conclusions
The main results of the present research are: (1) QAs were metabolized during germination, and the main QAs were the most rapidly catabolized; (2) de novo QA biosynthesis starts once QAs are metabolized, at a different developmental stage for each evaluated species, suggesting that it is regulated in a species-specific manner; (3) structurally related QAs showed similar patterns of metabolization and biosynthesis, suggesting a common regulation; (4) there is no common QA pattern from which the diversity of QAs produced in these species is generated. Lupanine and 3β-hydroxylupanine were found in the three species, while sparteine was not present in L. bilineatus, excluding it as the precursor of more complex QAs; (5) Lupinus montanus showed the most diverse QA pattern, since it synthesized four different QA types (sparteine skeleton, lupanine and derivatives, aphylline and derivatives, and multiflorine skeleton), while Lupinus aschenbornii showed the most complex pattern of lupanine derivatives. What these changes in QA biosynthetic specificities mean in an evolutionary scenario is intriguing.
Figure 1.
Figure 1. Biosynthetic pathway of quinolizidine alkaloids. QAs are synthesized from lysine, which is converted to cadaverine and then to 5-aminopentanal, which spontaneously cyclizes to form ∆1-piperideine; the latter intermediate is the precursor of the diiminium cation, from which tetracyclic QAs are thought to be derived. The four heterocyclic rings of the diiminium cation are indicated by the letters A, B, C, and D. Molecularly characterized enzymes are indicated: LDC, lysine decarboxylase; CAO, copper amino oxidase; HMT/HLT, tigloyl-CoA:13α-hydroxymultiflorine/13α-hydroxylupanine O-tigloyltransferase. Solid lines represent known steps and dotted lines those not yet fully characterized.
Table 1.
Populations for seed collection.
Analysis of the Mutual Impedance of Coils Immersed in Water
Magnetic induction communication and wireless power transmission based on magnetic coupling have significant application prospects in underwater environments. Mutual impedance is a key parameter required for the design of such systems. However, mutual impedance is usually extracted from measurements after the coils are fabricated, which is not conducive to system optimization during the design phase. In this paper, a model of the mutual impedance of coils immersed in water is established. The magnetic vector potential is expressed in the form of a series by artificially setting a boundary, and the mutual impedance calculation formula for coils immersed in water is then derived. The analysis mainly takes into account the effect of the conductivity of the water, the excitation frequency, and the number of turns of the coils. In addition, the variation of the mutual impedance of coils in air and in water with axial displacement is also compared. The model can be used to analyze coil coupling characteristics in the presence of a conductive medium, which is helpful for the design process.
Introduction
Recently, magnetic coupling technology based on the concept of magnetic induction (MI) has been widely used in eddy current testing (ECT) [1], MI-based communication [2], and wireless power transmission (WPT) due to its advantages of isolation and convenience [3][4][5]. Especially in the underwater environment, WPT technology can overcome the problem of rapid energy supply for autonomous underwater vehicles (AUVs) [6][7][8][9]. As a result, it is of great significance to develop a WPT system for underwater environments.
The WPT system used in underwater environments is similar to that used in air. It couples the alternating power generated by the primary side to the secondary side through a pair of coils, and the secondary side obtains direct current (DC) power through rectification. The difference is that the medium between the coils is conductive water; therefore, the magnetic coupling characteristics differ from those in air. It has been reported that the power transmission efficiency and capacity are related to the mutual coupling of the coils [10]. In the design process of a WPT system, most of the work is the design and optimization of the coils [11]. Thus, the mutual coupling analysis of coils in water is an important topic for the design of underwater WPT systems.
It is usually necessary to carefully design the shape and installation position of the coils to achieve reliable coupling in a power transmission system. There are inevitably magnetic and conductive materials around the coils, which makes the coupling analysis more complicated. However, the mutual impedance test is often carried out only after the coils are fabricated. Numerous methods have therefore been proposed to calculate the mutual impedance of coils in advance. In [12], the magnetic vector potential approach is used to calculate the mutual inductance of two circular coils arbitrarily placed with respect to each other; this gives a mutual inductance calculation method for two coils in a non-conductive environment based on the Neumann formula. In [13], the mutual impedance characteristics of two coils above a conducting plate are studied. The conducting plate not only causes a change in the real part of the mutual impedance (the effect of the losses in the medium), but also affects the imaginary part (mutual inductance). In [14], the influence of the conductivity of the medium on the mutual inductance is systematically studied; the research shows that the presence of a conductive medium makes the mutual inductance complex. In [15], a model of the mutual inductance between two planar coils is developed, and an analytical expression of the mutual inductance with respect to the properties of the media is derived. However, that paper focuses on the influence of the material of the coil substrate on the mutual inductance of the coils; the influence of the medium between the coils has not been studied. In [16], an exponential attenuation factor is introduced to express the effect of conductive media on mutual impedance, with the attenuation rate depending on the permeability, conductivity, and dielectric constant of the media.
However, in the MI region, the change of the magnetic field is usually a complex function of the electrical parameters of the medium and the distance, so introducing an exponential attenuation factor cannot adequately express the effect of the medium on mutual impedance. In [17], an eddy current equivalent resistance is introduced to express the influence of water on the electrical parameters of the coil. However, when the coil is close to a conductive medium, this not only changes the self-inductance and AC resistance, but also forms eddy currents in the conductive medium and produces ohmic losses. The eddy current equivalent resistance cannot directly express the influence of the conductive medium on the coil coupling characteristics. In addition, a variety of measurement methods have been proposed to measure the mutual impedance of coils and study the characteristics of media [18,19].
In this paper, the planar circular coil is taken as an example, and the mutual impedance of the coils immersed in water is analyzed from the perspective of magnetic vector potential by using the Truncated Region Eigenfunction Expansion (TREE) method [20,21]. The influence of truncation region and summation term on the accuracy of calculation results is analyzed. In addition, the influence of excitation frequency and the conductivity of media on mutual impedance is systematically studied, and the similarities and differences of mutual impedance in air and water are compared. The models can be used to analyze the coil coupling characteristics in the presence of conductive medium, which is helpful for the design process.
The organization of this paper is as follows. Section 2 introduces the mutual impedance model of coils immersed in water from the perspective of the magnetic vector potential. In Section 3, the influence of the truncation region and the summation term on the accuracy of the proposed model is analyzed. In Section 4, an experiment is set up to verify the previous conclusions. Finally, the most important conclusions are summarized in Section 5.
Mutual Impedance of Two Coils of Filamentary Currents
The structure of the analyzed system is shown in Figure 1. In this figure, two coils insulated by small dielectric boxes are immersed in water. The two coils have n_i and n_j turns, respectively. In this approach, the multi-turn coil is modeled as a series of circular filamentary coils. The coil i is driven by a constant current I at angular frequency ω. The vertical distance between the two coils is d. The electromagnetic model of the coils depicted in Figure 1 is shown in Figure 2. Space is divided into six regions. The dielectric box is replaced by an infinitely wide dielectric layer. Regions I, IV, and VI are characterized by the electrical conductivity σ and magnetic permeability µ of water, and Regions II, III, and V are characterized by σ = 0 and relative permeability µ_r = 1. With axial symmetry, the coil current density has only an azimuthal component; therefore, the vector potential becomes a scalar. The potential A in all regions satisfies the governing differential equation for a circular filamentary coil with radius r_0 and height z_0.
where ι_0 represents the current density of the circular filamentary coil. With k_i^2 = jωµ_rµ_0σ_i, the general form of the solution can be expressed as a series in terms of λ_i = sqrt(κ_i^2 + jωµ_rµ_0σ_i), where J_1 is the first-kind order-one Bessel function and Y_1 is the second-kind order-one Bessel function. Because the electromagnetic field generated by coil i does not extend to great distances, it can be assumed that the potential vanishes at r = h, i.e., A(h, z) = 0. In particular, in Region I, C_i^(1) = 0, and in Region VI, D_i^(6) = 0, to ensure that the potential remains finite at z = ±∞. B_i must be 0 due to the divergence of Y_1 at 0. Therefore, the eigenvalues κ_i are the roots of the equation J_1(κ_i h) = 0, and the components of the magnetic vector potential in all regions can be expressed accordingly (5). From the continuity of B_z and H_r at the five interfaces of the six layers, the boundary conditions (6) follow. Substituting (5) into the boundary conditions (6), multiplying both sides of the resulting equations by J_1(κ_i r) r, integrating from 0 to h, and using the orthogonality property of the Bessel functions, we obtain the coefficients C and D of the series. On the assumption that a multi-turn coil can be approximated by the superposition of a number of filamentary coils, that the coil has a rectangular cross-section, and that the current density is constant over the dimensions of the coil, the equivalent current density ι of coil i can be written down. If we let the current distribution in the filamentary coils approach a continuous distribution, we can approximate the coil i of finite cross-section by an integral [20,21], replacing the current density ι_0 in (9) with ι. According to the law of electromagnetic induction, the voltage induced in a length of wire can be expressed in terms of dl, a vector differential line element tangential to the path of the source current.
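The eigenvalue condition above lends itself to a short numerical sketch. The snippet below (Python with SciPy; the values of h, N, σ, and ω are illustrative assumptions, not the paper's exact settings) computes the radial eigenvalues κ_i from the zeros of J_1 and the corresponding complex axial wavenumbers λ_i in a conductive region:

```python
import numpy as np
from scipy.special import jn_zeros

h = 0.5   # truncation radius (m) -- hypothetical; the paper sweeps h from r2 to 10*r2
N = 20    # number of summation terms

# The eigenvalues kappa_i are the roots of J1(kappa * h) = 0,
# i.e. kappa_i = x_i / h where x_i is the i-th positive zero of J1.
x = jn_zeros(1, N)        # first N positive zeros of J1
kappa = x / h

# In a conductive region (sigma > 0), the axial wavenumber becomes complex:
# lambda_i = sqrt(kappa_i^2 + j*omega*mu_r*mu_0*sigma)
mu0 = 4e-7 * np.pi
sigma = 4.0               # seawater conductivity (S/m), the value used in the paper
omega = 2 * np.pi * 100e3 # 100 kHz excitation
lam = np.sqrt(kappa**2 + 1j * omega * mu0 * sigma)

print(kappa[:3])
print(lam[:3])
```

For σ = 0 the λ_i reduce to the real κ_i, recovering the non-conductive (air) case.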
The total voltage induced in the coil j is obtained by summing the contributions V_i, which can be calculated by (12); the sum operation can then be approximated as an integral operation, where N_D = n_j / ((r_4 − r_3)(z_11 − z_12)) represents the turn density of the coil j. The mutual impedance Z_ij is defined as the ratio between V_total and the driving current I, i.e., Z_ij = V_total / I = R_ij + jωM. The parameter R_ij represents the effect of the losses in the medium; it corresponds to the component of the induced voltage V_total in phase with the driving current I. The parameter M is the total mutual inductance between the two coils [15]. It can be seen from this formula that, in the presence of a conductive medium, the mutual impedance becomes complex.
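As a point of comparison for the σ = 0 limit, the mutual inductance of two coaxial filamentary loops in a non-conductive medium has a classical closed form in terms of complete elliptic integrals (Maxwell's formula, i.e., the Neumann-formula result mentioned for [12], not the TREE expression derived above). A minimal sketch, with loop radii and spacing chosen to roughly match the paper's coil geometry:

```python
import numpy as np
from scipy.special import ellipk, ellipe

def mutual_inductance_loops(r1, r2, d):
    """Mutual inductance (H) of two coaxial circular filamentary loops in a
    non-conductive medium (Maxwell's formula, the sigma = 0 limit)."""
    mu0 = 4e-7 * np.pi
    k2 = 4 * r1 * r2 / ((r1 + r2) ** 2 + d ** 2)  # elliptic parameter m = k^2
    k = np.sqrt(k2)
    # scipy's ellipk/ellipe take the parameter m = k^2, not the modulus k
    return mu0 * np.sqrt(r1 * r2) * ((2 / k - k) * ellipk(k2) - (2 / k) * ellipe(k2))

# Filamentary loops at the mean radius of the paper's coils (8 cm), 2 cm apart
M = mutual_inductance_loops(0.08, 0.08, 0.02)
print(M)  # single-turn value; a multi-turn pair sums this over all turn pairs
```

Summing this single-turn value over a 20 × 20 grid of turn pairs gives a figure of the same order as the tens of microhenries measured later in the paper, which is a useful sanity check on any TREE implementation.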
Convergence Analysis of the Model
In the analysis of the previous section, the mutual impedance is expressed in the form of a series. Obviously, the results are closely related to the summation term N and the truncation region h. The convergence of the TREE method has been fully verified in the field of ECT [21]. Therefore, this section focuses on the influence of the summation term N and the truncation region h on the accuracy of the calculation results in the presence of water, which can serve as a theoretical guide for engineering applications.
The summation term N is considered as a variable that ranges from 1 to 40. The truncation region h is swept from h = r_2 to h = 10r_2. The distance between the two coils is 2 cm. The outer and inner radii of the coils are 10 cm and 6 cm, respectively. The number of turns is 20, and the excitation frequency is fixed at 100 kHz. The conductivity of water is set to σ = 4 S/m. Under these conditions, the calculated results are shown in Figures 3 and 4. As can be seen, different summation terms N and truncation regions h lead to different calculation results. For any h, the results converge as the summation term N increases; essentially, once the summation term exceeds 20, the results converge.
However, the converged results differ for different h. When h is equal to r_2 or 2r_2, the convergence results differ considerably, but when h is greater than 3r_2, the final converged result is almost the same. This indicates that the distribution region of the magnetic field generated by the excitation current is limited; therefore, the hypothesis that the magnetic vector potential equals 0 at r = h is reasonable. Thus, in practical engineering applications, the truncation region can be selected according to the required calculation accuracy and calculation time.
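The convergence behaviour described above can be reproduced in miniature with a generic Fourier-Bessel expansion. The sketch below (an illustrative stand-in for the full mutual-impedance series, not the paper's model; h and the test profile are arbitrary) expands a smooth radial profile in the J_1(κ_i r) basis on [0, h] and shows how the reconstruction error shrinks as the number of summation terms N grows:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j1, jv, jn_zeros

# Illustrative truncation radius and radial grid (assumed values).
h = 1.0
r = np.linspace(0.0, h, 2000)
f = r * (h - r)                       # smooth test profile with f(0) = f(h) = 0

def reconstruct(N):
    """Truncated Fourier-Bessel expansion of f with N terms of J1(kappa_i r)."""
    x = jn_zeros(1, N)                # zeros of J1 -> eigenvalues kappa_i = x_i / h
    rec = np.zeros_like(r)
    for xi in x:
        ki = xi / h
        norm = 0.5 * h**2 * jv(2, xi) ** 2  # orthogonality norm of J1(kappa_i r)
        c = trapezoid(f * j1(ki * r) * r, r) / norm
        rec += c * j1(ki * r)
    return rec

for N in (5, 10, 20, 40):
    err = np.max(np.abs(reconstruct(N) - f))
    print(N, err)   # error shrinks as N grows, mirroring the trend in Figs. 3 and 4
```

The same mechanism is at work in the mutual-impedance series: each extra term adds a higher-order radial mode, and the contribution of these modes decays, so the sum stabilizes once N is large enough.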
Experimental Results and Discussion
An experiment was implemented to verify the theoretical and simulation results, as shown in Figure 5. Two identical coils were wound from copper wire with a diameter of 1 mm; each has 20 turns, with external and internal radii of 100 mm and 60 mm, respectively. In order to prevent leakage currents, the two coils are insulated from the surrounding environment by small dielectric boxes made of acrylic. The dielectric box size is 600 mm × 800 mm × 10 mm. To ensure the accuracy of the coil position during the test, the coils are fixed on the inner surface of the dielectric boxes with glue. The dielectric boxes are immersed in a water tank with a size of 900 mm × 800 mm × 600 mm. The inner wall of the water tank has a guide groove for fixing the dielectric boxes. During the experiment, the conductivity of the water is adjusted by adding sea salt. Measurements were performed using a Wayne Kerr 6500B Impedance Analyzer. The mutual impedance is measured via the in-phase (series-aiding) and opposing-phase (series-opposing) connections of the two coils. Applying the reciprocity theorem [15], the two connections give impedances of the form Z_± = (R_i + R_j) + jω(L_i + L_j) ± 2Z_ij, where Z_ij is the mutual impedance between the two coils, R_i and R_j are the parasitic resistances of coil i and coil j, and L_i and L_j are the self-inductances of the two coils. Subtracting the two measured impedances yields Z_+ − Z_− = 4Z_ij. The FEA tool COMSOL is used to model and compare the estimated results with the experimental results. The selected space dimension is 2D axisymmetric, and the physical field is the magnetic field. Two coaxial uniform multi-turn coils are constructed, each with 20 turns. The coils are surrounded by a rectangular region with a width of 40 cm and a height of 1 cm, defined as air.
These regions are surrounded by a rectangular region with a width of 80 cm and a height of 30 cm. Models of the coils immersed in air or water can be constructed by setting the material properties of this area to air or water. The element size of the mesh is set to normal. By applying a 1 A excitation current to one coil and leaving the other coil open-circuited, the mutual impedance between the coils is obtained by measuring the voltage across the open-circuited coil.
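The extraction of Z_ij from the two series-connection measurements can be sketched as follows; the impedance-analyzer readings are hypothetical numbers for illustration, not measured values from the paper:

```python
import numpy as np

def mutual_impedance_from_series(Z_aiding, Z_opposing):
    """Extract the mutual impedance Z_ij from the two series connections:
      Z_aiding   = (R_i + R_j) + j*w*(L_i + L_j) + 2*Z_ij
      Z_opposing = (R_i + R_j) + j*w*(L_i + L_j) - 2*Z_ij
    Subtracting cancels the self terms, leaving 4*Z_ij."""
    return (Z_aiding - Z_opposing) / 4

# Hypothetical 100 kHz readings (ohms) expressed via resistance and inductance
w = 2 * np.pi * 100e3
Z_aid = 1.20 + 1j * w * 168.0e-6
Z_opp = 1.00 + 1j * w * 62.0e-6

Z_ij = mutual_impedance_from_series(Z_aid, Z_opp)
R_ij = Z_ij.real        # loss term contributed by the conductive medium
M = Z_ij.imag / w       # mutual inductance
print(R_ij, M)
```

Because the self-resistance and self-inductance terms cancel in the subtraction, this method needs no separate characterization of the individual coils.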
Influence of the Conductivity in the Mutual Impedance
In order to investigate the effect of conductivity on the mutual impedance, the coils are tested both in air and immersed in water of different conductivities. As depicted in Figure 5, the coils are parallel and coaxial in the water tank. When the water tank is not filled with water, the mutual impedance between the coils in air is tested; different conductivities are then produced by adding sea salt to the water tank. The experimental results are shown in Figure 6a,b, compared with the analytical results. Following the analysis of the influence of the summation term N and truncation region h on the accuracy of the calculation in the previous section, N = 30 and h = 5r_2 are selected in this case. The distance between the two coils is 20 mm, and the measurement frequency is fixed at 100 kHz. The conductivity ranges from σ = 0 S/m, which corresponds to air, to σ = 4 S/m, which corresponds to seawater. The estimated results show the same trend as the simulation and measurement results. As shown in Figure 6a, the real part of the mutual impedance increases with conductivity. When the conductivity is zero, the difference between the estimated results and the simulated and measured results is large: the estimated real part of the mutual impedance is zero. As the conductivity increases, this difference gradually narrows. This is because the proposed model assumes that the current in the coil is uniformly distributed and does not consider the skin effect or the proximity effect, which the actual coil inevitably exhibits. There is coupling between the turns of the coil due to the proximity effect, and the magnetic field generated by eddy currents in the water also affects the coupling between the coils.
When the medium conductivity σ is zero, the real part of the mutual impedance comes mainly from the proximity effect, which is not considered in the model. As σ increases, the magnetic field produced by eddy currents in the water becomes the main cause of the change in the real part of the mutual impedance. Therefore, when the conductivity is low, the estimation error of the real part is larger, while the error decreases as the conductivity increases. In Figure 6b, there is only a slight reduction in the imaginary part of the mutual impedance: when σ = 0 S/m, the corresponding mutual inductance is 51.56 µH, and when σ = 4 S/m, it is reduced to 51.52 µH. The estimation error is about 4%. The estimation error of the imaginary part mainly comes from the definition of the equivalent turn density of coil j. To simplify the calculation, the cross-section of the coil region is assumed to be a rectangle, and the equivalent turn density is used to approximate the coil space. However, the cross-section of each turn of the actual and simulated coils is circular, and the coil does not occupy the whole rectangular cross-sectional space; thus, there is a constant bias in the estimation. The change of mutual impedance with conductivity shows that, when the conductivity of the medium and the excitation frequency of the coil are not too high, the additional magnetic field induced by eddy currents in the medium has little effect on the original magnetic field generated by the excitation coil, so the mutual inductance in water is almost the same as that in air.
Influence of Frequency in the Mutual Impedance
In this case, the effect of the excitation frequency on the mutual impedance is investigated. Considering that the change of the imaginary part is not obvious when the conductivity is less than 4 S/m, and that in an underwater environment high frequency causes high ohmic losses, this section focuses on the variation of the mutual impedance with frequency up to 1 MHz when the coil is immersed in seawater. As in the previous section, N = 30 and h = 5r_2 are selected. As shown in Figure 7a,b, the imaginary part decreases with increasing frequency, while the real part of the mutual impedance increases. However, the imaginary part at tens and hundreds of kHz is only slightly reduced, whereas the real part of the mutual impedance increases exponentially. The estimation error of the real part grows gradually with the coil excitation frequency: on the one hand, the eddy current energy in the water is enhanced; on the other hand, due to the skin and proximity effects, the current distribution in the coil can no longer be treated as uniform. Precisely because increasing the frequency increases the energy loss in the medium, the working frequency of underwater wireless power transmission systems is usually around 200 kHz [6]. The estimated results of the model in this frequency band can be used for parameter optimization in the design phase.
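A quick way to see why losses grow so fast with frequency is the skin depth δ = sqrt(2/(ωµσ)) of seawater, which falls toward the coil's length scale as the frequency approaches 1 MHz. A minimal sketch (σ = 4 S/m as in the experiments):

```python
import numpy as np

# Skin depth delta = sqrt(2 / (omega * mu * sigma)) in seawater,
# illustrating why the real (loss) part of Z_ij grows rapidly with frequency.
mu0 = 4e-7 * np.pi
sigma = 4.0  # S/m, seawater as in the experiments

for f in (10e3, 100e3, 200e3, 1e6):
    omega = 2 * np.pi * f
    delta = np.sqrt(2 / (omega * mu0 * sigma))
    print(f, delta)  # delta shrinks from metres toward the ~0.2 m coil scale
```

At around 200 kHz the skin depth is still several times the coil radius, which is consistent with the observation that the imaginary part (mutual inductance) is barely affected in this band while the loss term grows.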
Variation of Mutual Impedance with Axial Displacement
In an underwater MI communication or WPT system, it is usually necessary to keep the coils coaxial to ensure sufficient coupling. In this configuration, we focus on the influence of axial displacement on the mutual impedance when the coil is immersed in water. To assess the effect of water on the mutual impedance, the mutual impedance in air is also measured for comparison. The measurement frequency is fixed at 100 kHz, and the axial displacement is varied from 0.2r to 1.2r. As shown in Figure 8a,b, as the axial displacement of the coils increases, both the real and imaginary parts of the mutual impedance decrease gradually. When the displacement increases to r, the imaginary part decreases by about 0.8. It is worth mentioning that the trend of the imaginary part with axial displacement is almost the same in water and in air. Because frequency is also one of the factors affecting the mutual impedance between coils, the mutual impedance with axial displacement at 10 kHz and 910 kHz is also compared and tested with σ = 4 S/m. The results are shown in Figure 9a,b. It can be seen that Figures 9b and 8b are almost the same. This is because, when the conductivity of the medium is fixed, although frequency has an impact on the imaginary part, the attenuation of the imaginary part is very small in the frequency range of interest. The difference is that different excitation frequencies lead to differences in the real part of the mutual impedance.
Error Analysis Introduced by the Assumption of Cross-Section
In the previous sections, we systematically analyzed the effects of medium conductivity, coil excitation frequency, and axial displacement on the mutual impedance. The method proposed in this paper can estimate how these factors influence the mutual impedance, although the estimates show small deviations.
The method proposed in this paper assumes that the current is evenly distributed in the coil and, for convenience of calculation, approximates the coil cross-section by a rectangle. Since the estimation error caused by the current distribution has been analyzed in Sections 4.1 and 4.2, this section focuses on the error introduced by the assumption about the coil cross-section. Calculations of the mutual impedance of the coils have been performed with the number of turns as a parameter. The two coils are placed coaxially with a vertical distance of 2 cm; the inner radii of both coils are fixed at 6 cm; the excitation frequency is fixed at 100 kHz; and the conductivity of the medium is 4 S/m. The number of turns is swept from 10 to 20. To analyze the error of the model, both the circular cross-section, closer to the real object, and the rectangular cross-section used for the approximate calculation are compared. The results are shown in Figure 10a,b. As the number of turns increases, the real and imaginary parts of the mutual impedance increase gradually. This is because increasing the number of turns increases the coupling area of the coils but also increases the eddy current loss in the water, which leads to increases in both the real and imaginary parts of the mutual impedance. It can be seen from the simulation results that, with the excitation frequency and other parameters unchanged, the rectangular cross-section yields a smaller estimate of the real part and a larger estimate of the imaginary part of the mutual impedance than the circular cross-section. This also indirectly verifies the deviation of the method proposed in this paper.
Therefore, in the solution process, the rectangular cross-section can be used to approximate the coil cross-section, because the error caused by this assumption is small.
Discussion
In the design process of MI communication and WPT systems, the most important work is the design of the coils and the optimization of the system parameters. The size of the coils and the operating frequency of the system determine the communication distance of the MI communication system, or the theoretical power capacity and power transmission efficiency of the WPT system. The model and calculation method proposed in this paper can be used to estimate the coupling state of the coils during the design process. Through this estimate, combined with the electrical parameters of the coil itself, the working parameters of the system can also be optimized. Although the proposed model only studies the mutual impedance characteristics of coils immersed in water, it can also be used in other cases where a conductive medium is present. The coupling state of the coils in the corresponding environment can be obtained simply by adding the dielectric layer and modifying the electrical parameters of each layer.
Conclusions
In this paper, a model of the mutual impedance between coils immersed in water is established. In this model, the TREE method is adopted, and the magnetic vector potential is expressed as a series according to the continuity of the magnetic flux density and magnetic field intensity at the media boundaries. Finally, the expression for the mutual impedance is derived. The calculated results are in good agreement with the experimental results in the frequency bands commonly used in underwater wireless power transmission systems. The study shows that, in the presence of water, the mutual impedance between the coils is related to the excitation frequency and the conductivity of the medium. Increasing the frequency decreases the imaginary part of the mutual impedance, but compared with that in air it is reduced by less than 1%. However, the real part of the mutual impedance increases significantly with frequency and conductivity. Therefore, resistive loss is the significant difference between coils in water and in air.
A FUZZY SEMANTIC INFORMATION RETRIEVAL SYSTEM FOR TRANSACTIONAL APPLICATIONS
In this paper, we present an information retrieval system based on fuzzy logic to relate vague and uncertain objects with un-sharp boundaries. The simple but comprehensive user interface of the system permits the entry of uncertain specifications in query forms. The system was modelled and simulated in a Matlab environment; its implementation was carried out using Borland C++ Builder. The performance of the system, measured using precision and recall rates, is encouraging. Similarly, the smaller amount of more precise information retrieved by the system will positively impact the response time perceived by users.
INTRODUCTION
Information Retrieval (IR) is the science of searching for information in documents, searching for metadata that describe documents, or searching within databases, whether relational stand-alone databases or a hypertextually networked database such as the World Wide Web. The core functionality of an IR system is the retrieval of data from a database whose abstraction matches the description of an ideal object, inferred from a query (Bellman et al., 1992). A complex algorithm is used to search through the information, retrieve it, and deliver the results to the user.
Traditional information retrieval models, as employed in current search engines, typically express a query as a set of keywords in which Boolean expressions involving words are used as terms to find information (Huang & Tsai, 2005). These keyword indexing systems and Boolean logic queries are sometimes equipped with statistical methods (Buckley & Fuhr, 1990), for instance, using the frequency of occurrence of a keyword to strengthen the relationship between keyword and object. This model, called the keyword-based information retrieval model, uses keyword lists to describe the contents of information objects. The keyword list is a description that says nothing about semantic relationships between keywords. According to Baeza-Yates and Ribiero (1999), the simplicity of these models usually prevents the formulation of more elaborate querying tasks. In addition, they give unsatisfactory query results when the semantic content of the query forms cannot be easily represented.
In this study, a fuzzy semantic information retrieval model is developed for query tasks that cannot be translated into clear forms. For instance, a potential buyer may be interested in an elementary Java textbook that introduces graphics programming with a pinch of data structures together with a bit of design patterns. Formulating a query using conventional retrieval models to describe the buyer's information needs specified above is difficult owing to the ambiguity introduced in specifying the intensity of the data structure content and the concentration of design patterns to be included in said textbook. In a related manner, the use of advanced features of a typical search engine has always been limited because most users do not know about them or understand their use (Jansen et al., 2000). This study therefore also presents a simple search interface that does not require users to use advanced features explicitly but is powerful enough to describe all of a user's information needs.
The IR system retrieves documents based on a given query, and since documents, and in most cases the queries themselves, are often vague and uncertain, the use of fuzzy logic to relate these classes of objects with un-sharp boundaries (degrees of membership) is certainly not out of place, especially in transactional applications where the needs of potential buyers are diverse and imprecise. This study uses the flexibility and power of fuzzy if-then rules to develop an information retrieval system with an interface that enables users to easily define their complex information needs and obtain results that closely and precisely represent those needs.
Data Science Journal, Volume 8, 24 October 2009

This paper is organised as follows. Section 2 presents related work. In Section 3, a description of the fuzzy semantic information retrieval system is given. In Section 4, the simulation of the IR model using Simulink from the Matlab software is described. In Section 5, the implementation of the fuzzy semantic information retrieval model using Borland C++ Builder is discussed. Performance evaluation of the IR model, with results, is presented in Section 6. Finally, Section 7 concludes the study.
RELATED WORK
Several approaches have been proposed to help users specify their information needs more effectively. For instance, Belkin et al. (2003) proposed the use of an additional space for users to type a wordier description of their information needs, while Kelly et al. (2005) proposed the use of clarification forms to extract additional information about search context from users. These approaches are effective in best-match retrieval systems, where longer queries generally lead to more relevant search results (Belkin et al., 2003). The downside of these methods is the increase in the size of the query. Relevance Feedback (RF) (Oddy, 1977) and interactive query expansion (Efthimiadis, 1996) are other useful techniques for improving the quality of information provided by users of IR systems regarding their information needs. In the RF approach, the user presents the system with examples of relevant information that are later used to formulate an improved query. However, according to Kaski et al. (2005), getting users to use RF in the Web domain is difficult owing to the complexity of conveying the meaning and the benefit of RF to users. Query suggestions offered on the basis of query logs have the potential to improve retrieval performance with a limited burden on users; however, this approach is not suitable for commercial sites where re-execution of a similar query is rare.
Most commercial search engines provide an advanced query interface that allows the specification of advanced queries using Boolean operators (AND, OR, and NOT) to combine terms. However, according to Jansen (2000) and Silverstein et al. (1999), only a small percentage of users are able to use this function correctly because of the complexity involved in formulating such syntax. Separate studies by Chi et al. (2001) and Teevan et al. (2005) revealed that gathering more information about users can improve the effectiveness of searches. However, this comes at the expense of storing more information about users than is typically available from interaction logs, and there is also difficulty in associating interactions with user characteristics.
FUZZY SEMANTIC INFORMATION RETRIEVAL MODEL
A bottom-up approach was employed in the design of the fuzzy semantic information retrieval system. For simplicity of presentation, an information retrieval system for books is considered here. The system consists of three modules, each implemented as a fuzzy inference system. The modules comprise category, which ranks related groups of books (programming, artificial intelligence, databases, etc.) into one partition; feature, which maps books by their features (price, format, publisher, etc.) into another partition; and fsir, which combines the partitions from the other two modules (category and feature) into a partition that is used to identify book(s) in the database. Figure 1 depicts the architecture of the model. In this model, the ambiguity in the database or the user's query is represented using fuzzy logic. A fuzzy semantic information retrieval system is described using a fuzzy logic model (Mamdani, 1974) of the form

R_n: IF x_1 is A_1n AND ... AND x_m is A_mn THEN Y is B_n    (1)

where R_n is a typical fuzzy rule of degree n, and the output of the rule depends on the degree of activation of its antecedent. The Mamdani inference scheme aggregates the outputs into a single fuzzy set for the variable Y, and a defuzzification process is applied later to transform the output fuzzy set into a crisp value. The premise of a fuzzy rule specifies the condition that must be true before the rule fires. The firing of each component premise clause of a rule depends on the degree of truth associated with it as a result of the fuzzified crisp input values. The premise space of the variables is partitioned into fuzzy subspaces by studying the characteristics (context and content of books and their features) and the relationships between the books in the database. For instance, a Java programming textbook is likely to be related to computer graphics concepts, as one to four chapters of a Java textbook may be dedicated to these concepts. The linguistic variables are associated with specific ranges of values by defining fuzzy sets over the Universe of Discourse (UoD) for each input variable.
For clarity of presentation, the fuzzy logic modelling of the category module is discussed in-depth in this study.
The Category module has five input variables (programming, general, graphics, artificial intelligence, and internet), with their linguistic terms expressed as fuzzy sets.For example, the input variable general has five linguistic terms, which represent a sub-group of books under it.This sub-group includes automata theory, compiling techniques, data structures and algorithms, operating systems, and databases, formally represented in the system as general = {automata, compiler, datastructure, os, database}.
The degree to which an input value belongs to a given fuzzy set is computed by the respective membership function. After a careful analysis of the characteristics of the available books (input data), triangular and trapezoidal membership functions were selected. The fuzzy set for each of the input variables (programming, general, graphics, ai, and internet) of the category module is shown in Figure 2. For the premise parameter identification (identification of premise and consequence) process, the space of each input variable is taken in turn and partitioned into fuzzy subsets while keeping the ranges of the other variables unpartitioned. Therefore, for the category module, when the 'programming' variable is partitioned, the variables 'general', 'graphics', 'ai', and 'internet' are not partitioned; likewise, when the 'general' variable is partitioned, the variables 'programming', 'graphics', 'ai', and 'internet' are not. At the end of the identification process for the consequence and premise parameters, a set of rules that describes the behaviour of the fuzzy inference system is produced. Looking at the membership functions depicted in Figure 2, the input variable 'programming' has seven premise sets, the variable 'general' has five, the variables 'graphics' and 'ai' have four each, and the variable 'internet' has two. Hence, there are 7*5*4*4*2 = 1120 rules for each input variable; as there are five variables, the total number of rules amounts to 1120*5 = 5600. However, using rules of thumb, or heuristics, concerning the relationships among the variables, it is possible to reduce the number of rules significantly (Zhang et al., 1997). It should be noted that removing a fuzzy subset from the clause of a rule reduces the number of rules by 25. After eliminating irrelevant rules, the total number of rules left in the category module is 540. Similar procedures were carried out for the feature and fsir modules: the feature module has 360 rules, while the fsir module has 720 rules.
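The rule-count arithmetic above can be checked directly; the sketch below follows the paper's own counting convention (1120 premise combinations, multiplied by the five variables before pruning):

```python
# Premise-set counts for the category module's five input variables,
# as read off Figure 2 in the text.
premise_sets = {"programming": 7, "general": 5, "graphics": 4,
                "ai": 4, "internet": 2}

combinations = 1
for count in premise_sets.values():
    combinations *= count                    # 7 * 5 * 4 * 4 * 2

print(combinations)                          # 1120
print(combinations * len(premise_sets))      # 5600 rules before pruning
```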
Performing a fuzzy inference process involves the following steps: (i) Fuzzification: takes the crisp numerical values of the inputs and determines the degree to which they belong to each of the appropriate fuzzy sets via membership functions. (ii) Weighting: applies the fuzzy logic operators (AND, OR, and NOT) to the membership values of the premise parts to obtain a single number between 0 and 1 that forms the fuzzy strength of each rule. (iii) Generation: creates the consequent relative to each rule. (iv) Defuzzification: aggregates the consequents to produce the output. Among the various defuzzification methods, the weighted average is used in this study because of its reliable average performance.
The following example illustrates how fuzzy logic is used to define a partition for books.We assume that a set of books has the following properties: 70-100% in content of C++ programming, with 20-40% content of automata theory, 60-100% of compilation techniques, and 20-60% content of data structures and algorithms.
Then the values 0.35 and 0.36 (see Figure 2(a-b), for instance) are entered for the input variables programming and general respectively in the category module. The corresponding truth values obtained from Figure 2(a-b) are given in Table 1. These values, when plugged into the rules that fire (rules 3, 4, and 10 given below), give a crisp value that represents the ranking of books in this set. The membership functions (singletons) 2.5, 3.5, and 9.5 are assigned to 'fuzzy-set is c3', 'fuzzy-set is c4', and 'fuzzy-set is c10' respectively.
Thus, the value 4.32 represents the partition of the set of books with the properties given in the example above.
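As a sketch of the weighted-average defuzzification used above: each fired rule contributes its singleton consequent, weighted by its firing strength. The singletons 2.5, 3.5, and 9.5 come from the example; the firing strengths below are hypothetical, since Table 1's truth values are not reproduced here.

```python
def weighted_average(strengths, singletons):
    """Weighted-average defuzzification: sum(w_i * s_i) / sum(w_i)."""
    return sum(w * s for w, s in zip(strengths, singletons)) / sum(strengths)

singletons = [2.5, 3.5, 9.5]   # consequents of rules 3, 4, and 10
strengths = [0.2, 0.5, 0.3]    # hypothetical firing strengths
print(weighted_average(strengths, singletons))  # 5.1 for these strengths
```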
In most fuzzy IR systems (Anvari & Rose, 1987; Buckles & Petry, 1982; Medina et al., 1994), a numeric indexing function F exists, where F: D x T → [0,1], such that F maps a given record r_j and a given keyword k_i to a numeric weight between 0 and 1. F(r_j, k_i) = 0 implies that the record r_j is not at all about the concept represented by keyword k_i, and F(r_j, k_i) = 1 implies that the record r_j is perfectly represented by the concept indicated by k_i. By contrast, based on the assumption that a clustering process can be performed, our proposed model partitions the sample into sets such that each one contains exactly those values that represent one and only one real-world object. These partitions are then used as the sets of values that should be returned.
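The contrast can be sketched as follows; the records, weights, and partition values here are hypothetical, chosen only to show that classic fuzzy IR scores every record while the partition model narrows the search to one set first:

```python
# Classic fuzzy IR: an indexing function F(record, keyword) -> [0, 1].
F = {("r1", "java"): 0.9, ("r1", "graphics"): 0.4, ("r2", "java"): 0.1}

def weight(record, keyword):
    return F.get((record, keyword), 0.0)   # 0.0: record not about the concept

# Proposed model: each object sits in exactly one partition; a query is
# evaluated only against the partition closest to its crisp value.
partitions = {4.32: ["book_a", "book_b"], 18.53: ["book_c"]}

def lookup(query_value):
    nearest = min(partitions, key=lambda p: abs(p - query_value))
    return partitions[nearest]
```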
MODELLING WITH MATLAB
In schools and industry, simulation tools based on MATLAB and Simulink are popular for science and engineering applications. MATLAB has many instructions and tools for designing applications and developing algorithms, while Simulink provides an excellent graphical user interface and block libraries that allow rapid and easy building, simulation, and testing of system models. Furthermore, since MATLAB contains the Fuzzy Logic Toolbox, it becomes a powerful tool for intelligent-systems simulation and analysis. The fuzzy inference engine of the category module is depicted in Figure 3. The fuzzy semantic IR system was simulated using Simulink.
Figure 4 depicts an interaction with the model when the following book features are entered: book category (100%), programming in C++ (0.65), price ($85), pages (500), paperback format (0.5), and publisher O'Reilly (14.5). The simulated model responds with the crisp value 18.53, which describes the user's information needs.
Figure 5 depicts the objects in the database (in the blue rectangle) that relate closely to this value (18.53) given by the fuzzy semantic information retrieval system modelled in Simulink and depicted in Figure 4. In this study, the relationship between the user's query and the objects in the database is defined using Equation (2), where q is the query, o represents the object, and r is the query result.
MODEL IMPLEMENTATION
A fuzzy semantic product search system (ABC Bookstore) was developed using Borland C++ Builder running in a Windows environment on a PC. The search interface of the system is depicted in Figure 6. To conduct a query, a user specifies his/her information needs using one of two methods: either by dragging the scroll bar, in which case the text box above the scroll bar shows the corresponding value, or by typing the feature value into the text box, with the scroll bar moving accordingly. To prevent erroneous input, the system disallows entering a value smaller than the minimum or larger than the maximum. For instance, the minimum and maximum prices considered in the system, based on the information available for the books used in the study, are $20 and $200 respectively; hence, a user cannot enter a price less than $20 or greater than $200.
The system also allows for indecisive selection by users. For instance, if the user is not particular about the book category, he or she can simply check the "don't know" option, and the system then executes the query using default settings. To search the system, the user selects a coarser-grain option of book categories, for instance programming; then, using a combo box, selects a finer-grain option for specific programming books, e.g., Java, BASIC, Pascal, etc.; and finally specifies the features (price, format, publisher, etc.) that represent his/her information need. The search interface (see Figure 6) is divided into two parts (category and feature selection), which allows for easy entry of query specifications. The experimental database consists of five book categories and four book features. The categories are Programming, General, Design and Graphics, Artificial Intelligence, and Internet/Network, while the features considered are price, number of pages, publisher, and format.
The respective default settings for each category and feature are shown in Figure 6. After finishing the query specification, the user clicks the "Search" button, which triggers the system to search the database and present a search result that closely relates to the user's information need. For instance, assume a user wants a list of paperback-format books consisting of 50% Java and 50% graphics content, published by O'Reilly, with price and pages not more than $50 and 1000 respectively. The search interface showing how the query is entered is depicted in Figure 7. Figure 8 shows the search result (targets) returned by the search engine. In this example, the system returns the objects that are closely related to the specified query. Equation (2) is used to determine the closeness between the objects and the user's query.
PERFORMANCE EVALUATION
The algorithm developed in Section 2 was applied to a university library database containing about 2000 computer textbooks. The database structure has fields that include bookID, author, title, isbn, publisher, category, price, edition number, and publication date. The proposed fuzzy IR system reads a book's properties and partitions it into sets using book attributes as well as the relationships between book categories (e.g., is Java programming related to databases?). Queries were built manually with terms from the title, publisher, category, and price fields. Twenty (20) queries were used in the evaluation; some examples are given in Table 2. In order to evaluate the performance of our proposed fuzzy semantic information retrieval model, results were compared with the fuzzy IR algorithm developed by Kraft et al. (1994) and a boolean IR system, using the standard recall and precision evaluations, computing the precision and recall at various cut-off points, where the precision is determined at various recall levels. The two measures are given in Equations (3) and (4):

P = FR / TF    (3)
R = NR / TR    (4)

where P is the precision rate, FR is the number of targets found to be relevant, and TF is the total number of targets found; similarly, R is the recall rate, NR is the number of relevant targets found, and TR is the total number of relevant targets. Stated another way, the recall rate is the ratio of the number of relevant targets discovered to the total number of relevant targets in the database repository. Figure 9 shows the precision evaluation when taking 10 recall levels from 0 to 100%; that is, given a ranked result of the search, human experts judged the relevance of the first-ranked document to the query. If it was truly relevant, it was associated with a 100% precision level. The same procedure was repeated for the second-ranked document, the third-ranked document, and so forth. The values in Figure 9 were obtained by averaging 20 searches of roughly similar queries. The proposed fuzzy semantic information retrieval system outperforms the other two IR systems, with 82% of relevant documents on average being retrieved, versus 75% and 41% for the fuzzy IR and boolean IR systems respectively.
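Equations (3) and (4) can be computed directly; the counts below are hypothetical, not the paper's measurements:

```python
def precision(found_relevant, total_found):
    """P = FR / TF (Equation 3)."""
    return found_relevant / total_found

def recall(relevant_found, total_relevant):
    """R = NR / TR (Equation 4)."""
    return relevant_found / total_relevant

# Hypothetical search: 10 targets returned, 8 relevant, 16 relevant overall.
print(precision(8, 10))  # 0.8
print(recall(8, 16))     # 0.5
```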
Although our system and the fuzzy IR system are both built using fuzzy logic, they use different indexing and query-processing strategies, which leads to their performance difference. In traditional fuzzy IR, indexed terms map every record and the query to a numeric weight between 0 and 1. As a result, queries are always evaluated against all objects in the database. By contrast, in our system, objects are partitioned into sets, and queries are evaluated against the most suitable partition(s) from the available list of partitions. In this way, our system benefits from grouping objects with similar interests: a query is evaluated only against the correct set/partition, which usually produces good results.
Moreover, most fuzzy IR systems use the Hamming distance or Euclidean distance as the distance measure between query and object, whereas in our model the formula for the matching degree is simpler (given in Equation (2)), which slightly reduces the computational cost.
Table 2. Queries used in the evaluation
Query: 60% of Java and 30% of Graphics
Meaning: Extract all textbooks having at most 60 percent Java programming content and at most 30 percent graphics content

Query: Data structures and price between $45 and $80
Meaning: Extract from the data structure category all books priced in the range $45 to $80

Query: 20 percent of database
Meaning: Extract all computer textbooks with at most 20 percent of their content covering database concepts

Similarly, we evaluated the fuzzy semantic information retrieval model against the boolean IR system with respect to the size of the information returned. Our belief is that the smaller and more precise the information returned, the better for users as well as for the resources (bandwidth, processor, RAM) used to process and convey it. This goes a long way toward improving the user-perceived quality of service, particularly the response time, where bandwidth limitations often result in slow traffic.
We compared the two models with respect to search-result size and the size of information (images and text) returned when the same query is invoked on the two IR systems. This process was repeated a number of times using different queries, and the average value of five trials was recorded. The approximate system response time was calculated for the two IR systems using Equation (5) for transmission speeds of 28.8 kbps, 56 kbps, 96 kbps, and 128 kbps.
T = (PgSize × 8) / B    (5)

where PgSize is the size of the information returned (text and images) measured in Kbytes, and B is the transfer rate in kbps.
The average page sizes returned by the two models were plotted against bit rates, and the response times recorded, as shown in Figure 10. The approximate response time of the fuzzy semantic information retrieval model is shorter (half of the time recorded for the boolean IR system) across the different transmission speeds. This is a result of extracting less, and more relevant, information, which goes a long way toward reducing the load on computing devices and the network.
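The response-time estimate of Equation (5) can be tabulated for the listed bit rates; the 90 KB page size below is hypothetical:

```python
def response_time_seconds(page_size_kbytes, bitrate_kbps):
    """T = (PgSize * 8) / B: kilobytes to kilobits, divided by kbit/s."""
    return page_size_kbytes * 8 / bitrate_kbps

for bitrate in (28.8, 56, 96, 128):
    t = response_time_seconds(90, bitrate)   # hypothetical 90 KB page
    print(f"{bitrate} kbps -> {t:.1f} s")
```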
CONCLUSION AND FUTURE WORK
In this article we have developed a fuzzy semantic information retrieval model that can be used to query transactional databases. The most apparent aspect is the use of a fuzzy inference system to develop the three sub-modules of the model.
Figure 2. Fuzzy sets for the input variables
Figure 3. Fuzzy inference engine of the category module
Figure 4. Simulink model of the information retrieval system
Figure 5. Objects related to the query given in Figure 4
Figure 6. A fuzzy semantic information retrieval
Figure 7. A sample search user interface
Figure 8.
Figure 9. Precision and recall rates of the model
Figure 10. System response time
Table 1. Truth values for the input variables
Research on Influence of Damping on the Vibration Noise of Transformer
To improve the accuracy of transformer vibration analysis, a numerical vibration model of a transformer that includes the damping effect is proposed in this paper. According to the power transformer structure, the Rayleigh damping model is used to represent the transformer damping effect. The damping coefficients can be obtained by analyzing and testing the transformer structure. A modal measurement system for the prototype is constructed and tested to improve and verify the modal analysis method, which can then be applied to engineering power transformers for which modal measurement cannot be performed. Using the Rayleigh damping parameters, the vibration and noise of damped and undamped transformers are calculated respectively, and the effect of damping on the vibration and noise is obtained. Finally, the vibration and noise of the transformer were measured and compared with the analytical results. Comprehensive analysis shows that the results obtained when damping is considered are improved and are closer to the measured results.
It is of great significance to study the vibration and noise of transformers [1], [2], [3]. The main sources of transformer noise are body vibration and the cooler. The vibration and noise of a transformer are related to the transformer load, silicon steel sheet material, core structure, magnetic flux density, and other factors [4]. (The associate editor coordinating the review of this manuscript and approving it for publication was Yingxiang Liu.) In recent years, in order to better design low-noise transformers, the academic community has paid increasing attention to improving the accuracy of transformer vibration and noise calculations [5], [6]. By combining electromagnetic field theory with elastic theory, a mathematical model of transformer electromagnetic vibration was established [7], [8]. Reference [9] studied the magneto-mechanical effects of core transformers with different structures, and experiments verified that transformers have high vibration strength. References [10], [11] established an electromagnetic-mechanical vibration coupling mathematical model considering the magnetostrictive characteristics of the transformer, and simulated the vibration and noise of the transformer. Reference [12] proposed obtaining the quantitative relationship between the magnetostriction characteristics of the core reactor and the noise by systematically evaluating the noise and vibration shape of a simple small transformer core. Reference [13] obtained the relationship between the magnetostriction of grain-oriented electrical steel (GOES) coils and no-load noise. The expressions for the transformer mass matrix and stiffness matrix coefficients are obtained by numerical calculation:
Transformer vibration noise mainly comes from noise caused by core vibration, that is, the magnetostrictive effect of the core silicon steel sheets and the electromagnetic attraction caused by magnetic leakage between the joints of the silicon steel sheets and disks. In addition, when the transformer is running, the winding current generates magnetic leakage in space; the winding, under the alternating magnetic field, is affected by the Lorentz force, causing winding vibration and noise [14].
Damping, as one of the mechanisms of energy dissipation in the vibration process, is also an important factor affecting the vibration response of the transformer.
Proceeding from vibration analysis, this paper comprehensively interprets the vibration and noise of the transformer through modal analysis, magnetic-mechanical coupling analysis, and acoustic field analysis of the prototype transformer.
Modal analysis is the process of replacing the original finite element node coordinates with vibration (modal) coordinates. The frequency response function for a given input and output position is expressed by modal parameters. In this paper, the equation of motion of the prototype transformer's N-degree-of-freedom system is simplified into a finite element elastic system with mass, elasticity, and damping in the vibration coordinate system; its equation of motion is shown in (6). Take the special solution:
In the magnetic field, the governing equation for the magnetic vector potential is ∇ × (μ⁻¹ ∇ × A) = J, where A is the magnetic vector potential and J is the current density. In the structural force field, the relationship between stress and strain applies. During the operation of a power transformer, the transformer produces electromagnetic vibration due to the interaction of electromagnetic fields, the main forces being the Maxwell force F_vmax on the core, the magnetostrictive force F_vms, and the Lorentz force F_l generated by the windings. The electromagnetic force is calculated according to (15), and the results are analyzed in combination with solid mechanics to realize the magnetic-mechanical coupling, where T represents the Maxwell stress tensor; σ_vms is the magnetostrictive stress; D represents the elastic tensor, which can be obtained from the Young's modulus and Poisson's ratio of the silicon steel; ε_vms is the magnetostrictive strain tensor, obtained by interpolating the measured B − λ_pp curve; and F is the volume force in the structural field.
The node vibration equation is

M d²u/dt² + C du/dt + K u = F

where M is the mass matrix, C is the damping matrix, K is the stiffness matrix, u is the nodal displacement, and F is the nodal force.
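A minimal sketch of assembling the Rayleigh damping matrix used in this paper, C = αM + βK, for a toy two-degree-of-freedom system; the matrices and coefficients are illustrative, not the prototype transformer's:

```python
import numpy as np

M = np.diag([2.0, 1.0])                 # toy mass matrix
K = np.array([[400.0, -200.0],
              [-200.0,  200.0]])        # toy stiffness matrix
alpha, beta = 0.8, 1e-4                 # illustrative Rayleigh coefficients

# Damping matrix of M u'' + C u' + K u = F under the Rayleigh model.
C = alpha * M + beta * K

# Undamped natural frequencies from the generalized eigenproblem K v = w^2 M v.
w_squared = np.linalg.eigvals(np.linalg.solve(M, K))
freqs_hz = np.sqrt(np.sort(w_squared.real)) / (2.0 * np.pi)
print(C)
print(freqs_hz)
```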
Combined with the virtual work (virtual displacement) method, the finite element method is used to discretize the solution domain into elements, and all the subdivision elements are assembled. The magnetic-mechanical coupling model can then be written in a form where S is the electromagnetic matrix, u is the displacement to be determined, and A is the magnetic vector potential to be found. When the damping effect is not taken into account, the damping term in the vibration equation of the structure is 0. In (19), ρ is the density of the medium and η is the shear modulus; the material parameters used are listed in Table 1:
Due to the symmetry of the prototype transformer structure, the whole transformer model is simplified to a 1/2 model, which simplifies the model and improves the calculation efficiency. The model is then meshed, comprising 7454 domain elements, 3839 boundary elements, and 944 edge elements. The results are shown in Figure 1.
Since the magnetic and magnetostrictive properties are inherent properties of the material, the magnetic and magnetostrictive data of the silicon steel sheet under operating conditions are brought into the model for calculation. First, the input excitation of the transformer is set by the current calculated from the no-load rated voltage provided by the prototype transformer manufacturer. In this paper, a 26 A AC power supply is used as the input excitation of the transformer, and the windings are set as uniform and multi-turn. Because the prototype works at power frequency, the frequency domain is set to 50 Hz. After the designed circuit meets the operating conditions of the transformer, the numerical simulation of electromagnetic vibration and noise is carried out.
Through the modal experiment platform and the electromagnetic vibration measurement platform, the modal parameters, electromagnetic vibration and noise of the three-phase transformer were measured (VOLUME 10, 2022). The test conditions are given in Table 2, and the modal test system for transformer specimens is shown in Figure 2. Since the first few orders of modal state have a more significant effect on the system vibration, the first six orders of modal frequency were taken as the main object of study in this paper. The first six orders of the transformer are shown in Figure 4.
Combined with the simulated experimental calculation data, after the modal test completed the data acquisition at each measurement point, the collected excitation force signal and the response signal collected by the three-way acceleration sensor were imported into the X-MODAL/DSP signal processing system, and the first six orders of the transformer's calculated and measured eigenfrequencies were collated as shown in Table 3. (Figure: response at measuring points (3) and (4) with the change of excitation frequency.) By analyzing the electromagnetic vibration of a 10 kVA/380 V three-phase transformer, the electromagnetic vibration of the transformer under different damping conditions is studied in this paper. To reflect the influence of modal damping on vibration, it is first necessary to carry out a vibration analysis of the transformer. On the basis of a correct calculation of the magnetic field, the analysis of electromagnetic vibration is started.
The specific parameters of the experimental prototype first need to be entered into the simulation software to calculate the core flux density and coil current density of the magnetic field in the transformer prototype, as shown in Figure 7.
Based on the calculations obtained in the magnetic field, the magnetostrictive strain is converted into magnetostrictive stress using the stress-strain relationship of elastodynamics, which is applied to the vibration calculation as a load, resulting in the vibration of the transformer shown in Figure 8.
This paper is based on the magneto-mechanical coupling model, which is further extended to establish an analytical model of the sound field of the prototype. The purpose of the model is to accurately calculate the magnitude of the vibration noise generated by the prototype transformer. The curve calculated with damping is closer to the experimental values. Therefore, it is easy to draw the conclusion that the electromagnetic vibration analysis of transformers is affected by the damping effect to some extent, and that considering the damping effect makes the calculation of electromagnetic vibration more accurate. The comparison data are shown in Table 4. 2) The accuracy of the calculation is improved with the addition of the damping effect.
In this paper, the distribution of the electromagnetic vibration and noise of a transformer considering damping effects is studied. These conclusions play an important role in improving the calculation accuracy of transformer electromagnetic vibration and noise, accurately predicting the noise level of transformer products, and researching more effective methods of vibration and noise reduction.

He received the bachelor's degree in electrical engineering and automation from the Hebei Normal University of Science and Technology, in 2019. He is currently pursuing the master's degree in electrical engineering with Tiangong University.
His research interests include numerical analysis of engineering electromagnetic fields, vibration reduction, and noise-reduction technology for electromagnetic energy equipment. During his master's degree, he won a Freshman Scholarship and a Third-Class Scholarship, in 2020. In April 2021, he published an invention patent, "Method for Active Noise Reduction of Electrical Equipment," which has been disclosed. His research interests include numerical analysis of engineering electromagnetic fields and multiphysics coupling.
LAN LU was born in November 1995. She received the bachelor's degree in electrical engineering and automation from the School of Science and Technology, North China Electric Power University, in 2018. She is currently pursuing the master's degree in electrical engineering with Tiangong University. Her research interests include numerical analysis of engineering electromagnetic fields and vibration- and noise-reduction technology for electromagnetic energy equipment. During her master's degree, she won a Freshman Scholarship and a Third-Class Scholarship. In September 2021, she published a utility model patent, "A Phononic Crystal Sound Isolator for Noise Reduction of Electrical Equipment." At present, her research mainly focuses on the electromagnetic vibration and noise of engineering electrical equipment.
Anti-Depressant Fluoxetine Reveals its Therapeutic Effect Via Astrocytes
Although psychotropic drugs act on neurons and glial cells, how glia respond, and whether glial responses are involved in therapeutic effects, are poorly understood. Here, we show that fluoxetine (FLX), an anti-depressant, mediates its anti-depressive effect by increasing the gliotransmission of ATP. FLX increased ATP exocytosis via the vesicular nucleotide transporter (VNUT). FLX-induced anti-depressive behavior was decreased in mice in which VNUT was deleted globally or selectively in astrocytes, but was increased when VNUT was overexpressed selectively in astrocytes. This suggests that VNUT-dependent astrocytic ATP exocytosis has a critical role in the therapeutic effect of FLX. Released ATP and its metabolite adenosine act on P2Y11 and adenosine A2b receptors expressed by astrocytes, causing an increase in brain-derived neurotrophic factor in astrocytes. These findings suggest that, in addition to neurons, FLX acts on astrocytes and mediates its therapeutic effects by increasing ATP gliotransmission.
Introduction
Depression is a major public health problem worldwide. About 350 million people suffer from the disease, and it was projected to become the second leading cause of the global disease burden by the year 2020 [49]. There are several effective treatments for depression, but it is estimated that one-third of depressed patients do not respond adequately to conventional antidepressant drugs. Moreover, the slow onset of their therapeutic effects also restricts antidepressant use. Thus, there is an urgent need to identify the biological mechanisms of depression and the pharmacological actions of antidepressants. It is thought that antidepressants mediate their therapeutic effects by acting on neurons, especially monoaminergic neurons, but they also act on non-neuronal cells such as glial cells. However, to date, how glial cells respond to antidepressants, and whether glial responses are involved in their therapeutic effects, remain unknown.
Astrocytes are the most abundant glial cells in the brain. In addition to their classical roles, such as providing physical support to neurons or removing neuronal waste, astrocytes actively regulate brain functions by releasing so-called "gliotransmitters" such as ATP, glutamate and D-serine [26]. Of these, ATP has received increased attention because it is released from astrocytes [25] and mediates various functions that regulate adjacent cells. In addition, released ATP is metabolized into adenosine, and both ATP and adenosine provide autocrine and paracrine signals via P2 and P1 receptors, respectively. Multiple pathways have been reported for the release of ATP, including connexin hemi-channels [16], pannexin hemi-channels [61], maxi-anion channels [40], P2X 7 receptors [60] and exocytosis. Recently, Sawada et al. [51] reported that the vesicular nucleotide transporter (VNUT) takes up ATP into intracellular vesicles. ATP is released by VNUT-dependent exocytosis in several types of cells, including neurons [41], keratinocytes [30], microglia [29] and astrocytes [24,35]. Astrocytic ATP has gained increasing attention because a recent report by Cao et al. clearly showed that decreased extracellular ATP mediated by astrocytes in the hippocampus caused depression in mice [6]. However, the mechanisms underlying the contribution of decreased ATP to depressive behavior, and whether anti-depressants affect astrocytic ATP functions, are poorly understood.
Brain-derived neurotrophic factor (BDNF) is increased by antidepressants and is considered to have a major role in the therapeutic action of antidepressants. For example, reduced BDNF levels were reported in depressed patients and models of depression, and antidepressant treatment increased BDNF expression [21]. It is well known that the majority of BDNF is produced by neurons [42] as well as microglial
Generation of Mlc1-tTS BAC Transgenic Mice
The codons of bacterial tetracycline activator protein and human zinc finger protein KRAB domain were fully mammalianized (tTS). Mouse BAC DNA (clone RP23-114I6) was initially modified by inserting a Rpsl-Zeo cassette (gift from Dr. Hisashi Mori) into the translation initiation site of the Mlc1 gene, followed by replacement with a cassette containing tTS and the SV40 polyadenylation signal. BAC DNA was linearized by PI-SceI (Cat. # R0696S, New England Biolabs Inc., Massachusetts, U.S.A.) enzyme digestion and injected into fertilized eggs from CBA/C57BL6 mice.
Generation of VNUT-tetO Knock-in Mice
tetO-responsive transgenes were constructed by placing a tetO-responsive promoter element, using 129SvEv ES cells (Cat. # CMTI-1, RRID:CVCL_GS41). The tetO sequence was inserted upstream of the translation initiation site, and tetO insertion did not alter wild-type expression patterns [64]. Therefore, VNUT protein levels in VNUT-tetO homozygous mice were equivalent to those in wild-type mice.
Doxycyline-Mediated Control of Gene Expression in Double Transgenic Mice
We did not administer doxycycline to inhibit tTA- or tTS-mediated transcriptional control.
All mice were housed in plastic cages in groups of one to five per cage, at room temperature, and with free access to water and food. They were kept on an artificial 12 h light/dark cycle.
Experimental Schedule for Drug Treatment of Mice
FLX was freshly dissolved in saline before use. Animals were administered FLX orally at a dose of 10 or 20 mg/kg, or saline, at a volume of 10 ml/kg, once daily for 21-28 days.
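As a worked example of the dosing arithmetic above (an illustrative sketch, not part of the published protocol; the function name is ours): delivering 20 mg/kg at 10 ml/kg requires a 2 mg/ml FLX solution, and a 25 g mouse receives 0.25 ml of it.

```python
def dosing(dose_mg_per_kg, volume_ml_per_kg, body_weight_g):
    """Return (solution concentration in mg/ml, gavage volume in ml)
    for a body-weight-scaled oral dose."""
    concentration = dose_mg_per_kg / volume_ml_per_kg        # mg/ml
    volume = body_weight_g / 1000.0 * volume_ml_per_kg       # ml for this animal
    return concentration, volume
```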
Tail Suspension Test
Animals were tested using a modified version of the tail suspension test (TST) that has been previously validated [58]. On the testing day, mice were brought into the behavior room 1 h before the test session to allow them to habituate to the environment. All experimental testing sessions were conducted between 12:00 P.M. and 6:00 P.M., with animals assigned and tested in random order. Eight FLX-treated animals were used, with a matched number of saline-treated control subjects. Each behavioral test was conducted 1 h after the preceding drug injection. Mice were individually suspended by the tail with a clamp (1 cm from the tip) for 6 min in a box (MSC2007, YTS, Yamashita Giken, Tokushima, Japan) with the head 10 cm above the bottom of the box. Testing was carried out in a darkened room with minimal background noise. The duration of immobility was scored manually during the 6 min test; "immobility" was defined as the time when the mouse did not show any body movement and hung passively.
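Immobility was scored manually in this study. Purely as an illustrative sketch (hypothetical function; it assumes a per-frame boolean movement classification is already available, e.g. from video tracking), the immobility duration could be computed as:

```python
def immobility_seconds(moving, fps):
    """Total immobile time in seconds, given a per-frame boolean
    trace where True means the mouse is moving."""
    return sum(1 for frame in moving if not frame) / fps
```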
Immunohistochemistry
After perfusion, brain segments were postfixed in 4% paraformaldehyde for 24 h, then cryoprotected in 20% sucrose in 0.1 M phosphate-buffered saline (PBS) (pH 7.4) for 24 h and in 30% sucrose in 0.1 M PBS for 48 h at 4°C. Brain segments were frozen in embedding compound (Sakura Finetek, Tokyo, Japan) on dry ice. They were cut with a cryostat (Leica CM 1100; Leica, Wetzlar, Germany) at a thickness of 30 μm and collected in PBS at 4°C to be processed immunohistochemically as free-floating sections. The sections were incubated for 48 h at 4°C with primary antibodies: mouse anti-GFAP (1:2000; Cat. # AB5804, RRID:AB_10062746) and rabbit anti-BDNF (1:2000; Cat. # sc-546, RRID:AB_630940). The sections were washed six times with 0.1 M PBS (10 min each) and then incubated for 3 h at room temperature with secondary antibodies: Alexa488- and Alexa546-conjugated anti-mouse and anti-rabbit IgGs.
Primary Cultures of Rat or Mouse Hippocampal Astrocytes
Primary cultures of astrocytes were derived from the hippocampus of newborn Wistar rats, with the exception of Fig. 1C, for which they were from C57BL/6J mice and VNUT-KO mice. Rat or mouse hippocampi were separated, minced, treated with 0.025% trypsin/EDTA (Gibco, NY) for 10 min at 37°C, and then centrifuged for 10 min at 1000 ×g. The pellet was suspended in horse serum (Invitrogen, San Diego, CA), filtered and cultured in 75 cm2 flasks in DMEM (Gibco, NY) containing 5% fetal bovine serum (Biological Industries, Kibbutz Beit-Haemek, Israel) and 5% horse serum at 37°C in a 5% CO2 environment. After 10-13 days of incubation, the culture was placed on a shaker and the cells were subjected to 24 h of continuous shaking to remove detached cells. Adherent astrocytes were detached by exposure to 0.1% trypsin/EDTA, then plated on 3.5-cm dishes and cultured in DMEM containing 5% fetal bovine serum and 5% horse serum at 37°C in a 5% CO2 environment. Experiments were conducted with 5-7-day-old cultures.

and NF157 (Cat. # 2450) were from Tocris Bioscience (Ellisville, Missouri, USA). Botulinum toxin type A was from Allergan (Irvine, California, USA). All drugs were prepared as stock solutions in PBS or DMSO. The stocks were divided into single-use aliquots and stored at 4°C or −30°C as required. In all experiments, the control groups without drugs received PBS or DMSO at a final concentration matching that of the drug-containing solution. The maximum final DMSO concentration was 0.1%, and this concentration of DMSO had no effect on the expression of Bdnf mRNA compared with PBS alone (data not shown). NF340 stock solutions and botulinum toxin type A lost activity even at −30°C; therefore, solutions were freshly prepared in distilled water and used on the same day.
Quantitative PCR Analysis
Astrocytes were prepared in 35 mm dishes (4 × 10 5 cells/dish) and total RNA was isolated and purified using NucleoSpin RNA II Kit (Cat. # U0955, Macherey-Nagel) according to the manufacturer's instructions. Reverse transcription (RT)-PCR was performed using a one-step PrimeScript RT-PCR Kit (Cat. # RR064, Takara Bio Inc., Shiga, Japan). The reaction mix contained 200 ng of total RNA, 200 nM primers, 100 nM TaqMan probe, TAKARA Ex Taq HS and PrimeScript RT enzyme mix. PCR assays were performed in 96-well plates on an Applied Biosystems 7500 (Applied Biosystems, Foster City, CA, USA). Reverse transcription was performed at 42°C for 5 min followed by inactivation at 95°C for 10 s. The temperature profile for PCR consisted of 40 cycles of denaturation at 95°C for 5 s, and annealing/extension at 60°C for 34 s. Primers and the TaqMan probes for rodent Gapdh (Cat. # 4308313) and Bdnf (Mm01334045_m1) were obtained from Applied Biosystems.
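The section does not state the quantification formula. A common choice for TaqMan data normalized to a reference gene such as Gapdh is the comparative Ct (2^-ddCt) method, sketched here (function name is ours; it assumes roughly 100% amplification efficiency):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target
    Ct to the reference gene in each condition, then compare conditions."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)
```

For example, a Bdnf Ct two cycles lower (relative to Gapdh) in treated cells than in controls corresponds to a 4-fold increase.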
Luciferin-Luciferase ATP Assay
The bulk extracellular ATP concentration of astrocytes cultured in 24-well plates was measured by the luciferin-luciferase assay, as described previously (Wilharm et al., 2004), using an ATP Bioluminescence Assay Kit CLS II according to the manufacturer's recommendations. In brief, samples (100 μl for 24-well plates) were collected from each well at specified time points, heated at 95°C for 10 min, and mixed with 100 μl of luciferin-luciferase reagent, and photons were then counted for 30 s with a luminometer at 20°C. ATP standards provided with the kit were diluted over the range 10^-5 to 10^-10 M ATP. The no-cell blank value was subtracted from the raw data, and ATP concentrations were calculated from a log-log plot of the standard curve data.
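The log-log standard-curve calculation can be sketched as follows (illustrative only; the function names and the synthetic standards are ours, not the kit's software). A line is fitted to log10(concentration) vs. log10(luminescence counts), the blank is subtracted from each sample, and the curve is inverted:

```python
import math

def fit_loglog(standards):
    """Least-squares line through (log10 conc, log10 counts) for a list
    of (concentration_M, counts) standard points."""
    xs = [math.log10(conc) for conc, _ in standards]
    ys = [math.log10(counts) for _, counts in standards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

def atp_conc(counts, blank, slope, intercept):
    """Invert the log-log standard curve after subtracting the no-cell blank."""
    net = counts - blank
    return 10 ** ((math.log10(net) - intercept) / slope)
```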
Primary Culture of Rat Hippocampal Neurons
Primary cultures of neurons were derived from the hippocampus of newborn Wistar rats. Rat hippocampi were separated, minced, and digested in Neuron Dissociation Solutions Kit (Cat. # 291-78,001, Wako Pure Chemical) according to the manufacturer's protocol. Neurons were dispersed in DMEM containing 5% fetal bovine serum and 5% horse serum and maintained under an atmosphere of 10% CO 2 at 37°C. The culture medium was changed twice a week and neurons were used 14 days after plating.
Purification of Astrocytes by Magnetic-Activated Cell Sorting (MACS)
Purification of astrocytes from the adult mouse brain was performed with MACS technology using an adult brain dissociation kit (130-107-677, Miltenyi Biotec, Bergisch Gladbach, Germany) and a MACSmix™ Tube Rotator (130-090-753), following the manufacturer's protocol. Mice were anesthetized with 50 mg/kg pentobarbital (i.p. injection) and transcardially perfused with ice-cold 0.1 M PBS. The brain was chopped into small pieces (approximately 1 mm) with surgical scissors and digested in 1900 μl of buffer Z containing buffer Y (20 μl) and enzymes A (10 μl) and P (50 μl), using the dissociation program of the tube rotator. Then 20 ml of ice-cold PBS containing 0.5% (wt/vol) BSA (PBS/BSA) was added and mixed, and samples were filtered through a cell strainer (100 μm).
Samples were centrifuged at 300 ×g for 7 min at 4°C and the supernatant was discarded. The pellet was resuspended in 3100 μl of PBS/BSA, 900 μl of debris removal solution (130-109-398) was added, followed by 4 ml of PBS/BSA and centrifugation at 3000 ×g for 10 min at 4°C. The supernatant was aspirated, 15 ml of PBS/BSA was added, and the solution was mixed well. Samples were centrifuged at 300 ×g for 7 min at 4°C and the supernatant was discarded. Then 80 μl of PBS/BSA and 10 μl of FcR blocking buffer were added, followed by 10 μl of anti-astrocyte cell surface antigen-2 (ACSA-2) microbeads (130-097-678). Samples were incubated for 15 min at 4°C and centrifuged at 300 ×g for 7 min at 4°C, and the cells were resuspended in 500 μl of PBS/BSA. The cells were then transferred to an LS column (130-042-401). The column was set on a magnetic stand and 3 ml of PBS/BSA was added three times. The column was removed from the magnet and 3 ml of PBS/BSA was added. The flow-through was collected as the ACSA-2-negative (ACSA-2−) fraction. Another 3 ml of PBS/BSA was then added to the column, and the fraction containing the anti-ACSA-2-attached astrocytes was collected. Using this technique, the ACSA-2-positive fraction had significantly higher Gfap mRNA levels (>30-fold) than the ACSA-2-negative fraction (Fig. S1B and S1C), indicating the successful purification of astrocytes from the adult mouse brain.
Statistics
Data are presented as the mean ± SEM from n ≥ 3 independent determinations performed in duplicate. The significance of differences between control samples and samples treated with reagents was determined using ANOVA followed by Tukey's test for multiple comparisons. Unpaired and paired t-tests were used for comparisons of two groups. Differences were considered significant when the P value was <0.05.
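As a minimal illustration of the first step of this analysis (a sketch, not the authors' code; the Tukey post hoc test and the P value lookup against the F distribution are omitted), the one-way ANOVA F statistic can be computed directly from the group samples:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample lists:
    between-group mean square divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```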
FLX Stimulates the Exocytosis of ATP in Astrocytes
As reported by Cao et al., a decrease in extracellular ATP in hippocampal astrocytes caused depression, which was restored by exogenously applied ATP [6]. To investigate whether the antidepressant FLX increases extracellular ATP in astrocytes, primary cultures of hippocampal astrocytes were stimulated with FLX. As shown in Fig. 1A, FLX increased extracellular ATP, which reached a maximal level (30.5 ± 2.8 nM) 5 h after FLX stimulation. Multiple pathways or mechanisms have been reported for ATP release in glial cells, such as maxi-anion channels [40], P2X 7 receptors [60], connexin and pannexin hemi-channels [16,61] and exocytosis [44]. The FLX-evoked increase in extracellular ATP was significantly reduced by the Ca 2+ chelator BAPTA-AM (10 μM), the V-ATPase inhibitor bafilomycin (3 μM) and the SNARE inhibitor botulinum toxin A (BTX; 5, 10 U/ml), but not by the connexin/pannexin inhibitor carbenoxolone (CBX, 100 μM), suggesting the involvement of exocytosis (Fig. 1B). Astrocytes express soluble N-ethylmaleimide-sensitive factor attachment protein receptors (SNAREs) such as synaptobrevin, syntaxin I and SNAP-23 [67] and release ATP by an intracellular Ca 2+ -dependent mechanism [11]. In addition, Sawada et al. recently identified the vesicular nucleotide transporter (VNUT, or Slc17a9), an essential molecule for the vesicular storage and release of ATP [51]. FLX-evoked ATP release was significantly inhibited in astrocytes obtained from VNUT-knockout (VNUT-KO) mice (Fig. 1C; WT, 34.5 ± 2.4 nM vs. VNUT-KO, 22.4 ± 2.3 nM), suggesting that FLX induces the release of ATP at least in part by VNUT-dependent exocytosis. Although BTX almost abolished FLX-induced ATP release in astrocytes, BAPTA-AM, bafilomycin and VNUT deletion reduced it only partially (by approximately 45%, 49%, and 34%, respectively). These findings suggest that mechanisms other than exocytosis are also involved in ATP release.
FLX Increased Extracellular ATP and Induced Anti-Depressive Behavior Via Astrocytic VNUT
To determine whether FLX affects the amount of ATP in vivo, we measured the concentration of ATP in the ACSF from acute hippocampal slices of FLX-administered mice. The chronic administration of FLX markedly upregulated the amount of ATP in the hippocampus of WT mice, while this increase was completely blocked in VNUT-KO mice ( Fig. 2A). These results indicate that increased ATP release by FLX is dependent on VNUT.
ATP derived from astrocytes modulated depressive behaviors in mice [6]. To elucidate whether astrocytic ATP gliotransmission facilitated by FLX mediates its therapeutic effects, we tested FLX-induced anti-depressive effects in VNUT-KO mice. FLX at 10 and 20 mg/kg was administered for 21 days to wild-type (WT) control mice, and its therapeutic effect was assessed by the tail suspension test (TST). As shown in Fig. 2B, chronic administration of FLX (21 days) significantly decreased immobility time in a dose-dependent manner over the range 10-20 mg/kg (saline [control], 145.4 ± 11.6 s vs. FLX at 20 mg/kg, 38.4 ± 18.4 s), indicating that FLX induced anti-depressive effects in mice. These results correspond well with a previous report [14], and thus we chose FLX at 20 mg/kg administered for 21 days for the following experiments.
When FLX (20 mg/kg) was administered for 21 days to VNUT-KO mice, its anti-depressive effect, as measured by a decrease in immobility time, was significantly weaker than in WT mice (Fig. 2C). When saline was administered, there was no significant difference in immobility time between WT and VNUT-KO mice. As shown in Fig. 1, FLX caused VNUT-dependent ATP exocytosis from hippocampal astrocytes; therefore, we generated double-transgenic mice from astrocyte-specific tetracycline trans-silencer (tTS) or tetracycline trans-activator (tTA) lines and VNUT-tetO knock-in lines for astrocyte-specific gene knockout or overexpression. Astrocytes purified from the adult brains of Mlc-tTA::VNUT-tetO or Mlc-tTS::VNUT-tetO mice using MACS exhibited significantly increased or reduced Slc17a9 mRNA levels (1020-fold increase or 2.6-fold decrease, respectively) (Fig. S1), whereas no changes were detected in other cell types (Fig. S1D, E). Hereafter, we refer to Mlc-tTA::VNUT-tetO (astrocyte-selective VNUT-overexpression) and Mlc-tTS::VNUT-tetO (astrocyte-selective VNUT-knockout) mice as astro-VNUT-OE and astro-VNUT-KO mice, respectively. We investigated the effect of astrocyte-selective VNUT deletion on FLX-evoked anti-depressive effects using astro-VNUT-KO mice (Fig. 3A). There was no significant difference in immobility time between astro-VNUT-KO mice and their littermate control mice when treated with saline. However, similar to VNUT-KO mice, FLX-evoked anti-depressive effects were significantly weaker in astro-VNUT-KO mice than in littermate control mice (astro-VNUT-KO, 86.6 ± 5.1 vs. littermate control, 49.3 ± 9.8 s immobility time; *p < .05) (Fig. 3B). We then tested the effect of astrocytic VNUT overexpression on the FLX-evoked anti-depressive effect using astro-VNUT-OE mice. At 20 mg/kg, the FLX-induced anti-depressive effect in astro-VNUT-OE mice was similar to that in WT mice and littermate controls (Fig. 3C).
However, at 10 mg/kg, the FLX-induced anti-depressive effect was significantly stronger in astro-VNUT-OE mice than in WT or littermate control mice (Fig. 3C, middle). Thus, a decrease or increase in astrocytic VNUT correlated with a decrease or increase in the anti-depressive effects of FLX, respectively. There was no significant difference in basal immobility time, tested before and after saline administration, among any of the mutant mice used in these studies (WT, littermate control, astro-VNUT-KO and astro-VNUT-OE mice) (data not shown). These findings strongly suggest that FLX acts on astrocytes to control VNUT-dependent ATP exocytosis, which mediates its therapeutic effect, at least in part.
FLX and Other Antidepressants Induce BDNF in a Primary Culture of Hippocampal Astrocytes
To address the mechanisms underlying the astrocytic VNUT-mediated anti-depressive effect of FLX, we focused on extracellular ATP and BDNF, because FLX increased ATP in astrocytes (Fig. 1) and ATP increased the astrocytic expression of Bdnf mRNA [62], one of the most important molecules in the pathogenesis of depression [72]. Fig. 4A and B show the dose- and time-dependency of the FLX-evoked increase in Bdnf mRNA in a primary culture of hippocampal astrocytes. Treatment with FLX for 6 h increased Bdnf mRNA in a concentration-dependent manner (1-30 μM) (Fig. 4A), and at 12 h after 30 μM FLX administration, it reached 4270% of the PBS-treated control (p < .01). The FLX-evoked increase in Bdnf mRNA was initiated at 1 h and gradually increased until at least 12 h after FLX administration (Fig. 4B). FLX also increased BDNF protein levels, which reached a maximal level 24 h after FLX treatment (Fig. 4C). We also investigated the effects of other antidepressants on Bdnf mRNA expression in cultured hippocampal astrocytes. As shown in Fig. 4D, treatment with imipramine (30 μM), paroxetine (30 μM) or FLX (30 μM) for 12 h significantly increased the expression of Bdnf mRNA, but mianserin (30 μM) did not. Imipramine is a tricyclic antidepressant, paroxetine and FLX are classified as selective serotonin reuptake inhibitors (SSRIs), and mianserin is a tetracyclic antidepressant. These results suggest that increased BDNF in astrocytes might be a common pharmacological feature across different classes of antidepressants.
FLX is a pro-drug, and is metabolized into norfluoxetine (NFLX), which then mediates its pharmacological effects [57]. Therefore, we tested the effect of NFLX on Bdnf mRNA expression in hippocampal astrocytes. NFLX increased Bdnf mRNA in a concentration-dependent manner (10 and 30 μM) (Fig. S2A).
In cell cultures, the concentration of FLX and other antidepressants used (20-30 μM) mostly exceeded the therapeutic plasma levels in patients (1-3 μM), indicating the effects of antidepressants in this study might be overestimated. However, concentrations of FLX in the human brain were reported to be 20-fold higher than those in the plasma [34], indicating that an FLX concentration of 30 μM might occur in the brain.
Chronic Administration of FLX Increases BDNF in Hippocampal Astrocytes In Vivo
To determine whether FLX increases BDNF in astrocytes in vivo, we measured BDNF expression in astrocytes by immunohistochemical analysis. After chronic administration of FLX (20 mg/kg for 21 days), brain sections were stained with anti-BDNF and anti-GFAP (glial fibrillary acidic protein) antibodies (Fig. 5). In saline-administered mice, BDNF-immunoreactivity was predominantly observed in neurons of the granule cell layer of the dentate gyrus and the pyramidal cell layers of CA1, CA2 and CA3 (Fig. 5a), but little BDNF-immunoreactivity was observed in GFAP-positive astrocytes (Figs. 5d-g). After chronic FLX administration, however, BDNF-positive signals were increased in neurons and in astrocytes across all regions of the hippocampus (Figs. 5h-n), indicating that FLX increased BDNF in astrocytes in vivo. In comparison with the hippocampus, astrocytes in the cortex showed only a slight increase in BDNF-immunoreactivity (not shown), indicating that the FLX-induced BDNF increase in astrocytes is partly dependent on brain region. This regional difference in BDNF expression was also observed in vitro, where FLX-induced Bdnf mRNA upregulation was significantly higher in hippocampal astrocytes than in cortical astrocytes (Fig. S3).
We measured the BDNF expression in VNUT-KO mice (Fig. S4). In contrast to WT mice, BDNF-positive signals were less elevated in GFAP-positive astrocytes despite the chronic administration of FLX, indicating that FLX-induced astrocytic BDNF expression in vivo depends on VNUT. We also examined whether microglia express BDNF by immunohistochemistry after the chronic administration of FLX. CD11b positive microglia did not express BDNF and CD11b staining did not show any morphological changes such as the retraction of processes or hypertrophic cell bodies and processes, which are characteristic of activated microglia (data not shown). These data indicate that the major sources of BDNF in our model are astrocytes and neurons, but not microglia.
FLX-Evoked BDNF Upregulation in Astrocytes Is Mediated by Activation of P2 and P1 Receptors
Next, we investigated the mechanisms underlying the FLX-evoked increase in BDNF in astrocytes, with a focus on extracellular ATP-mediated signals, because FLX increases extracellular ATP. The FLX-evoked increase in Bdnf mRNA in astrocytes was significantly decreased by the non-selective P2 receptor antagonists suramin and RB-2 and by the P2Y 11 receptor antagonist NF340, but not by the P2X receptor antagonist pyridoxal phosphate-6-azobenzene-2,4-disulfonic acid (PPADS) or the P2Y 1 receptor antagonist MRS2179. In addition, the upregulation of Bdnf mRNA was inhibited by the adenosine A2b receptor antagonist MRS1706, and was further inhibited when MRS1706 was applied simultaneously with suramin (Fig. 6A). This suggests that both P2 and P1 receptors, especially P2Y 11 and A2b receptors, are involved in BDNF production. Similar results were obtained from western blot analysis of BDNF (Fig. 6B). To confirm the involvement of ATP in BDNF upregulation, hippocampal astrocytes were stimulated directly with ATP. We observed increased Bdnf mRNA and BDNF protein in astrocytes, which were decreased by treatment with MRS1706 or suramin alone, and further decreased by their co-application (Fig. 6C, D). In addition, the NFLX-evoked increase in Bdnf mRNA was also inhibited by suramin or MRS1706 (Fig. S2A).
Regarding P1 receptors, we also performed additional pharmacological analyses. Extracellular ATP is rapidly metabolized into ADP, AMP and adenosine by NTPDases and 5′-nucleotidase [70]. Unlike ATP or ADP, adenosine acts on P1 receptors such as the A1, A2a, A2b and A3 adenosine receptors. When hippocampal astrocytes were treated directly with adenosine, Bdnf mRNA was increased in a dose-dependent manner over a concentration range from 1 to 100 μM. The ED50 value was approximately 4.1 μM (Fig. S6A), suggesting the involvement of a low-affinity adenosine receptor subtype, possibly the A2b receptor [45]. Adenosine also increased BDNF protein levels in astrocytes (Fig. S6D). The time-course of adenosine-evoked Bdnf mRNA upregulation was transient and peaked at 1 h after stimulation (Fig. S6B). This time course was similar to that of ATP (Fig. S5A) and faster than that of FLX (Fig. 4B). Adenosine-evoked increases in Bdnf mRNA in astrocytes were inhibited by the A2b receptor antagonist MRS1706, but not by A1 (DPCPX), A2a (SCH58261) or A3 (MRS1220) receptor antagonists (Fig. S6C). All of these pharmacological profiles strongly suggest that A2b receptors are responsible for BDNF induction.
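An ED50 such as the ~4.1 μM value reported here is typically obtained by fitting a sigmoidal dose-response curve. A minimal sketch (ours, not the authors' method; Emax is assumed known and the Hill slope is fixed at 1) using a log-spaced grid search:

```python
def hill(dose, emax, ed50, n=1.0):
    """Hill equation response with zero baseline."""
    return emax * dose ** n / (ed50 ** n + dose ** n)

def fit_ed50(doses, responses, emax):
    """Grid-search the ED50 (uM) minimizing squared error, with Emax
    fixed and Hill slope 1; candidates span 0.1-1000 uM on a log grid."""
    candidates = [10 ** (i / 100) for i in range(-100, 301)]
    def sse(ed50):
        return sum((hill(d, emax, ed50) - r) ** 2
                   for d, r in zip(doses, responses))
    return min(candidates, key=sse)
```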
Next, we investigated the effect of 5-HT on BDNF expression in astrocytes. When treated with 5-HT (0.1, 1, 10 μM) for 1 and 6 h, astrocytes did not upregulate Bdnf mRNA (Fig. S5E), indicating no involvement of 5-HT in astrocytic BDNF production. Thus, the FLX-induced BDNF increase in astrocytes appears to depend on extracellular ATP and adenosine, and on activation of their corresponding receptors, P2Y11 and A2b, respectively.
FLX-Evoked BDNF Upregulation Is Mediated by cAMP/PKA in Hippocampal Astrocytes
We further investigated the intracellular signaling cascades of the FLX-evoked BDNF increase in astrocytes. We showed that both P2Y11 and A2b receptors are involved in the FLX-evoked responses. These receptors are coupled with Gs proteins, the activation of which results in the accumulation of cAMP and activation of protein kinase A (PKA) [12,13,22]. In addition, both receptors also mobilize intracellular calcium and activate Ca2+/calmodulin-dependent kinase (CaM kinase) [63]. P2Y11 receptors are coupled to Gs proteins as well as Gq proteins, leading to the mobilization of Ca2+ from inositol 1,4,5-trisphosphate [Ins(1,4,5)P3]-sensitive stores [68]. A2b receptors evoked a phospholipase C-dependent increase in intracellular Ca2+ [47] by Gq-dependent or -independent mechanisms [22]. However, the CaM kinase inhibitor KN-93 and the calmodulin antagonist W-7 did not inhibit the FLX- or ATP-evoked upregulation of Bdnf mRNA in astrocytes. In contrast, the PKA inhibitor H-89 [10] significantly reduced the ATP- and FLX-evoked responses by 62.8 ± 7.9% and 70.8 ± 1.6%, respectively, suggesting the involvement of PKA-mediated intracellular mechanisms in FLX- and ATP-induced BDNF expression (Fig. 7A, B).
Astrocytes constitutively release ATP [36], which is degraded by membrane-associated NTPDases [73]. Thus, the balance between the release and degradation of ATP greatly affects extracellular ATP concentrations [39]. FLX inhibits NTPDases [46]; therefore, the FLX-evoked Bdnf mRNA upregulation seen in the present study might occur through the inhibition of NTPDases, rather than through the stimulation of ATP exocytosis by FLX. To address this, we treated astrocytes with ARL67156 (100 μM), a selective inhibitor of NTPDases, but did not observe the upregulation of Bdnf mRNA (121.6 ± 2.7% of non-treated control, n = 5), suggesting that the BDNF upregulation by FLX cannot be explained by the inhibitory effect of FLX on NTPDases.
FLX Increases VNUT Via PKA-Dependent Mechanisms
Finally, we tested whether FLX affected VNUT expression in astrocytes. As shown in Fig. S8, treatment of astrocytes with FLX (30 μM) increased Slc17a9 mRNA (encoding VNUT). The upregulation peaked at 12 h and lasted at least 24 h after FLX treatment. The increase in Slc17a9 mRNA was abolished by H-89 (Fig. S8B), suggesting the involvement of PKA in Slc17a9 mRNA upregulation. FLX-evoked ATP release peaked at 5 h (Fig. 1A) and lasted at least 10 h after FLX stimulation. Because FLX-evoked ATP release preceded the FLX-evoked upregulation of VNUT, this suggests that FLX stimulates ATP exocytosis via a VNUT-dependent mechanism, and that the released ATP and its metabolite adenosine act on P2Y11 and A2b receptors, respectively, thereby causing the PKA-dependent upregulation of VNUT (Fig. 8). Such a feed-forward mechanism may affect ATP release and the BDNF increase when FLX is administered chronically.
We also tested whether other psychotropic drugs affected VNUT expression. The SSRI-type antidepressants paroxetine and fluvoxamine upregulated Slc17a9 mRNA in astrocytes, but the tetracyclic antidepressant mianserin did not (Fig. S8C). An antipsychotic drug haloperidol also had no effect (data not shown). Thus, there is a close correlation between Bdnf and Slc17a9 upregulation by FLX and paroxetine, but not by mianserin (Fig. 4D).
Discussion
In general, major depression is thought to be caused by the dysfunction of monoaminergic neurons, because a number of antidepressants exert their primary biochemical effects by inhibiting the reuptake of 5-HT and/or noradrenaline [15]. Thus, antidepressants are believed to act on neurons, especially monoaminergic neurons. SSRIs are the most commonly prescribed drugs for the treatment of depression, and are also thought to inhibit 5-HT reuptake in neurons. In addition, SSRIs also increase BDNF and neurogenesis [72], and these effects on neurons might contribute to their therapeutic effect. We demonstrated that astrocytes have a pivotal role in mediating the therapeutic effect of FLX (Figs. 2 and 3). A few studies have shown that astrocytes are involved in the pathogenesis of depression (reviewed by [27]). For example, loss of glia but not neurons was sufficient to induce depressive-like behavior in rats [2], and FLX counteracted astrocytic cell loss in an animal model of depression [17]. Furthermore, the anti-depressant-like effects of imipramine were abolished when astrocytic function in the hippocampus was inhibited by fluorocitrate [31]. All these findings strongly suggest that astrocytic dysfunctions correlate with the pathogenesis of depression, and that anti-depressants might counteract these dysfunctions. However, these reports only showed a correlation between astrocytic functions and depressive behaviors or depression-related molecules, and did not show causality between them. A causal relationship, as well as the molecular mechanisms linking astrocytes and depression, is still a matter of debate. A recent report by Cao et al. described a correlation between decreased extracellular ATP and depressive behavior, whereby (1) extracellular ATP concentrations in hippocampal astrocytes were low in depressive mice, and (2) when ATP was administered to the mice, the depressive behavior was reversed [6].
Thus, there seems to be a causal relationship between decreased extracellular ATP and depressive behavior, suggesting ATP might be an astrocytic molecule that controls depressive behavior. In the present study, we demonstrated that FLX, an SSRI antidepressant, increased extracellular ATP from hippocampal astrocytes by a VNUT-dependent mechanism. In addition, and most importantly, the FLX-induced anti-depressive effect was dependent on astrocytic VNUT (Fig. 3). A decrease or increase in VNUT in astrocytes decreased or increased the FLX-induced anti-depressive effects, respectively. Previous studies reported that FLX acted on neurons to mediate its therapeutic effects. In contrast, astrocytes have received limited attention as a therapeutic target of antidepressants. Therefore, this study emphasizes that, in addition to neurons, astrocytes also respond to FLX and other anti-depressants and contribute to their therapeutic effects. These findings strongly suggest that astrocytes might be a potential target for anti-depressants.
As described in the introduction, astrocytes possess multiple pathways for the release of ATP, including diffusible release from connexin hemi-channels [16], pannexin hemi-channels [61], P2X7 receptor channels, maxi-anion channels [40,60], and exocytic release [24,38,44]. We previously showed that microglia, another type of glial cell, released ATP by VNUT-dependent exocytosis [29]. In the present study, we clearly showed that FLX increased ATP release from astrocytes by exocytosis, because the release was inhibited by bafilomycin A, BTX or the deletion of VNUT, but not by CBX (Fig. 1B, C). We did not determine how FLX stimulates ATP exocytosis from astrocytes, but there seem to be at least two distinct mechanisms: (1) the direct stimulation of VNUT-dependent ATP exocytosis (which we have not clarified, but which might be independent of 5-HT-mediated signals (Fig. S5E)); and (2) the upregulation of VNUT in astrocytes, based on our findings that released ATP activated P2Y11 receptors and its metabolite adenosine activated A2b receptors, upregulating VNUT in a PKA-dependent feed-forward mechanism. Furthermore, FLX upregulated VNUT, and this was inhibited by the PKA inhibitor H-89 (Fig. S8B). Based on differences in the time-course of FLX-evoked ATP release (Fig. 1A) and FLX-evoked VNUT upregulation (Fig. S8A), events (1) and (2) probably occur separately. In addition to the inhibition of 5-HT uptake in neurons, FLX and other SSRIs have several other pharmacological functions. Tricyclic antidepressants [59] and SSRIs [43] were reported to inhibit Kir4.1 channels, an astrocyte-specific inwardly rectifying K channel. FLX inhibited Kir4.1 in astrocytes with an IC50 value of approximately 15 μM, similar to the ED50 value for FLX-evoked BDNF production in astrocytes in the present study (Fig. 4A). It is interesting that tricyclic anti-depressants and SSRIs inhibited Kir4.1 [43,59] and induced BDNF in astrocytes (Fig. 4), but mianserin, a tetracyclic antidepressant, did not inhibit Kir4.1 [43] or induce BDNF in astrocytes (Fig. 4D). These similarities are interesting, but further experiments are needed to clarify the involvement of Kir4.1 in the FLX-evoked ATP release in astrocytes.
How decreased extracellular ATP causes depressive effects, or how increased ATP mediates anti-depressive effects, remains unknown. Cao et al. showed that astrocytic ATP acts on neuronal P2X receptors to mediate its therapeutic effects. However, the detailed mechanisms have not been clarified. In the present study, we showed that FLX increased BDNF in astrocytes in an ATP- and adenosine-dependent manner. BDNF has received increasing attention as a therapeutic target for depression because BDNF levels were reduced in mood disorders and preclinical depression models [33,56], chronic treatment with anti-depressants increased brain BDNF gene expression and signaling [9], treatment with anti-depressants increased serum BDNF in patients [53], and an infusion of BDNF into the midbrain [55] or hippocampus [54] produced antidepressant-like effects in animal models of depression. In addition, patients with depression had SNPs of BDNF (X Jiang et al., 2005; Licinio et al., 2009). All these findings strongly suggest SSRIs might control BDNF-mediated signals, thereby leading to their therapeutic effects. Furthermore, all these reports showed the importance of neuronal BDNF. Therefore, the upregulation of BDNF by FLX-evoked ATP in astrocytes seen in the present study indicates that it might mediate astrocyte-related anti-depressive effects. Anti-depressants including SSRIs were reported to increase BDNF in neurons [42]. In the present study, the chronic administration of FLX increased BDNF in hippocampal neurons, but this increase was greater in astrocytes (Fig. 5). Recent reports showed that BDNF expression in cultured astrocytes under several conditions was upregulated by anti-depressants [1]. In this study, we demonstrated for the first time that BDNF was strongly upregulated in hippocampal astrocytes in chronically FLX-treated mice in vivo. The source of BDNF was reported to be mainly neurons, and possibly microglia, in the CNS.
Astrocytes have received limited attention as a source of BDNF because BDNF is expressed at low levels in astrocytes of the normal adult brain. However, upon stimulation with FLX, astrocytes dramatically increased BDNF production in vitro (Fig. 4) and in vivo (Fig. 5). The brain contains higher numbers of astrocytes than neurons, suggesting astrocytes might be a more important source of BDNF than neurons when exposed to FLX or other anti-depressants.
What is the mechanism underlying the ATP-mediated BDNF production in astrocytes? In neurons, SSRIs increase extracellular 5-HT, upregulating neuronal BDNF in a 5-HT receptor-dependent manner [4]. However, unlike in neurons, the FLX-evoked BDNF increase in astrocytes was independent of 5-HT, but was dependent on P2 and P1 receptors. Released ATP acts on several types of P2 receptors [48], and is immediately metabolized into adenosine by NTPDases and 5′-nucleotidases [73]. We showed that ATP and adenosine act on P2Y11 and A2b receptors, respectively, and upregulate BDNF via cAMP/PKA/pCREB-dependent pathways (Figs. 6, 7, S5, S6). Some studies have reported A2b receptors in astrocytes [47,69], and a few studies have reported P2Y11 receptors in astrocytes [3]. Astrocytic P2Y11 receptors were functional and were inhibited by NF340 or MRS1706 (Fig. 6A). In addition, cultured astrocytes expressed anti-P2Y11 receptor antibody-positive signals, which disappeared when the antibody was absorbed by its antigen peptide (Fig. S2B). Thus, both P2Y11 and A2b receptors appear to be present and functional in astrocytes. Both receptors are Gs-coupled GPCRs, and their activation results in cAMP/PKA pathway signaling in astrocytes. We demonstrated that the PKA-dependent formation of pCREB, a well-known transcription factor [19], is a key event that upregulates Bdnf mRNA in astrocytes. The inhibitory effect of suramin or MRS1706 on FLX-evoked BDNF production and pCREB formation was accentuated by co-application of both antagonists, suggesting that, at least in part, P2Y11 and A2b receptors might contribute to these events independently.

Fig. 8. Schematic diagram of the mechanism involved in FLX-induced ATP release. FLX acts on astrocytes to promote the release of ATP by exocytosis, which is dependent on VNUT. Released ATP and its metabolite adenosine respectively activate P2Y11 and A2b receptors expressed by astrocytes. The activation of both receptors results in an increase in cAMP, activation of PKA, and the induction of pCREB, leading to an increase in the transcription of BDNF in astrocytes. Activated PKA also upregulates VNUT expression, leading to a feed-forward loop of FLX-evoked ATP release and BDNF increase.
FLX induced a marked increase in astrocytic BDNF in the hippocampus, but only a small increase in the cortex. Thus, the effect of FLX on BDNF upregulation seems to be dependent on the brain region. We must await further investigation to clarify why such a difference occurs. However, this region-dependent upregulation of astrocytic BDNF by FLX may reveal new findings as to how and where anti-depressants mediate their therapeutic effects.
In conclusion, we demonstrated that the anti-depressant FLX acted on astrocytes, and mediated its therapeutic effects by facilitating VNUT-dependent ATP exocytosis. Decreased or increased VNUT in astrocytes resulted in decreased and increased FLX-evoked antidepressive effects, respectively, suggesting astrocytic ATP exocytosis via VNUT plays a pivotal role in modulating the therapeutic effect of FLX. The upregulation of BDNF in astrocytes might be the most likely event in FLX-evoked ATP-mediated anti-depressive effects. In addition to FLX, other anti-depressants also increased VNUT and BDNF in astrocytes, suggesting the astrocytic regulation seen in the present study might be a common pharmacological profile for anti-depressants.
System Architecture for IIoT-Based POC Molecular Diagnostic Device †
In this paper, we investigate an efficient structure for a point-of-care (POC) molecular diagnostic system based on the industrial Internet of things (IIoT). The target system can perform automated molecular diagnosis including DNA extraction, PCR amplification, and fluorescence detection. Samples and reagents are placed in a multi-room cartridge and loaded into the system. A rotating motor and a syringe motor control the cartridge to extract DNA from the sample. The extracted DNA is transferred to a polymerase chain reaction (PCR) chamber for DNA amplification and detection. The proposed system provides multiplexing of up to four colors. For POC molecular diagnostics, the World Health Organization demands features such as low volume, low cost, fast results, and a user-friendly interface. In this paper, we propose a system structure that can satisfy these requirements by using a PCR chip and an open platform. A distributed structure is adopted for ease of maintenance, and a web-based GUI is adopted for the user's convenience. We also investigated communication problems that may occur between system components. Using the proposed structure, the user can conveniently control the system from standard computing devices, including a smartphone.
Introduction
The disease-related mortality rate is high because it is difficult to diagnose infectious diseases in environments with limited resources, such as in developing countries. In such environments, in order to reduce the mortality rate due to infectious diseases, it is necessary to improve the performance of various test methods and shorten the test time. For this reason, various diagnostic methods have been developed and point-of-care (POC) testing is being used [1].
There are several methods for diagnosing infectious diseases with POC. Among them, culturing viruses or serological diagnosis methods take a lot of time for obtaining, extracting, and analyzing samples, and the extraction process of the sample is also complicated. Therefore, molecular biological diagnosis methods are preferred [2].
Eng. Proc. 2021, 6, 60

The World Health Organization (WHO) proposed ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free, and Deliverable to end-users) as a performance evaluation index for POC devices [1]. In this paper, the proposed system uses an open platform with disposable polymerase chain reaction (PCR) chips [3]. POC equipment consumes a lot of time and human resources for maintenance, and these costs can be reduced by using an open platform. In addition, in existing POC device software architectures, each function was controlled by one process. This makes it difficult to maintain and operate the system and to monitor the functions of the device at the same time. This paper introduces a distributed software architecture and a web-based user interface [4].
Materials and Methods
The molecular diagnosis process is functionally divided into nucleic acid extraction, amplification, and detection. Figure 1 shows the hardware system architecture of the proposed system that performs these functions.
In the system, the nucleic acid extraction unit operates based on a magnetic bead-based DNA extraction protocol and uses two stepper motors and one servo motor. Each of the two stepper motors selects a cartridge chamber or controls a syringe for moving reagents between chambers. The servo motor is used to hold the magnetic beads by bringing the magnet close to the chamber. The cartridge has several chambers where the samples, magnetic beads, and reagents used in the protocol are loaded.
By controlling these three motors, the system performs the DNA extraction protocol: lysis, bead addition, separation, washing, and elution. The extracted nucleic acid is moved to a PCR chip connected to the cartridge, where nucleic acid amplification and detection are performed.
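The extraction sequence above can be sketched as a simple command expander. Everything below is an illustrative assumption: the paper does not specify the firmware API, the chamber numbering, or any command names.

```python
# Hypothetical sketch of the extraction sequence; chamber numbers,
# actuator names, and command strings are illustrative assumptions.
PROTOCOL_STEPS = ["lysis", "add beads", "separation", "wash", "elution"]

# Assumed mapping of protocol steps to cartridge chambers.
STEP_CHAMBER = {"lysis": 1, "add beads": 2, "separation": 2, "wash": 3, "elution": 4}

def extraction_commands(steps=PROTOCOL_STEPS):
    """Expand the protocol into low-level (actuator, action) commands:
    the rotor stepper selects a chamber, the syringe stepper moves
    reagent, and the servo engages the magnet during separation."""
    commands = []
    for step in steps:
        commands.append(("rotor", f"select chamber {STEP_CHAMBER[step]}"))
        if step == "separation":
            commands.append(("servo", "move magnet to chamber wall"))
        else:
            commands.append(("syringe", f"transfer reagent ({step})"))
    return commands
```

Each protocol step expands into a chamber-selection command plus one actuator command, so the five-step protocol yields ten low-level commands in order.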
The nucleic acid amplification and detection unit uses a PCR chip, a fan, a stepper motor, a photodiode, four LEDs, an excitation filter, and an emission filter. The PCR chip is equipped with a heating pattern and a thermistor, which, together with an external fan, control the temperature of the chip. By heating and cooling the reagent inside the chip, DNA is amplified by repeating the processes of denaturation, annealing, and extension. At the end of each cycle, the photodiode, LEDs, and filters are used to detect fluorescence and monitor the amplification process. Four excitation LEDs illuminate the front of the chip at an angle, and the emission filter is selected by a filter wheel placed in front of the photodiode. This structure simplifies the system optics because there is no need for complex optics such as fluorescence cubes.
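The cycle-and-read loop described above can be sketched as follows. The temperature setpoints are typical PCR values, and the sigmoid stand-in for the photodiode signal is a toy model; neither is taken from the paper.

```python
import math

def run_pcr(n_cycles, read_channel):
    """Sketch of the amplification/detection loop: each cycle holds the
    chip at three temperatures, then reads the four fluorescence
    channels (setpoints are typical PCR values, not from the paper)."""
    steps = [("denaturation", 95.0), ("annealing", 55.0), ("extension", 72.0)]
    readings = []
    for cycle in range(1, n_cycles + 1):
        for _name, temp_c in steps:
            # In hardware: drive the heating pattern / fan until the
            # thermistor settles at temp_c, then hold for the step time.
            pass
        readings.append((cycle, [read_channel(ch, cycle) for ch in range(4)]))
    return readings

def sigmoid_signal(channel, cycle, midpoint=25, slope=0.35):
    """Toy amplification curve used in place of a real photodiode read."""
    return 1.0 / (1.0 + math.exp(-slope * (cycle - midpoint)))
```

Running `run_pcr(40, sigmoid_signal)` produces one four-channel reading per cycle, rising from near zero to a plateau, which is the shape a real-time amplification monitor would plot.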
In this paper, we propose a software system with a distributed structure that facilitates operation, monitoring, and maintenance through web-based UIs. The function of the proposed system is verified in the emulator for the system shown in Figure 1. The extraction unit, the amplification unit, and the detection unit are functionally independent and can, in general, be commercialized separately. Therefore, in the proposed system, the device interface is separated, and the entire function is integrated by placing an application programmer interface (API) server. Figure 2 shows a block diagram of all the device functions. The parts responsible for DNA extraction, amplification, and detection are the extractor controller, detector controller, and PCR controller shown in the figure, respectively. The extractor controller is connected to the extraction interface, a thread of the representational state transfer (REST) API server; the detector controller and the PCR controller are connected to the PCR interface, which is also a thread of the REST API server; and all are controlled by the API server. The API server communicates with the web GUI through the web API. Each interface or controller is implemented using a socket server, one of the inter-process communication methods. In this way, functions can be monitored even from outside the device.
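The socket-based separation between the API-server threads and the device controllers can be sketched with Python's standard library. The JSON message format and the `start_controller`/`send_command` names are assumptions for illustration, not the authors' actual interfaces.

```python
import json
import socket
import socketserver
import threading

class ControllerHandler(socketserver.StreamRequestHandler):
    # Each device controller (extractor, PCR, detector) runs a socket
    # server like this; the REST API threads connect to it as clients.
    def handle(self):
        request = json.loads(self.rfile.readline())
        reply = {"device": self.server.device, "cmd": request["cmd"], "status": "ok"}
        self.wfile.write((json.dumps(reply) + "\n").encode())

def start_controller(device, port=0):
    """Start one controller's socket server on a background thread
    (port 0 lets the OS pick a free port)."""
    srv = socketserver.ThreadingTCPServer(("127.0.0.1", port), ControllerHandler)
    srv.device = device
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

def send_command(port, cmd):
    """What an API-server thread (e.g. the extraction interface) would
    do to forward a command to its controller."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall((json.dumps({"cmd": cmd}) + "\n").encode())
        return json.loads(s.makefile().readline())
```

Because each controller listens on its own socket, a monitoring client can query it independently of the web GUI, which mirrors the paper's point that functions can be observed from outside the device.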
Figure 3 shows the case where the DNA extraction unit and the amplification and detection units are connected to the equipment manager through monitoring servers. The extractor monitoring server monitors the extraction unit, and the PCR monitoring server monitors the amplification and detection units of the equipment. Each monitoring server operates independently and is connected to its own GUI through HTTP communication. In this paper, Python's Jupyter notebook server is utilized as the monitoring server to check whether the proposed architecture is feasible (Figure 3: monitoring software block diagram).

Implementation Results

Because the system was implemented in the emulator, it was possible to monitor each function at the same time without any problems, even while the proposed system was in operation.
Conclusions
In this paper, we proposed a POC molecular diagnostic device with a web-based UI that enables easy maintenance, equipment operation, and monitoring [3]. The use of an open platform, an independent execution for each function, and the introduction of a REST API server facilitated software management. Additionally, the web-based UI reduces
A Symmetric Banzhaf Cooperation Value for Games with a Proximity Relation among the Agents
A cooperative game represents a situation in which a set of agents form coalitions in order to achieve a common good. To allocate the benefits of this cooperation there exist several values, such as the Shapley value or the Banzhaf value. Sometimes it is considered that not all communications between players are feasible, and a graph is introduced to represent them. Myerson (1977) introduced a Shapley-type value for these situations. Another model for cooperative games is the Owen model, Owen (1977), in which players that have similar interests form a priori unions that bargain as a block in order to get a fair payoff. The model of cooperation introduced in this paper combines these two models following Casajus (2007). The situation consists of a communication graph on which a two-step value is defined. In the first step a negotiation among the connected components is made, and in the second one the players inside each connected component bargain. This model can be extended to fuzzy contexts such as proximity relations, which consider leveled closeness between agents, as we proposed in 2016. There are two extensions of the Banzhaf value to the Owen model, because the natural way loses the group symmetry property. In this paper we construct an appropriate value to extend the symmetric option to situations with a proximity relation and provide it with an axiomatization. We then apply this value to a political situation.
Introduction
Cooperative game theory describes the way to allocate the worth that results when a set of agents collaborate together in a coalition. A cooperative game with transferable utility is given as a characteristic function defining a worth for each coalition of agents. A value for a game is a function determining a payoff vector for each cooperative game. The best-known value was introduced by Shapley [1]. From the political context another value was introduced by Banzhaf [2] and Dubey and Shapley [3], with similar properties to the Shapley value. The Shapley value can be used as an allocation of the worth of the grand coalition, but the Banzhaf value cannot. Both of them can be used as indices in the sense that they measure the power of the agents, and thus they allow all kinds of goods to be distributed taking into account the capacity of each player.
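For concreteness, both values can be computed by brute force for a small game. The sketch below is an illustration (not code from the cited papers) using the three-player majority game; it shows that the Shapley value is efficient, i.e. the payoffs sum to the worth of the grand coalition, while the Banzhaf value need not be.

```python
from itertools import combinations
from math import factorial

def shapley(n, v):
    """Shapley value: weighted average of marginal contributions,
    phi_i = sum_S |S|!(n-|S|-1)!/n! * (v(S ∪ {i}) - v(S))."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return phi

def banzhaf(n, v):
    """Banzhaf value: plain average of marginal contributions over all
    2^(n-1) coalitions not containing the player."""
    beta = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                beta[i] += (v(frozenset(S) | {i}) - v(frozenset(S))) / 2 ** (n - 1)
    return beta

def majority(S):
    # Three-player majority game: worth 1 for any coalition of size >= 2.
    return 1 if len(S) >= 2 else 0
```

For this game `shapley(3, majority)` gives (1/3, 1/3, 1/3), which sums to v(N) = 1, while `banzhaf(3, majority)` gives (0.5, 0.5, 0.5), which sums to 1.5: the Banzhaf value measures power but is not an allocation of the grand coalition's worth.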
In the classic model there are no restrictions on cooperation. In real life, political, social or economic circumstances may impose certain constraints on coalition formation. This idea has led several authors to develop models of cooperative games with partial cooperation. One of the first approaches to partial cooperation is due to Aumann and Dreze [4]. A coalition structure is a partition of the set of players such that cooperation is possible only if the players belong to the same element of the partition. They introduced the concept of a value for games with a coalition structure. In this case, the final coalitions are the elements of the partition, but inside each of them all coalitions are feasible. Myerson [5], in his seminal work Graphs and Cooperation in Games, presented a new class of games with a partial cooperation structure. A communication structure is a graph on the set of players, where the links determine the feasible relations in the following sense: a coalition is feasible if and only if the subgraph generated by the vertices in that coalition is connected. This model is also an extension of the model of coalition structures; here the final coalition structure is the set of connected components. The Myerson value [5] determines a payoff vector for each game and each communication structure in the Shapley sense; moreover, if the graph is complete this solution coincides with the Shapley value.
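A minimal sketch of the Myerson value: restrict the game so that a coalition is worth the sum of the worths of its connected parts in the graph, then take the Shapley value of the restricted game. The example graph and game below are illustrative choices, not from the paper.

```python
from itertools import combinations
from math import factorial

def components(S, edges):
    """Connected components of the subgraph induced by coalition S."""
    S, comps, seen = set(S), [], set()
    for s in S:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for a, b in edges:
                for x, y in ((a, b), (b, a)):
                    if x == u and y in S and y not in comp:
                        comp.add(y)
                        stack.append(y)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def myerson(n, v, edges):
    """Myerson value = Shapley value of the graph-restricted game, in
    which a coalition is worth the sum of its connected parts."""
    def restricted(S):
        return sum(v(c) for c in components(S, edges))
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (restricted(set(S) | {i}) - restricted(set(S)))
    return phi

def majority(S):
    # Three-player majority game: worth 1 for any coalition of size >= 2.
    return 1 if len(S) >= 2 else 0
```

For the majority game on the line graph 0–1–2, where players 0 and 2 cannot communicate directly, the middle player obtains 2/3 and the end players 1/6 each, reflecting the worth of the central communication position; on the complete graph the value would reduce to the Shapley value (1/3 each).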
Owen [6] introduced a different model of partial cooperation. In this case the coalition structure is interpreted as a priori unions formed by the closeness among the players. Nevertheless, these unions are not the final cooperation; they are a priori relationships determining the bargaining to reach the grand coalition. The Owen model defines a payoff vector in two steps, taking a game over the unions and later taking another game inside each union. Owen [6] also defined two values for games with a priori unions: the Owen value (considering the Shapley value in both steps) and the Banzhaf-Owen value (using the Banzhaf value in both steps). However, Alonso-Meijide and Fiestras-Janeiro [7] showed that the Banzhaf-Owen value loses one important property, group symmetry, namely that two unions of the same size which are symmetric in the game obtain the same payoff. They considered a new value for games with a priori unions using the Banzhaf value among the unions and the Shapley value inside each union. Following the Myerson model, Casajus [8] raised a graph as a map of the a priori relations among the players in the Owen sense. This model, called cooperation structure, considers that the a priori unions are the connected components of the graph and the subgraph in each component explains the internal bilateral relationships among the players. The Myerson-Owen value is a two-step value like the Owen value that applies the Shapley value among the components and the Myerson value inside each component. It is defined and axiomatized in Fernández et al. [9]. Later, Fernández et al. [10] introduced a Banzhaf value from the Owen version to the Casajus model. In this paper we define another Banzhaf solution for games in the Casajus model, but from the Alonso-Meijide and Fiestras-Janeiro point of view, that is, taking into account the symmetry among groups.
Aubin [11] considered games with fuzzy coalitions. In a fuzzy coalition the membership of the players is leveled. A critical issue arises when dealing with usual games and fuzzy coalitions: how to assign a worth to a fuzzy coalition from a usual game. Tsurumi et al. [12] used the Choquet integral [13] to extend a classic game to fuzzy coalitions, introducing a Shapley value defined by a Choquet formula. Jiménez-Losada et al. [14] began to study games with partial cooperation from fuzzy coalition structures. They introduced the concept of fuzzy communication structure in a particular version and defined the Choquet partition by graphs of a fuzzy graph with the purpose of constructing values in this context; see [14][15][16]. Later they analyzed games with a proximity relation among the players, obtaining a Shapley value [9] and a Banzhaf value [10] (following the Owen version). Here we use the symmetric version introduced in this same paper to get another Banzhaf value for games with a proximity relation among the agents. Section 2 sets out preliminary information about cooperative games, a priori unions and fuzzy sets. In Section 3 we recall the symmetric coalitional Banzhaf value and extend it to the Casajus model. In Section 4 we extend the cooperation value to proximity situations, and we axiomatize it in Section 5. Section 6 compares the application of the new value in a political example with the other values for games with a proximity relation among the players. Section 7 is a short summary of conclusions. Finally, Appendix A includes the proofs of the theorems.
Cooperative TU-Games
A cooperative game with transferable utility, game from now on, is a pair (N, v) where N is a finite set and v : 2^N → ℝ is a mapping with v(∅) = 0. The elements of N = {1, 2, . . . , n} are called players. The mapping v is named the characteristic function of the game. A subset S ⊆ N is named a coalition. The family of games will be denoted by G. If S ⊆ N, we denote by (S, v_S) the restricted game, where v_S is the restriction of v to 2^S. A payoff vector for a game (N, v) is a vector x ∈ ℝ^N, where x_i is interpreted as the payment that player i ∈ N receives for its cooperation. A value or solution for games is a mapping over G that assigns to each game (N, v) a payoff vector ϕ(N, v) ∈ ℝ^N. Two of the most important values are the Shapley value φ and the Banzhaf value β, defined for each i ∈ N by

φ_i(N, v) = ∑_{S ⊆ N\{i}} [|S|!(n − |S| − 1)!/n!] [v(S ∪ {i}) − v(S)]

and

β_i(N, v) = (1/2^{n−1}) ∑_{S ⊆ N\{i}} [v(S ∪ {i}) − v(S)].

The Shapley value satisfies efficiency, i.e., ∑_{i∈N} φ_i(N, v) = v(N). It is known that the Shapley value is the only allocation rule over G satisfying efficiency, linearity, null player and equal treatment; moreover, these axioms are not redundant. The Banzhaf value satisfies pairwise merging, linearity, null player and equal treatment. Pairwise merging uses the amalgamated game of (N, v) for i, j ∈ N.
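As a quick numerical check on the two formulas above, the following brute-force sketch (function and variable names are ours, not from the paper) enumerates all coalitions S ⊆ N\{i}; the three-player majority game illustrates that the Shapley value is efficient while the Banzhaf value is not:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Shapley value: order-weighted average of marginal contributions."""
    players = list(players)
    n = len(players)
    phi = {}
    for i in players:
        rest = [j for j in players if j != i]
        phi[i] = sum(
            factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            * (v(frozenset(S) | {i}) - v(frozenset(S)))
            for r in range(n) for S in combinations(rest, r))
    return phi

def banzhaf(players, v):
    """Banzhaf value: plain average of marginal contributions."""
    players = list(players)
    n = len(players)
    return {i: sum(v(frozenset(S) | {i}) - v(frozenset(S))
                   for r in range(n)
                   for S in combinations([j for j in players if j != i], r))
            / 2 ** (n - 1)
            for i in players}

# Three-player majority game: a coalition wins iff it has >= 2 players
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley({1, 2, 3}, v))  # 1/3 each: the payoffs sum to v(N) = 1
print(banzhaf({1, 2, 3}, v))  # 1/2 each: the payoffs do not sum to v(N)
```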
The Banzhaf value β satisfies the pairwise merging axiom, i.e., for each (N, v) ∈ G and each pair of players i, j ∈ N, the payoff of the merged player in the amalgamated game equals the sum of the payoffs of i and j in the original game: β_ij(N^ij, v^ij) = β_i(N, v) + β_j(N, v).
Communication Structures
Myerson [5] observed that sometimes not all communications between players are feasible. He introduced a graph as a representation of this situation. Let N be a finite set of players and L^N = {{i, j} ⊆ N : i ≠ j} the set of unordered pairs of different elements in N. We will use ij = {i, j} by abuse of notation. A communication structure L for N is a graph with set of vertices N and set of links L ⊆ L^N. A game with communication structure is a triple (N, v, L) where (N, v) ∈ G and L is a communication structure for N. The family of games with communication structure will be denoted by GC. A game (N, v) ∈ G can be identified with the game with communication structure (N, v, L^N). Let (N, v, L) be a game with communication structure. A coalition S ⊆ N is called connected in L if for each pair of different players i, j ∈ S there exists a sequence i_0, . . . , i_k ∈ S with i_{p−1} i_p ∈ L for all p = 1, . . . , k, i_0 = i and i_k = j. Individual coalitions are considered connected. The communication structure L for N is called connected if N is connected in L (this concept coincides with the notion of connected graph). The maximal connected coalitions (by inclusion) are named the connected components of L (they always form a partition of N) and will be denoted by N/L. If S ⊆ N then the restricted communication structure for S is L_S = {ij ∈ L : i, j ∈ S}. The Shapley value was extended to games with communication structure in [5]. The Myerson value is the function defined as μ(N, v, L) = φ(N, v/L), where v/L is the graph-restricted game given by v/L(S) = ∑_{C ∈ S/L_S} v(C) for each S ⊆ N. Myerson proved that his value is the only one satisfying component efficiency and fairness.
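The graph-restricted construction can be sketched by brute force; the helper names below are ours, and the Myerson value is computed as the Shapley value of v/L. In the line graph 1-2-3 the cut vertex 2 captures extra surplus:

```python
from itertools import combinations
from math import factorial

def components(S, L):
    """Connected components of the subgraph induced by coalition S."""
    S, comps = set(S), []
    while S:
        comp, stack = set(), [next(iter(S))]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for j in S if j not in comp and frozenset((i, j)) in L)
        comps.append(frozenset(comp))
        S -= comp
    return comps

def myerson(players, v, L):
    """Myerson value = Shapley value of the graph-restricted game v/L."""
    vL = lambda S: sum(v(C) for C in components(S, L))
    players = list(players)
    n = len(players)
    mu = {}
    for i in players:
        rest = [j for j in players if j != i]
        mu[i] = sum(
            factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            * (vL(frozenset(S) | {i}) - vL(frozenset(S)))
            for r in range(n) for S in combinations(rest, r))
    return mu

# Line graph 1-2-3 with v(S) = |S|**2; player 2 is the cut vertex
L = {frozenset((1, 2)), frozenset((2, 3))}
v = lambda S: len(S) ** 2
print(myerson({1, 2, 3}, v, L))  # component efficiency: payoffs sum to 9
```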
A Priori Unions
Owen's approach supposes that the players are organized in a priori unions that have common interests in the game. However, these unions are not considered as a final structure but as a starting point for further negotiations: each union negotiates as a whole with the other unions to achieve a fair payoff. A game with a priori unions is a triple (N, v, P) where (N, v) is a game and P = {N_1, . . . , N_m} is a partition of N. We will denote the set of games with a priori unions by GU. A value for games with a priori unions is a mapping f that assigns a payoff vector f(N, v, P) ∈ ℝ^N to each (N, v, P) ∈ GU. Owen [6] proposed a method to obtain values for games with a priori unions, defined in two steps. First we need some definitions. Let (N, v, P) ∈ GU with P = {N_1, . . . , N_m}. The quotient game is the game (M, v^P) with set of players M = {1, . . . , m} defined by v^P(Q) = v(∪_{q∈Q} N_q) for each Q ⊆ M. Let (N, v, P) ∈ GU, P = {N_1, . . . , N_m} and k ∈ M. For each S ⊆ N_k the partition P_S of (N \ N_k) ∪ S consists of replacing N_k with S, i.e., P_S = {N_1, . . . , N_{k−1}, S, N_{k+1}, . . . , N_m}. Let f^1 be a classic value for games. The first step consists of a negotiation among unions that is focused on S: the result of the quotient game generates a new game in N_k. We define the game v_k(S) = f^1_k(M, v^{P_S}) for each S ⊆ N_k. In the second step the game in every group is solved using another classic value f^2. So, for each player i ∈ N, if k(i) is such that i ∈ N_{k(i)}, then the new value f is defined by f_i(N, v, P) = f^2_i(N_{k(i)}, v_{k(i)}). The first values for games with a priori unions were introduced in [6]: one of them (the Owen value) applies the Shapley value in both steps of the negotiation and the other applies the Banzhaf value in both. Alonso-Meijide and Fiestras-Janeiro [7] observed that the Banzhaf value of Owen for a priori unions loses an important property for a value: it does not satisfy group symmetry.
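The two-step construction can be sketched computationally. Assuming the standard reading of the quotient game, the following toy implementation (names are ours) takes f^1 = f^2 = Shapley, i.e., the Owen value; other choices of f^1 and f^2 follow the same pattern:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Shapley value by brute-force enumeration of coalitions."""
    players = list(players)
    n = len(players)
    phi = {}
    for i in players:
        rest = [j for j in players if j != i]
        phi[i] = sum(
            factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            * (v(frozenset(S) | {i}) - v(frozenset(S)))
            for r in range(n) for S in combinations(rest, r))
    return phi

def owen(v, P):
    """Two-step value with f1 = f2 = Shapley (the Owen value)."""
    M = range(len(P))
    result = {}
    for k, Nk in enumerate(P):
        def vk(S, k=k):
            # Quotient game in which union k is replaced by the coalition S
            def quotient(Q):
                members = set()
                for q in Q:
                    members |= (set(S) if q == k else set(P[q]))
                return v(frozenset(members))
            return shapley(M, quotient)[k]
        result.update(shapley(Nk, vk))
    return result

# Unanimity game of {1, 2}, with a priori unions {1, 2} and {3}
v = lambda S: 1.0 if {1, 2} <= set(S) else 0.0
print(owen(v, [frozenset({1, 2}), frozenset({3})]))  # 1 and 2 split the unit
```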
A value f for games with a priori unions satisfies group symmetry if, for every pair of groups N_p, N_q ∈ P that are symmetric in the quotient game, ∑_{i∈N_p} f_i(N, v, P) = ∑_{j∈N_q} f_j(N, v, P). Alonso-Meijide and Fiestras-Janeiro [7] introduced a new Banzhaf value for these situations, the symmetric version. We will extend here the symmetric coalitional Banzhaf value (which applies the Banzhaf value among the unions and the Shapley value inside each union).
In the Owen model players are organized in a priori unions but there is no information about the internal structure of these unions. Casajus [8] proposed a modification of the Owen model in the Myerson sense. We call this model games with cooperation structure. A cooperation structure is a graph where the connected components represent the a priori unions, but the links give us additional information about how they are formed. A game with cooperation structure is a triple (N, v, L) with (N, v) ∈ G and L ⊆ L N . The family of games with cooperation structure is denoted by GCO. By definition GC = GCO; nevertheless the interpretation is completely different. Moreover we have GU ⊂ GCO, because an a priori union structure can be identified with a cooperation structure with complete components. A value for games with cooperation structure is a mapping f that assigns a payoff vector f (N, v, L) ∈ R N to each (N, v, L) ∈ GCO. Casajus [8] proposed to follow the model of Owen to get a value for games with cooperation structure. Given (N, v, L) ∈ GCO, we consider the partition of N by its connected components N/L. Therefore N/L is a set of a priori unions for the players in N but the links in L tell us how these unions are formed. We use the same quotient game (5) with the partition N/L = {N 1 , . . . , N m } and also the same first game v k (7) with a particular chosen value f 1 . In the second step we consider a communication value f 2 to allocate the profit inside each component.
Casajus defined a value using the Shapley value in the first step and the Myerson value in the second step and gave an axiomatization. Another one was given in [9]. Fernández et al. [10] defined an extension of the non-symmetric version of the Banzhaf value to the Casajus model. In this paper we consider a cooperation value consisting of applying the Banzhaf value in the first step and the Myerson value in the second step in order to get a symmetric version.
Fuzzy Sets and Proximity Relations
In classical set theory the membership of elements in a set is assessed in binary terms: an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set. In this subsection we recall some concepts related to fuzzy sets and the Choquet integral that will be useful subsequently. We will use ∨, ∧ to denote the maximum and the minimum respectively. A fuzzy set of a finite set K is a mapping τ : K → [0, 1]. The image of τ is the ordered set of the non-null images of the function, im(τ) = {λ ∈ (0, 1] : ∃i ∈ K, τ(i) = λ}. The family of fuzzy sets over a finite set K will be denoted by [0, 1]^K. Sometimes, for convenience, the image of a fuzzy set is written as im(τ) = {λ_1 < · · · < λ_p}. Comonotony is an equivalence relation in [0, 1]^K. A fundamental tool for the analysis of fuzzy sets are the so-called cuts. For each t ∈ (0, 1] the t-cut of the fuzzy set τ is [τ]_t = {i ∈ K : τ(i) ≥ t}. The Choquet integral is an aggregation operator defined in [13]. Given f : 2^K → ℝ and τ a fuzzy set over K, the (signed) Choquet integral of τ with respect to f is defined as

∫ τ df = ∑_{k=1}^{p} (λ_k − λ_{k−1}) f([τ]_{λ_k}),

where im(τ) = {λ_1 < · · · < λ_p} and λ_0 = 0. Several standard properties of the Choquet integral, referred to later as (C1)-(C4), are known. In this paper we focus on a particular case of fuzzy relations. A bilateral fuzzy relation, see [17], over K is a function ϕ : K × K → [0, 1]. A proximity relation is a reflexive and symmetric bilateral fuzzy relation, i.e., ϕ(i, i) = 1 for all i ∈ K and ϕ(i, j) = ϕ(j, i) for all i, j ∈ K.
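A direct implementation of the level decomposition may clarify the Choquet formula; the dictionary encoding of a fuzzy set is our choice, not the paper's:

```python
def t_cut(tau, t):
    """t-cut of a fuzzy set: the elements with membership at least t."""
    return frozenset(i for i, m in tau.items() if m >= t)

def choquet(tau, f):
    """(Signed) Choquet integral of the fuzzy set tau w.r.t. the set function f."""
    levels = sorted({m for m in tau.values() if m > 0})
    prev, total = 0.0, 0.0
    for lam in levels:
        total += (lam - prev) * f(t_cut(tau, lam))
        prev = lam
    return total

# Fuzzy set over {a, b, c}; cardinality as a simple set function
tau = {"a": 1.0, "b": 0.5, "c": 0.5}
f = lambda S: len(S)
# cuts: level 0.5 -> {a, b, c}, level 1.0 -> {a}
print(choquet(tau, f))  # 0.5*3 + 0.5*1 = 2.0
```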
The Banzhaf-Myerson Value
Fernández et al. [10] defined a Banzhaf value following [6] for the Casajus model [8]. However, this value fails an important condition for a value: symmetry for groups, as we can see in [7] (a priori unions are particular cases of the Casajus model). Now we propose a new Banzhaf value with a group symmetry property (in this context called substitutable components). The cooperation value that we present applies the Banzhaf value among the unions and the Myerson value within the unions.
In terms of the Casajus model, this means f^1 = β and f^2 = μ. The Banzhaf-Myerson value is a generalization of the symmetric coalitional Banzhaf value ϕ defined in [7], but taking into account the inner structure of the a priori unions, in this case N/L. The Banzhaf-Myerson solution satisfies the following coincidences.
With the purpose of obtaining an axiomatization we introduce some axioms. The first four axioms also appear in the axiomatization of the Myerson-Owen value in [9]. We will also prove that this value is a coalitional value of Banzhaf.
Definition 2.
A coalitional value of Banzhaf f over GCO is a cooperation value that satisfies f(N, v, ∅) = β(N, v), where ∅ denotes the empty graph, i.e., the graph without links.
In spite of the strategic position of each agent, a component cannot obtain profits if all its players are null. We say that a coalition S ⊆ N is a null coalition in a game (N, v) ∈ G if each player i ∈ S is a null player in the game, i.e., v(T ∪ {i}) = v(T) for all T ⊆ N \ {i}. We consider that substitutable components must get the same total outcome. The following axiom is an extension of the group symmetry axiom for games with a priori unions. This is the main difference between this Banzhaf value and the one introduced in [10].
The asymmetry of the structure of each component modifies the equal treatment property within the unions used in the axiomatization of the Owen value. In our case Myerson fairness is not enough to fix this asymmetry, because the deletion of a link can change the number of components. So, we use the modified fairness proposed in [8]. This axiom says that the difference of payoffs when we break a link, placing the players disconnected by this deletion out of the game, is the same for both players in the link.
We also add the typical axioms of linearity and efficiency for a particular case.
Connected efficiency.
A cooperation value f satisfies connected efficiency if ∑_{i∈N} f_i(N, v, L) = v(N) for every connected L.
The following axiom is a property for the situation in which we connect two components; first we define this modification of a graph. If we compare the axiomatizations of the Myerson-Owen value in [9] and the Banzhaf-Myerson value, the latter differs from the former in that connected efficiency and component merging replace efficiency. This seems a logical consequence of the axiomatizations of the Shapley value and the Banzhaf value presented before: they have in common linearity, symmetry and null player; nevertheless, the Shapley value is efficient, whereas the Banzhaf value satisfies pairwise merging.
Value for Games with a Proximity Relation
The goal of this paper is to define and axiomatize a value for games with a proximity relation among the players.
Definition 4.
A game with a proximity relation is a triple (N, v, ρ) where (N, v) ∈ G and ρ is a proximity relation over N. The family of games with a proximity relation is denoted by GP.
A proximity relation can represent the level of coincidence between players, for instance in interests, ideas, etc. We write ρ(i, j) = ρ(ij) from now on.
For example, consider a set N = {1, 2, 3, 4, 5} of five agents. They cooperate to obtain the maximum profit making use of a plot of land. The owners of the land are agents 2 and 3; the rest of them are workers. However, there also exist particular relationships among the agents which can influence the decision: players 1 and 2 are relatives, players 1, 2 and 5 have been friends since their youth, and finally players 1 and 5 are supporters of the same football team. The characteristic function is the profit (in millions of euros) obtained depending on which owners cooperate (which part of the land is used), and v(S) = 0 otherwise.
Supposing all kinds of relations have the same importance, we propose the following proximity relation ρ to represent them: ρ(i, i) = 1 for all i, ρ(1, 5) = 0.6, ρ(1, 2) = 0.4, ρ(1, 4) = ρ(2, 3) = ρ(2, 5) = ρ(4, 5) = 0.2 and ρ(i, j) = 0 otherwise. Figure 1 shows the relations as a fuzzy graph. Now we extend the Owen model in a fuzzy way. A proximity relation can be seen as a cooperation structure by levels of the players. Let (N, v, ρ) ∈ GP. For each t ∈ (0, 1] we have a cooperation structure [ρ]_t. We thus obtain a partition of the proximity relation into cooperation structures, as we can see in Figure 2. Casajus considers the different connected components as unions with internal structure. We recall the concept of group that appears in [9]; this is an extension of the unions in an a priori union structure.
The next definitions were introduced in [9]. Let ρ be a proximity relation over N. A coalition S ⊆ N is a t-group for ρ with t ∈ (0, 1] if S ∈ N/[ρ]_t. The family of groups of ρ is the set N/ρ = ∪_{t∈(0,1]} N/[ρ]_t. Let ρ be a proximity relation over N. Coalitions S_1, . . . , S_r ⊆ N are leveled groups if there is a number t ∈ (0, 1] such that S_1, . . . , S_r are t-groups. For each set of leveled groups S_1, . . . , S_r (r ≥ 1), the greatest level at which they are simultaneously groups is denoted t_{S_1···S_r}. Fernández et al. [9] also introduced two ways to rescale a proximity relation and the relation between these scalings and the Choquet integral. Let ρ be a proximity relation over N. If a, b ∈ [0, 1] with a < b, then the interval scaling ρ^b_a of ρ is a new proximity relation over N. Let a, b ∈ [0, 1] be numbers with a < b and a = 0 or b = 1; the dual interval scaling of ρ is another new proximity relation over N. To aggregate the information of the proximity relation we use the Choquet integral.
Lemma 1 ([9]). Let ρ be a proximity relation over N. For every pair of numbers a, b ∈ [0, 1] with a < b and for every set function f over L^N, the Choquet integral of ρ with respect to f decomposes through the interval scaling ρ^b_a and its dual. We define the set function δ_(N,v) over the graphs on N by δ_(N,v)(L) = δ(N, v, L), the Banzhaf-Myerson value of the game with cooperation structure L. We introduce the prox-Banzhaf-Myerson value for games with a proximity relation: it is the Choquet integral of the proximity relation with respect to the Banzhaf-Myerson set function.

Definition 5. Let (N, v, ρ) be a game with a proximity relation. The prox-Banzhaf-Myerson value Z is defined, for each i ∈ N, by

Z_i(N, v, ρ) = ∑_{k=1}^{p} (λ_k − λ_{k−1}) δ_i(N, v, [ρ]_{λ_k}),

where im(ρ) = {λ_1 < · · · < λ_p} and λ_0 = 0.
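The level-cut aggregation behind this definition can be sketched generically: compute the cuts of the proximity relation, evaluate a graph value on each cut, and combine the results with Choquet weights. The stand-in `degree` value below replaces the (heavier) Banzhaf-Myerson computation purely for illustration; all names are ours:

```python
def cut_graph(rho, t):
    """Level-t cut of a proximity relation: the links with closeness >= t."""
    return {ij for ij, m in rho.items() if m >= t}

def prox_value(rho, graph_value):
    """Choquet-style aggregation of a graph value over the cuts of rho."""
    levels = sorted({m for m in rho.values() if m > 0})
    prev, total = 0.0, {}
    for lam in levels:
        payoff = graph_value(cut_graph(rho, lam))
        for i, x in payoff.items():
            total[i] = total.get(i, 0.0) + (lam - prev) * x
        prev = lam
    return total

# Toy proximity relation on {1, 2, 3}; graph_value stands in for the
# Banzhaf-Myerson value delta (here simply: 1 unit per incident link).
rho = {frozenset((1, 2)): 1.0, frozenset((2, 3)): 0.4}
degree = lambda L: {i: sum(i in ij for ij in L) for i in (1, 2, 3)}
print(prox_value(rho, degree))
```

Cut level 0.4 keeps both links and cut level 1.0 keeps only the link 12, so the aggregated payoffs are 0.4 times the first profile plus 0.6 times the second.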
Consider again the game of our example in Figure 1. Depending on the information assumed we obtain the following solutions. If we only consider the game, the Shapley value is φ(N, v) = (20.333, 37, 46, 20.333, 20.333). If we consider only the communication structure L in Figure 1, without the numbers on the links, we apply the Banzhaf-Myerson value of the game (which coincides with the Myerson value because the graph is connected), δ(N, v, L) = (20.4, 50.9, 36.733, 15.566, 20.4). Finally we calculate the prox-Banzhaf-Myerson value, considering the different graphs in Figure 2 to determine the Choquet integral for each player i ∈ N = {1, 2, 3, 4, 5}.
Axiomatization of the Value
We say that ρ is connected if there exists t ∈ (0, 1] such that [ρ]_t is connected. In that case t_ρ = ∨{t ∈ (0, 1] : [ρ]_t is connected} is called the connection level of ρ.
We are going to present some axioms for the value Z that are fuzzy extensions of the axioms already given for δ.
Fuzzy connected efficiency. A proximity value F satisfies fuzzy connected efficiency if
If |im(ρ)| = 1 and ρ is connected then t ρ = 1 and the axiom reduces to connected efficiency.
Then the fuzzy extension of component merging is constructed using this proximity relation.
Group merging. A proximity value F satisfies group merging if for every pair of leveled groups S, T and each pair i ∈ S, j ∈ T the corresponding merging equality holds. Notice that ρ(ij) ≤ t_ST by (12).
If |im(ρ)| = 1, group merging reduces to component merging. If a coalition is null, then its players do not get profits when it is considered as a union or a partition of unions; therefore we can treat these levels as negligible and later rescale.
Null group. Let (N, v, ρ) ∈ GP and let S ∈ N/ρ be a group which is null for the game (N, v); then the players of S obtain a zero payoff at the corresponding levels. In particular, if we consider a crisp proximity relation ρ (a cooperation structure), the axiom says: "if S is a component for ρ which is a null coalition for the game (N, v) then F_i(N, v, ρ) = 0 for all i ∈ S", i.e., it coincides with the null component axiom.
We take two substitutable coalitions. We can suppose that while both coalitions are groups the total payoff for each group is the same. However, we can get a similar condition using the next axiom: the part of the payoffs for each group which is not obtained in the common interval must be the same.
Substitutable leveled groups. Let (N, v, ρ) ∈ GP. If S, T ∈ N/ρ are leveled groups and they are substitutable in (N, v), then the parts of their total payoffs obtained outside the common interval coincide. If we consider a crisp proximity relation ρ, the axiom says: if S, T are substitutable components of ρ for a game (N, v) then ∑_{i∈S} F_i(N, v, ρ) = ∑_{j∈T} F_j(N, v, ρ), i.e., it coincides with the substitutable components axiom. Observe that, by Lemma 1, our value verifies the substitutable leveled groups axiom if and only if we get (17).
The modified fairness axiom [8] can be extended to proximity relations. Now we do not consider the deletion of links but the reduction of their level. The axiom only affects the levels in the interval between the reduced level and the original one. Let ρ be a proximity relation over a set of players N with im(ρ) = {λ_1 < · · · < λ_m} and λ_0 = 0. Consider two different players i, j ∈ N with ρ(ij) = λ_k > 0. The number ρ*(ij) = λ_{k−1} satisfies that for all t ∈ (ρ*(ij), ρ(ij)] the set N^i_ij (or N^j_ij) in the cooperation structure [ρ]_t is the same. We also denote by N^i_ij (or N^j_ij) this common set for ρ. Now modified fuzzy fairness says that modified fairness holds if we reduce by t the closeness of the link ij, for the outcomes in (ρ(ij) − t, ρ(ij)], adding the outcomes obtained outside the interval.
Modified fuzzy fairness
consists of omitting the link ij in ρ^{ρ(ij)}_{ρ(ij)−t}. If we consider a crisp proximity relation and take t = 1, then the last axiom coincides with modified fairness for games with cooperation structure. Finally, we introduce linearity.
Theorem 4.
There is only one proximity value that satisfies null group, substitutable leveled groups, modified fuzzy fairness, linearity, fuzzy connected efficiency and group merging.
Application: The Power of the Political Groups in the European Parliament
We will use the political example proposed in [9,10] in the context of the European Parliament. We compare the new value, the prox-Banzhaf-Myerson value with the others for these situations.
The European Parliament is an ideological representation of Europe built from the political parties of the different countries. So there are two capital axes in the political action: the national component and the ideological component. The example is based on the seventh legislature (2012), where seven political groups lived together in the European Parliament. Ref. [9] represented the game as a voting game with 735 seats and a quota of 368, the EP-game. The set of players (the political parties) is N = {1, 2, 3, 4, 5, 6, 7, 8} and the characteristic function is defined as: v(S) = 1 if the sum of the number of seats of the groups in S is greater than or equal to 368, and v(S) = 0 otherwise. Besides, a proximity relation between the groups is given taking into account both components of the closeness of the groups. The proximity relation ρ is represented by a fuzzy graph in Figure 3. The number ρ(ij) is interpreted as the level of coincidence between groups i and j in economy, immigration policies, etc. So, the proximity relation represents the percentage of policy dimensions where two different parties agree; ρ(ij) = 1 would mean the complete concurrence of the ideologies of i and j. The matrix representation of the EP proximity relation is γ (we only need the numbers above the main diagonal). To compute the value we proceed in three steps: (1) we determine the cuts of the fuzzy relation; (2) we obtain the graph Banzhaf-Myerson value for each cut (Table 1); and finally (3) we determine the value by Definition 5. Table 1 shows the Banzhaf-Myerson values for the different cuts of the fuzzy relation, that is, δ for each graph version g_k. Applying the Choquet-integral formula of the definition of the value to the set of indices in Table 1 yields our index taking into account the fuzzy information.
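A weighted voting game of this shape is easy to encode. The seat counts below are illustrative placeholders (the paper's exact 2012 figures are not reproduced here); only the total of 735 seats and the quota of 368 come from the text:

```python
from itertools import combinations

# Hypothetical seat counts (illustrative placeholders, not the paper's
# exact figures), chosen to total 735, with the quota 368 from the text.
seats = {1: 265, 2: 184, 3: 84, 4: 55, 5: 54, 6: 35, 7: 32, 8: 26}
QUOTA = 368

def v(S):
    """EP-game: a coalition wins iff its groups hold at least QUOTA seats."""
    return 1.0 if sum(seats[i] for i in S) >= QUOTA else 0.0

def banzhaf(players, v):
    """Banzhaf value: average marginal contribution over all coalitions."""
    players = list(players)
    n = len(players)
    beta = {}
    for i in players:
        rest = [j for j in players if j != i]
        swings = sum(v(frozenset(S) | {i}) - v(frozenset(S))
                     for r in range(n) for S in combinations(rest, r))
        beta[i] = swings / 2 ** (n - 1)
    return beta

print(banzhaf(seats, v))  # larger groups are never less powerful
```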
We compare in Table 2 three values with different information: the Shapley value (the classic one, with no more information than the characteristic function), the Banzhaf-Myerson value (introduced in Section 3, using the crisp graph of relationships) and the goal of this paper, the prox-Banzhaf-Myerson value (taking into account all the information with the levels in the links). As the graph of this example is connected, the Banzhaf-Myerson value coincides with the Myerson value. We denote by g^γ the crisp version of the EP proximity relation. We can see how the aggregation of information changes the power of the groups. For instance, group 2 has greater power than group 3 with the crisp indices, but they exchange their positions with the fuzzy index. Furthermore, group 1 increases its power index with the fuzzy value. The reason in this example can be seen in the level of the winning coalitions: graph g_5 shows that at a certain level of proximity (0.7) group 1 and group 3 can obtain winning coalitions but group 2 cannot. The crisp values, whether considering the unions or not, cannot see this difference. Now, in Figures 4 and 5, we compare for this example the three known indices for games with a proximity relation among the agents (the prox-Owen value [9], the prox-Banzhaf value [10] and the prox-Banzhaf-Myerson value). Observe that, besides providing a different theoretical approach to the problem, the new solution yields a moderate option between the other two. They give the same results in the qualitative sense, but they differ quantitatively. The quantitative indices are used, for instance, to allocate the seats in specific committees of a chamber: seats are distributed proportionally to the index, so a difference in quantitative power can mean a difference in the number of seats of each group in these committees.
Conclusions
In this paper a new solution for cooperative games with a proximity relation among the players was introduced. This solution is a new version of the Banzhaf value for these situations, satisfying a fuzzy property based on group symmetry. We showed in Section 6 that the prox-Banzhaf-Myerson value obtains a power distribution between the prox-Owen and the prox-Banzhaf values.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Appendix A
In this section we include the proofs of the theorems. We first verify that each of the axioms is satisfied by the Banzhaf-Myerson value.
Connected efficiency. Using that L is connected and that the Myerson value is efficient by components, we get the result. Null component. Suppose that N_1 ∈ N/L is a null coalition for (N, v) and let T ⊆ N_1. Then 1 is a null player in (M, v^{(N/L)_T}). As the Banzhaf value satisfies the null player axiom we get β_1(M, v^{(N/L)_T}) = 0. So, using (7), v_1(T) = 0 for all T ⊆ N_1. However, if v_1 = 0 then v_1/L_{N_1} = 0 in N_1, and for all i ∈ N_1 we have δ_i(N, v, L) = 0. Substitutable components. Let S, T ⊆ N be two substitutable coalitions in the game (N, v) such that S, T ∈ N/L. Consider N_1 = S, N_2 = T. For each Q ⊆ M we denote N_Q = ∪_{q∈Q} N_q again. We check that 1 and 2 are substitutable players for the quotient game (M, v^{N/L}), because S, T are substitutable in (N, v). It is known that the Banzhaf value satisfies the equal treatment axiom, and the Myerson value is efficient by components, so both components obtain the same total payoff. Modified fairness. Let ij ∈ L and suppose i, j ∈ N_1. Although the quotient game depends on the graph, we get v^{N^i_ij / L_{N^i_ij} \ {ij}}(S) = v^{(N/L)_S} for each S ⊆ (N_1)_i. Now we use two properties of the Myerson value: decomposability and fairness. Proof of Theorem 2. It remains to prove the uniqueness. We prove it by induction on |N/L| = m, |N| and |L|. If m = 1, then L is connected. Suppose f_1, f_2 are different values over GCO satisfying connected efficiency and modified fairness (we only need these two axioms in this case). Let L be the graph with the minimum number of links such that f_1(N, v, L) ≠ f_2(N, v, L).
Notice that L must have at least one link; otherwise, as L is connected, it would be a singleton and by connected efficiency we would have uniqueness. Taking into account the minimality of L, if ij is a link in L, then f_1(N, v, L \ {ij}) = f_2(N, v, L \ {ij}). Then, by modified fairness, the result follows. We suppose that f_1 = f_2 when |N/L| = p − 1. Now suppose that |N/L| = p > 1. We take the smallest N and L such that f_1 ≠ f_2. Hence there is a characteristic function v with f_1(N, v, L) ≠ f_2(N, v, L). Linearity implies that there exists a unanimity game u_T, with T ⊆ N a non-empty set, such that f_1(N, u_T, L) ≠ f_2(N, u_T, L). We set M_T = {S ∈ N/L : S ∩ T ≠ ∅}, a non-empty set because N/L is a partition of N. We follow the next steps to achieve a contradiction.
• First we will prove that the payoff of a player in a null component is zero for both values, and that the difference between the payoffs of the two values is the same for all the players in a non-null component.
- If S ∉ M_T then all the players in S are null players for the unanimity game (N, u_T). The null component property says that, for all i ∈ S, f_1,i(N, u_T, L) = f_2,i(N, u_T, L) = 0. - If S ∈ M_T with |S| > 1 then for each i ∈ S there is j ∈ S \ {i} with ij ∈ L. Taking into account the minimal choice of N and L and modified fairness, the difference between the payoffs of the two values is the same for all the players in S. Proof of Theorem 3. We see that Z satisfies all the axioms. Fuzzy connected efficiency. By (16), [ρ]_{t_ρ} = [ρ^{t_ρ}_0]_1 as crisp graphs, and therefore [ρ^{t_ρ}_0]_1 is connected. Then [ρ^{t_ρ}_0]_t is also connected for all t ∈ (0, 1], and we use properties (C3) and (C4) of the Choquet integral. In the last equality we have used Theorem 1 to deduce ∑_{i∈N} δ_i(N, v, [ρ^{t_ρ}_0]_t) = v(N) for each t. Then the claim follows by Lemma 1. Modified fuzzy fairness. Let i, j ∈ N. Theorem 1 showed that the Banzhaf-Myerson value verifies modified fairness for every L ⊆ L^N with ij ∈ L. Suppose ρ is a proximity relation with ρ(ij) > 0 and t ∈ (0, ρ(ij) − ρ*(ij)]. By (C3) we decompose the integral. Besides, the cuts of ρ^{ρ(ij)}_{ρ(ij)−t} are cuts [ρ]_r of ρ with r ≤ ρ(ij); then ij ∈ [ρ]_r, and thus the modified fairness of the Banzhaf-Myerson value shown in Theorem 1 implies the result. Proof of Theorem 4. The existence was proven in the previous theorem. It remains to prove the uniqueness. Suppose F_1 and F_2 are two proximity values satisfying the axioms of the statement. We will prove that they are equal by induction on |im(ρ)|. If |im(ρ)| = 1 then ρ is a cooperation structure and, since the axioms coincide with their crisp versions, we have F_1(N, v, ρ) = F_2(N, v, ρ). Suppose that F_1 = F_2 if |im(ρ)| < d.
Let ρ be a proximity relation over N with |im(ρ)| = d. It is possible to repeat the reasoning of Theorem 7 in [9] using linearity, null group, modified fuzzy fairness and substitutable leveled groups. Consequently, it suffices to prove the uniqueness for a unanimity game u_T, T ≠ ∅. If we define M_T = {S ∈ N/[ρ]_1 : S ∩ T ≠ ∅}, it holds that for every i ∈ S ∈ N/[ρ]_1 with S ∉ M_T both values are equal, i.e., F_1,i(N, u_T, ρ) = F_2,i(N, u_T, ρ). Moreover, there exists H ∈ ℝ such that F_1,i(N, u_T, ρ) − F_2,i(N, u_T, ρ) = H for the remaining players. Suppose that ρ is connected; N/[ρ]_1 is a partition of N, and we conclude by fuzzy connected efficiency.
"Mathematics",
"Economics"
] |
Asymptotic Stability of Unicycle-Like Robots: The Bessel’s Controller
Abstract: Asymptotic stabilization of unicycle-like robots has proved to be involved due to Brockett's condition. By using a smooth, time-invariant controller constructed out of Bessel's functions, in this paper unicycle-like robots are uniformly exponentially stabilized to the origin. The pure feedback controller obtained provides closed-form trajectories, with the possibility of a simple and feasible (hardware) non-linear observer construction from posture-angle measurements solely. Two examples are presented: asymptotic steering of a unicycle to the origin using gyros, and a perfect non-linear observer reconstructing the states, along with conclusions and future work.
Introduction
Modelling mechanical systems can be carried out in two main ways (Angeles and Kecskemethy, 1995): dynamic models, which include forces and torques, and kinematic models, which exclude them. Both approaches aim to collect a system of Ordinary Differential Equations (ODEs) parameterized in its control inputs (Astolfi et al., 1997; Bloch, 2015).
These ODE systems are mainly non-linear, with a universal modeling given by Sarkar et al. (1994) in the case of kinematic rolling constraints.
It turns out that dynamic models represent the most general approach to account for all possible physical interactions that may occur. However, these models consist of as many ODEs as the mechanical system's degrees of freedom (Kane and Levinson, 1985; Angeles and Kecskemethy, 1995).
At this point, two main challenges must be dealt with: a great number of ODEs, and a control law rendering the system (asymptotically) stable (Kostić et al., 2009; Udwadia and Kalaba, 1994; Skowronski, 2012). For these reasons, many researchers focus their attention on more tractable models that keep the nonlinear richness with fewer ODEs (Muñoz-Lecanda and Yániz Fernandez, 2008).
This explains the great interest in controlling kinematic models (Siegwart et al., 2011), with mobile robots as a subset of kinematic modeling, mainly planar wheeled kinematic models.
Moreover, these models can be classified into two universal classes: holonomic and nonholonomic (see Garcia and Agamennoni (2012) for a universal classification and models).
On the other hand, it is well known that a nonholonomic robot cannot be stabilized asymptotically with a smooth, time-invariant controller due to Brockett's condition (Brockett, 1983).
For this reason, many different techniques have been proposed to control nonholonomic robots avoiding the use of time-invariant controllers (Zambelli et al. (2015) and the references therein).
However, none of the available techniques considers the stabilization problem in closed form. In fact, according to Lizárraga (2004), it is not possible to track some desired trajectories with an equi-continuous control law.
To summarize the literature's drawbacks in controlling nonholonomic robots, whether for path-following or asymptotic stability:
- Planar curves must be parameterized to be followed, f(x,y) = 0 (Morro et al. (2011))
- The path's curvature must satisfy some specifications: f(x,y) 0 (Morro et al. (2011))
- Very oscillatory and slow motion (see for instance Moon Kim and Tsiotras (2002))
- Brockett's condition
- Lizárraga's obstructions
- Unavailability of closed-form solutions for a universal set of models
- Lack of closed-form algorithms to track/follow any desired pre-specified trajectory
In this paper, generalizing the solution presented in Garcia et al. (2008), a smooth, time-invariant controller is presented that steers a unicycle-like robot to the origin in closed loop, uniformly exponentially stably.
The contributions in this paper are as follows:
- A closed-form solution to steer unicycle robots to the origin
- A continuous feedback controller with guaranteed stability
- The proof that a unicycle can be asymptotically stabilized by considering modularity
- A non-linear observer with angle output measurement (gyros)
This paper is organized as follows: Section Rolling constraints presents the modeling of the rolling constraints to be considered and the trajectories' closed-form solution using a smooth, time-invariant controller; Section Unicycle's asymptotic stability presents the asymptotic stability analysis; Section Practical controller: Only gyros presents a practical algorithm; and Section Examples presents Matlab simulations. Finally, Section Conclusions gives some conclusions and future work.
Notations and Definitions
In this short section, some definitions are provided to be used throughout the paper:
Rotation Matrix
With I the identity matrix.
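The rotation matrix definition itself did not survive extraction; for the planar case relevant to wheeled-robot kinematics it is presumably the standard form, satisfying R(θ)R(θ)′ = I:

```latex
R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix},
\qquad
R(\theta)\,R(\theta)' = I .
```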
Matrix Transpose
Given an n × n matrix A, its transpose is denoted by A'.
Derivatives with Respect to Time
Time derivatives are indicated as:
Rolling Constraints: Unicycle-Like Robots
As mentioned previously, mechanical systems that roll without slipping encompass the modeling for many mechanical systems (Bloch, 2015).
Moreover, according to Murray and Shankar (1993) any nonholonomic system can be written in a universal chain form, so unicycle models can be considered representative of general nonholonomic dynamics.
In particular, unicycle-like robots represent a universal model for a wide variety of wheeled robots (Garcia and Agamennoni, 2012); see Fig. 1, where (u1, u2) are the control inputs.
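Equation 1 itself did not survive extraction; the standard unicycle kinematic model consistent with the surrounding text is x' = u1·cos θ, y' = u1·sin θ, θ' = u2. A minimal forward-Euler simulation sketch under that assumption (the constant input here is a placeholder, not the paper's Bessel controller):

```python
import math

def simulate_unicycle(x, y, theta, controller, dt=0.01, steps=1000):
    """Integrate the standard unicycle kinematics
       x' = u1*cos(theta), y' = u1*sin(theta), theta' = u2
    with forward Euler, returning the final posture (x, y, theta)."""
    for _ in range(steps):
        u1, u2 = controller(x, y, theta)   # linear and angular velocity inputs
        x += u1 * math.cos(theta) * dt
        y += u1 * math.sin(theta) * dt
        theta += u2 * dt
    return x, y, theta

# Placeholder open-loop input: drive straight ahead at unit speed for 10 s,
# which moves the robot roughly 10 units along the initial heading.
x_f, y_f, th_f = simulate_unicycle(0.0, 0.0, 0.0, lambda x, y, th: (1.0, 0.0))
```

Any feedback law of the form controller(x, y, theta) → (u1, u2), such as the paper's Bessel-based controller, can be dropped into the same loop.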
Bessel's Functions Closed-form Solutions
Following the ideas in Garcia et al. (2008), a lemma can be proved.
Lemma 1
Given the dynamics in Equation 1 driven by the controller: for any arbitrary N, with a < 0, the robot's trajectories are given by: with J_i the Bessel functions of the first kind and C_i arbitrary constants depending on the initial conditions.
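The controller and trajectory formulas were lost in extraction, but the Bessel functions of the first kind J_n that they rely on can be evaluated from the standard power series J_n(x) = Σ_m (−1)^m / (m!·(m+n)!) · (x/2)^(2m+n). A self-contained sketch (the fixed truncation is adequate for moderate |x| only):

```python
from math import factorial

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind J_n(x), evaluated from its
    power series; accurate for moderate |x| with enough terms."""
    return sum(
        (-1) ** m / (factorial(m) * factorial(m + n)) * (x / 2) ** (2 * m + n)
        for m in range(terms)
    )

# Known values: J_0(0) = 1, J_1(0) = 0, and J_0 has its first zero near x = 2.4048.
```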
Unicycle's Asymptotic Stability
Equation 1 defines a dynamics that is endowed with uniform exponential stability by using Lemma 1 (see Rugh (1995) for a definition of stability).
Theorem 1
The controller in Lemma 1 renders the origin uniformly exponentially stable.
Proof
The proof is in the Appendix. Clearly, it is a closed-loop and time-invariant controller, except for θ(0) = 0 (which renders the controller identically zero).
However, the posture angle is a modular quantity: this modularity property avoids the collision with Brockett's condition. On the other hand, in a paper by Aicardi et al. (1995) a closed-loop controller using Lyapunov techniques was presented; however, that controller becomes singular at x(0) = 0, y(0) = 0, whereas the controller in this paper is well defined on the whole ℝ³.
Regular Embedded Sub-Manifold: Trajectories' First Integral
Notwithstanding that the closed-form solution obtained represents the complete time evolution of the system, a geometrical point of view is of interest in what follows.
Considering the closed-form trajectories in Lemma 1 with N = 2: is nonsingular in view of Bourget's hypothesis, proved by Siegel in 1929 (Watson, 1966).
Non-Linear Observer
Equation 3 can be utilized to derive a non-linear observer measuring only the posture angle (see Luenberger (1966) and Isidori (1995) for linear and non-linear observers):
Practical Controller: Only Gyros
Once uniform exponential stability has been proved in Theorem 1, a practical algorithm can be described to control a unicycle robot in closed loop:
- Determine the constant vector C from Equation 3 given the initial conditions
- Use the controller in Lemma 1 with N = 2 and single-gyro measurements
Notice that only the initial condition must be provided to initiate the algorithm, endowing the controller with a very strong property for the well-known SLAM problem (Lavalle, 2006).
SLAM must be performed only at t = 0, as opposed to the available literature, where SLAM or time-tracking has to be performed on-line.
Examples
Using Matlab, Lemma 1 is implemented with two objectives: asymptotically stable steering to the origin and non-linear observer reconstruction.
Non-Linear Observer Verification
Equation 4 provides an interesting verification by numerically reconstructing the robot's states; Fig. 3 is then obtained.
Discussion
Asymptotic stability of a unicycle robot is not possible using a time-invariant feedback controller, due to Brockett's condition.
In this paper, the modularity of the angular posture, along with a novel pure-feedback, time-invariant controller, allows asymptotic stability for unicycle robots.
It should be clear that modularity was not studied previously in the literature, nor were the closed-form solutions addressed in this paper.
It turns out that, besides providing the important concept of a modular controller, SLAM and non-linear observers can be constructed in hardware using the posture angle solely.
Conclusion
The nonholonomic trajectories of a unicycle-like robot are solved in closed form using a smooth, closed-loop and time-invariant controller.
Uniform exponential stability was proved, even for this case (the unicycle robot), where Brockett's condition is satisfied on the basis of modularity. In fact, the postural angle's modularity was the cornerstone to avoid obstructions using smooth, closed-loop, time-invariant control laws.
Beyond the wide variety of available literature, the closed-form knowledge of the trajectories, allowing an explicit non-linear observer derivation to completely reconstruct the states using only a single gyro sensor, is a salient property of this paper.
Possible future work encompasses:
- A numerical algorithm to drive a set of multiple robots with additional constraints (robot formations)
- Numerical analysis of the non-linear observer's robustness
- Real-time applications using on-board gyros and microprocessors
- Satellite control using gyros, applying the universal transformation in Murray and Shankar (1993)
- Optimal control
Acknowledgement
The author would like to acknowledge María de los Angeles, María de los Angeles and Alicia for their constant support.
Funding Information
This work is supported by Universidad Tecnológica Nacional-Facultad Regional Bahía Blanca under the project 5122TC.
Ethics
This article is original and contains unpublished material. The corresponding author confirms that all of the other authors have read and approved the manuscript and that no ethical issues are involved. | 2,083.2 | 2020-01-17T00:00:00.000 | [
"Mathematics"
] |
Revisiting Li3V2(PO4)3 as an anode – an outstanding negative electrode for high power energy storage devices
Monoclinic Li3V2(PO4)3 (LVP) has long been considered primarily as a cathode material for lithium-ion batteries (LIBs). However, due to its amphoteric nature, LVP can also host additional lithium ions. Nonetheless, its use as an anode material for LIBs has hardly been investigated. In this work, we synthesize a nanostructured Li3V2(PO4)3 material with an ionic liquid-derived carbon coating and test it as an anode material for LIBs. The nanostructured LVP shows excellent rate capability and delivers an exceptionally high capacity of about 100 mA h g⁻¹ at 100 C. Fast lithiation/delithiation of the material is enabled by its nanorod-like structure, which allows rapid Li diffusion, and its high electronic conductivity due to an effective carbon coating. Furthermore, when cycled at 50 C, the capacity retention is 91% after 10 000 cycles, and ex situ XRD shows a good preservation of the LVP structure. Due to its excellent high-rate capacity and long-term stability, nanostructured LVP is a very promising candidate for use as a negative electrode in lithium-ion capacitors (LICs). We show that a LIC containing LVP as a negative electrode and activated carbon as a positive one displays an energy density of 33 W h kg⁻¹ at a power density of 16 kW kg⁻¹, stable for 100 000 cycles.
Introduction
Electrochemical energy storage devices like lithium-ion batteries (LIBs) and supercapacitors (SCs) are in growing demand, fuelled by new applications in areas such as electric mobility and renewable energies. 1,2 In these latter applications not only high energy, but also high power performance might be required. SCs are the devices of choice for high power applications, but due to their storage mechanism their energy is considerably lower than that of LIBs. 3,4 Therefore, in the last few years much effort has been made on the realization of advanced high power LIBs as well as on the development of innovative high power devices, e.g. lithium-ion capacitors (LICs), which are in most cases hybrids of LIBs and SCs. 5-11 High power LIBs presently available mainly contain anodes based on nanostructured lithium titanate (Li4Ti5O12, LTO). Nanostructured LTO can be lithiated/delithiated at high current rates, thus guaranteeing high power performance. However, the rather low capacity of this material and the high lithiation/delithiation potential of ca. 1.5 V vs. Li/Li⁺ limit the practical energy density of these high power LIBs. 12 Graphite is used as an anode in conventional LIBs as well as for the negative electrode of the commercially available LICs. Graphite displays a higher specific capacity than LTO and its use allows the realization of devices with high operative voltage (about 4 V). However, the rate capability of graphite, especially for the lithiation process, is rather limited due to long diffusion pathways and the hindrance towards Li⁺ intercalation caused by the staging mechanism, limiting the power performance of the devices containing this anode. 13 Taking these points into account, the development of new anode materials with high ionic and electronic conductivities and with a relatively low average lithiation/delithiation potential (as close as possible to 0.0 V vs.
Li/Li⁺) therefore appears to be of great importance for the realization of innovative high power devices.
Lithium vanadium phosphate (Li3V2(PO4)3; LVP) has attracted much attention as a cathode material owing to its high theoretical capacity of 197 mA h g⁻¹ when charged up to 4.8 V vs. Li/Li⁺, high average potential, good stability and low costs. 14 The structure of monoclinic LVP (space group: P2₁/n) is a three-dimensional network consisting of VO₆ octahedra and PO₄ tetrahedra linked together via common oxygen atoms to form a (V-O-P-O)n bonding arrangement, which houses Li⁺ ions in relatively large interstitial sites. 15-17 As a consequence, Li⁺ ions can be reversibly extracted and re-inserted into the LVP structure with good ionic mobility, without causing too many structural changes of the LVP lattice. However, LVP displays a relatively low intrinsic electronic conductivity, which limits the performance of this material, particularly at high rates. Nonetheless, several studies showed that nanostructured and carbon-coated LVP particles can display high electronic conductivity and that these nanomaterials can be regarded as interesting also for high power applications. 14,18 A very interesting feature of LVP is its amphoteric nature. LVP can also host additional Li⁺ ions and can therefore also be used as an anode material. LVP-based anodes can be used down to 0.0 V vs. Li/Li⁺, and they display two distinct potential regions of lithiation/delithiation: a two-phase region at high potentials (ca. 2.0-1.6 V vs. Li/Li⁺) and a single-phase region below ca. 1.6 V. So far, only a rather limited number of studies have considered the use of LVP as a negative electrode material for energy storage devices. 19-25 Nevertheless, considering the operative potential as well as the high ionic conductivity of LVP, anodes containing this amphoteric material could be of interest for the realization of innovative high power devices.
In a recent study, we reported on the ionic liquid-assisted synthesis of carbon-coated LVP nanoparticles. 18 We showed that the use of ionic liquids as a template for the realization of nanostructured LVP might lead to nanomaterials with high ionic and electronic conductivities, which are very promising in view of the realization of advanced high power devices.
In this work, we investigate the use of nanostructured LVP as an anode material for high power devices. In the first part of the manuscript, the diffusion processes as well as the structural changes occurring during the lithium insertion-extraction process in the LVP structure are investigated. In the second part, the rate performance and the cycling stability of LVP-based anodes are assessed. Finally, the use of LVP-based anodes for LICs is considered. The results of this study show that LVP-based negative electrodes can show excellent rate performance and cycle life and are, therefore, a promising candidate for the realization of advanced high power devices.
Materials synthesis
Nanostructured Li3V2(PO4)3 (LVP) with a carbon coating based on the ionic liquid N-butyl-N-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide was synthesized following our procedure described in ref. 18.
Materials characterization
The crystalline structure of the LVP powder was characterized by X-ray diffraction (XRD) using Cu Kα radiation on a Bruker D8 Advance (Germany) for 4 s at each 0.02° step from 15° to 60°. To analyze structural changes during cycling, ex situ XRD measurements were carried out for electrodes (see below) that were cycled for 10 000 cycles between 3.0 and 0.0 V vs. Li/Li⁺ at a rate of 50 C. These electrodes were recovered from cycled cells stopped at 3.0 V. The cells were disassembled in an argon-filled glove box and carefully washed with DMC in order to remove electrolyte residues. For comparison, the XRD pattern of a pristine electrode was also recorded. For the XRD measurements of the pristine and cycled electrodes, the step time was set to 6 s, the step width to 0.01° and the 2θ range to 15-60°.
In situ XRD measurements of LVP upon galvanostatic lithiation and delithiation were performed using a self-designed in situ cell. The cell body is made of stainless steel covered internally by a Mylar foil for electrical insulation. For the electrode preparation, 65 wt% LVP, 25 wt% conducting agent (Super C65, TIMCAL) and 10 wt% binder (polyvinylidene fluoride, PVDF) were mixed in N-methyl-2-pyrrolidone and stirred overnight. The obtained slurry was cast on a beryllium (Be) window with a thickness of 250 µm (Brush Wellman), which served at the same time as a current collector and a "window" for the X-ray beam. The coated Be window was subsequently dried at 80 °C for 30 min and at 40 °C under vacuum overnight. Metallic lithium foil served as counter and reference electrodes. Two sheets of a Whatman glass fiber filter served as a separator and were drenched with 500 µL of electrolyte (1 M LiPF6 in EC : DMC (1 : 1 by weight)). The assembled cell was allowed to rest for 2 h. Subsequently, the cell was galvanostatically cycled at a rate of C/10 using a VSP potentiostat/galvanostat (Bio-Logic Science Instruments). XRD measurements were performed with the 2θ range set to 15-47°. A complete scan was recorded every 30 minutes, including a rest period at the beginning of every scan. After discharging to a lower cut-off potential of 0.0 V vs. Li/Li⁺, the cell was charged to an upper cut-off potential of 3.0 V.
The Raman spectrum of LVP was collected with a SENTERRA Raman microscope (Bruker Optics) as reported in ref. 18. The morphology and chemical composition of the carbon-coated LVP sample were characterized with a scanning electron microscope (SEM, AURIGA, Carl Zeiss) equipped with an energy-dispersive X-ray analyser (EDX). The amount of carbon in the final product was evaluated by CHN analysis.
Electrochemical measurements
LVP electrodes were prepared by mixing 70 wt% LVP, 20 wt% conducting agent (Super C65, TIMCAL) and 10 wt% binder (polyvinylidene fluoride, PVDF) in N-methyl-2-pyrrolidone, followed by stirring overnight. The obtained slurry was cast on dendritic copper foil (Schlenk, Germany) with a laboratory-scale doctor blade set to a thickness of 150 µm. The electrode sheets were dried at 80 °C for 12 h. Disc electrodes with a diameter of 12 mm were cut out of the sheets and further dried at 120 °C under vacuum for 24 h. The mass loading of the electrodes was ca. 1-1.5 mg cm⁻². All electrochemical measurements except for the in situ XRD measurements were carried out in 3-electrode Swagelok cells. The cells were assembled in an argon-filled glove box with oxygen and water levels below 1 ppm. The LVP electrodes were used as working electrodes and metallic lithium (Rockwood Lithium) was used as counter and reference electrodes. Whatman GF/D glass fiber filters drenched with 200 µL of electrolyte (1 M LiPF6 in EC : DMC (1 : 1 by weight)) were used as a separator.
Constant current cycling tests were performed on a MACCOR Battery tester 4300 in the potential range of 3.0 to 0.0 V vs. Li/Li⁺.
The current rate of 1 C corresponds to a specific current of 266 mA g⁻¹. The tests were carried out in climatic chambers set to 20 °C. For the rate test, five cycles were carried out at each current density. In Fig. 4, the potential profiles of each fifth cycle are shown. The discharge capacity of each fifth cycle was also used to calculate the capacity retention. Prior to the rate test, the cell was activated for five cycles at 0.1 C.
Cyclic voltammetry (CV) was performed on a VMP3 at scan rates of 0.05 to 0.6 mV s⁻¹ in a potential range of 3.0 to 0.0 V vs. Li/Li⁺. The galvanostatic intermittent titration technique (GITT) was used to obtain diffusion coefficients of LVP over the whole potential range from 3.0 to 0.0 V vs. Li/Li⁺ during both lithiation and delithiation. The measurements were carried out with a MACCOR Battery tester 4300 after three charge-discharge cycles for activation, carried out at a rate of 0.1 C with a VMP3. The electrodes were charged/discharged at 20 °C with a current density of 0.1 C for a time τ of 600 s, followed by a relaxation period of 2 h at open-circuit potential (OCP). This charge-discharge step was continued until the desired cut-off potential of 0.0 or 3.0 V vs. Li/Li⁺ was reached. Lithium diffusion coefficients D were then calculated from the GITT measurements with the following equation: 26 Here, m_B is the mass of the active material, V_m is the molar volume of LVP (derived from the unit cell volume of the material), M_B is the molar mass of LVP, A is the electroactive area (as an approximation, the geometric surface area of the electrodes of 1.13 cm² was taken), ΔE_s is the potential difference between the equilibrium potentials before and after excitation and ΔE_τ is the potential difference between the equilibrium potential before excitation and the excited potential. The LVP electrodes for the LIC experiments were prelithiated via a metallic Li electrode as described in the literature. 27,28 The activated carbon (AC) positive electrodes were prepared similarly to the LVP electrodes. The composition was 90 wt% AC (DLC Super 30, Norit, USA, specific BET surface area: 1400 m² g⁻¹), 5 wt% conducting agent (Super C65, TIMCAL) and 5 wt% sodium carboxymethylcellulose as a binder (CMC, Walocel CRT 2000 PA, Dow Wolff Cellulosics, Germany, dissolved in water).
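The GITT equation itself did not survive extraction; the commonly used Weppner-Huggins form consistent with the variables defined above is D = 4/(πτ) · (m_B·V_m/(M_B·A))² · (ΔE_s/ΔE_τ)². A sketch with purely illustrative numbers (the pulse duration and electrode area match the text; the mass, molar volume, molar mass and potential steps below are hypothetical placeholders, not the paper's measured values):

```python
import math

def gitt_diffusion(tau_s, m_B, V_m, M_B, A, dE_s, dE_tau):
    """Weppner-Huggins GITT estimate of the Li diffusion coefficient:
       D = 4/(pi*tau) * (m_B*V_m/(M_B*A))**2 * (dE_s/dE_tau)**2
    valid for pulse times tau << L^2/D. With g, cm^3, cm^2 units
    the result is in cm^2 s^-1."""
    return 4.0 / (math.pi * tau_s) * (m_B * V_m / (M_B * A)) ** 2 \
        * (dE_s / dE_tau) ** 2

# Illustrative inputs: 600 s pulse, ~1 mg active mass, V_m ~ 134 cm^3/mol,
# M_B ~ 407 g/mol for Li3V2(PO4)3, A = 1.13 cm^2, 5 mV / 50 mV potential steps.
D = gitt_diffusion(600, 1e-3, 134, 407, 1.13, 0.005, 0.05)
# D lands in the 1e-12 cm^2/s range, consistent with the values reported below.
```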
The balancing of the LIC full cells was based on the capacity achieved at a rate of 100 C for the LVP anodes and the corresponding current for the AC electrodes (around 25 mA in a potential window of 3.0 to 4.1 V vs. Li/Li⁺), which added up to a LVP : AC mass ratio of 1 : 2.22. Similar to the half-cell tests, Whatman glass fiber filters were used as a separator and 1 M LiPF6 in EC : DMC (1 : 1 by weight) was used as the electrolyte. All LICs were cycled using a VMP3 multichannel potentiostat/galvanostat (Bio-Logic Science Instruments) between 0.0 and 4.0 V. The used rates were 25 C, 50 C and 100 C. For better comparison, the balancing was kept the same for all three rates and the corresponding currents can be found in Table 1. The stated energy and power densities were calculated from the constant current cycling results following a procedure reported before and are based on the active masses of both electrodes. 27 Depending on the rate, 30 000, 60 000 and 100 000 cycles were performed, respectively. The reported energy and power densities of the LIB and SC are taken from the literature. 27 All potentials reported in this work refer to the Li/Li⁺ couple. Fig. 1a shows an X-ray diffraction (XRD) pattern of the carbon-coated LVP nanostructure investigated in this study. The pattern clearly indicates the formation of a highly crystalline phase and the intense diffraction reflections are in excellent accordance with monoclinic Li3V2(PO4)3 (JCPDS card no. 96962). No carbon phase is detected in the LVP composite, indicating that the carbon generated from the ionic liquid-assisted synthesis is amorphous and its presence does not influence the crystal structure of LVP. According to the results of the elemental analysis, the carbon content of the final product is only 2.4 wt%. The scanning electron microscopy (SEM) images (Fig. 1c and d) show the general morphology of LVP.
As is visible, the LVP particles display a nanorod-like structure; they are on average 80-100 nm thick and about 1 µm long. In our previous work we showed that the use of an ionic liquid-assisted synthesis allows the realization of LVP nanoparticles with high electronic conductivity. The LVP nanoparticles shown in Fig. 1 display an electronic conductivity of 5.5 × 10⁻³ S cm⁻¹. This value is more than four orders of magnitude higher than that of uncoated LVP and is also significantly higher than that of LVP nanoparticles obtained with conventional carbon precursors such as sucrose. 18 The high electronic conductivity of the material is partially attributed to a relatively uniform carbon coating, as evidenced by elemental mapping (Fig. S1†) and by the absence of strong signals corresponding to LVP (PO₄³⁻ stretching vibrations) 29 in the Raman spectrum of the material (Fig. 1b). Fig. 2 shows a cyclic voltammogram of a LVP-based anode in the potential range from 3.0 to 0.0 V vs. Li/Li⁺, as obtained using a scan rate of 0.05 mV s⁻¹. As shown in the figure, two distinct potential regions can be distinguished. From ca. 2.0 to 1.6 V, four oxidation and four reduction peaks can be seen in the voltammogram. The existence of these peaks in the CV indicates that Li⁺ insertion/de-insertion takes place in a sequence of phase transitions in this potential region. It was suggested that approximately 0.5 Li⁺ is inserted at every step, corresponding to the following composition changes: Li3V2(PO4)3 → Li3.5V2(PO4)3 → Li4V2(PO4)3 → Li4.5V2(PO4)3 → Li5V2(PO4)3. 19 At potentials below 1.6 V, reversible insertion-extraction of additional lithium takes place in a solid solution. It was suggested that another two Li⁺ can be reversibly inserted in this potential region, resulting in the formation of Li7V2(PO4)3.
19 A wide irreversible current peak in the potential range from 0.9 to 0.6 V is visible in the voltammogram, which may be attributed to the decomposition of the electrolyte to form a solid electrolyte interphase (SEI) film, resulting in irreversible capacity. With increasing scan rate (see the inset of Fig. 2), highly symmetrical and clearly split anodic/cathodic peaks are still exhibited.
Results and discussion
As mentioned in the introduction, while a large number of studies have been dedicated to the investigation of LVP-based cathodes, only a very limited number of studies have been dedicated to LVP-based anodes. In particular, only few studies have investigated in detail the evolution of the lithium diffusion coefficient over the potential range in LVP-based anodes. 20,24 Fig. 3 shows the evolution of the diffusion coefficient over the potential from 3.0 to 0.0 V vs. Li/Li⁺ of the investigated LVP anodes, as obtained via GITT measurements. In order to obtain reliable results from GITT experiments, several requirements have to be fulfilled. 13,26 It is important to note that in our investigation these requirements were only partially fulfilled. Therefore, as already reported in the literature, the results of such an investigation can be considered only as an indication of the variation of the lithium diffusion coefficient over the potential. Nevertheless, it is also important to remark that these results can still be analyzed qualitatively and, from them, reasonable conclusions concerning the general trend of the lithium insertion process can be drawn. Fig. 3 reports the values obtained for the investigated LVP anodes. As shown, two regions can be distinguished: the two-phase region at high potentials and the single-phase region at low potentials. The two-phase processes are accompanied by four minima in the D vs. potential plots, corresponding to the current maxima visible in the CV (Fig. 2). In this region, the diffusion coefficient shows a wide variation from 8.2 × 10⁻¹¹ to 1.9 × 10⁻¹⁴ cm² s⁻¹ during lithiation (Fig. 3a) and from 3.3 × 10⁻¹¹ to 1.1 × 10⁻¹⁴ cm² s⁻¹ during delithiation (Fig. 3b).
Similar minima in the chemical diffusion coefficient are also commonly observed for other materials which show phase transitions, when strong attractive interactions between the Li⁺ ions and the host matrix are present or some order-disorder transitions take place during lithiation/delithiation. 30,31 A very different behavior was found for the low potential region (1.6-0.0 V vs. Li/Li⁺). Here, only small variations in the diffusion coefficient were found, indicating a continuous energy distribution for the Li⁺ insertion/de-insertion process. The diffusion coefficient of LVP tends to decrease during lithiation in this region, reaching a value of 1.2 × 10⁻¹¹ cm² s⁻¹ at the cut-off potential of 0.0 V vs. Li/Li⁺. Delithiation of fully lithiated LVP is then easier, with an initial D value of 1.5 × 10⁻¹⁰ cm² s⁻¹. However, the diffusion coefficient tends to decrease again with increasing potential, reaching a minimum of 4.3 × 10⁻¹² cm² s⁻¹ before the beginning of the two-phase region. Diffusion coefficients were also calculated from CV measurements for the two-phase region (Fig. S2 and Table S1†). Values in the order of 10⁻¹¹ to 10⁻¹⁰ cm² s⁻¹, similar to those obtained from the GITT measurements in the single-phase region, were found by the CV method. 20 It is important to note that the lower potential region of LVP-based anodes appears especially interesting for high power applications. As recently shown for soft carbon-based anodes, the presence of a continuous distribution of diffusion coefficients, which indicates a less hindered lithiation/delithiation process compared to that in the two-phase region, can be advantageous during tests at high current densities. 13 Taking this point into account, and considering the high electronic conductivity of the investigated LVP nanostructures, it is reasonable to expect a good power performance for such an anode material.
Indeed, the LVP-based anodes exhibit outstanding rate capabilities. Fig. 4a shows the capacity retention obtained from rate tests of the anode in half-cell configuration. At 1 C, the LVP anode displays a capacity of 239 mA h g⁻¹, which is close to the theoretical capacity of this anode material (equal to 266 mA h g⁻¹ for the reversible insertion-extraction of four Li⁺ ions). This value of capacity is not particularly impressive, as it is significantly lower than that of the state-of-the-art anode material graphite. Nevertheless, as shown in the figure, the LVP anode displays outstanding capacity retention with increasing applied current. When a current density corresponding to 10 C is applied, the LVP anode displays a discharge capacity of 181 mA h g⁻¹, which corresponds to 76% of the capacity at a rate of 1 C. When the current density is increased to a value corresponding to 100 C, which is a current in the range of SC applications, the discharge capacity of LVP is 99 mA h g⁻¹, corresponding to a capacity retention of 41%. At low current densities, especially for the delithiation process, the plateaus corresponding to the two-phase lithiation/delithiation process are visible in the potential profiles (Fig. 4b). With increasing current densities, the potential plateaus become less pronounced, especially during lithiation. Furthermore, it is clearly visible that the capacity retention in the single-phase region is higher than that in the two-phase region. Hence, the average delithiation potential does not significantly increase with the applied current density, which should help to improve the energy retention of a device using LVP as a negative electrode.
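The retention figures quoted above follow directly from the measured capacities:

```python
def retention_pct(capacity_mAh_g, reference_mAh_g):
    """Capacity retention in percent relative to a reference capacity."""
    return 100.0 * capacity_mAh_g / reference_mAh_g

C_1C = 239  # mA h g^-1, capacity measured at 1 C

r_10C = retention_pct(181, C_1C)   # 75.7% -> quoted as 76%
r_100C = retention_pct(99, C_1C)   # 41.4% -> quoted as 41%
```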
The high rate performance shown by the LVP anodes cannot be achieved by conventional anode materials such as graphite and, to the best of our knowledge, is among the highest reported for non-conventional anode materials during tests at high current densities. The unique combination of morphology and high ionic and electronic conductivities of the investigated nanoparticles is the origin of the impressive performance at high C-rates displayed by this material. The presence of nanorod-like structures shortens the diffusion paths and enlarges the contact area between the active material and the electrolyte, leading to fast Li⁺ ion diffusion. Furthermore, particularly below 1 V vs. Li/Li⁺, the lithium diffusion process is not strongly hindered and does not limit the electrode performance when high current densities are applied. 13 At the same time, as the considered nanostructures display high electronic conductivity (see above), the electronic conductivity of the material does not limit the performance at high current density. Regarding the latter point, it is important to note that the investigated LVP anodes have a carbon content of only 2.4 wt%, indicating that the carbon coating of the LVP nanoparticles, which was made using an ionic liquid as a precursor, can be considered extremely effective. 18 Considering these results, our LVP anode can certainly be considered a very promising candidate for the realization of innovative high power devices. Importantly, as this anode can be cycled down to 0 V vs. Li/Li⁺, it is expected that devices containing this material will also display interesting energy values.
Besides high charge and discharge capacities, a negative electrode material for high power applications should also display a high cycling stability, considering the high number of charge-discharge cycles that high power LIBs or LICs are usually subjected to. The cycling stability of a material is strongly related to the mechanism of insertion and extraction of lithium into its crystalline structure. In order to have high cycling stability, this process should cause neither dramatic structural changes nor huge volumetric expansion.
In order to investigate the structural variation of LVP anodes during lithiation and delithiation, we carried out an in situ XRD experiment. The voltage profile of the first lithiation step down to 0.0 V is presented in Fig. 5b. The corresponding XRD patterns are presented in Fig. 5a. As shown above for the CV measurements, the two-phase and the solid-solution region can be clearly distinguished also in the voltage profiles. In the two-phase region (3-1.6 V vs. Li/Li⁺), several changes in the XRD patterns are observed. When the cell is further lithiated from 1.6 to 0.0 V vs. Li/Li⁺, the XRD pattern remains almost constant, in line with the sloping potential profile in this potential region, which indicates solid-solution behaviour. There are only some reflections shifting to slightly lower 2θ angles, indicating a small expansion of the structure due to the ongoing lithiation process. For example, the reflection at 23.0° shifts to 22.9° upon lithiation of the material from 1.5 to 0.0 V.
It is important to note that upon delithiation to 3.0 V vs. Li/Li⁺, the XRD pattern approximately returns to the pristine state, suggesting reversible changes of the LVP structure during charge and discharge (Fig. 5c). There are only some small differences of the XRD pattern at 3.0 V compared to that of the pristine electrode (Fig. 5a). For example, the ratios of the intensities of the two double reflections at ca. 20.5° and 24.1° are inverted. A similar trend in the positions of the major diffraction reflections is observed before and after the completion of the second cycle within the potential range of 3.0 to 0.0 V vs. Li/Li⁺. This finding supports the conclusion of a recent investigation that the structural integrity of the LVP anode is maintained in spite of the two-phase reaction mechanism. 19 Often, Li⁺ ion insertion is accompanied by a pronounced volumetric change, which can lead to limited cycling stability and practical capacity. However, the in situ XRD investigation indicates that LVP undergoes only small volume changes, and hence good reversibility upon full charging-discharging of LVP-based anodes can be expected. Fig. 6a shows the variation of the discharge capacity of the LVP anode during prolonged charge-discharge cycling carried out at a current density corresponding to 50 C. As shown in the figure, during the initial cycles, the capacity of the LVP anode decreases from a value of ca. 220 mA h g⁻¹ to a value of 150 mA h g⁻¹. During these initial cycles, the coulombic efficiency of the charge-discharge process is lower than 90%. This low value is most likely related to decomposition processes of the electrolyte and SEI formation. 32 After these initial cycles, the capacity of the LVP anode becomes extremely stable, and after 10 000 cycles the electrode is still able to deliver a capacity of about 135 mA h g⁻¹, corresponding to a capacity retention of more than 90% compared to the 100th cycle.
During all these cycles, the coulombic efficiency of the charge-discharge process was always close to 100%. The XRD pattern of the cycled electrode (Fig. 6b) indicates that some changes of the LVP structure occurred during this high number of cycles. As shown in the figure, all major reflections of partially delithiated LVP according to the in situ XRD experiment are also found for the electrode subjected to 10 000 cycles of charge-discharge. However, the lower intensity and the broadening of the reflections indicate a reduction of the crystallinity of the material. Nevertheless, this reduction does not seem to have a pronounced effect on the electrochemical behavior of the LVP anode.
The results reported above clearly indicate that our LVP anode not only displays very high capacity during tests carried out at a high C-rate but also an extraordinary cycling stability. Consequently, this anode material can be regarded as one of the most promising candidates for the realization of innovative high power devices. With the aim of verifying the performance of this material in a high power device, we realized a LIC containing LVP as a negative electrode and activated carbon (AC) as a positive electrode. In this kind of setup, prelithiation of the negative electrode is always necessary to introduce lithium into the system and to handle the irreversible capacity. Therefore, the lower initial efficiency of LVP does not represent an obstacle to its introduction in such devices. The LVP electrodes in the LIC full cells were prelithiated via the lithium reference electrode as described in the literature. 27,28 Afterwards, the LIC full cells were cycled between 0.0 and 4.0 V. Fig. 7a shows the evolution of the energy density and energy efficiency over cycling of the LVP-based LIC at different current densities of 2.07, 4.13 and 8.26 A g⁻¹ (corresponding to a C-rate of the LVP negative electrode of 25 C, 50 C and 100 C, respectively, and based on both active material masses). The number of cycles considered in these tests was dependent on the investigated current density (the higher the current, the higher the cycle number). As shown, after a small fading of the energy density in the beginning, all investigated cells possess a very stable energy density over the following several tens of thousands of cycles. It is important to note that the balancing of all cells is based on the half-cell capacities at 100 C, which does not lead to a worsened performance of the 25 C and 50 C cells. As expected, due to the proportionality of the overpotential and voltage with the current, the energy density of the devices decreases with increasing applied current.
Nevertheless, all LICs display outstanding energy density (referred to the weight of both active materials) as well as cycling stability. As shown in the figure, after 30 000 cycles at 2.07 A g⁻¹ the LVP-based LIC displays an energy density of 45 W h kg⁻¹. The same type of device displays an energy density of 40 W h kg⁻¹ after 60 000 cycles at 4.13 A g⁻¹. When the current is increased to 8.26 A g⁻¹, the LVP-based LIC is able to deliver, after 100 000 cycles, an energy density of more than 30 W h kg⁻¹. The evolution of the energy efficiency of the three investigated cells is also given in the figure. Only a slight decrease of the energy efficiency can be observed at all three currents. Fig. 7b and c compare the voltage and potential profiles of the LIC cycled at 100 C in the beginning and at the end of cycling. As shown in the figure, the profiles closely resemble each other and only minor changes can be detected. The biggest difference is a small upward shift of the end-of-discharge potentials of both electrodes (2.42 V vs. Li/Li⁺ to 2.66 V vs. Li/Li⁺). Very importantly, no further overpotentials (e.g. caused by calendar life or aging) evolve over cycling; such overpotentials would certainly lead to a more severely decreased energy efficiency.
The energy density and the cycling stability of the investigated LVP-based LIC are among the highest so far reported for this type of high power device and, to the best of our knowledge, are the highest for systems containing non-carbonaceous anodes. Fig. 8 compares, in a Ragone-like plot, the energy and power densities of the investigated devices, in the beginning, in the middle and at the end of cycling, with those of an activated carbon based supercapacitor (SC) (0.0-2.8 V, 1 M Et₄NBF₄ in PC) and a graphite/LiCoO₂ based LIB (3.0-4.2 V, LP30). It is important to point out that all three devices are lab-made, have a comparable weight and have been tested under similar conditions. The values of energy and power reported in the figure refer to the active materials only. Therefore, such a comparison has to be seen only as an indication of the characteristics of these devices. Nevertheless, this plot clearly shows that, using the LVP anode, it is possible to realize high power devices that display very interesting values of energy and power, filling the gap between LIBs and SCs. The energy efficiencies of the LIB and SC shown here are also given in the ESI (Table S2†). As shown above, these devices are able to display this promising performance over several tens of thousands of cycles, as required for high power devices.
Conclusion
LVP is not only a promising cathode material for LIBs, but it can also host additional Li⁺ ions due to its amphoteric nature. Lithium insertion into LVP in the anode potential region takes place via two different mechanisms. At high potentials, LVP undergoes a series of phase transitions. At lower potentials, LVP shows solid-solution behavior. Lithium diffusion in this potential region is very fast, offering the opportunity to design a high power LVP material. Our nanorod-like carbon-coated LVP synthesized by an ionic liquid assisted method displays high electronic conductivity and hence outstanding rate capability as an anode material. At the very high current of 100 C, nanostructured LVP anodes display a capacity of about 100 mA h g⁻¹. Furthermore, the LVP anode displays superior long-term cycling stability: 91% capacity retention after 10 000 cycles at 50 C. The excellent high rate capacity and cycling stability of our LVP also make this material an attractive candidate for use as a negative electrode material in lithium-ion capacitors. We demonstrate here an LVP/activated carbon hybrid device with a similar power performance and a much improved energy density compared to those of conventional supercapacitors.
Design of an HF-Band RFID System with Multiple Readers and Passive Tags for Indoor Mobile Robot Self-Localization
Radio frequency identification (RFID) technology has already been explored for efficient self-localization of indoor mobile robots. A mobile robot equipped with RFID readers detects passive RFID tags installed on the floor in order to locate itself. The Monte-Carlo localization (MCL) method enables the localization of a mobile robot equipped with an RFID system with reasonable accuracy, sufficient robustness and low computational cost. The arrangements of RFID readers and tags and the size of antennas are important design parameters for realizing accurate and robust self-localization using a low-cost RFID system. The design of a likelihood model of RFID tag detection is also crucial for accurate self-localization. This paper presents a novel design and arrangement of RFID readers and tags for indoor mobile robot self-localization. First, by considering small-sized and large-sized antennas of an RFID reader, we show how the design of the likelihood model affects the accuracy of self-localization. We also design a novel likelihood model by taking into consideration the characteristics of the communication range of an RFID system with a large antenna. Second, we propose a novel arrangement of RFID tags with eight RFID readers, which results in an RFID system configuration requiring far fewer readers and tags while retaining reasonable accuracy of self-localization. We verify the performance of MCL-based self-localization realized using the high-frequency (HF)-band RFID system with eight RFID readers and a lower density of RFID tags installed on the floor, in both simulated and real environments. The results of simulations and real environment experiments demonstrate that our proposed low-cost HF-band RFID system realizes accurate and robust self-localization of an indoor mobile robot.
Introduction
Most developed countries are facing the problems of an aging population and labor shortage. Intelligent mobile service robots have been developed with the main purpose of solving these problems. Consequently, one of the tasks that needs to be accomplished on an urgent basis is highly accurate and robust self-localization of robots. Many types of research have been conducted on the self-localization of indoor mobile robots. Several of the systems developed for this purpose employ vision sensors [1,2], lasers [3], ultrasonic sensors [4], infrared sensors [5], radar [6] and ultrasonic technology [7]. However, these conventional approaches are not robust or accurate enough to localize an indoor mobile robot, given the presence of several kinds of disturbances. Vision sensors suffer from changes of illumination and environment conditions. Laser range finders (LRFs) cannot locate a robot accurately if the robot is surrounded by many unknown moving obstacles. The main contributions of this paper are summarized as follows:

1. A novel particle reinitialization method based on MCL is proposed to enable rapid self-localization.

2. A likelihood function that considers the antenna size is proposed for accurate self-localization.
3. An HF-band RFID system with eight RFID readers placed in a new arrangement is developed by increasing the antenna size of the RFID reader.
As a result, the proposed RFID system provides highly robust self-localization of an indoor mobile robot at low production cost.
Related Works
A basic RFID system consists of three components: a transceiver (commonly known as the RFID reader), a transponder (commonly known as the RFID tag) and an antenna. Depending on the communication method, RFID systems can be divided into two types. One type uses radio waves, and the other uses electromagnetic induction for communication between the transceiver and the transponder. Ultra-high-frequency (UHF)-band and super-high-frequency (SHF)-band RFID systems are based on the use of radio waves, which can realize long distance communication. HF-band and low-frequency (LF)-band RFID systems use electromagnetic induction. The communication distance of HF-band and LF-band RFID systems is shorter than that of a UHF-band or SHF-band RFID system; however, an HF-band or LF-band RFID system is much more stable and accurate for tag detection and more robust against obstacles in the environment and environmental changes. LF-band, HF-band or UHF-band RFID systems have already been utilized for mobile robot self-localization. Details of related works on robot self-localization are listed in Table 1.
Miguel Pinto et al. [18] used an omnidirectional camera and LRF sensors to realize robot self-localization. Their self-localization system performed well, and the average self-localization error was less than 80 mm. However, in general, vision sensors suffer from the problem of illumination changes, and LRF sensors are unable to accurately locate the robot when numerous transparent walls are present in the robot's environment.
Dirk Hahnel et al. [8] proposed a combination system composed of a UHF-band RFID reader and an LRF. They used a probabilistic measurement model for RFID readers to locate RFID tags. In their system, two RFID reader antennas were installed on the robot, and tags were attached to walls, furniture, etc. The RFID system was used only to compensate for the global positioning of the localization based on the LRF. Consequently, the accuracy of their system was dependent on the LRF-based self-localization system, which is sensitive to unexpected obstacles in the robot's environment. It is preferable to construct a single RFID system without any other sensors, which can be used in a dark environment or in an environment containing several transparent walls, such as in hospitals and other public facilities.
Lei Yang et al. [19] proposed a hybrid particle filter method for object tracking using a UHF-band RFID system. This method is more computationally efficient than the particle filter while providing the same accuracy. The limitation of their system is that it can locate only the position of the robot and cannot estimate its orientation. However, for autonomous navigation of indoor mobile robots, highly accurate localization of both the position and the orientation of the robot is necessary. The UHF-band RFID system of Lei Yang [19] realized self-localization with an error of about 186 mm, in contrast to the much higher accuracy of localization of the position and orientation achieved in our previous work, where we proposed a novel particle reinitialization method based on MCL to enable rapid self-localization. Instead of using the conventional Gaussian function (i.e., Gaussian model) as the likelihood model, we newly design a likelihood model that is a combination of the Gaussian distribution and the step function, which can improve the self-localization accuracy.
Both Park et al. [13] and Han et al. [14] proposed HF-band RFID systems for self-localization. Park et al. [13] used only one RFID reader antenna in their system. However, it is difficult to estimate the orientation of the robot with only one reader antenna. Han et al. [14] proposed a new triangular pattern for arranging the RFID tags on the floor, instead of the conventional square pattern, and they achieved accurate localization with a localization error of about 16 mm. However, their system suffers from the same problem as that of Park et al. [13] because it also uses only one RFID reader. A common approach to solving this problem is to combine these systems with other sensors [20,21]. They developed a new localization method that uses trigonometric functions to estimate the position and orientation with only one RFID reader. Unfortunately, the system they developed cannot ensure the reliability of localization, especially when the robot moves quickly or when no tags are detected. HF-band RFID systems have also been utilized for object pose estimation [22] or communication robots [23]; however, the objectives of these works were different from those of our study.
Yang et al. [24] used one HF-band RFID reader with a large antenna size of 660 × 300 mm². They derived an exponential-based function to reflect the relationship between RFID tag distribution and localization precision. They proposed an approach of using sparsely-distributed passive RFID tags and used a simple and efficient localization algorithm proposed by Han et al. [14]. The RFID system with sparse RFID button tag distribution patterns realized better localization, with precisions of about 36 mm and 38 mm in the x direction and y direction, respectively. However, this system was unable to realize real-time localization: the robot stayed for 40 s at each point for localization. In addition, the system could not estimate the orientation of the robot. Furthermore, installation of their sparse RFID tag distribution patterns in an indoor environment was difficult. Yang and Wu [25] proposed a particle filter algorithm using a position-information-based straight observation model and a 2D Gaussian-based motion model to locate the robot. They used a dense tag distribution; other experimental conditions in their study were the same as those in Yang et al. [24]. They reported a localization accuracy of 100 mm and localization precisions of about 27 mm and 46 mm in the x direction and y direction, respectively. However, their system suffers from the same problems as did that of Yang et al. [24]. In the present research, we designed a novel HF-band RFID system with eight RFID readers having an appropriately-sized antenna to eliminate uncertainties and to realize real-time position and orientation localization.
Mohd Yazed Ahmad et al. [26] proposed a novel triangular-bridge-loop reader antenna for positioning and presented a method for the improvement of HF-RFID-based positioning. They used an HF-band RFID reader with a large-sized antenna having dimensions of 320 × 230 mm², which was designed with a novel triangular bridge loop. In their system, passive RFID tags were sparsely distributed at a distance of about 1300 mm. Their system performed quite well and achieved an average positioning error of 40.5 mm, in contrast to the average positioning error of 124.1 mm in the case of a system employing a conventional reader antenna. However, their system has a limitation in that the speed of the robot must be kept constant to ensure successful reading of the tags by the reader. If the robot moves faster, a faster reader and tag are required. If the robot moves slower, or if an emergency occurs in which the speed must be reduced, the system may not detect tags for a long time, as the distance between two tags is 1300 mm. Under this condition, only encoder data can be used for localization, which could make the localization difficult. In our proposed RFID system, there is no such speed limitation. We validate our proposed RFID system at various speeds and, thus, demonstrate its overall efficiency. An LF-band RFID system for indoor mobile robot self-localization has been proposed in some studies [12,27]. For example, Kodaka et al. [12] used a vehicle equipped with two RFID readers. In their study, RFID tags were installed on the floor, and the size of one tag was 260 × 260 mm². The distance between adjacent tags was 300 mm. The MCL method was used for robot self-localization, and the localization error was less than 100 mm and 0.1 rad on average. Unfortunately, despite developing an active self-localization method [28], the researchers faced difficulty in increasing the localization accuracy given the dependence on the size of a tag.
The communication range of RFID readers also affects the accuracy of self-localization.
Takahashi et al. [15] proposed an RFID-based self-localization system having eight RFID reader antennas at the bottom of the robot. In this system, the RFID tags were arranged in a lattice pattern, and a simple kinematic model was used to localize the robot. The accuracy of self-localization was highly dependent on the density of the RFID tags. With eight RFID reader antennas and small-sized high-density RFID tags (100 tags/m²), the localization error was less than 17 mm and 0.12 rad on average. However, if the reader failed to read a tag, the error became relatively large, and the system faced a problem of latency in scanning the tags, given that scanning needs to be performed by one antenna after another. Therefore, the system became unstable at high speeds. Takahashi and Hashiguchi [16] developed a new mobile robot self-localization system using MCL based on the HF-band RFID system. This system included 96 RFID readers, and the density of the RFID tags was 100 tags/m². They compared the performances of self-localization with an RFID system only and with an RFID system equipped with LRFs and found that in the absence of obstacles, both systems were able to locate the robot accurately. However, in the presence of obstacles around the robot, the system using LRFs could not locate the robot accurately and stably, which demonstrated the efficiency of the RFID system used alone.
The limitation of their system is its high production cost. It is therefore necessary to redesign the system, reduce the number of RFID readers, use a lower density of RFID tags and establish an efficient configuration of the system to cut down its production cost.
Self-Localization of an Indoor Mobile Robot by Using an RFID System with MCL
MCL, one of the probabilistic approaches [17], has been shown to be a good method for real-time self-localization of robots. Takahashi and Hashiguchi [16] applied MCL to their RFID-system-based self-localization. It is assumed that each RFID tag has a unique ID and that a tag ID map holding the position of each tag is maintained. RFID readers work independently and asynchronously of each other. We briefly introduce the MCL using the RFID system [16] here. We define the world coordinate system ^wΣ and the robot coordinate system ^rΣ as shown in Figure 2. The robot position and orientation in the world coordinate system at time t is ^w x_t = (^w x_t, ^w y_t, ^w θ_t). z_t = (r_t, tag_t) is the measurement output at time t, where tag_t is the tag detected by the RFID reader r_t. A motion model ^w x_{t+1} = MotionModel(^w x_t) is defined to estimate the next robot position and orientation, ^w x_{t+1}, from ^w x_t. A measurement model p(z_t | ^w x_t) is also defined to calculate the posterior probability of receiving the measurement output z_t if the robot position and orientation are ^w x_t. A set of particles is defined as a set of hypotheses of the robot position and orientation at time t, X_t = (^w x^[1]_t, ^w x^[2]_t, ..., ^w x^[M]_t), where M is the number of particles. The algorithm of MCL is given in Algorithm 1. The self-localization system updates the particles with a fixed sampling time Δt. If no RFID reader detects a tag within the sampling time, the procedure of belief calculation (Step 4 in Algorithm 1) is skipped.
The motion model of an omnidirectional vehicle is given by Equation (1):

^w x_{t+1} = ^w x_t + (v_x cos ^w θ_t − v_y sin ^w θ_t) Δt + N(0, σ_x)
^w y_{t+1} = ^w y_t + (v_x sin ^w θ_t + v_y cos ^w θ_t) Δt + N(0, σ_y)    (1)
^w θ_{t+1} = ^w θ_t + ω Δt + N(0, σ_ω)

where V = (v_x, v_y, ω), Δt and N(0, σ) denote the velocity of the robot in the robot coordinate system, the sampling time and a Gaussian distribution with standard deviation σ = (σ_x, σ_y, σ_ω), respectively. The position of tag j detected by RFID reader antenna r_i in world coordinates is ^w x_{tag_j} = (^w x_{tag_j}, ^w y_{tag_j}, ^w z_{tag_j})^T. The position of RFID reader r_i at time t in world coordinates, ^w x_{r_i} = (^w x_{r_i}, ^w y_{r_i}, ^w z_{r_i})^T, is estimated by Equation (2):

^w x_{r_i} = ^w x_t + ^r x_{r_i} cos ^w θ_t − ^r y_{r_i} sin ^w θ_t
^w y_{r_i} = ^w y_t + ^r x_{r_i} sin ^w θ_t + ^r y_{r_i} cos ^w θ_t    (2)

where (^r x_{r_i}, ^r y_{r_i})^T is the position of the RFID reader antenna r_i in the robot coordinate system, which is known in advance. We assume ^w z_{tag_j} and ^w z_{r_i} to be constant.
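The motion model and the reader-position transform can be sketched in Python as follows. The function names and tuple conventions are our own, not from the paper; only the mathematics (robot-frame velocity rotated into the world frame plus Gaussian noise, and the rigid-body transform of a reader offset) is taken from the text.

```python
import math
import random

def motion_model(pose, V, dt, sigma):
    """Propagate one particle with the omnidirectional motion model.

    pose = (x, y, theta) in world coordinates; V = (vx, vy, omega) is the
    robot-frame velocity; sigma = (sx, sy, sw) are noise standard deviations.
    """
    x, y, th = pose
    vx, vy, w = V
    # Rotate the robot-frame velocity into the world frame, then add Gaussian noise.
    x += (vx * math.cos(th) - vy * math.sin(th)) * dt + random.gauss(0.0, sigma[0])
    y += (vx * math.sin(th) + vy * math.cos(th)) * dt + random.gauss(0.0, sigma[1])
    th += w * dt + random.gauss(0.0, sigma[2])
    return (x, y, th)

def reader_world_position(pose, reader_offset):
    """World-frame position of a reader antenna mounted at reader_offset,
    which is known in the robot frame."""
    x, y, th = pose
    rx, ry = reader_offset
    wx = x + rx * math.cos(th) - ry * math.sin(th)
    wy = y + rx * math.sin(th) + ry * math.cos(th)
    return (wx, wy)
```

With the noise set to zero, driving straight for one second at 1 m/s moves the particle one meter along its heading, which is a quick sanity check of the transform.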
Algorithm 1 Monte Carlo localization.
1: Initialize particles X̄_t = X_t = (^w x^[1]_t, ^w x^[2]_t, ..., ^w x^[M]_t)
2: for m = 1 to M do
3:   Update the particle with the motion model: ^w x̄^[m]_t = MotionModel(^w x^[m]_{t−1})
4:   Calculate the belief of the particle with the measurement model: w^[m]_t = p(z_t | ^w x̄^[m]_t)
5: end for
6: X_t = ∅
7: for m = 1 to M do
8:   Draw index i with probability proportional to w^[i]_t
9:   Add ^w x̄^[i]_t to X_t
10: end for
11: for m = 1 to M do
12:   if w^[m]_t < α (a constant), reinitialize ^w x^[m]_t
13: end for
14: return X_t

Then, the weight of each particle, w^[m], is calculated using the measurement model p(z_t | ^w x^[m]_t), which is a likelihood function defined in Section 5. After the weights w^[m] are calculated, the algorithm estimates the position of the robot as the weighted mean of the particles.
As shown in Steps 7-10 of Algorithm 1, particles are updated with a probability proportional to the weight w^[m].
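The weight-proportional particle update and the weighted-mean pose estimate can be sketched as follows. Multinomial resampling is chosen here for brevity; the paper does not specify which resampling variant it uses, and the function names are illustrative.

```python
import random

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight:
    a minimal multinomial-resampling sketch."""
    total = sum(weights)
    if total == 0:
        return list(particles)          # no information: keep the old set
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    new_set = []
    for _ in range(len(particles)):
        u = random.random()
        # Pick the first particle whose cumulative weight exceeds u.
        for p, c in zip(particles, cdf):
            if u <= c:
                new_set.append(p)
                break
    return new_set

def estimate_pose(particles, weights):
    """Weighted-mean pose estimate over the particles (x, y only for brevity)."""
    total = sum(weights) or 1.0
    x = sum(w * p[0] for p, w in zip(particles, weights)) / total
    y = sum(w * p[1] for p, w in zip(particles, weights)) / total
    return (x, y)
```

A particle with zero weight is never drawn, so hypotheses incompatible with the detected tag die out over successive updates.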
Particle Reinitialization
In conventional studies, in scenarios where the initial position of the robot was unknown or the weights of all particles became too small during transportation because of an unexpected disturbance in the robot's movement, the particles were distributed uniformly at random in the possible exploration space. However, it is obviously undesirable to distribute the particles uniformly if the possible exploration space is too large. Once the robot detects one of the tags, it can narrow down its own possible position immediately according to the position of the detected tag and the reader that detected it. Figure 3 shows examples of the possible poses of the robot when it detects a tag while its own position is unknown. We propose a novel particle reinitialization method that is specific to the RFID system and is aimed at the realization of highly efficient and accurate self-localization. The particle reinitialization process is performed from Step 11 onward in Algorithm 1. Once w^[m] becomes too small for the robot's self-localization, particles ^w x_t = (^w x_t, ^w y_t, ^w θ_t) are reinitialized as given in Equation (4):

^w x_t = ^w x_{tag_j} − (^r x_{r_i} cos ^w θ_t − ^r y_{r_i} sin ^w θ_t)
^w y_t = ^w y_{tag_j} − (^r x_{r_i} sin ^w θ_t + ^r y_{r_i} cos ^w θ_t)    (4)

where ^w θ_t is generated using a uniform random function from −π to π. The resampling places the robot at a position where the RFID reader r_i is just above the detected tag tag_j. The proposed resampling leads the self-localization system to localize the robot quickly and stably because it eliminates unnecessary particles distributed over the possible exploration space. In our research, the number of particles is set to 500.
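The reinitialization rule above can be sketched as follows: the heading is drawn uniformly, and the position is chosen so that the detecting reader sits exactly over the detected tag. The helper name and tuple layout are our own illustration of the rule described in the text.

```python
import math
import random

def reinitialize_particle(tag_pos, reader_offset):
    """Reinitialize one particle from a detected tag.

    The new pose places the detecting reader antenna (mounted at
    reader_offset in the robot frame) directly above the tag at tag_pos;
    the heading is drawn uniformly from [-pi, pi).
    """
    th = random.uniform(-math.pi, math.pi)
    rx, ry = reader_offset
    # Robot position = tag position minus the reader offset rotated into the world frame.
    x = tag_pos[0] - (rx * math.cos(th) - ry * math.sin(th))
    y = tag_pos[1] - (rx * math.sin(th) + ry * math.cos(th))
    return (x, y, th)
```

By construction, transforming the reader offset back through the returned pose lands exactly on the tag, whatever heading was drawn.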
HF-Band RFID Systems
In this study, we use multiple HF-band RFID readers to realize highly accurate and stable real-time localization. Generally, an RFID reader with a large antenna provides a large area for tag detection. However, using a large antenna also increases uncertainty during localization. To eliminate this uncertainty and realize highly accurate real-time localization, we first used 96 HF-band RFID readers with small antennas.
System with 96 HF-Band RFID Readers
As shown in Figure 4a, the 96 small-sized HF-band RFID readers are arranged in a cross pattern. The RFID reader is small, with the size of one reader antenna being 30 × 30 mm². Our RFID reader antenna is much smaller than the large antenna (660 × 300 mm²) employed in a previous work [24]. In this system, l₁ = 44.5 mm and l₂ = 37.5 mm. Figure 4b shows the 96 RFID readers designed and developed by us previously [16]. The RFID system makes the self-localization stable and accurate. However, the production cost is too high for a service robot to be applied in public facilities. To cut down the production cost while maintaining the high accuracy of the self-localization, we attempted to configure a low-cost HF-band RFID system by increasing the antenna size appropriately and reducing the number of RFID readers. We first reduced the number of readers to 24 (Figure 5a), a quarter of the original 96 RFID readers. This is because we increased the size of one antenna to 60 × 60 mm², which is four times the antenna size in the case of using 96 RFID readers. The 24-RFID-reader system performed well; further details of this system can be found elsewhere [29]. Then, we made a slight adjustment by reducing the number of readers to 20, as shown in Figure 5b. Specifically, we removed four readers and instead placed four readers at the center in order to test the system. Further details have been reported elsewhere [30]. Both the 24-RFID-reader and the 20-RFID-reader systems were able to locate the robot accurately and stably. We wished to continue reducing the number of readers to six, i.e., a quarter of the 24 RFID readers. However, because of the design of the robot, the readers have to be arranged in a cross pattern. Given the difficulty of arranging six readers in a cross pattern, we made a slight adjustment and used eight RFID readers instead. Figure 6 shows the newly-designed and built eight-RFID-reader system.
The size of one RFID reader antenna in this system is 60 × 60 mm², which is the same as the size of the antenna in the 24-RFID-reader system. For this new system, the intervals shown in Figure 6 are as follows: l₁ = 100 mm in the x direction and l₂ = 100 mm in the y direction.
Configurations of RFID Tags with Different Densities
The production cost depends not only on the configuration of the readers, but also on the configuration of the tags embedded in the robot's environment. A configuration with fewer tags is less expensive. Figure 7a shows a small passive RFID tag used by us. The size of the passive RFID tag is 10 × 20 mm². As shown in Figure 7, the RFID tags are arranged in a lattice pattern. We investigated the performances of RFID systems in the case of using RFID tags arranged in a lattice pattern with different densities: 400 tags/m², 100 tags/m², 25 tags/m² and 16 tags/m². Figure 7b,c shows the configurations of RFID tags with densities of 100 tags/m² and 16 tags/m², respectively.
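For a square lattice, the tag-to-tag pitch follows directly from the density as 1/√n. The following convenience calculation (ours, not from the paper) makes the spacings implied by the densities above explicit:

```python
import math

def lattice_pitch_mm(tags_per_m2):
    """Tag-to-tag spacing (pitch) of a square lattice with the given density,
    in millimetres: pitch = 1 / sqrt(density)."""
    return 1000.0 / math.sqrt(tags_per_m2)

# The densities examined above map to these pitches:
#   400 tags/m^2 -> 50 mm, 100 -> 100 mm, 25 -> 200 mm, 16 -> 250 mm.
```

Halving the linear pitch thus quadruples the tag count, which is why reducing the density from 100 to 16 tags/m² cuts the number of tags (and the installation cost) by more than a factor of six.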
RFID Tag Detection Model
An HF-band RFID reader detects a tag reliably if the tag is in the detection range. We model the detection range as follows. Figure 8 shows the model for ID tag detection. In this figure, the detection area is represented by the sphere drawn using the solid black line. Specifically, this sphere represents the detection range of one RFID reader. The radius of the detection range is denoted by R. The red dot represents the center of the detection range, which is just below the RFID reader antenna at a distance of h_c. The height of one RFID reader antenna is given as ^w z_{r_i} = h_a. Tags are embedded in the carpet on the floor. A tag is detected if and only if it is within the detection range of one RFID reader antenna, as illustrated in Figure 8 and expressed in Equation (5). In the simulation, we use this tag detection model to simulate the RFID reader.
(ʷx_r,i − ʷx_tag,j)² + (ʷy_r,i − ʷy_tag,j)² + (ʷz_r,i − h_c − ʷz_tag,j)² < R²  (5)

Figure 8. ID tag detection model for a small-sized RFID reader antenna.

Figure 9 shows the tag detection area of one RFID reader antenna whose size is 60 × 60 mm². Figure 10 shows the cross-sectional views of the tag detection area and the success rates at heights of 15 mm and 20 mm. The z-axis represents the tag detection rate when the tag is located at (x, y). Use of an appropriate likelihood function for the measurement model is crucial for ensuring the accuracy of self-localization. Conventional studies employed a Gaussian distribution as the likelihood function. Figure 10 illustrates that the detection rate is almost 100% if the tag is in the detection range and almost 0% if it is outside the range, which suggests that a Gaussian distribution is unsuitable for this RFID system. Therefore, we investigated two likelihood functions: a Gaussian distribution and a combination of the Gaussian distribution and a step function.
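Equation (5) reduces to a point-in-sphere test. A minimal Python sketch is given below; the numeric values are the 96-reader simulation settings quoted later in this paper (h_a = 20 mm, h_c = 8 mm, R = 15 mm), while the function itself is an illustrative sketch rather than the authors' code.

```python
import math

def tag_detected(reader_pos, tag_pos, h_c, R):
    """Spherical detection test of Equation (5): the tag is detected iff
    it lies inside the sphere of radius R centred a distance h_c below
    the reader antenna (all coordinates in the world frame, in mm)."""
    rx, ry, rz = reader_pos   # reader antenna position
    tx, ty, tz = tag_pos      # tag position
    d2 = (rx - tx) ** 2 + (ry - ty) ** 2 + (rz - h_c - tz) ** 2
    return d2 < R ** 2

# Antenna at height 20 mm, detection centre 8 mm below it, radius 15 mm.
reader = (0.0, 0.0, 20.0)
print(tag_detected(reader, (0.0, 0.0, 0.0), h_c=8.0, R=15.0))   # tag directly below: detected
print(tag_detected(reader, (20.0, 0.0, 0.0), h_c=8.0, R=15.0))  # tag 20 mm off-axis: missed
```

With these settings the detection footprint on the floor is a disc of radius √(R² − (h_a − h_c)²) = 9 mm around the antenna axis, which is why tag spacing matters so much in the later experiments.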
Two Different Likelihood Models
We establish two different likelihood functions for the measurement models of MCL, as shown in Figure 11. One likelihood function is defined by the distance between a reader and the ID tag detected by it; it uses the Gaussian model shown in Figure 11a. Particles tend to gather around the center of the Gaussian distribution because the weights of the particles are calculated by the Gaussian distribution function N(µ, σ), and the closer a particle is to µ, the higher is its assigned weight. Figure 11b shows a combination of the Gaussian distribution and the step function, hereafter referred to as the combination model, newly designed in this study. The combination likelihood function is defined as given in Equation (6), where β is a constant. The weight of the particles in the range between the center and σ is one; otherwise, the weight of a particle decreases with the distance between the reader and the tag. We evaluate these two likelihood function models in both a simulation and a real environment.
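Since Equation (6) is not reproduced in this excerpt, the two weighting schemes can only be sketched from the verbal description: a pure Gaussian weight versus a flat weight inside the σ band with a decaying tail outside it. The exact functional form and the role of the constant β below are assumptions, not the paper's definition.

```python
import math

def gaussian_weight(d, sigma):
    """Conventional likelihood: particle weight from a zero-mean Gaussian
    over the reader-tag distance error d."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def combination_weight(d, sigma, beta=1.0):
    """Sketch of the combination model described in the text: weight is 1
    inside the sigma band (step-like plateau) and falls off with distance
    outside it.  The tail shape and beta are illustrative assumptions."""
    if abs(d) <= sigma:
        return 1.0
    return beta * math.exp(-(abs(d) - sigma) ** 2 / (2.0 * sigma * sigma))

# Inside the band both models keep particles alive, but only the
# combination model weights them equally (no pull toward the centre).
print(gaussian_weight(0.0, 10.0), combination_weight(0.0, 10.0))
print(gaussian_weight(8.0, 10.0), combination_weight(8.0, 10.0))
```

The plateau matches the near-binary detection behaviour seen in Figure 10: any particle consistent with "this reader saw this tag" is equally plausible, so particles are not artificially dragged toward the antenna axis as they are under the Gaussian model.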
Simulations of Self-Localization by a 96 HF-Band RFID Reader System Using Two Likelihood Models
In the simulations of the 96 HF-band RFID reader system, the height h_a of the antenna is set to 20 mm, and the detection radius R is 15 mm. The center of the detection area of a reader antenna is just below the antenna at a distance h_c of 8 mm. Table 2 presents the simulation results obtained using the 96 HF-band RFID readers with the two different likelihood models. The RFID tags are arranged in a lattice pattern with four different densities, as described in Section 4.3. The results in the table show that the 96 HF-band RFID reader system performs highly accurate self-localization, with average errors of less than 5 mm in both the x and y directions, irrespective of whether the Gaussian model or the combination model is used. The variances and maximum errors listed in Table 2 also demonstrate that the proposed system is quite stable. Though both likelihood models provide almost the same accuracy of self-localization, the average errors with the combination model are slightly smaller than those with the Gaussian model. This result supports the hypothesis that the combination model works better than the Gaussian model. In general, the self-localization accuracy decreases as the tag density decreases, and the data in Table 2 show that the density of 400 tags/m² provides the best accuracy. Further, the density of 100 tags/m² is better than the densities of 25 tags/m² and 16 tags/m². However, Table 2 reveals that the average error for the density of 16 tags/m² is slightly smaller than that for the density of 25 tags/m². The self-localization performance is easily affected by the configurations of the readers and tags and by the likelihood models.
Additionally, the routes of the robot also affect the accuracy, especially at low tag densities such as 25 tags/m² and 16 tags/m², because the self-localization is based on the reading of the tags' information by the RFID readers. This is why the average errors in the case of 16 tags/m² are slightly smaller than those in the case of 25 tags/m². Table 3 presents the simulation results for the 96 HF-band RFID reader system without the particle reinitialization. The simulations were performed with a tag density of 100 tags/m² and the two likelihood models. The average errors, maximum errors and variances in this simulation are much larger than the values listed in Table 2. With the proposed method, the particles are reinitialized when their weights become too small for the system to estimate the position of the robot, and unnecessary particles are eliminated so that the system can locate the robot quickly and accurately. This proves that our proposed particle reinitialization method enables more accurate and stable self-localization.
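The reinitialization behaviour compared in Tables 2 and 3 can be sketched as a weight check inside the MCL resampling step. The threshold value, the uniform re-scatter over the workspace, and the function name below are illustrative assumptions, not the paper's implementation.

```python
import random

def resample_or_reinitialize(particles, weights, area, w_min=1e-6):
    """If every particle weight has collapsed (tracking lost), scatter
    fresh particles uniformly over the workspace; otherwise resample in
    proportion to weight, which discards low-weight (unnecessary)
    particles.  Threshold and re-scatter strategy are illustrative."""
    n = len(particles)
    total = sum(weights)
    if total < w_min:  # weights too small to estimate the robot's position
        (x0, x1), (y0, y1) = area
        return [(random.uniform(x0, x1), random.uniform(y0, y1)) for _ in range(n)]
    # weighted resampling keeps the particle count constant
    return random.choices(particles, weights=weights, k=n)

particles = [(0.0, 0.0), (50.0, 50.0)]
kept = resample_or_reinitialize(particles, [0.9, 0.1], area=((0, 100), (0, 100)))
lost = resample_or_reinitialize(particles, [0.0, 0.0], area=((0, 100), (0, 100)))
print(len(kept), len(lost))
```

Without this step, a filter whose particles have all drifted away from the true pose keeps resampling from a hopeless set, which is consistent with the much larger errors reported in Table 3.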
Simulations of Self-Localization by the Eight HF-Band RFID Reader System Using Two Different Likelihood Models
In the simulation with the eight HF-band RFID reader system, R = 30 mm, h_c = 24 mm and ʷz_r,i = 24 mm. Table 4 lists the simulation results for the eight HF-band RFID reader system. The average errors in the simulation are less than 10 mm in both the x and y directions for both the Gaussian and the combination models. As with the 96 HF-band RFID reader system, the self-localization using the eight HF-band RFID reader system was highly accurate and stable, despite the reduction in the number of readers from 96 to eight. Furthermore, as in the 96-reader simulation, the density of 400 tags/m² provided the best self-localization accuracy, and the density of 100 tags/m² provided better accuracy than the densities of 25 tags/m² and 16 tags/m². The simulation results show that the average self-localization errors when using the combination model are smaller than those when using the Gaussian model under all of the tag arrangement conditions. This again supports our hypothesis that the combination model performs better self-localization than does the Gaussian model. It is found from Table 4 that the average errors for 25 tags/m² are slightly larger than those for 16 tags/m², the same tendency as seen in Table 2. As described in Section 5.3, this result can be attributed to the routes of the robot selected by us.

Table 4. Self-localization performances of the eight HF-band RFID reader system using two likelihood models.
Experiments in a Real Environment
The eight-RFID-reader system (Figure 6) was attached at the bottom of our omnidirectional vehicle, as shown in Figure 12. As mentioned earlier, Figure 9 shows the tag detection area of one RFID reader antenna of the eight HF-band RFID reader system, and Figure 10 shows the cross-sectional view of the tag detection area and the success rate at heights of 15 mm and 20 mm. Figure 10 demonstrates that the combination likelihood model is more suitable than the Gaussian likelihood model at heights of 15 mm and 20 mm, which is consistent with our hypothesis in the simulations that the combination model performs better than the Gaussian model. Next, we verified whether or not the combination model performs better than the Gaussian model in a real environment. As shown in Figure 13a, we tested the eight HF-band RFID reader system with the two different likelihood models in the real environment. The height of the readers was set to 15 mm because the reader antenna has a better tag detection success rate at this height. The speed of the robot was set to 100 mm/s. The eight RFID readers and the 96 RFID readers detected ID tags every 50 ms, and both operated at the same frequency, 13.56 MHz. ID tags were embedded in the carpet on the floor in lattice patterns with densities of 100 tags/m² and 16 tags/m². We made the robot run the path shown in Figure 13, from Point 1 to Point 8. It was difficult to determine the localization error at every position along which the robot moved, so to calculate the localization errors we marked eight points whose positions were already known, as shown in Figure 13. The robot stayed for 20 s at each of the eight points to perform self-localization, so that it could collect enough data to calculate the self-localization errors. We first verified our new eight HF-band RFID reader system and then performed a comparison experiment with the 96 HF-band RFID reader system.
To analyze the self-localization errors, we collected a large amount of localization data at the eight points shown in Figure 13.
The experimental results for the 96 HF-band RFID reader system are presented in Table 5. At the density of 100 tags/m², the average self-localization errors are smaller than 15 mm in the x direction and smaller than 25 mm in the y direction for both likelihood function models. The comparison of these two likelihood models reveals that in the x direction, the average self-localization error when using the combination model is 8.2 mm, which is smaller than that when using the Gaussian model. In the y direction, both likelihood models perform self-localization with almost the same accuracy, with errors of 22.3 mm and 23.8 mm for the Gaussian and combination models, respectively. At the density of 16 tags/m², the combination model performs better than the Gaussian model: as shown in Table 5, the average self-localization errors are 26.2 mm in the x direction and 18.0 mm in the y direction for the Gaussian model, and 17.7 mm in the x direction and 13.5 mm in the y direction for the combination model. From these average self-localization errors, it can be said that the self-localization at the density of 16 tags/m² is accurate enough: in the x direction, the errors at this density are only slightly larger than those at 100 tags/m², and in the y direction, the errors are smaller than those at 100 tags/m². However, the self-localization at this density was not stable, as the maximum errors and variance were very large, as seen in Table 5. This is because the system could not detect tags at some places when the density of the ID tags became as low as 16 tags/m²: the detection area of one RFID reader is narrow because the antennas of the 96-reader system are small, so the self-localization errors increase correspondingly. With the configuration of 96 small RFID readers and an ID tag density of 16 tags/m², the system is unable to locate the robot stably.
Table 6 presents the self-localization errors for the eight HF-band RFID reader system. The results show that the eight HF-band RFID reader system with large antennas performs highly accurate self-localization. As seen from the table, the average self-localization errors of both likelihood models at the density of 100 tags/m² are almost the same. Even though both likelihood models perform well at the density of 16 tags/m², the combination model performs better than the Gaussian model. The maximum errors and variance at the density of 16 tags/m² are much larger than those at 100 tags/m²; however, the system still maintains stable localization. From Tables 5 and 6, it can be seen that at the density of 100 tags/m², the average errors in the x direction are much smaller than those in the y direction, whereas at the density of 16 tags/m², the average errors in the x direction are larger than those in the y direction. As the arrangement of the RFID reader antennas is the same in both the x and y directions, we consider that these differences in average errors were caused by the ID tag installation: because it is difficult to install every pair of adjacent tags at exactly the same interval, installation errors cannot be prevented.
From a comparison of the two RFID systems, we found that both systems performed highly accurate self-localization at the density of 100 tags/m², with only a slight difference in the average self-localization errors when the combination model was used; the eight HF-band RFID reader system performed slightly better than the 96 HF-band RFID reader system. The key difference between the two systems is that the eight HF-band RFID reader system could locate the robot accurately and stably at a low ID tag density of 16 tags/m², whereas the system with the 96 small RFID readers could not. This is because the eight RFID readers are equipped with enlarged antennas that widen the tag detection range of a single reader, so the number of readers that effectively detect ID tags is almost the same in the two systems. Moreover, we used MCL for self-localization, which enables the robot to be located precisely even if only one or two tags are detected. The experimental results demonstrate the efficiency of our newly-developed eight HF-band RFID reader system with large antennas. The system performs robot self-localization stably and accurately, which proves that eight RFID readers with large antennas, instead of 96 RFID readers, can provide sufficient accuracy for robot localization. Moreover, because only eight RFID readers are used, the production cost of the system can be reduced significantly in comparison with that of the 96 HF-band RFID reader system.
Trajectory of Real-Time Self-Localization
To verify the real-time self-localization performance of the developed eight HF-band RFID reader system, we recorded the real-time localization of the robot. Figure 14 shows the trajectories obtained with our proposed eight HF-band RFID reader system: Figure 14a shows the real-time self-localization trajectory obtained using the conventional Gaussian model, whereas Figure 14b shows that obtained using our proposed likelihood model. The comparison of Figure 14a,b reveals that the proposed likelihood model works better than the conventional method employing the Gaussian model. As seen in both Figure 14a,b, the robot could not perform high-accuracy localization in the area indicated by the circle. This is because the system could not detect ID tags there, owing to tag installation errors; in addition, gaps remain between the detection areas of the eight reader antennas, and once tags fall within such a gap, the system cannot detect them. The reader detection area is shown in Figure 9, and the installed RFID tags are shown in Figure 13.
In our experiments, we also validated the performance of the proposed eight HF-band RFID reader system at different speeds. Figure 14b shows the real-time localization at a speed of 100 mm/s, and Figure 15 shows the real-time localization at speeds from 50 mm/s to 350 mm/s at intervals of 50 mm/s. From Figure 15a to 15f, it is clear that the localization accuracy does not decrease with an increase in speed. Furthermore, as shown in Figure 15, to a certain extent the system even performs better at high speeds. This is because when the robot moves at low speeds, the noise of the motors has a much greater effect on the system performance, whereas at high speeds the motor noise acts on the system for a shorter time. Above all, the proposed RFID system realized highly accurate and stable localization at various speeds, which proves its efficiency and stability.
Conclusions
In this study, we achieved stable and accurate self-localization of an indoor mobile robot by using a newly-developed RFID system and investigated the self-localization performance at different configurations of the RFID system. We eventually determined a novel configuration of the RFID system that has low production cost and provides highly accurate and stable self-localization. In order to make the self-localization realized by the proposed RFID system more accurate and efficient, we applied an efficient particle reinitialization method to MCL. We designed two different likelihood models, a Gaussian model and a combination model (a combination of the Gaussian distribution and the step function), to investigate their influence on the self-localization performance. We verified both of the likelihood models experimentally in a real environment. The experimental results proved our hypothesis that the combination model performs better than the Gaussian model. Results of both simulations and real environment experiments demonstrate that the proposed configuration consisting of eight HF-band RFID readers provides sufficiently high accuracy of self-localization.
Synthesis of Active Graphene with Para-Ester on Cotton Fabrics for Antistatic Properties
The excellent electrical properties of graphene provide a new functional-finishing route for fabricating conductive cotton fabrics with antistatic properties. This work develops a novel method for synthesizing active graphene to make cotton fabrics conductive and antistatic. Graphite was oxidized to graphene oxide (GO) by the Hummers method, then acid-chlorinated and reacted with the para-ester to form the active graphene (JZGO). JZGO was then applied to cotton fabrics and bonded to the fiber surface under alkaline conditions. Characterization by FT-IR, XRD and Raman spectroscopy indicated that the para-ester group was successfully introduced onto JZGO, which effectively improved the water dispersibility and reactivity of the JZGO. Furthermore, this study found that the antistatic properties of the fabric were improved by more than 50% at a JZGO loading of 3% by weight under low-humidity conditions. The washing durability of the fabrics was also evaluated.
Introduction
Functional cotton fabrics are widely used in manufacturing and daily life, and their antistatic property is one of their most important functions. Cotton fabrics at low temperatures and in low-humidity environments risk causing sparks due to electrostatic discharge, which may lead to dangerous burning and explosion hazards [1]. Therefore, cotton fabrics usually require an antistatic finishing process during manufacture, such as applying an antistatic agent (e.g., alkoxysilane, chitosan) to the surface [1,2]. It has also been found that antistatic effects can be further enhanced by simply coating a layer of conductive compounds on the surface of cotton fabrics [3][4][5][6]. However, a simple coating or addition of antistatic agents cannot maintain the antistatic effect for long, because the agent fades or is washed away, a problem referred to as poor fastness [7]. Therefore, developing a new method for fabricating conductive cotton fabrics that combine good antistatic performance and fastness is attracting researchers worldwide.
Preparation of Graphene Oxide
Two grams of natural flake graphite and 1 g of NaNO₃ were added to 50 mL of concentrated H₂SO₄ (98%) in a three-necked flask with stirring and kept below 4 °C for 1 h. Then 6 g of KMnO₄ was added to the mixture in batches over 30 min, with the temperature kept below 10 °C, and reacted for 2 h. The three-necked flask was then transferred to a water bath at a constant temperature of 35 °C for 1 h. Ninety-two milliliters of deionized water was slowly added to the mixture, and the temperature was increased to and maintained at 95 °C for 30 min. Deionized water (200 mL) and 30% H₂O₂ (10 mL) were added to the mixture until no more bubbles were generated. The product was collected by filtration while still warm and washed with 10% HCl and deionized water several times until the centrifuged supernatant was neutral. The precipitate was then dispersed ultrasonically and dried for 24 h to obtain the product, graphene oxide (GO). Figure 1 illustrates the synthesis reaction mechanism. Nanomaterials 2020, 10, x 3 of 12

Figure 1. Reaction mechanism for synthesis of graphene oxide.
Preparation of Active Graphene
One hundred milligrams of graphene oxide was dispersed in 20 mL of thionyl chloride (SOCl₂), and then 0.5 mL of N,N-dimethylformamide (DMF) was added to the mixture. The mixture was heated to 60 °C and kept there for 24 h. After the reaction was complete, the temperature was increased to 90 °C and the excess SOCl₂ was removed to obtain acid-chlorinated graphene oxide (GOCl). Half a gram of purified para-ester was then dissolved in 20 mL of DMF and poured into the GOCl. The reaction was kept at 90 °C for another 24 h. After the reaction, the precipitates were washed with deionized water, collected, and then heated in an oven at 60 °C for 8 h to obtain active graphene (JZGO). The synthesis mechanism is illustrated in Figure 2.
Active Graphene Modification onto Cotton Fabric
The reaction of active graphene with cotton fabric comprises the following two steps: under alkaline conditions, the H atom on the α-carbon of the para-ester becomes active owing to the electron-withdrawing action of the sulfone group, and an elimination reaction with the sulfate group readily forms the vinyl sulfone group. The vinyl sulfone generated on the active graphene then reacts with the hydroxyl groups on the cotton fiber through covalent bonding under alkaline conditions, so that the graphene sheets are grafted onto the cotton fabric. The reaction process is shown in Figure 3. The finishing agent was prepared with 1% or 3% (o.w.f., on the weight of fabric) active graphene (JZGO) and sodium carbonate (10 g/L); the bath ratio of reagent to water was 1:20. The traditional pad-dry-bake process [42] was used to apply the active graphene onto the surface of the cotton fabrics. The JZGO suspension was well dispersed in an ultrasonic bath for 20 min before being applied to the fabrics. Cotton fabrics were dipped into the well-dispersed active graphene suspension and then put through rollers to remove the excess water. The fabrics with JZGO were dried in the oven at 60 °C and further baked at 150 °C for 3 min. Post-treatments, including rinsing with cold water, soaping, hot water, and cold water, were carried out before the final drying of the fabrics.
Characterization of Surface Modification
The surface functional groups of graphene oxide (GO) and active graphene (JZGO) were analyzed by a Varian 640 infrared spectrometer (Varian Co., Atlanta, GA, USA). The test wavelength range was 400-4000 cm⁻¹, the resolution was 4 cm⁻¹, and 32 scans were accumulated. The samples were also analyzed by inVia Reflex laser micro-Raman spectroscopy, with an excitation wavelength of 532 nm and a test range of 1000-3500 cm⁻¹.

The crystallite sizes of the samples (GO and JZGO) and the change in the interlayer distance before and after the reaction were measured by a D/max-2550VB+/PC X-ray diffractometer (Rigaku Co., Tokyo, Japan). The test used Cu-Kα radiation (wavelength λ = 1.54 Å), a tube voltage of 40 kV, a tube current of 200 mA, and a scanning angle range of 5-90°. The surface morphology of the samples was characterized by a Hitachi TM-1000 scanning electron microscope (Hitachi, Tokyo, Japan). The thermogravimetric curve of each sample was measured by a TG 209 F1 thermal analyzer (NETZSCH Co., Selb, Germany) from room temperature to 900 °C under an N₂ atmosphere with a gas flow rate of 10 mL/min.
Properties of Modified Cotton Fabrics
The antistatic properties of the fabric were measured by a YG (B) 342E fabric electrostatic tester (Wenzhou Darong Textile Instrument Co., Ltd., Wenzhou, China). The fabric samples, 45 mm × 45 mm in area, were pre-dried at 50 °C for 30 min and then conditioned at 40% humidity for 5 h; each sample was placed in the instrument once its humidity reached 40%. The static voltage of each sample was measured three times. The fastness to soaping/washing was evaluated using a soaping solution containing 5 g/L of the standard soap sheet and 2 g/L of sodium carbonate, with a bath ratio of 1:50, washing for 30 min at 60 °C.
Morphology of the Graphene Oxide and the Active Graphene
The SEM (scanning electron microscope) image in Figure 4a shows the morphology of the graphene oxide (GO). The GO shows a fluffy appearance with a large number of folds on the surface, as well as edge curling and a large sheet area. This was caused by the introduction of oxygen atoms: the oxidation reaction wrinkled the originally flat graphite sheet surface and made the edges of the layers curl. The surface morphology of the active graphene (JZGO) shown in Figure 4b is similar to that of GO; its surface is also wrinkled and bears granular para-ester.
FT-IR, Raman and XRD Characterizations of the Graphene Oxide and the Active Graphene
To further study the structure and the relationship between GO and JZGO, FT-IR, Raman and XRD characterizations were carried out. The infrared spectra of GO and JZGO were obtained and are compared in Figure 5. The spectra of the GO show main absorption peaks in the vicinity of 3428 cm−1, 1716 cm−1, 1626 cm−1, 1400 cm−1, 1227 cm−1 and 1079 cm−1. Here, 1716 cm−1 represents the stretching vibration of the carboxyl group C=O at the edge of the GO; 1626 cm−1 is the stretching vibration peak of C=C of the carbon ring; 1400 cm−1 is the bending vibration peak of hydroxyl -OH in GO; 1227 cm−1 is the stretching vibration peak of C-O-C on the GO surface; and 1079 cm−1 is the stretching vibration peak of C-OH [33]. The spectra indicate that oxygen-containing functional groups, such as carboxyl (-COOH), hydroxyl (-OH), and epoxy (C-O-C) groups, were introduced to the graphite [33]. A large number of hydroxyl groups were introduced to the surface and the edges of the graphite sheet after oxidation. The carbon atoms (C=C) connected to hydroxyl groups turned into C-C. At the same time, some hydroxyl groups were dehydrated into epoxy groups, and hydroxyl groups at the edges were converted to carboxyl groups. Adjacent carboxyl or carbonyl groups were decarboxylated, thereby removing a portion of the functional groups, and the carbon content of the GO was gradually reduced.
Active graphene (JZGO) showed absorption peaks near 3431 cm−1, 1716 cm−1, 1623 cm−1, 1384 cm−1, 1317 cm−1, 1137 cm−1, and 778 cm−1.
Here, 3431 cm−1 is the stretching vibration peak of N-H; 1716 cm−1 is the stretching vibration peak of C=O; 1623 cm−1 is the stretching vibration peak of C=C; 1384 cm−1 is the stretching vibration peak of S=O; 1317 cm−1 is the C-N stretching vibration peak; 1137 cm−1 is the asymmetric stretching vibration peak of -SO2-; and 778 cm−1 is the stretching vibration peak of S-O. This indicated that the GO reacted with the para-ester.
As shown in Figure 6, the Raman spectra of graphene oxide (GO) show two characteristic peaks: 1354 cm−1 (D peak) and 1601 cm−1 (G peak). D peaks occur when the graphite sample is defective or when Raman-scattered light is collected at disordered structures. After the oxidation of graphite, the G peak was broadened and the D peak was broadened and enhanced. This was because the carbon atoms in the graphite sheet bonded to oxygen-containing groups and became sp3-hybridized, producing a relatively disordered structure that destroyed the long-range order and symmetry of the graphite lattice. The degree of disorder is expressed as the ratio of the intensity of the D peak to that of the G peak, R = ID/IG. The intensity of the D peak of the JZGO increased and the peak width narrowed, so the corresponding R value became larger, meaning that the JZGO was more disordered than the GO: the more defects in the JZGO structure, the more sp3 carbon was introduced. This also indicates that the para-ester was grafted onto the GO [33]. In Figure 7, the change of the characteristic diffraction peaks reflects the transition from graphite to graphene oxide and from graphene oxide to active graphene. The curve of G (graphite) shows a narrow and sharp characteristic diffraction peak at 2θ of 26.22°, corresponding to an interplanar spacing of d = 0.3396 nm, which indicates a typical graphite crystal structure. In the curve of GO (graphene oxide), the characteristic diffraction peak of graphite disappears and a new diffraction peak appears at 2θ of 9.88°, corresponding to d = 0.8945 nm, indicating that the introduction of oxygen functional groups damaged the hexagonal graphite crystal structure and increased the layer spacing of the graphite lattice along the c-axis.
These oxygen-containing groups combined with water molecules through hydrogen bonds, imparting hydrophilicity to the graphene while further increasing the spacing of the graphite layers. JZGO showed a new, weak diffraction peak at 2θ of about 23°; its crystal-layer spacing was much lower than that of GO but higher than that of natural flake graphite.
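The interplanar spacings quoted above follow from Bragg's law, nλ = 2d sin θ. The sketch below reproduces the reported values from the 2θ peak positions; it assumes Cu Kα radiation (λ = 0.15406 nm), which the text does not specify but is the common choice for such measurements.

```python
import math

WAVELENGTH_NM = 0.15406  # assumed Cu K-alpha wavelength (not stated in the text)

def d_spacing(two_theta_deg: float, wavelength_nm: float = WAVELENGTH_NM) -> float:
    """Interplanar spacing (nm) from a first-order (n = 1) diffraction peak."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

# Peaks reported in the text:
print(round(d_spacing(26.22), 4))  # graphite peak → 0.3396 nm
print(round(d_spacing(9.88), 4))   # GO peak → 0.8945 nm
```

Both computed spacings match the values reported for graphite and GO, which supports the assumed source wavelength.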
Dispersion and Thermal Stability Characterizations
As shown in Figure 8a, after 8 h of ultrasound treatment the dispersion became homogeneous, while after resting for 24 h (Figure 8b), most of the natural flake graphite and oxidized expanded graphite had settled. In contrast, the GO and JZGO were still evenly dispersed without precipitation. This was because the surface of the GO contained a large number of oxygen-containing functional groups such as carboxyl (-COOH), hydroxyl (-OH) and epoxy (C-O-C) groups, which made the GO more hydrophilic and easy to disperse. For the same reason, the carboxyl group in the active graphene (JZGO) was converted into a sulfonic acid group (-SO3H), which made the JZGO also easily dispersed. In Figure 9, the oxidation of GO occurred in three stages of significant mass loss: 20 to 200 °C corresponds to the desorption of free water and bound water in GO. At 220 to 300 °C, the weight loss of GO was very rapid due to the decomposition of oxygen-containing functional groups, such as hydroxyl (-OH) and epoxy (C-O-C), into small gas molecules, e.g., carbon dioxide and water vapor. After this, weight loss became slower because the remainder was mainly the relatively stable GO carbon skeleton. Above 700 °C, pyrolysis of the carbon structure started, which damaged the carbon framework of the graphite through strong oxidation, thereby decreasing its thermal stability. The JZGO lost mass in two stages: the first stage (20 to 250 °C) was the desorption of free water and bound water, and the second stage (250 to 360 °C) was mainly caused by the oxygen-containing functional groups and the grafted ester of the JZGO.
From this part of the mass-loss rate, the amount of para-ester introduced on the JZGO was roughly estimated to be about 4%.
Compared to the graphene oxide (GO), the weight-loss rate of the JZGO was much smaller. This was because the amount of oxygen-containing functional groups in JZGO, such as hydroxyl (-OH) and epoxy (C-O-C), was much lower than in GO; therefore, the thermal stability of JZGO was significantly improved. The reason why JZGO had fewer oxygen-containing functional groups lies in the acyl chloride reaction used to prepare it: during the reaction, the strong dehydrating agent thionyl chloride (SOCl2) reacted with some carboxyl (-COOH), hydroxyl (-OH) and other groups, reducing the amount of oxygen-containing functional groups and thus improving the thermal stability of the JZGO.
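The ~4% para-ester estimate above comes from the mass lost in the second TGA stage (250 to 360 °C). A minimal sketch of that calculation, using made-up TGA sample points for illustration (a real analysis would use the measured curve from Figure 9):

```python
import numpy as np

# Hypothetical TGA samples (temperature in °C, residual mass in wt%):
temps = np.array([25.0, 100.0, 200.0, 250.0, 300.0, 360.0, 500.0])
mass = np.array([100.0, 98.5, 97.0, 96.5, 94.0, 92.5, 91.0])

def stage_loss(t_lo: float, t_hi: float) -> float:
    """Mass lost (wt%) between two temperatures, by linear interpolation."""
    return float(np.interp(t_lo, temps, mass) - np.interp(t_hi, temps, mass))

# Loss over the second JZGO stage, attributed to the grafted ester:
print(stage_loss(250.0, 360.0))  # → 4.0 (wt%), i.e., roughly 4% grafted ester
```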
Morphology of the Modified Cotton Fabrics
The morphology of the cotton fabrics modified with the active graphene is shown in the SEM images of Figure 10. The cotton fabric surface was smooth before the modification reaction (Figure 10a), while the surface of the modified cotton fabric was rough (Figure 10b), because the cotton fiber surface was covered by graphite sheets.
Characterizations of the Antistatic Properties of the Modified Cotton Fabrics
As shown in Figure 11, the active graphene-modified cotton fabric showed a significant improvement in antistatic performance under low-humidity conditions. When 1% of active graphene was added, the static voltage of the cotton fabric decreased from 13.0 V to 8.0 V; when the amount of active graphene was increased to 3%, the static voltage was reduced by about 50%, greatly improving the antistatic effect of the cotton fabric. The static voltage was maintained at 7.3 V after three soap washings, which may be owing to the covalent bonding of the active graphene to the cotton fabric.
Conclusions
In this study, GO was prepared by the traditional Hummers method, its carboxyl groups were activated by an acyl chloride reaction, and the product was further reacted with the para-ester to obtain JZGO. Compared with GO, the JZGO was further disordered, and its interlayer spacing was reduced, owing to the introduction of a sulfonic acid group with good hydrophilic properties. In the process of acyl chlorination, some of the oxygen-containing functional groups in the graphene oxide were also removed, which gave JZGO better thermal stability. The JZGO reacted with the hydroxyl groups on cotton fiber through covalent bonding under alkaline conditions. In this research, JZGO was coated onto the cotton fabric using the conventional padding-baking process. When the amount of JZGO reached 3% (o.w.f.), the antistatic effect of the cotton fabric was significantly improved. The active graphene-modified cotton fabric achieved good antistatic properties under low-humidity conditions, as well as good fastness to washing.
Enabling High‐Speed Computing with Electromagnetic Pulse Switching
Communication and transfer of information from one block to another within a system is fundamental for high-speed and efficient computing. Herein, a simple approach for computing is proposed that does not use conventional electrical charge/discharge-based primitive operations; instead, information is represented in electromagnetic energy steps travelling in sections of transmission lines. These steps are formed by transverse electromagnetic (TEM) square pulses whose polarity represents the values of Boolean variables. Logical operations between variables are realized at the crossing point of inter-connected transmission lines by exploiting the known laws of reflection and transmission of TEM waves, allowing power division and/or recombination. Series and parallel configurations for at-will square pulse manipulation are discussed, offering new possibilities for future electromagnetic pulse-based computing systems.
The history of computing has seen a pace and rhythm of innovation without precedent in any other technology created by humans before. This progress can be mainly attributed to the highly productive synergy between semiconductor technology and computer science innovations. For instance, metal-oxide-semiconductor field-effect transistors (MOSFETs) are at the core of this synergy, allowing the development of the well-known Boolean logic gates in a very compact way (with sizes of a few hundred or even just a few tens of square nanometers). [1] Since its conception, this technology has been fundamental for the creation of digital computing systems and devices as we know them, leading to the emergence of silicon chips, a few square centimeters in size, housing processors and memory units. Here, we instead exploit square electromagnetic (EM) pulses in inter-connected transmission lines to provide at-will processing (switching or transfer) of data from one point to another in the system, since the core of computing is about how different flows of information interact in a system. Hence, our proposed platform does not require the use of charge/discharge-based circuit elements, thus giving rise to a fundamentally new paradigm for high-speed computing. The use of square EM pulses for representing the information values involved in logical operations does not conflict with the use of electrical circuits that apply voltages or currents to the transmission lines at the source points. In fact, such elements are necessary for the creation of EM pulses. They can be designed using existing electronic circuit methods, such as drivers and buffers on transmission lines. This aspect is, however, outside the scope of this paper.
The schematic representation of our technique is shown in Figure 1, where multiple transmission lines are connected at a crossing point or junction. Transverse electromagnetic (TEM) square pulses are used as excitation signals having two main polarity levels (0/±1). Logic operations between pulses allow us to transfer/redirect information from one port to another without the need for semiconductor technologies for the switching processes. The most attractive feature of our approach is its relative simplicity, not only in terms of bringing computing down to the level of quantized EM energy in the form of pulses, but also from the perspective of fabrication technology. We propose two different yet related configurations: multi-transmission line crossings in i) series and ii) parallel (Figure 1B and C, respectively). In these scenarios, TEM square pulses can be introduced from multiple ports, processed and re-directed as needed based on the inherent properties of transparency and/or reflection [24,25] at the crossing points, depending on their polarity, the characteristics of the medium (permittivity, ε, and permeability, µ), and the topology of the transmission lines (series or parallel connections). Note that, as described below, the speed of the TEM square pulses depends on the speed of light within the medium filling the transmission lines (v = 1/√(µε)), enabling great opportunities for high-speed computing applications.
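The pulse speed v = 1/√(µε) quoted above can be checked numerically. For the air-filled waveguides used later in the text (relative permittivity and permeability of 1), v equals the vacuum speed of light, so a 0.5 ns pulse, the duration used in the simulations below, spans about 15 cm of line:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU0 = 4e-7 * math.pi      # vacuum permeability, H/m

def pulse_speed(eps_r: float = 1.0, mu_r: float = 1.0) -> float:
    """TEM pulse speed v = 1/sqrt(mu * eps) in a medium (m/s)."""
    return 1.0 / math.sqrt((mu_r * MU0) * (eps_r * EPS0))

v = pulse_speed()                # air-filled line: vacuum speed of light
print(round(v / 1e8, 3))         # → 2.998 (×1e8 m/s)
print(round(v * 0.5e-9, 3))      # spatial length of a 0.5 ns pulse → 0.15 m
```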
To begin with, let us consider N interconnected transmission lines (Figure 1B,C) with input and output vectors of TEM square pulses x = [x1, x2, …, xN] and y = [y1, y2, …, yN]T, respectively, each term representing one port (from port 1 to N). Here, we assign a +/− polarity to the square pulses by drawing an arrow from the zero amplitude toward the nonzero amplitude, parallel to the pulse. For the series connection (Figure 1B), arrows traveling clockwise/anticlockwise are mapped as pulses with +/− polarity, respectively (see Figure 1D). For the parallel connection (Figure 1C), our mapping assigns a +/− polarity to pulses whose parallel arrow is directed toward the top/bottom metallic line, respectively (Figure 1E). As an example, we show in Figure 1B,C all + square pulses within the series/parallel configuration, respectively.
Let us first focus on the series configuration shown in Figure 1B. A TEM square pulse incident from port 1 travels toward the crossing region along a transmission line with an impedance Z1. At the crossing point, it encounters a change of impedance equivalent to the sum Zi = Z2 + Z3 + … + ZN. [26] Assuming that all transmission lines are equal (Zi = (N − 1)Z1, with the same geometries and filling materials), the reflection and transmission coefficients at such a series crossing are ρ = (N − 2)/N and γtotal = 2(N − 1)/N, respectively, with the square pulses traveling toward ports 2 to N each having a coefficient γ = 2/N (considering the (N − 1) outgoing transmission lines). Similarly, for the parallel crossing (Figure 1C), the incoming square pulse observes a change of impedance, with reflection (toward port 1) and transmission (toward ports 2 to N) coefficients ρ = (2 − N)/N and γ = 2/N, respectively. Our computing model from Figure 1A is then realized using parallel plate waveguides as transmission lines with dimensions d and w connected in series or parallel (Figure 2). Here we discuss the particular case of a 4-port configuration, which we call a Catt junction. [27] Other cases, such as 3 and 8 interconnected waveguides, can also be found in the Supplementary Information for completeness. All the waveguides have the same dimensions (d and w, see Supplementary Information). The material filling the waveguides is homogeneous, isotropic, and dispersionless (air in our case, with relative permittivity and permeability εr = µr = 1). Hence, the TEM square pulses travel with the speed of light in vacuum, v = c. The series crossing is illustrated in Figure 2A-C. The structure is excited by a single TEM square pulse of 0.5 ns duration inserted from port 1 (with ε and μ the absolute values of permittivity and permeability, respectively) that propagates along the z axis.
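The crossing coefficients above can be sketched as small helpers. The energy check at the end, ρ² + (N − 1)γ² = 1, is a power-conservation property implied by (though not explicitly stated in) the text: the reflected and (N − 1) transmitted pulses together carry all the incident energy.

```python
def series_coeffs(N: int):
    """Series crossing of N identical lines: rho = (N - 2)/N, gamma = 2/N."""
    return (N - 2) / N, 2 / N

def parallel_coeffs(N: int):
    """Parallel crossing: rho = (2 - N)/N, gamma = 2/N."""
    return (2 - N) / N, 2 / N

# The 4-port Catt junction discussed in the text:
rho_s, gamma_s = series_coeffs(4)    # → (0.5, 0.5)
rho_p, gamma_p = parallel_coeffs(4)  # → (-0.5, 0.5)

# For N = 4, each outgoing pulse carries 25% of the incident energy,
# and the energy fractions sum to one (power conservation):
for rho, gamma in [(rho_s, gamma_s), (rho_p, gamma_p)]:
    assert abs(rho**2 + 3 * gamma**2 - 1.0) < 1e-12

print(rho_s, gamma_s, rho_p, gamma_p)
```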
Based on the discussion above, the reflection coefficient at such a series crossing is ρ = 1/2, with a transmission of γ = 1/2 toward each of ports 2 to 4. This means that the total energy of the incident square pulse is divided at the crossing point into four equal square pulses of the same polarity (given that ρ > 0) traveling in all directions, each carrying 25% of the energy of the incident pulse. Our numerical simulation results for the out-of-plane Hy component of the magnetic field at two different times are illustrated in Figure 2C. The incident square pulse is applied from port 1 by using a positive/zero voltage on the top/bottom metal plate of the waveguide, producing a square EM pulse with the electric field polarized along the x axis traveling toward positive z (see insets in the left panel of Figure 2C for the directions of the E and H fields). Note that the Hy field distribution for the incoming pulse (at a time t = 1.8 ns) has a negative amplitude because of its inward direction (negative y). The four generated square EM pulses, after the incident pulse passes the crossing point, are shown in the right panel of Figure 2C, where a snapshot of the Hy field distribution at a time t = 2.4 ns is presented. Note that the polarity of the pulses is the same. The direction of Hy for the reflected pulse in port 1 is changed (now with a positive amplitude), meaning that the reflected pulse, traveling toward negative z, preserves the direction of the electric field (along the x axis, Ex) and hence the positive/zero voltage distribution on the top/bottom metal plates, as for the initial incident pulse, as expected (ρ > 0). Our numerical simulation results for the in-plane Ex and Ez field distributions at different times for the series connection can be found in the Supplementary Information.
Our results for the parallel crossing using parallel plate waveguides are presented in Figure 2F. Following the same process, the reflection coefficient for the signal toward port 1 is now ρ = −1/2, with a transmission of γ = 1/2 toward ports 2 to 4, again meaning square pulses with 25% of the incident energy traveling toward each port. To demonstrate this, our numerical simulation results for the out-of-plane electric field distribution (Ey) on the xz plane are presented in Figure 2F at two snapshots in time (t = 1.8 and t = 2.4 ns, as in Figure 2C). These results are calculated at y = 0, that is, at the center of the structure shown in Figure 2D. At t = 1.8 ns the incident pulse travels from port 1 along positive z. Note that the incident pulse has a negative amplitude of Ey since it is oriented inward (negative y), with the magnetic field along the negative x axis (see insets in the same figure). At t = 2.4 ns, the incident square pulse has passed the crossing point and four square pulses are generated, each of them traveling in one of the four waveguides. However, note that while the polarity of the pulses remains the same for ports 2-4, the reflected pulse traveling toward port 1 is modified, with Ey now having a positive amplitude, confirming that ρ < 0. For completeness, our numerical simulation results for the in-plane Hx and Hz field distributions at different times for the parallel connection can be found in the Supplementary Information.
What happens if several pulses arrive at the crossing point from different sections at the same time? To answer this, we can simply apply the principle of superposition to all the TEM square pulses produced after each of the incident pulses has passed the crossing point, in a similar fashion to the well-known Huygens principle of wave propagation and the transmission line matrix (TLM) method [28,29] for the modeling and numerical calculation of electromagnetic fields. A convenient way of capturing this cause-effect behavior is a scattering matrix form, by considering that the output vector of pulses is y = Ax (Equation (1)), where A = I − γJ and A = −I + γJ for the series and parallel crossing scenarios, respectively, and I and J are the identity and all-ones matrices, respectively, with sizes N × N. The input/output signals in A are mapped as columns/rows, respectively. The complete derivation can be found in the Supplementary Information. Finally, note that matrix A in Equation (1) is involutory (A² = I). Thus, if we apply the output vector y as our new input vector x′, the original input vector x = y′ can be restored, meaning that our computational operator of pulse crossing is reciprocal.
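Equation (1) can be sketched numerically. The block below builds A for the 4-port series crossing (γ = 1/2), verifies the involutory property stated above, and evaluates one of the two-pulse cases discussed in the text (+ pulses incident from ports 1 and 3):

```python
import numpy as np

def crossing_matrix(N: int, kind: str) -> np.ndarray:
    """Scattering matrix of Equation (1): A = I - gamma*J (series) or
    A = -I + gamma*J (parallel), with gamma = 2/N and J the all-ones matrix."""
    gamma = 2.0 / N
    I, J = np.eye(N), np.ones((N, N))
    return I - gamma * J if kind == "series" else -I + gamma * J

A = crossing_matrix(4, "series")

# A is involutory (A @ A = I): applying the crossing twice restores the input.
assert np.allclose(A @ A, np.eye(4))

# Two + pulses from ports 1 and 3: x = [1, 0, 1, 0].
x = np.array([1.0, 0.0, 1.0, 0.0])
print(A @ x)  # → [ 0. -1.  0. -1.], matching y = [0, -1, 0, -1] in the text
```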
We apply these concepts to realize a logic switch for the transfer and re-direction of data, in line with refs. [9,27,30,31], by sending incident square pulses from different ports with the same or different polarities, as a fundamental operation in computing systems. As in Figure 2, we will focus again on a Catt junction (4-port), but our approach can be applied to any N-port transmission line configuration, such as those also presented in the Supplementary Information for completeness. As in Figures 1 and 2, we separately consider two types of logic switches: series and parallel. For each of them, our model considers incident square pulses from two different ports under two main situations: i) when the two pulses arrive at the crossing point from opposite ends (180° spatially), as illustrated in Figure 3A,B,E,F for the series and parallel logic switches, respectively; and ii) when the pulses arrive from orthogonal ports (90° spatially), as shown in Figure 3C,D,G,H for the two proposed logic switches, respectively. To fully address the needs of a Boolean switch, we present in Figure 3 multiple scenarios for the polarity of the square pulses. Note that our model can be extended to consider any number of excitation ports; an example of three-port excitation in an 8-waveguide crossing configuration is shown in the Supplementary Information.
Consider two incident square pulses applied from ports 1 and 3 (180° spatially) using the series crossing model as our series logic switch (Figure 3A,B). In the first scenario, Figure 3A, the two square pulses have + and − polarity, respectively, as defined in Figure 1B,D, with the blue/red pulse coming from port 1/3, respectively (the incident pulses are depicted as dotted lines in Figure 3). We show in Figure 3A how the two square pulses are divided after reaching the crossing point, each of them into four pulses with equal energy, according to the pulse division rules described earlier in Figure 2. As a result, the eight generated square pulses are recombined in the parallel plate waveguides. Following the same process for the second case, illustrated in Figure 3B, the two pulses coming from ports 1 and 3 both have a positive (+) polarity (x = [1, 0, 1, 0]) and again each of them generates four equal pulses after passing the crossing point. Opposite to the results from Figure 3A, in this scenario there is destructive interference for the pulses traveling toward ports 1 and 3 while it is constructive for the pulses toward ports 2 and 4, with y = [0, −1, 0, −1]. What would happen if the pulses are inserted from orthogonal ports? This second realization is represented in Figure 3C,D for a series crossing configuration considering square pulses with opposite and equal polarities, respectively. As observed in Figure 3C,D, the resulting interference again depends on the relative polarity of the pulses (cf. Figure 3B).
We can now move on to the parallel crossing model. Illustrated in Figure 3E,F are cases of square pulses for the first realization (ports 1 and 3 as excitation ports) considering equal and different polarities, respectively. Once the incoming pulses of the same polarity (Figure 3E) have passed the crossing point, there is destructive interference between the signals traveling toward the excitation ports (zero transmission in these ports) while it is constructive for ports 2 and 4. Using Equation (1), this can be analytically formulated as y = Ax = [0, 1, 0, 1] for x = [1, 0, 1, 0]. For pulses with opposite polarity (Figure 3F), transmission occurs only toward the excitation ports. For completeness, the performance of the parallel crossing for the second realization, excitation from orthogonal ports, is illustrated in Figure 3G,H, demonstrating how the series and parallel models for N connected transmission lines (parallel plate waveguides in our work) can act as logic switching devices.
Interestingly, note that all the cases shown in Figure 3 involve a decision-making process where we can interpret the pulse from port 1 as a data-sampling token (see Figure 1A). By exploiting such a simple logic switching mechanism with EM waves, one can enable the operation If … Then … Else to be performed on a data value, for instance a pulse coming from port 3, resulting in a Boolean outcome True or False; that is, we achieve an elementary decision-making action as a fundamental computing operation for future computing applications with EM signals.
As a final demonstration of our technique for transferring and switching of information using square TEM pulses in transmission lines, we carried out full-wave numerical simulations for the series and parallel crossings via the transient solver of the commercial software CST Studio Suite. [32] Our numerical simulation results are shown in Figure 4. Let us first evaluate the response of the logic switch using the series crossing. The results of the out-of-plane magnetic field (H y ) distribution on the xz plane at a time t = 1.8 ns (before the incident pulses have reached the crossing point) are shown in the top-left panel of Figure 4A,B when the excitation is applied from ports 1,3 and ports 1,4 using square pulses with different and equal polarities, respectively. The power distribution on the xz plane at a time after passing the crossing point (t = 2.4 ns) is shown in the bottom left panel of both Figure 4A,B. As observed, the transmitted pulses propagate only toward the incident ports (ports 1 and 3) and toward the non-incident ports (ports 2 and 3) when using 180° or orthogonal excitation ports, respectively. For completeness, the numerical simulation results of the voltage as a function of time in each port are shown in the same figures, demonstrating an agreement with the configurations discussed in Figure 3A,D. [More results for the series crossings from Figure 3B,C can be found in the Supplementary Information].
We also calculated numerically the response of the parallel crossing model and our simulation results are shown in Figure 4C and D for pulses with equal and opposite polarities, respectively. The results of the out-of-plane electric field (E y ) distribution on the xz plane at t = 1.8 ns (before the incident pulses reach the crossing point between the waveguides) and the power distribution at a time t = 2.4 ns (after passing the crossing point) are shown in the top-left and bottom-left panels of Figure 4C and D, respectively. Our results are in agreement with Figure 3: square pulses with equal polarity ( Figure 4C) are only allowed to be transmitted toward the non-exciting ports (ports 2 and 4) while square pulses with different polarity ( Figure 4D) are only transmitted toward the incident ports (ports 1 and 4). This performance can be corroborated with the voltage at each port as a function of time, also plotted in Figure 4C,D, demonstrating an excellent agreement with the configurations discussed in Figure 3E,H. (More results for the parallel crossing model from Figure 3F,G can be found in the Supplementary Information). As our approach is scalable, the dimensions of the waveguides in all the proposed configurations can be reduced to deal with shorter square pulses. See Supplementary Information for a demonstration of a ×0.01 downscaled example using a dispersive model for the metallic plates. [33][34][35] Finally, as in all technologies, there are some challenges that our proposed technique may face. For instance, our computing approach relies on the control of the phase of the input TEM square pulses excited from multiple ports. However, this can be addressed by current technology where voltage can be accurately controlled. Moreover, in this manuscript we have provided the fundamental theory around TEM square pulse-based computing, which can be further exploited if the manipulation of the phase of the source is not possible.
For instance, as is known, the required phase of the pulses can be manipulated at will by changing the length of the transmission lines and/or by using different materials filling the transmission lines. [26,36] Our logic switching platform provides a fundamental pathway for elementary decision-making computations based on TEM signal interactions at cross points of simple transmission lines (such as parallel plate waveguides). With no semiconductor technologies involved, here the decisions and/or switching of data are carried out without relying on charge/discharge-based elements, an important feature for high-speed computing applications. The combination of such a simple logic switching technique in a multiple port connection with a series-parallel configuration and its integration with other
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Valuation of Patents-Comparative Analysis
Introduction: There has been a common notion worldwide that more investment in research and development leads to more knowledge production as well as more technological advancement. Thus, over the last century, various scholars such as Karl Marx, Karl Polanyi, Max Weber, Joseph Schumpeter and Christopher Freeman tried to collect data at different levels to understand the relationship of investment with the economic advancement of society through knowledge production and technological capability enhancement. In this process of studying economic change, the need was felt to identify a common indicator that could connect the dots between research and development, investment, production and technological advancement. As a result, various scholars started considering intellectual property rights in the form of patents as an indicator for studying economic advancement, since patent data were comparatively easily available and an initial correlation between investment and patents could be established. But there were two schools of thought: one argued that if patents help the technological advancement of society, then at what rate, and does this advancement help knowledge production; the other argued that patents limit the technological advancement of society by creating artificial scarcity through market restrictions on others. Objective & Methodology: Scholars like Jacob Schmookler, Griliches & Pakes, F. M. Scherer and Edwin Mansfield, as well as Freeman and Pavitt, carried out pioneering work in the field of quantitative analysis of patents to understand its impact on knowledge production, technological advancement and finally on the economy at large.
Though they all worked with different methodologies, which are discussed in this paper, they were all concerned with how the rate of technological advancement in a society can be quantified and, most importantly, with understanding whether patents really help technological advancement or simply support monopoly. Conclusion: After a comparative analysis of their work, one thing is very clear: all the early research pointed to the fact that 'instead of considering reforms to strengthen patents, we should move in the opposite direction to strengthen technological advancement.'
INTRODUCTION
Over the last century there have been various studies to understand how economies work, and comparative analyses of various economies have been carried out by scholars like Karl Marx, Karl Polanyi, Max Weber, Joseph Schumpeter, Christopher Freeman and Heiner Flassbeck, among others.
By the end of the First World War, huge investment was flowing into research and development with the assumption that it would lead to knowledge production and technological advancement. Yet by the end of the Second World War the scientific community started realizing the need for a consolidated study to verify that investment in research and development has a positive impact on the nature of an economy, which in turn is decided by technological advancement and knowledge production. Now, to understand change in an economy because of technological advancement, there was a need to identify a common indicator that could connect the dots between research and development investment, knowledge production, technological advancement and finally the nature of the economy. "In the desert of data patent statistics loom up as a mirage of wonderful plenitude and objectivity"; thus scientists focused upon patent statistics for their further research.
Technology Advancement Equation - Almost all the econometric analyses that were carried out initially were based upon the assumptions mentioned below, which we will see further in the paper. With research and development investment r, knowledge production k, reduced cost c of products and services because of technological advancement, development cost d of new products and services, and patents granted P:
k = r + e 1 and P = k + 1/c + d + e 2 ,
where e 1 and e 2 are observational errors.
That said, patented products and services go through a test of novelty, thereby showing the resources and investment effort put into their development by the parent organization; thus patents were considered an indicator by almost all the schools of thought working in this area.
Patents are a form of immaterial "property" that grants their owners exclusive control over production and sale for a given time period, preventing others from producing and selling the patented products.
Although the term "intellectual property" is commonly used in legal fields, it is complex in economics, since it is difficult to justify intellectual property rights with the same arguments that are used to justify private property in tangible goods.
Ipso facto, from the time property rights began being granted to intellectual creations, there has been discussion and research on the scope of these rights. As a result, scientists started working on the rate of technological change, which could in return help them answer the above-mentioned questions and help society at large. Another reason was that scientists wanted to know the economic process that causes a reduction in the cost of existing products and services and leads to the development of new sets of products and services.
HISTORICAL BACKGROUND
Property Theory Difference - Even though scientists started taking patents as a common indicator for the analysis of the above-mentioned questions, there existed a basic anomaly with regard to the whole concept of property theory.
According to the economic theory of property, safeguarding private property rights only for goods which are scarce benefits society at large; thus there is no need to define property rights over goods which are present in abundance. 1 The same concept of property rights was used in structuring the whole of intellectual property rights, yet very little emphasis was given to the fact that IPR does not necessarily arise from the scarcity of objects; rather, its purpose has today become to create artificial scarcity and thus generate a monopoly for the holders of those rights.
Hence, in a way, here the law itself creates artificial scarcity and abundant value for the people holding rights over these scarce resources, leading to a market economy rather than a free market economy. 2 Joseph Schumpeter proclaimed that "carrying out innovations is the only function which is fundamental in history." 3 Growth in any economy comes from three sources: increased input of production, efficiency improvements and innovation. Of these, innovation is the biggest difference between developed and developing economies, thus making it an important area of further research. Hayek argued that "it seems to me beyond doubt that in these fields a slavish application of the concept of property as it has been developed for material things has done a great deal to foster the growth of monopoly, and that here drastic reforms may be required if competition is to be made to work." According to Joel Mokyr, "A patent system may have been a stimulus to invention, but it was clearly not a necessary factor." 4 Douglass North argued that the "failure to develop systematic property rights in innovation up until fairly modern times was a major source of the slow pace of technological change." 5 Again, it is important to stress that technological change is not the only source of productivity growth, and sometimes it is not even the major source. In North's study of productivity changes in ocean shipping, he found that the major sources of the rise in total factor productivity from 1600 to 1850 were not technological developments, but the decline of piracy, changes in the number of voyages and an increased load factor on return trips.
Thus, at the end of the day, the following questions remained unanswered in the absence of quantitative and qualitative research:
1. How to calculate the rate of technological advancement?
2. Analyzing the process that causes a reduction in the cost of existing products and services.
3. Analyzing the process that causes the development of new sets of products and services.
4. Do patents really help technological advancement and in return support a free market economy, or do they support monopoly?
5. What is the cost-benefit analysis of patents over technological advancement?
PATENT VALUATION
The following group of scientists were the pioneers in the field of quantitative analysis of patents to understand its impact on knowledge production, technological advancement and finally on the economy at large. Before discussing their work in detail, I would like to share the issues that must have come up before these scientists when deciding which methodology to choose while using patents as an indicator to gauge the economy.
First and major issue must have been data collection.
To date, when we talk about patent statistics, there is a lot of complexity with regard to their arrangement. The point is, for any given product or process type, which method should be chosen: whether one should go for statistics based upon technology type, sector-wise, geography-wise, industry-wise, or on the basis of research and development investment.
Methods of Evaluation - To resolve this problem, basically four methods have been tried for econometric analysis to date. A point worth mentioning here is that econometric analysis techniques and tools were not well developed at the time; patent statistics as well as R&D investment statistics were scattered and needed an enormous amount of effort for compilation itself. Furthermore, in almost all the econometric analyses the number of samples was decided solely upon availability, and the method of sampling used was mostly convenience sampling. Almost all the patent statistics initially worked upon were taken from countries like America, Japan and the UK.
Hence these methods were: 1. By analyzing variation in R&D expenditure and comparing it with the number of patents applied for and granted through time-series analysis.
2. By analyzing the number of patents granted company-wise and comparing it with the number of new technologies/products launched by the company.
3. By segregating patents sector-wise, say agriculture, and then comparing it with expenditure on that particular sector.
4. By segregating patents industry-wise, say manufacturing, and then comparing it with expenditure on that particular industry.
Major Issues in quantification:
There were basically two major issues associated with quantification of patent statistics by above mentioned ways.
Intrinsic Variability
Talking about classification: even when scientists used a formulated structure and worked on a limited data type for patents, limiting their research to a particular sector or industry, they had to face issues of patent classification and sub-classification. So even if one selected a particular industry, sector or company, one had to decide how to arrange the different sub-classes of those statistics.
Another major issue was to decide upon the intrinsic variability of different patents, meaning how one can decide which patent is more valuable than another. We will now see how these scientists carried forward their work. Taking a 1-2% growth rate per year across industries in total factor productivity, almost half of it was found to be due to growth in the quality of the labour force, capital allocation, economies of scale, etc., and hence they concluded that at most a quarter of total productivity can be attributed to patented inventions. 6 Edwin Mansfield conducted two studies to gain better insight into the relationship between patents and innovations.
In his first study he took 31 patented innovations in 4 industries: chemicals, pharmaceuticals, electronics and machinery. The major purpose of the study was to answer what proportion of innovations would be delayed, or not introduced at all, if they could not be patented.
In the drug industry, firms said half of the patented innovations would not have been introduced without patent protection. Excluding drug innovations, the lack of patent protection would have affected less than a quarter of the patented innovations in the samples taken.
In his second study, according to data obtained from random samples of 100 firms from 12 manufacturing industries, patent protection was judged to be essential for the development or introduction of one third or more of inventions during 1981-83 in only 2 industries: pharmaceuticals and chemicals. On the other hand, in 7 industries (electrical equipment, office equipment, motor vehicles, instruments, primary metals, rubber, and textiles), patent protection was estimated to be essential for the development and introduction of less than 10% of their inventions. Indeed, in these industries patent protection was not essential for the development or introduction of any of their inventions during that period.
Frederic Michael Scherer is an economist at the JFK School of Government at Harvard University. He studied pharmaceutical patents along with William Comanor and tried to correlate the statistics of all new products introduced by different firms in subsequent years, finding a close relationship between patent applications (not grants) and new products.
Taking his own research further, he studied the incentive effects of compulsory licensing decrees. After reading the literature, he fanned out to interview 22 American corporations, most of which were under compulsory licensing decrees. He received mail questionnaires from 69 companies holding 45,500 patents, and conducted statistical analysis of the patenting trends in those data.
On close analysis he discovered that, with rare exceptions, whether or not well-established corporations could expect patent protection was typically unimportant in their decisions to invest in research and the development of new products and processes.
In the underlying model, K is the net acceleration of economically valuable knowledge used as a measure of inventive output, and the Z's are various measures of growth, productivity and profitability. Hence, for any given research work whose success is linked with the expectation of economic benefits for the inventor, a patent will be applied for only when this expectation exceeds a particular threshold level; otherwise it will not. Hence the number of patents applied for depends upon the number of successful projects whose economic value exceeds the threshold limit.
In the time-series dimension, they found that the number of patents received per R&D dollar spent kept decreasing. This clearly showed that though small firms were the biggest beneficiaries when it came to receiving large numbers of patents, in the case of larger firms the main driver of innovation was something different, which kept them alive with technological advancement along with economies of scale.
He further concluded that for those 69 companies, prior compulsory licensing decrees had little or no unfavourable impact on research and development decisions, although they had led to less patenting of the inventions actually made and hence greater reliance on secrecy, especially on (concealable) process inventions as distinguished from readily observed product inventions.
CONCLUSION
After close analysis of all the above-mentioned econometric analyses, a few points are very clear.
• Creation of a time-series equation for finding the relationship between patents and innovation is an extremely complex process and has been proven to give ambiguous results.
• A cost-benefit analysis of investment in the patent regime and its impact on investment in the research and development industry needs to be done.
• 'Patents are not always the saviour of innovations.' Even in industries like pharmaceuticals, patent protection was estimated to be essential for the development and introduction of less than 10% of inventions.
• Though patents can be considered an 'input indicator', and that too only for the limited capital goods industry, they cannot be considered an indicator of output.
• In most cases the correlation between total productivity and total patents granted is minimal.
• Taking a 1-2% growth rate per year across industries in total factor productivity, almost half of it was found to be due to growth in the quality of the labour force, capital allocation, economies of scale, etc.
• At maximum, only a quarter of total productivity can be attributed to patented inventions.
• Prior compulsory licensing decrees had little or no unfavourable impact on research and development decisions, although they had led to less patenting of the inventions actually made and hence greater reliance on secrecy, especially on (concealable) process inventions as distinguished from readily observed product inventions.
That said, one thing is very clear: all the early research points to the fact that 'instead of considering reforms to strengthen patents, we should move in the opposite direction to strengthen technological advancement.'
ACKNOWLEDGEMENT
I would like to thank Jacob Schmookler, Griliches & Pakes, F. M. Scherer and Edwin Mansfield, as well as Freeman & Pavitt, whose published works were pioneering in patent evaluation studies and gave the world better insight into the issues surrounding the concept of patent evaluation and how patents impact the economy. Lastly, I would like to thank my faculty members at the Centre for Studies in Science Policy, JNU, for continued support and encouragement.
CONFLICT OF INTEREST
This statement is to certify that the author has seen and approved the manuscript being submitted. He warrants that the present manuscript, in which a detailed comparative analysis of patent evaluation methodologies used by different scientists in the past is presented, is the author's original work, and that proper sources of earlier published works have been cited in the manuscript wherever used as references. He warrants that the article has not received prior publication and is not under consideration for publication elsewhere.
Research and Development Investment = r
Knowledge Production = k
Reduced cost of products and services because of technological advancement = c
Development cost of new products and services = d
Patents granted = P
Thus k = r + e 1 (e 1 = observational error) and P = k + 1/c + d + e 2 (e 2 = observational error).
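The accounting identity above can be sketched as a tiny computation. This is purely illustrative: the function names and the numerical values of r, c, d and the error terms are ours, and the additive form (including the 1/c term) is reproduced literally from the paper's equation.

```python
# Sketch of the paper's technology-advancement equations, with hypothetical
# inputs; e1 and e2 are the observational error terms, treated as given here.

def knowledge_production(r, e1=0.0):
    """k = r + e1: knowledge produced from R&D investment r."""
    return r + e1

def patents_granted(k, c, d, e2=0.0):
    """P = k + 1/c + d + e2, with c the reduced cost of existing products
    and d the development cost of new products (as defined in the text)."""
    return k + 1.0 / c + d + e2

k = knowledge_production(r=10.0, e1=0.5)        # k = 10.5
P = patents_granted(k, c=2.0, d=3.0, e2=-0.2)   # P = 10.5 + 0.5 + 3.0 - 0.2 = 13.8
```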
Jacob Schmookler was the first scientist to work upon econometric analysis of technological advancement at the industry level over the time span 1800-1950. He raised two questions: first, 'what are the determinants of variation in the rate of technological progress over time and between industries?' and second, 'how does technological change fit into the process of economic growth?' In other words, whether patents really help the technological advancement of society, if so at what rate, and whether this advancement helps knowledge production. On the other hand, there is a school of thought that says that intellectual property rights like patents actually curb the technological advancement of society, create artificial scarcity and support a monopolistic market.
THE DIRICHLET-TO-NEUMANN MAP FOR SCHRÖDINGER OPERATORS WITH COMPLEX POTENTIALS
Let Ω ⊂ R d be a bounded open set with Lipschitz boundary and let q : Ω → C be a bounded complex potential. We study the Dirichlet-to-Neumann graph associated with the operator −∆ + q and we give an example in which it is not m-sectorial.
There are various extensions of the Dirichlet-to-Neumann operator. The first one is where the operator −∆ in (1) is replaced by a formally symmetric pure second-order strongly elliptic differential operator in divergence form. Then one again obtains a self-adjoint version of the Dirichlet-to-Neumann operator, which enjoys a description with a form by making the obvious changes in (2). Similarly, if one replaces the operator −∆ in (1) by a pure second-order strongly elliptic differential operator in divergence form (which is possibly not symmetric), then the associated Dirichlet-to-Neumann operator is an m-sectorial operator.
There occurs a significant difference if one replaces the operator −∆ in (1) by a formally symmetric second-order strongly elliptic differential operator in divergence form, this time with lower-order terms. Then it might happen that D is no longer a self-adjoint operator, because it could be multivalued. Nevertheless, it turns out that D is a self-adjoint graph, which is lower bounded (see [6], Theorems 4.5 and 4.15, or [8], Theorem 5.7).
The aim of this note is to consider the case where the operator −∆ in (1) is replaced by −∆ + q, where q : Ω → C is a bounded measurable complex-valued function; in a similar way a general second-order strongly elliptic operator in divergence form with lower-order terms could be considered. In Section 2 the form method from [3,4,5,6] will be adapted and applied to the present situation in an abstract form, and in Section 3 the Dirichlet-to-Neumann graph D associated with −∆ + q will be studied. Although one may expect that D is an m-sectorial graph, it turns out in Example 3.7 that this is not the case in general.
2. Forms. In this section we review and extend the form methods and the theory of self-adjoint graphs.
Let V and H be Hilbert spaces. Let a : V × V → C be a continuous sesquilinear form. Continuous means that there exists an M > 0 such that |a(u, v)| ≤ M ‖u‖_V ‖v‖_V for all u, v ∈ V . Let j ∈ L(V, H) be an operator. Define the graph D in H × H by D = {(ϕ, ψ) ∈ H × H : there exists a u ∈ V such that j(u) = ϕ and a(u, v) = (ψ, j(v))_H for all v ∈ V }.
We call D the graph associated with (a, j).
In general, if A is a graph in H, then the domain of A is dom A = {x ∈ H : (x, y) ∈ A for some y ∈ H} and the multivalued part is mul A = {y ∈ H : (0, y) ∈ A}.
We say that A is single valued, or an operator, if mul A = {0}.In that case one can identify A with a map from dom A into H.
Clearly mul D ≠ {0} if j(V ) is not dense in H. If (ϕ, ψ) ∈ D, then there might be more than one u ∈ V such that j(u) = ϕ and a(u, v) = (ψ, j(v))_H for all v ∈ V . For that reason we introduce the space W_j(a) = {u ∈ V : j(u) = 0 and a(u, v) = 0 for all v ∈ V }. We say that the form a is j-elliptic if there exist µ, ω > 0 such that Re a(u) + ω ‖j(u)‖²_H ≥ µ ‖u‖²_V for all u ∈ V . (3) Graphs associated with j-elliptic forms behave well.
Theorem 2.1. Suppose that a is j-elliptic and j(V ) is dense in H. Then D is an m-sectorial operator. Also W_j(a) = {0}.
If Ω ⊂ R d is a bounded open set with Lipschitz boundary, V = H 1 (Ω), H = L 2 (Γ), j = Tr and a is as in (2), then D is the Dirichlet-to-Neumann operator as in the introduction; cf. Section 3 for more details.
In general the form a is not j-elliptic. An example occurs if one replaces a in (2) by a(u, v) = ∫_Ω ∇u · ∇v̄ − λ ∫_Ω u v̄, where λ is an eigenvalue of −∆_D and ∆_D is the Laplacian on Ω with Dirichlet boundary conditions. Then (3) fails for every µ, ω > 0 if u is a corresponding eigenfunction and j = Tr . In addition, the graph associated with (a, j) is no longer single valued. We emphasize that we are interested in the graph associated with (a, j). To get around the problem that the form a is not j-elliptic, it is convenient to introduce a different Hilbert space and a different map j̃.
Throughout the remainder of this paper we adopt the following hypothesis.
Hypothesis 2.2. Let V , H and H̃ be Hilbert spaces and let a : V × V → C be a continuous sesquilinear form. Let j ∈ L(V, H) and let D be the graph associated with (a, j). Furthermore, let j̃ ∈ L(V, H̃) be a compact map and assume that the form a is j̃-elliptic, that is, there are µ̃, ω̃ > 0 such that Re a(u) + ω̃ ‖j̃(u)‖²_H̃ ≥ µ̃ ‖u‖²_V for all u ∈ V . (4)
As an example, if Ω ⊂ R d is a bounded open set with Lipschitz boundary as before, then one can choose V = H 1 (Ω), H = L 2 (Γ), H̃ = L 2 (Ω), j = Tr and j̃ the inclusion map from H 1 (Ω) into L 2 (Ω). For a one can choose a continuous sesquilinear form on H 1 (Ω) as in (2). We consider this example in more detail in Section 3.
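To make the Dirichlet-to-Neumann map concrete, the sketch below computes it numerically for −u'' + q u = 0 on the interval (0, 1), where it is simply a 2 × 2 matrix sending the boundary values (u(0), u(1)) to the outward normal derivatives (−u'(0), u'(1)). This illustration is ours, not the paper's: the finite-difference discretization, the grid size and the boundary stencils are assumptions made purely for the example.

```python
# Hypothetical finite-difference sketch of the 1D Dirichlet-to-Neumann map
# for -u'' + q(x) u = 0 on (0, 1); all discretization choices are ours.
import math

def dtn_matrix(q, n=400):
    """2x2 Dirichlet-to-Neumann matrix for -u'' + q(x) u = 0 on (0, 1)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    cols = []
    for phi0, phi1 in ((1.0, 0.0), (0.0, 1.0)):
        # Tridiagonal system: (-u_{i-1} + 2 u_i - u_{i+1})/h^2 + q(x_i) u_i = 0.
        a = [-1.0 / h**2] * n                   # sub-diagonal
        b = [2.0 / h**2 + q(xi) for xi in x]    # diagonal
        c = [-1.0 / h**2] * n                   # super-diagonal
        d = [0.0] * n
        d[0] += phi0 / h**2                     # Dirichlet data enters the RHS
        d[-1] += phi1 / h**2
        # Thomas algorithm: forward elimination, then back substitution.
        for i in range(1, n):
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        u = [0.0] * n
        u[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
        # Second-order one-sided differences for the outward normal derivatives.
        du0 = -(-3.0 * phi0 + 4.0 * u[0] - u[1]) / (2.0 * h)   # -u'(0)
        du1 = (3.0 * phi1 - 4.0 * u[-1] + u[-2]) / (2.0 * h)   #  u'(1)
        cols.append((du0, du1))
    # Columns correspond to boundary data (1, 0) and (0, 1).
    return [[cols[0][0], cols[1][0]],
            [cols[0][1], cols[1][1]]]

D0 = dtn_matrix(lambda x: 0.0)  # q = 0: DtN of the Laplacian, [[1, -1], [-1, 1]]
D1 = dtn_matrix(lambda x: 1.0)  # q = 1: diagonal coth(1), off-diagonal -1/sinh(1)
```

For q = 0 the solution is affine and the scheme reproduces the exact matrix [[1, −1], [−1, 1]]; for q = 1 the entries converge to the closed-form values coth(1) and −1/sinh(1) obtained from the cosh/sinh solutions.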
In general, if A is a graph in H, then A is called symmetric if (x, y)_H ∈ R for all (x, y) ∈ A. The graph A is called surjective if for all y ∈ H there exists an x ∈ H such that (x, y) ∈ A. The graph A is called self-adjoint if A is symmetric and for all s ∈ R \ {0} the graph A + i s I is surjective, where for all λ ∈ C we define the graph A + λ I = {(x, y + λ x) : (x, y) ∈ A}. A symmetric graph A is called bounded below if there exists an ω > 0 such that (x, y)_H + ω ‖x‖²_H ≥ 0 for all (x, y) ∈ A. Under the above main assumptions a corresponding theorem can be stated for symmetric forms. We next wish to study the case when a is not symmetric. Proposition 2.4. Adopt Hypothesis 2.2. Then the graph D is closed.
for all v ∈ V , where the orthogonal complement is in V . We first show that (j̃(u n )) n∈N is bounded in H̃. Suppose not. Set τ n = ‖j̃(u n )‖_H̃ for all n ∈ N. Passing to a subsequence if necessary, we may assume that τ n > 0 for all n ∈ N and lim n→∞ τ n = ∞. Define w n = (1/τ n ) u n for all n ∈ N. Let µ̃, ω̃ > 0 be as in (4). Then Re a(w n ) + ω̃ ‖j̃(w n )‖²_H̃ ≥ µ̃ ‖w n ‖²_V for all n ∈ N. Since (‖ψ n ‖_H ) n∈N is bounded and ‖ψ n ‖_H ‖j‖ / τ n < 1 for all large n ∈ N, it follows that (Re a(w n )) n∈N is bounded. Together with (4) it then follows that (w n ) n∈N is bounded in V . Passing to a subsequence if necessary, there exists a w ∈ W_j(a)^⊥ such that lim n→∞ w n = w weakly in V . Then j̃(w) = lim n→∞ j̃(w n ) in H̃ since j̃ is compact. So ‖j̃(w)‖_H̃ = 1 and in particular w ≠ 0. Alternatively, for all v ∈ V it follows from (6) that a(w, v) = 0. Moreover, j(w) = lim n→∞ (1/τ n ) j(u n ) = lim n→∞ (1/τ n ) ϕ n = 0, where the limits are in the weak topology on H. So w ∈ W_j(a). Therefore w ∈ W_j(a) ∩ W_j(a)^⊥ = {0} and w = 0. This is a contradiction. So (j̃(u n )) n∈N is bounded in H̃.
Let n ∈ ℕ. Then taking v = u_n in (5) and using (4) in the last step, one deduces that (Re a(u_n))_{n∈ℕ} is bounded. Using (4) again, one establishes that (u_n)_{n∈ℕ} is bounded in V. Passing to a subsequence if necessary, there exists a u ∈ V such that lim u_n = u weakly in V. Then j(u) = lim j(u_n) = lim ϕ_n = ϕ weakly in H. Finally, let v ∈ V. Then (5) gives a(u, v) = (ψ, j(v))_H. So (ϕ, ψ) ∈ D and D is closed.
Proposition 2.5. Adopt Hypothesis 2.2. Suppose j is compact. Then the map Z : D → V, (ϕ, ψ) ↦ u, is continuous, where u ∈ W_j(a)^⊥ is the unique element such that j(u) = ϕ and a(u, v) = (ψ, j(v))_H for all v ∈ V.

Proof. We first show that the graph of Z is closed. Let ((ϕ_n, ψ_n))_{n∈ℕ} be a sequence in D, let (ϕ, ψ) ∈ H × H and u ∈ V. Suppose that lim ϕ_n = ϕ and lim ψ_n = ψ in H and lim u_n = u in V, where u_n = Z(ϕ_n, ψ_n) for all n ∈ ℕ. Then j(u) = ϕ and a(u, v) = (ψ, j(v))_H for all v ∈ V. Hence Z(ϕ, ψ) = u and Z has closed graph.
The closed graph theorem, together with Proposition 2.4, implies that Z is continuous. Since j is compact, the composition j ∘ Z : D → H is compact.

We say that A has compact resolvent if (A − λ I)^{−1} is a compact operator for all λ ∈ ρ(A).
For the sequel it is convenient to introduce the space V j (a) = {u ∈ V : a(u, v) = 0 for all v ∈ ker j}.
Theorem 2.7. Adopt Hypothesis 2.2. If V_j(a) ∩ ker j = {0} and ran j is dense in H, then D is an m-sectorial operator.
Note that the operator A_D in the next lemma is the Dirichlet Laplacian if a is as in (2) and j̃ is the inclusion map from H¹(Ω) into L²(Ω).
Lemma 2.8. Adopt Hypothesis 2.2. Suppose that j̃(ker j) is dense in H̃ and j̃ is injective. Then the graph A_D associated with (a|_{ker j × ker j}, j̃|_{ker j}) is an operator and one has the following.
(a)
ker A_D = j̃(V_j(a) ∩ ker j).

(c) If ker A_D = {0} and ran j is dense in H, then mul D = {0}.
Proof. The graph A_D in H̃ × H̃ associated with (a|_{ker j × ker j}, j̃|_{ker j}) is given as follows. Suppose that k ∈ mul A_D. Let u ∈ ker j be such that j̃(u) = 0 and a(u, v) = (k, j̃(v))_H̃ for all v ∈ ker j. The assumption that j̃ is injective yields u = 0 and hence 0 = a(u, v) = (k, j̃(v))_H̃ for all v ∈ ker j. Since j̃(ker j) is dense in H̃, it follows that k = 0. Therefore mul A_D = {0} and A_D is an operator. '(a)'. '⊃'. Let u ∈ V_j(a) ∩ ker j. Then u ∈ ker j. Moreover, a(u, v) = 0 for all v ∈ ker j. So j̃(u) ∈ dom A_D and A_D j̃(u) = 0. Therefore j̃(u) ∈ ker A_D.
The converse inclusion can be proved similarly. '(b)'. Since A_D has compact resolvent, this statement follows from part (a) and the injectivity of j̃.
In Corollary 3.4 we give a class of forms such that the converse of Lemma 2.8(c) is valid.
We conclude this section with some facts on graphs. In general, let A be a graph in H. In the following definitions we use the conventions as in the book [22] of Kato. The numerical range of A is the set W(A) = {(x, y)_H : (x, y) ∈ A and ‖x‖_H = 1}. The graph A is called sectorial if there exist γ ∈ ℝ and θ ∈ [0, π/2) such that (x, y)_H ∈ Σ_θ for all (x, y) ∈ A − γ I, and m-sectorial if, in addition, A − (γ − 1)I is invertible. The graph A is called quasi-accretive if there exists a γ ∈ ℝ such that Re(x, y)_H ≥ 0 for all (x, y) ∈ A − γ I. The graph A is called quasi m-accretive if there exists a γ ∈ ℝ such that Re(x, y)_H ≥ 0 for all (x, y) ∈ A − γ I and A − (γ − 1)I is invertible. Clearly every m-sectorial graph is sectorial and quasi m-accretive. Moreover, every sectorial graph is quasi-accretive.

Lemma 2.9. Let A be a graph.
(a)
If dom A and mul A are not orthogonal, then the numerical range of A is the full complex plane.
(b)
If A is a quasi-accretive graph, then dom A ⊥ mul A.

3. Complex potentials. In this section we consider the Dirichlet-to-Neumann map with respect to the operator −∆ + q, where q is a bounded complex-valued potential on a Lipschitz domain.
Throughout this section fix a bounded open set Ω ⊂ ℝ^d with Lipschitz boundary Γ. Let q : Ω → ℂ be a bounded measurable function. Choose V = H¹(Ω), H = L²(Γ), j = Tr : H¹(Ω) → L²(Γ), H̃ = L²(Ω) and j̃ the inclusion of V into H̃. Then j and j̃ are compact. Moreover, ran j is dense in H by the Stone-Weierstraß theorem. Define a : H¹(Ω) × H¹(Ω) → ℂ by a(u, v) = ∫_Ω ∇u · ∇v̄ + ∫_Ω q u v̄. Then a is a sesquilinear form and it is j̃-elliptic. Let D be the graph associated with (a, j). Note that all assumptions in Hypothesis 2.2 are satisfied. In order to describe D, we need the notion of a weak normal derivative.
Let u ∈ H¹(Ω) and suppose that there exists an f ∈ L²(Ω) such that ∆u = f as a distribution. Let ψ ∈ L²(Γ). Then we say that u has weak normal derivative ψ if

∫_Ω f v̄ + ∫_Ω ∇u · ∇v̄ = ∫_Γ ψ Tr v̄

for all v ∈ H¹(Ω). Since ran j is dense in H it follows that ψ is unique and we write ∂_ν u = ψ.
The alluded description of the graph D is as follows.
Proof. The easy proof is left to the reader.
Let A_D = −∆_D + q, where ∆_D is the Laplacian on Ω with Dirichlet boundary conditions. Then A_D is as in Lemma 2.8. Moreover, (A_D)* = −∆_D + q̄.

Proposition 3.2. Let u ∈ ker A_D. Then u has a weak normal derivative, that is, ∂_ν u ∈ L²(Γ) is defined. Similarly, if u ∈ ker(A_D)*, then u has a weak normal derivative.
The claim for (A_D)* follows by replacing q by q̄.
Note that the right hand side is indeed defined and it is a subspace of L 2 (Γ) by Proposition 3.2.
1. Introduction. The classical Dirichlet-to-Neumann operator D is a positive self-adjoint operator acting on functions defined on the boundary Γ = ∂Ω of a bounded open set Ω ⊂ ℝ^d with Lipschitz boundary. The operator D is defined as follows. Let ϕ, ψ ∈ L²(Γ). Then ϕ ∈ dom D and Dϕ = ψ if and only if there exists a u ∈ H¹(Ω) such that Tr u = ϕ, −∆u = 0 weakly on Ω, and ∂_ν u = ψ.
(c) If A is a quasi m-accretive graph, then mul A = (dom A)^⊥.

Proof. '(a)'. There are x ∈ dom A and y' ∈ mul A such that (x, y')_H ≠ 0. Without loss of generality we may assume that ‖x‖_H = 1. There exists a y ∈ H such that (x, y) ∈ A. Then (x, y + τ y') ∈ A for all τ ∈ ℂ. So (x, y + τ y')_H ∈ W(A) for all τ ∈ ℂ. '(b)'. This follows from Statement (a). '(c)'. By Statement (b) it remains to show that (dom A)^⊥ ⊂ mul A. By assumption there exists a γ ∈ ℝ such that Re(x, y)_H ≥ 0 for all (x, y) ∈ A − γ I and A − (γ − 1)I is invertible. Without loss of generality we may assume that γ = 0. Let y ∈ (dom A)^⊥. Define x = (A + I)^{−1} y. Then x ∈ dom A and (x, y − x) ∈ A. So −‖x‖²_H = Re(x, y − x)_H ≥ 0 and x = 0. Then (0, y) ∈ A and y ∈ mul A as required.
"Mathematics"
] |
Rapid and Easy Detection of Microcystin-LR Using a Bioactivated Multi-Walled Carbon Nanotube-Based Field-Effect Transistor Sensor
In this study, we developed a multi-walled carbon nanotube (MWCNT)-based field-effect transistor (MWCNT-FET) sensor with high sensitivity and selectivity for microcystin-LR (MC-LR). Carboxylated MWCNTs were activated with an MC-LR-targeting aptamer (MCTA). Subsequently the bioactivated MWCNTs were immobilized between interdigitated drain (D) and source (S) electrodes through self-assembly. The top-gated MWCNT-FET sensor was configured by dropping the sample solution onto the D and S electrodes and immersing a Ag/AgCl electrode in the sample solution as a gate (G) electrode. We believe that the FET sensor’s conduction path arises from the interplay between the MCTAs, with the applied gate potential modulating this path. Using standard instruments and a personal computer, the sensor’s response was detected in real-time within a 10 min time frame. This label-free FET sensor demonstrated an impressive detection capability for MC-LR in the concentration range of 0.1–0.5 ng/mL, exhibiting a lower detection limit of 0.11 ng/mL. Additionally, the MWCNT-FET sensor displayed consistent reproducibility, a robust selectivity for MC-LR over its congeners, and minimal matrix interferences. Given these attributes, this easily mass-producible FET sensor is a promising tool for rapid, straightforward, and sensitive MC-LR detection in freshwater environments.
Introduction
Warm weather, eutrophication, and excessive nutrient richness can trigger cyanobacterial outbreaks in freshwater systems. Cyanobacteria produce various microcystins (MCs) [1]. One particularly toxic variant is Microcystin-LR (MC-LR), characterized by leucine (L) and arginine (R) located at the second and fourth positions of its five non-proteinogenic amino acids. Its lethal dose (LD50) is quantified at 43 µg/kg [2]. Consequently, the World Health Organization (WHO) advises maintaining MC-LR levels in drinking water under 1 µg/L [3].
While high-performance liquid chromatography paired with tandem mass spectrometry (LC/MS/MS) excels as the premier analytical method for MC-LR quantification, boasting a detection limit of ~0.01 ng/mL [4], its inapplicability for field tests at contaminated sites means that water samples often must be transported to laboratories. This limitation underscores the importance of developing field-deployable MC-LR detection methods, particularly for remote areas.
Aptamers, which are synthetic single-stranded RNA or DNA molecules, have drawn significant scientific interest because of their specific interactions with target molecules, analogous to antibodies [19]. DNA aptamers, owing to their ease of synthesis, chemical modifiability, robust stability, and reversible denaturation, are currently prime candidates for biomolecular recognition using biosensing techniques [20]. Notably, an MC-LR-targeting aptamer (MCTA; DNA oligonucleotide, 5-NH2-C6-AN6) was identified from random DNA/RNA sequence pools using the SELEX selection process [9].
The integration of one-dimensional (1-D) nanomaterials into field-effect transistors (FETs) can amplify the detection sensitivity and speed, considering their large surface area and superior physicochemical properties [21-24]. Single-walled carbon nanotubes (SWCNTs) have emerged as the most promising 1-D material for FET biosensor applications because of their large surface area, high aspect ratio, high conductivity, and good physical stability [22]. The adsorption of biological molecules onto SWCNTs results in a significant electric-field perturbation during electron transport in the CNT owing to the electrostatic gating effect, gate coupling, and changes in carrier mobility [23,24]. Notably, the single-molecule detection of proteins and DNA has been achieved using CNT-based FET biosensors [19,25]. Furthermore, one research group [8] demonstrated modified SWCNTs, grown between the drain (D) and source (S) electrodes and subsequently bioactivated with MC-LR antibodies, for use as FET-sensing elements for MC-LR detection.
Multi-walled carbon nanotubes (MWCNTs) traditionally offer lower efficacy as FET elements because the electrical properties of the inner-layer tubes of MWCNTs are not readily modulated by the applied gate field. This intrinsic inefficacy can be compensated for using extrinsic functionalization methods. MWCNTs modified with metallic nanoparticles (such as Pt and Au) were developed by our group as possible active elements for sugar-detecting FET sensors [26,27]. The electrical conduction path in the nanoparticle-attached MWCNTs is assumed to involve contact between the metallic surfaces, which enables the adsorption of organic chemicals.
Based on this finding, we postulate that if the electrical conduction path of p-type conducting MCTA-activated MWCNTs involves overlapping connections between immobilized MCTAs, such bioactivated MWCNTs can potentially be employed as excellent active elements in FET devices. The development of a biochemically activated MWCNT-based paper-type immunosensor for assaying prostate-specific antigen (PSA) [28] and MC-LR [29] was explored in our previous studies; the sensor exhibited acceptably low detection limits (1.18 ng/mL for PSA and 0.19 ng/mL for MC-LR). Although this method is straightforward for detecting the target compounds, it requires a long detection time (90 min) and yields a detection limit that is high compared with that of LC/MS/MS-based MC-LR detection (0.01 ng/mL).
In this study, we demonstrate further advancement of the MC-LR assay using a reliable and rapid-response top-gated FET sensor. The biosensor was assembled using bioactivated MCTA-MWCNTs between two Au-based D and S electrodes on a SiO2/Si wafer. Scheme 1 shows schematic diagrams of (a) the bioactivation of MWCNTs with MCTA through an amide formation reaction between H2N- of the MCTA and HOOC- of the MWCNTs, and (b) the selective interactions between MC-LR and MCTAs. Bovine serum albumin (BSA) was coated on the MWCNT surfaces (Scheme 1a) to prevent the random adsorption of undesired chemicals and avoid electrical conduction between the MWCNT filaments owing to its insulating characteristics. Therefore, the entanglement between the p-type MCTA-MWCNTs is the putative electrical connection route. The primary detection mechanism is the untangling of intercrossed MCTAs upon the selective capture of MC-LR, as shown in Scheme 1b. The targeted MC-LR could be captured only at the activated site (-MCTA) via a lock-and-key mechanism of the bio-selective reaction, which led to an increase in the potential barrier in the electrical conduction path between the MWCNTs and a consequent increase in electrical resistivity (ρ). The change in ρ is the essential mechanism for detecting MC-LR using these bioactivated MWCNTs [29]. The change in the potential barrier between MCTAs will be more sensitive in the top-gated FET configuration than in the compressed bulk of the MCTA-MWCNTs, which is the key design principle of this study. In the FET configuration, the response signals can be readily enhanced by applying a higher gate voltage [30]. This method can be applied for the rapid (<10 min) detection of MC-LR at any outbreak location.
Functionalization of Multi-Walled Carbon Nanotubes
The bioactivation of the MWCNTs was conducted following a previously reported method [29]. The MWCNTs were carboxylated by c-HNO3 treatment, as shown in Scheme 1, in which MWCNTs (300 mg) were reacted with HNO3 (150 mL, 3 M) for seven days at 130 °C and subsequently washed via more than ten cycles of centrifugation to achieve a pH of approximately 7. MCTA was immobilized on the MWCNTs by the formation of an amide group via the reaction of a mixture containing carboxylated MWCNTs (5.2 mg), N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDC; 4 µL), and MC-LR-targeting aptamers (MCTAs; 2 mL, 66 nM) in 16 mL of 0.1 M MES for 24 h. The MWCNTs were coated with BSA to prevent undesired interactions between the surfaces of the MWCNTs and analytes or interfering agents. The samples prepared at this stage were denoted as MCTA-MWCNTs to indicate the attachment of MCTAs to the MWCNTs. After each step of the preparation procedure, the modified MWCNTs were analyzed using Fourier transform infrared spectrometry (FT-IR; JASCO FT/IR-4100, Easton, MD, USA). The chemical environment of the MCTA-attached MWCNTs was confirmed by X-ray photoelectron spectroscopy (XPS; Multilab2000, Thermo Scientific, Waltham, MA, USA), as shown in Figure 1. The XPS profiles of the samples were deconvoluted to confirm the bioactivation, which was analyzed primarily based on the FT-IR spectra in our previous study [29].
Fabrication of MWCNT-Based Top-Gate FET Device
The MCTA-MWCNTs were stored in water (0.5 mg/mL) and diluted to 0.05 mg/mL prior to the FET assembly. Interdigitated Au electrodes (DH gate-Hxq315, Guangdong, China; 150 nm thick Au layer on a 10 nm thick Cr layer on SiO2/Si) with a 20 µm wide gap were used as the D and S electrodes. The MWCNTs were examined using scanning electron microscopy (SEM, Hitachi S2400, Tokyo, Japan). A Ag/AgCl electrode was used as the top-gate (G) electrode. The FET was assembled using a probe station by controlling position at the 1 µm scale with the XYZ stage. The interdigitated Au electrodes were prepared as follows. The Au electrode was cleaned with a Au cleaning solution (Sigma-Aldrich, St. Louis, MO, USA) for 5 s, washed with distilled water, and dried with pure N2 (99.99%, Hanagas, Gimhae, Republic of Korea). One drop (1 µL) of the MCTA-MWCNT suspension (0.05 mg/mL) was placed on the cleaned Au electrodes and subsequently dried in a convection oven at 50 °C for 30 min to immobilize the MCTA-MWCNTs between the D and S electrode fingers (Figure S1). A rectangular well (2 × 4 × 2 mm; 16 µL) made of polydimethylsiloxane (PDMS; Dow Corning Sylgard 184, Midland, MI, USA) was used as the solution container and positioned on an individual Au electrode. The G electrode was immersed in the test solution to configure the top-gated FET. These FETs offer the advantage of requiring a significantly smaller gate voltage range (V G ~ ±0.5 V) compared with back-gating operation (±20 V) [31,32]. In addition, the sensitivity of top-gated FET sensors is typically ten times higher than that of back-gated FET sensors. The fabricated FET sensor maintained its detection characteristics for at least 60 days.
Characterization and Measurement of the Device
The MC-LR solutions with varying concentrations, ranging from 0 to 1.0 ng/mL, were prepared in 1× phosphate-buffered saline (PBS). Sixteen microliters of the MC-LR solution was loaded into a PDMS well on the Au electrode. Subsequently, the I ds -V g characteristics at a constant V ds of −0.5 V and the I ds -V ds characteristics with V g varying from 0 to −1 V were instantaneously measured using a computer interface connected to a constant-current source (digital multimeter; Keithley 196, Cleveland, OH, USA) and a DC power source (T7 with LJTick-DAC, LabJack, Lakewood, CO, USA). After each measurement, the Au electrode was replaced before the subsequent concentration was loaded. In this study, the Au electrodes were reused three times after rinsing with the cleaning solution. Unless otherwise specified, all measurements were performed at least in triplicate at room temperature. The limit of detection (LOD) of the device was determined as LOD = 3.3 × SD of intercept/|slope| from the g m /g mo vs. MC-LR concentration plot, where SD is the standard deviation.
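As a concrete illustration of the LOD formula above, the following sketch (standard library only, with hypothetical calibration data rather than the authors' measurements) fits a calibration line by ordinary least squares and applies LOD = 3.3 × SD of intercept / |slope|:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept.

    Returns (slope, intercept, sd_intercept), where sd_intercept is the
    standard error of the intercept estimate.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    # residual standard error (n - 2 degrees of freedom)
    s = math.sqrt(sum((y - slope * x - intercept) ** 2
                      for x, y in zip(xs, ys)) / (n - 2))
    sd_intercept = s * math.sqrt(sum(x * x for x in xs) / (n * sxx))
    return slope, intercept, sd_intercept

def detection_limit(xs, ys):
    """LOD = 3.3 * (SD of intercept) / |slope| from a calibration plot."""
    slope, _, sd_intercept = linear_fit(xs, ys)
    return 3.3 * sd_intercept / abs(slope)

# hypothetical g_m/g_mo readings at 0.1, 0.2, 0.3 and 0.5 ng/mL MC-LR
conc = [0.1, 0.2, 0.3, 0.5]
signal = [0.92, 0.81, 0.71, 0.48]
lod = detection_limit(conc, signal)
```

A perfectly linear calibration yields zero intercept uncertainty and hence LOD = 0; scatter in the readings raises the LOD, which mirrors how the 0.11 ng/mL figure is obtained from the regression in Table S1.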
To validate the performance of the sensor for real samples, we collected environmental waters in sterile glass bottles in October 2021 from two dams, the Yeongju Dam (YD, 36°72′ N 128°66′ E) and the Andong Dam (AD, 36°58′ N 128°77′ E), located upstream of the Nakdong River (Republic of Korea). Algal blooms were severe in the YD over eight years until late autumn owing to eutrophication, mainly caused by anthropogenic activities from industrial and agricultural complexes [33]. However, algal blooms in the AD usually decreased or disappeared during autumn. All water samples were filtered using microfilter paper with 0.45 µm pores. The assay was completed within 10 min. The MC-LR levels in the freshwater were assayed using LC-MS/MS (Waters XEVO TQ-S Micro, Milford, MA, USA) to confirm the performance of the fabricated FET biosensor. The water components were quantified using the Korean standard procedure for drinking water analysis [34]. The electrical characteristics of the FET were evaluated using an electrometer (Keithley K617, Cleveland, OH, USA), a digital voltmeter (Keithley K2182, Cleveland, OH, USA), and a digital I/O board (LabJack U12, Lakewood, CO, USA) with a personal computer for data acquisition. A printed circuit board and a universal-serial-bus-controllable multifunction data acquisition board (LabJack T7) can be used as replacements for the probe station and other instruments.
Fabrication of the FET Sensor
The bioactivation of the MWCNTs was confirmed by XPS analysis, as shown in Figure 1. In Figure 1a, the O 1s band of the MWCNTs is significantly weak and is attributed to adventitious carbon contamination. Upon carboxylation (COOH-MWCNTs), the O 1s band shifts slightly to a higher binding energy. The intensity of the O 1s band was significantly higher for the MCTA-MWCNTs and BSA-coated MCTA-MWCNTs because of the presence of oxygen atoms in the MCTAs and BSA. As shown in Figure 1b, the N 1s band of these samples originated from the MCTAs and BSA. The N 1s band position (400 eV) corresponds to N in the amide (O=C-N-) group, and the shoulder peak (~402 eV) corresponds to protonated N in the amide group [35]. In Figure 1c, the shoulder peak of C 1s, which corresponds to oxygenated carbon species, was enhanced and shifted to a higher energy by bioactivation. These results confirm the bioactivation and agree with the FT-IR analysis described previously [29]. Figure 1d shows an SEM image of the bioactivated MWCNTs. The diameter of the MCTA-MWCNTs was similar to that of the pristine MWCNTs.
Fabrication of the bioactivated MWCNT-based FET sensor is illustrated in Figure 2. Figure 2a shows an optical image of the D and S electrodes situated on a SiO2/Si wafer, comprising 60 interdigitated fingers. The magnified SEM image in Figure 2a shows the MCTA-MWCNTs immobilized randomly between the D and S electrode fingers. Figure 2b shows a schematic diagram of the MWCNT-FET. After immobilization of the MCTA-MWCNTs, the electrical resistance (R) between the D and S electrodes decreased from >40 MΩ to ~4 kΩ. The R value after a subsequent wash with PBS was not significantly altered (<3%). This indicates that the spatial arrangement of the immobilized MCTA-MWCNTs remained unaltered after washing (Figure S1). The Au cleaning procedure facilitated the activation of the Au surface by removing adsorbed organic chemicals. The cleaned Au surface demonstrated increased reactivity with the organic functional moieties of the activated MWCNTs, such as -OH, -COOH, and -NH2 [36]. Within two hours of the self-assembly reaction under ambient laboratory conditions, the contact angle of a water droplet on the Au surface increased from <5° to ~60° [36]. These data indicate that the cleaned Au surface was hydrophilic but was rendered hydrophobic by the deposition of MWCNTs. Notably, immersion in the Au cleaning solution or gentle wiping with a cotton swab can detach the immobilized MCTA-MWCNTs. Therefore, meticulous handling is imperative to ensure consistent fabrication of the sensor.
Characteristics of Device Performance
We evaluated the performance of the FET sensor in detecting MC-LR. Figure 3a shows plots of the D-S current (I ds ) as a function of the gate voltage (V g ) of the fabricated MWCNT-FET recorded at a constant V ds (−0.5 V) across varying concentrations of MC-LR (0-1 ng/mL) in PBS buffer. The plot highlights the ambipolar electric-field effect of the FET, with the lower and higher V g sides of the Dirac point (lowest I ds value) exhibiting p- and n-characteristics, respectively. Such ambipolar conductance has frequently been observed in liquid-gate (top-gate) FETs [32]. As the MC-LR concentration increased, the magnitude of the slope decreased in both regions. This trend suggests that the resistivity (ρ) of the MCTA-MWCNTs increases upon MC-LR binding. The preferential affinity between MC-LR and the MCTAs was markedly stronger than the non-specific interactions among the MCTAs. Hence, the MCTAs predominantly capture MC-LR rather than remaining in the entangled state. The capturing event causes gradual loosening of the MCTA entanglement, which increases the potential barrier for conduction and, in turn, the R of the FET. Moreover, the increment in ρ results in a decrease in the magnitude of the slope of the I ds vs. V g profile.
Figure 3a reveals that the p-characteristic region of the MWCNT-FET was more sensitive than the n-characteristic region when the FET sensor was exposed to an MC-LR-spiked PBS solution. Notably, the Dirac point, positioned at −0.17 V for the FET in pure PBS buffer, shifted to a more negative value (−0.22 V) in the MC-LR-spiked PBS solution (≥0.1 ng/mL). The fabricated MCTA-MWCNT FET is an n-doped p-type FET [37-45]. The left shift of the Dirac point from −0.17 V to −0.22 V implies that more electrons were doped into the MWCNTs when MC-LR, which is weakly and negatively charged at pH 7.2, was bound to the MCTA. The working principle of this FET can be explained by charge transfer from either MC-LR or the MC-LR-MCTA complex, and not by the electrostatic gating effect [23,38,42,44-47]. Consistent with other observations [48,49], the n-characteristic region is less defined, probably because of the adsorption of oxygen in the solution. Although a quasi-linear correlation exists between I ds and the MC-LR level at a fixed V g in the n-type region (Figure 3a), this relationship is less defined in the dilute MC-LR concentration region (<0.2 ng/mL). Therefore, the electrical characteristics of the p-type region were adopted as the sensing signals for the FET sensor.
In contrast, Figure 3b delineates the relationship between I ds and V ds of the FET exposed to 0.1 ng/mL of MC-LR, with V g ranging from 0 V to −1 V. In Figure 3b, the gated effect was discernibly evident at V g = −0.5 V, with the negative value implying that the active element (MCTA-MWCNTs) in the MWCNT-FET manifests p-type conduction characteristics [32].
Sensitivity and Selectivity
The sensing performance of the FET is typically determined by its transconductance, denoted g m [26,27]. This parameter is defined as the slope of the I ds vs. V g plot at a constant V ds , expressed as g m = ∆I ds /∆V g . We obtained g m from the p-characteristic region of Figure 3a, since the response in the p-type region was more stable and sensitive. In this region, the dependence of g m on the hole mobility, µ h , is governed by the equation g m = µ h (C/L 2 )V ds , where C represents the capacitance of the device and L is the length of the conductor [26,27]. Considering that the values of C, L, and V ds remain constant for the FET, g m is a function of µ h . This relationship suggests that the ratio g m /g mo , representing the relative transconductance of the MC-LR-dosed FET device (where g mo denotes the transconductance of the background PBS solution), can serve as the sensing signal of the sensor. The relative hole mobility of the sensing component in the FET sensor, represented by µ h /µ oh (where µ oh signifies µ h for the PBS solution), declines owing to the enhanced potential barrier amid the MCTA-MWCNTs upon the capture of MC-LR by the MCTAs, as previously discussed. However, the electric-field effects (shift in the Dirac point, increase in ρ, and decrease in relative transconductance with increasing MC-LR concentration) were not observed in the BSA-MWCNT-FET (an FET assembled with BSA-coated pristine MWCNTs). Such inactive FET signals (Figure S2) indicate the absence of bioactive sites in the BSA-coated MWCNTs.
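The transconductance extraction described above can be sketched numerically. The sweep values below are hypothetical and only illustrate the slope estimate g m = ∆I ds /∆V g and the relative signal g m /g mo ; they are not measured data:

```python
def transconductance(vg, ids):
    """Estimate g_m = dI_ds/dV_g as the least-squares slope of an
    I_ds-vs-V_g sweep taken at constant V_ds."""
    n = len(vg)
    mv, mi = sum(vg) / n, sum(ids) / n
    return (sum((v - mv) * (i - mi) for v, i in zip(vg, ids))
            / sum((v - mv) ** 2 for v in vg))

# hypothetical p-branch sweeps (V_g in volts, I_ds in microamperes)
vg = [-0.5, -0.4, -0.3, -0.2]
ids_blank = [5.0, 4.0, 3.0, 2.0]   # background PBS -> g_mo
ids_dosed = [4.0, 3.4, 2.8, 2.2]   # MC-LR-spiked sample -> g_m
g_mo = transconductance(vg, ids_blank)
g_m = transconductance(vg, ids_dosed)
relative_signal = g_m / g_mo       # sensing signal g_m/g_mo
```

Because C, L, and V ds cancel in the ratio, g m /g mo tracks the relative hole mobility µ h /µ oh directly, which is why the ratio rather than the raw slope is used as the sensing signal.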
Figure 3c delineates the gm/gmo values of the FET as a function of the spike level of MC congeners (MC-LR, MC-YR, and MC-LY) in PBS solution at fixed Vds = −0.5 V and Vg = −0.5 V. MC-LR, MC-YR, and MC-LY are commonly found MC congeners in natural freshwater [4]. The gm/gmo for MC-LR was deduced from the p-characteristic side of the Dirac point, as shown in Figure 3a. Additionally, the gm/gmo values of MC-YR and MC-LY were estimated from the corresponding Ids vs. Vg plots (Figure S3). The gm/gmo curve for MC-LR, which represents the sensing signal, was fitted with an exponential function over the range of 0 to 1 ng/mL, y = A1 exp(−x/t1) + y0, where A1 = 0.891, y0 = 0.109, and t1 = 0.219 (R^2 = 0.98). A linear relationship can be established at the low concentrations of 0.1, 0.2, 0.3, and 0.5 ng/mL of MC-LR (Figure 3c), with the linear regression characterized by slope, intercept, and R^2 values of −1.10, 0.0374, and 0.97, respectively (Table S1). The initial data point (x = 0 ng/mL) was excluded for the optimal fit. Based on this linear model, the detection limit of the sensor was estimated to be 0.11 ng/mL. The performance of the MCTA-MWCNT-FET sensor fabricated in this study was compared with that of other highly sensitive electrochemical sensors (Table 1). The proposed FET sensor has a narrower detection range and a higher detection limit than most other sensors, except for the MWCNT and SWCNT immunosensors [50,51]. This result is consistent with the fact that graphene- and SWCNT-based sensors are generally more sensitive than MWCNT-based sensors [52][53][54]. Nevertheless, the detection limit of the proposed FET sensor is low enough for practical use. In the concentration range of 0.0-1.0 ng/mL, MC-YR and MC-LY exhibited nearly invariant and relatively small values of gm/gmo, whereas gm/gmo decreased consistently with increasing MC-LR concentration. This observation indicates the selectivity of the fabricated FET sensor for MC-LR in the presence of its congeners.
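The calibration logic above can be sketched as follows. The exponential parameters are the reported fit values, but the four low-concentration calibration points are regenerated from that curve as a stand-in for the raw measurements, so the resulting linear slope only approximates the reported −1.10 and the intercept differs from Table S1.

```python
import math

# Reported exponential calibration: y = A1*exp(-x/t1) + y0.
A1, t1, y0 = 0.891, 0.219, 0.109

def gm_ratio(x):
    """Predicted g_m/g_mo for an MC-LR concentration x (ng/mL)."""
    return A1 * math.exp(-x / t1) + y0

# Linear calibration over the low-concentration points. These points are
# synthesized from the fitted curve (stand-ins for the measured values).
xs = [0.1, 0.2, 0.3, 0.5]
ys = [gm_ratio(x) for x in xs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def concentration(signal):
    """Back-calculate an MC-LR concentration from a measured g_m/g_mo."""
    return (signal - intercept) / slope
```

Inverting the linear calibration in this way is how a measured relative transconductance would be turned into a concentration readout within the linear range.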
Thus, the selectivity of the MC-LR sensor in the presence of potential interferents was corroborated. Figure 4a shows the gm/gmo values corresponding to PBS solutions spiked with 0, 1.0, 2.0, and 3.0 ng/mL of MC-LY against a constant MC-LR concentration of 0.3 ng/mL. The observed gm/gmo values indicated that the FET signal remained unaffected, within the error range, even when the MC-LY concentration surpassed that of MC-LR by a factor of ten. Figure 4b shows the selective detection capability of the FET sensor for various MC-LR concentrations at a constant MC-LY concentration of 2.0 ng/mL. The black dotted line in the plot, representing the FET sensing signal for MC-LR in PBS buffer, presents a linear relationship between gm/gmo and the MC-LR concentration, as shown in Figure 3c, whereas the red profile represents the FET sensing signal for MC-LR in 2.0 ng/mL MC-LY in PBS. Remarkably, both profiles coincided over almost the entire concentration range, except below 0.2 ng/mL of MC-LR; these data indicate reduced selectivity of the FET sensor only at low concentrations. The selectivity of the sensor was also observed at other concentrations of MC-YR and MC-LY (Figures S3 and S4).
Matrix Effect and Actual Application
Biochemical assays of natural waters can be affected by diverse chemical and physical contaminants present in the water [58]. Thus, it is imperative to carefully examine potential matrix effects, especially when the assay is geared toward detecting trace levels of chemical constituents in natural water resources. Notably, the MCTA-MWCNT-based FET displayed a conspicuous absence of the matrix effect. Figure 5a indicates that the gm/gmo values corresponding to 0, 0.2, 0.3, 0.5, and 1.0 ng/mL of MC-LR-spiked tap water, filtered through 0.45 µm pores, aligned with those from equivalent MC-LR levels in PBS. This observation suggests that the calibration curve obtained using PBS buffer as a surrogate for tap water sourced from the Nakdong River (Busan, Republic of Korea) remains valid for assessing MC-LR levels in water from this river.
Water samples from the two reservoirs were collected to represent conditions with and without algal blooms. The samples collected from the Yeongju Dam (designated YD1 and YD2) displayed discernible greenish particulate matter, indicative of an algal bloom. However, samples from the Andong Dam (designated AD) presented no evident particulate debris. Optical images of the residues on the filter paper revealed differences between the water samples from the two dams (Figure S5). The two filters with green residues correspond to the YD water samples. The color observed in the UV-vis spectrum, as presented in Figure S5, is consistent with the known absorption properties of chlorophyll A and B [59].
The LC-MS/MS analysis revealed that the concentration of MC-LR in both water samples was below the detection limit of the instrument (0.01 ng/mL). This observation may seem odd because the two samples, YD1 and YD2, were obtained from moderately algal-blooming water bodies. We speculate that this is because microcystins are intracellular toxins that are usually released only when cyanobacteria are lysed or senescent [60]. However, the reading of the fabricated FET sensor indicated that the MC-LR concentration in the samples was <0.031 ng/mL, as shown in Figure 5. This false-positive reading can be attributed to the MC-LR concentration residing below the error range of the detection limit. Although the ultimate detection limit of the fabricated FET sensor does not match that of the LC-MS/MS method, the sensor can be applied for the rapid preliminary evaluation of environmental water at point sources. Consequently, the designed bioactivated MWCNT-FET sensor is a time-saving, convenient, inexpensive, and reliable environmental sensor for detecting MC-LR in freshwater systems.
Conclusions
A bioactivated multiwalled carbon nanotube (MWCNT)-based label-free field-effect transistor (FET) sensor was developed to detect microcystin-LR (MC-LR). The active element of the FET was MWCNTs activated with an MC-LR-targeting aptamer (MCTA). This sensor exhibits high sensitivity, excellent selectivity even in the presence of other MC congeners, and no matrix effects. This environmental FET sensor was found to assay the MC-LR level in the range of 0.1-0.5 ng/mL, with a detection limit of 0.11 ng/mL, within 10 min. This methodology holds promise for the expeditious and sensitive detection of MC-LR at algal bloom locations in freshwater systems.
Figure 1. X-ray photoelectron spectroscopy (XPS) profiles near the binding energy of (a) oxygen, (b) nitrogen, and (c) carbon for the functionalized and pristine MWCNTs. High-resolution N 1s spectra for MWCNTs and COOH-MWCNTs were not obtained because of the lack of detectable signals in the survey spectrum. (d) Scanning electron microscopy (SEM) image of the MCTA-MWCNT filaments with a diameter of ~26 nm, close to that of the pristine MWCNTs.
Figure 2. (a) Optical image of the drain (D) and source (S) Au electrodes of the field-effect transistor (FET) sensor. The magnified SEM image reveals a gap of 20 µm between D and S. (b) Schematic diagram of the top-gated FET fabricated in this study. In this figure, the electrode was simplified for better visualization.
Figure 3. (a) Ambipolar characteristic of the MCTA-MWCNT FET at a constant drain-source potential (Vds = −0.5 V) with varying concentrations of MC-LR in PBS buffer. (b) Negative gate-potential-controlled I-V characteristic between the drain/source electrodes of the FET obtained using 0.1 ng/mL of MC-LR in PBS buffer. (c) Relative transconductance (gm/gmo) of the FET for MC congeners. The black line is the linear regression of the values at 0.1, 0.2, 0.3, and 0.5 ng/mL of MC-LR. (d) Sensitivity (ΔR/Ro) of the FET for MC congeners. The line through the MC-LR data in (c) represents the fit described in Results 3.3.
Figure 3d illustrates the selectivity of the FET sensor, represented as ΔR/Ro × 100, in response to varying concentrations of MC-LR, MC-YR, and MC-LY. Despite the substantial structural similarities among MC-LR, MC-YR, and MC-LY, the MCTA-MWCNT FET sensor demonstrated a pronounced selectivity toward MC-LR in the examined concentration range.
Figure 5. (a) Relative transconductance of the FET sensor for tap water and MC-LR in PBS buffer. (b) Detection signals of the MCTA-MWCNT FET for PBS buffer and for the Andong Dam (AD) and Yeongju Dam (YD1 and YD2) water samples.
Table 1. Comparison of analytical performance of the MCTA-MWCNT sensor with other electrochemical sensors for the detection of microcystin-LR.
Two new forms of ordered soft separation axioms
Abstract The goal of this work is to introduce and study two new types of ordered soft separation axioms, namely soft Ti-ordered and strong soft Ti-ordered spaces (i = 0, 1, 2, 3, 4). These two types are formulated with respect to ordinary points, and the distinction between them is attributed to the nature of the monotone neighborhoods. We provide several examples to elucidate the relationships among these concepts and to show the relationships that associate them with their parametric topological ordered spaces and p-soft Ti-ordered spaces. Some open problems on the relationships between strong soft Ti-ordered and soft Ti-ordered spaces (i = 2, 3, 4) are posed. Also, we prove some significant results which associate both types of the introduced ordered axioms with notions such as finite product soft spaces, soft topological properties, and soft hereditary properties. Furthermore, we describe the shape of increasing (decreasing) soft closed and open subsets of soft regularly ordered spaces, and demonstrate that the condition of being strong soft regularly ordered is sufficient for the equivalence between p-soft T1-ordered and strong soft T1-ordered spaces. Finally, we establish a number of findings that associate soft compactness with some of the ordered soft separation axioms initiated in this work.
Introduction
The study of the concept of topological ordered spaces was presented for the first time by Nachbin [1]. He constructed this concept by adding a partial order relation to the structure of a topological space. With regard to Nachbin's definition of topological ordered spaces, two points can be noted: first, the topology and the partial order relation operate independently of one another; second, topological ordered spaces are one of the generalizations of topological spaces. After Nachbin's work, many researchers carried out various studies on ordered spaces (see, for example, [2][3][4][5]).
Zadeh [6] introduced the notion of fuzzy sets in 1965 as mathematical instruments for dealing with uncertainties. To put a topological structure to fuzzy set theory, Chang [7] has defined fuzzy topological spaces. Then Katsaras [8] combined a partial order relation and a fuzzy topology to define a fuzzy topological ordered space.
In 1999, the notion of soft sets was proposed by Molodtsov [9] to overcome problems associated with uncertainty, vagueness, imprecision, and incomplete data. This notion includes enough parameters to make it a suitable alternative to previous mathematical approaches such as fuzzy and rough sets. The useful applications of soft sets in several directions have contributed to rapid progress in the field (see, for example, [10,11]). The concept of soft topological spaces was introduced by Shabir and Naz in their pioneering work [12]. Since then, many studies on soft topological spaces have been carried out (see, for example, [13][14][15][16][17][18]). El-Shafei et al. [19] introduced the partial belong and total non-belong relations, which are more functional and flexible for theoretical and applied studies via soft set theory and soft topologies. They then employed these two new notions to present new soft separation axioms, namely p-soft T i -spaces (i = 0, 1, 2, 3, 4). The authors of [20][21][22][23][24][25] made some amendments to some alleged results on soft axioms. Al-shami and Kočinac [26] explored the equivalence between the extended and enriched soft topologies and obtained some interesting results related to the parametric topologies. The authors of [27,28] introduced different types of soft axioms on supra soft topological spaces.
In [29], the authors formulated the concepts of monotone soft sets and soft topological ordered spaces as a new soft structure. They also utilized the natural belong and total non-belong relations to introduce the notions of p-soft T i -ordered spaces (i = 0, 1, 2, 3, 4). In [30] we studied and investigated these notions on supra soft topological ordered spaces.
The topic of soft separation axioms is one of the most significant and interesting in soft topology. In general, soft separation axioms are utilized to obtain more restricted families of soft topological spaces. It turns out, from the previous studies, that there are many points of view from which to study soft separation axioms. The diversity of these perspectives is attributed to the belong and non-belong relations used in the definitions, and to the objects of study, ordinary points or soft points (see, for example, [12,19,[31][32][33][34]). The variety of ordered soft separation axioms is even greater, because soft neighborhoods and soft open sets are further distinguished according to the partial order.
As a contribution to the study of ordered soft separation axioms, we devote this work to defining and investigating two types of ordered soft separation axioms, namely soft T i -ordered and strong soft T i -ordered spaces (i = 0, 1, 2, 3, 4). With the help of examples, we illustrate the relationships among them. Also, we derive their fundamental features, such as that the finite product of soft T i -ordered (resp. strong soft T i -ordered) spaces is soft T i -ordered (resp. strong soft T i -ordered) for i = 0, 1, 2, and that the property of being a soft T i -ordered (strong soft T i -ordered) space is a soft topological ordered property for i = 0, 1, 2, 3, 4. Moreover, we investigate certain of their properties associated with notions of soft ordered topology such as soft ordered topological invariants and soft compatibly ordered subspaces. At the end of Sections 3 and 4, we discuss some results on the relationships between soft compact spaces and some of the initiated ordered soft separation axioms.
Preliminaries
This section recalls some definitions and well-known results which we shall utilize in the remaining parts of this work.
Soft set
Definition 2.1. [9] A pair (G, E) is said to be a soft set over X provided that G is a mapping of a parameter set E into 2^X.
For short, we use the notation G E instead of (G, E) and we express a soft set G E as follows: G E = {(e, G(e)) : e ∈ E and G(e) ∈ 2^X}. Also, we use the notation S(X E ) to denote the collection of all soft sets defined over X under a set of parameters E. Definition 2.2. [12,19] For a soft set G E over X and x ∈ X, we say that:
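A soft set in the sense of Definition 2.1 can be modeled directly as a map from parameters to subsets. The sketch below does so with a Python dict, together with helpers for partial and total belonging in the spirit of the belong relations of [19]; the universe, parameter names, and concrete sets are illustrative, not taken from the paper.

```python
# A soft set (G, E) over X is a map G : E -> 2^X (Definition 2.1).
# Here it is a dict from parameters to frozensets; all names are illustrative.
X = frozenset({"x1", "x2", "x3"})
E = ("e1", "e2")

G = {"e1": frozenset({"x1", "x2"}), "e2": frozenset({"x3"})}

def partially_in(x, soft_set):
    """x belongs to G(e) for at least one parameter e (partial belong)."""
    return any(x in soft_set[e] for e in soft_set)

def totally_in(x, soft_set):
    """x belongs to G(e) for every parameter e (total belong)."""
    return all(x in soft_set[e] for e in soft_set)
```

In this toy soft set, x1 belongs to G(e1) but not to G(e2), so it belongs partially but not totally; the distinction is exactly what drives the p-soft separation axioms recalled later.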
Definition 2.10. [37] Let G A and H B be two soft sets over X and Y, respectively. Then the cartesian product of G A and H B is denoted by (G × H) A×B and is defined as (G × H)(a, b) = G(a) × H(b) for each (a, b) ∈ A × B.
Definition 2.11. [35] A soft mapping f ϕ of S(X A ) into S(Y B ) is a pair of mappings f : X → Y and ϕ : A → B such that for soft subsets G K and H L of S(X A ) and S(Y B ), respectively, we have: Definition 2.12. [35] A soft mapping f ϕ : S(X A ) → S(Y B ) is said to be injective (resp. surjective, bijective) if the two mappings f and ϕ are injective (resp. surjective, bijective).
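Under the pointwise convention of Definition 2.10, i.e. (G × H)(a, b) = G(a) × H(b), the cartesian product of two soft sets can be sketched as follows; the concrete soft sets are illustrative placeholders.

```python
from itertools import product

# Illustrative soft sets G_A over X and H_B over Y.
G = {"a1": frozenset({"x1"}), "a2": frozenset({"x1", "x2"})}
H = {"b1": frozenset({"y1", "y2"})}

def soft_product(G, H):
    """Cartesian product of soft sets: (G x H)(a, b) = G(a) x H(b)."""
    return {(a, b): frozenset(product(G[a], H[b])) for a in G for b in H}

GH = soft_product(G, H)
```

The resulting soft set has parameter set A × B, matching the subscript (G × H) A×B in the definition.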
Definition 2.20. [29] A soft map f ϕ : S(X A ) → S(Y B ) is said to be an ordered embedding provided that P x α ⪯ 1 P y α if and only if f ϕ (P x α ) ⪯ 2 f ϕ (P y α ).
Soft topology
Proposition 2.32. [29] In (X, τ, E, ⪯), for each e ∈ E, the family τ e = {G(e) : G E ∈ τ} together with the partial order relation ⪯ forms an ordered topology on X. The topology τ e is said to be a parametric topology and (X, τ e ) is said to be a parametric topological space.
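The passage from a soft topology to its parametric topologies, τ e = {G(e) : G E ∈ τ}, can be sketched as below; the small soft topology used here is an illustrative example, not one from the paper.

```python
# Illustrative soft topology tau over X with parameter set E.
X = frozenset({"x", "y"})
E = ("e1", "e2")

empty = {"e1": frozenset(), "e2": frozenset()}   # the null soft set
whole = {"e1": X, "e2": X}                       # the absolute soft set
G1 = {"e1": frozenset({"x"}), "e2": X}           # one more soft open set

tau = [empty, whole, G1]

def parametric_topology(tau, e):
    """Collect the distinct sets G(e) over all soft open sets G_E in tau."""
    return {g[e] for g in tau}

tau_e1 = parametric_topology(tau, "e1")
```

For this toy soft topology, the e1-slices are the empty set, X, and {x}, which is indeed a topology on X, illustrating the content of Proposition 2.32.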
Ordered soft separation axioms
In this section, we formulate the concepts of soft T i -ordered spaces (i = 0, 1, 2, 3, 4) using monotone soft neighborhoods and establish some of their properties. With the help of illustrative examples, we elucidate the relationships between them, and the interrelations between them and their parametric topological ordered spaces. (iii) an increasing (resp. a decreasing) partially soft neighborhood of x ∈ X provided that W E is an increasing (resp. a decreasing) and partially soft neighborhood of x.
The following example illustrates the above definition. The definition of a p-soft T 2 -ordered space in [29] states that for every x ⪯̸ y in X, there exist two disjoint soft neighborhoods W E and V E containing x and y, respectively, such that y does not totally belong to W E and x does not totally belong to V E . Since W E and V E are disjoint, y does not totally belong to W E if and only if y ∉ W E , and x does not totally belong to V E if and only if x ∉ V E . So the definitions of soft T 2 -ordered and p-soft T 2 -ordered spaces are equivalent. Hence, all results concerning p-soft T 2 -ordered spaces in [29] are still valid for soft T 2 -ordered spaces.
Proof. The proof follows immediately from Definition (3.3).
To show that the converse of the above proposition is not always true, we give the following two examples. Then τ = {Φ̃, X̃, G i E : i = 1, 2, 3, 4} forms a soft topology on X. Now, for y ⪯̸ x and y ⪯̸ z, we find that W E = {(e 1 , {y}), (e 2 , X)} is an increasing soft neighborhood of y such that x ∉ W E and z ∉ W E . Also, for z ⪯̸ x and z ⪯̸ y, we find that W E = {(e 1 , {z}), (e 2 , X)} is an increasing soft neighborhood of z such that x ∉ W E and y ∉ W E . Therefore (X, τ, E, ⪯) is a lower soft T 1 -ordered space; hence, it is soft T 0 -ordered. On the other hand, there does not exist a soft neighborhood W E of x such that y ∉ W E or z ∉ W E . This means that it is not an upper soft T 1 -ordered space. In the second example, the given family forms a soft topology on X. Now, for 3 ⪯̸ 2 and 3 ⪯̸ 1, we find that G 5 E is an increasing soft neighborhood of 3 such that 2 ∉ G 5 E and 1 ∉ G 5 E , and G 4 E is a decreasing soft neighborhood of 2 and 1.
Proof. Let a be the smallest element in (X, ⪯). Then a ⪯ x for all x ∈ X. Since ⪯ is anti-symmetric, x ⪯̸ a for all x ≠ a. Therefore, for each such x there exists a decreasing soft neighborhood W E of a such that x ∉ W E . Since X is finite, the intersection ⋂̃ W E of these neighborhoods is a decreasing soft neighborhood of a not containing any y ∈ X \ {a}.
Proposition 3.9. If a is the largest element of a finite lower soft T
Proof. The proof is similar to that of Proposition (3.8).
Example 3.15. Consider a partial order relation
To prove the proposition in the case of i = 1, let (Y , τ Y , E, ⪯ Y ) be a soft ordered subspace of a soft T 1 -ordered space (X, τ, E, ⪯). For every a ⪯̸ Y b ∈ Y, we have a ⪯̸ b. Therefore, there is an increasing soft neighborhood W E of a and a decreasing soft neighborhood V E of b such that b ∉ W E and a ∉ V E ; their restrictions to Y are the required monotone soft neighborhoods in (Y , τ Y , E, ⪯ Y ). The proof in the case of i = 0 can be done similarly. Proof. The proof is complete by observing that x does not totally belong to G E implies that x ∉ G E for every G E ⊆̃ X̃.
Then there are x ∈ X and e ∈ E such that x ∈ V(e) and x ∈ d((V) c (e)). This implies that there is y ∈ (V) c (e) such that x ⪯ y. This means that y ∈ V(e). But this contradicts the disjointness of
Proposition 3.23. Every increasing (decreasing) soft closed or soft open subset of a soft regularly ordered space (X, τ, E, ⪯) is stable.
Proof. Without loss of generality, suppose that H E is an increasing soft closed set in a soft regularly ordered space (X, τ, E, ⪯) which is not stable. Then there exists x ∈ X and α, β ∈ E such that x ∈ H(α) and x ∉ H(β). This means that x ∉ H E . So for any soft neighborhood W E of x and any soft neighborhood V E of H E , we obtain that x ∈ W(α) ⋂︀ V(α). Thus, we cannot find disjoint soft neighborhoods of x and H E . This is a contradiction with soft regularly ordered of (X, τ, E, ⪯). Hence, H E must be stable.
The proof of the decreasing case can be done similarly.
Corollary 3.24. If all increasing (decreasing) soft closed or soft open subsets in (X, τ, E, ⪯) are stable, then (X, τ, E, ⪯) is p-soft regularly ordered if and only if it is soft regularly ordered.
Proposition 3.25. Every soft regularly ordered space is p-soft regularly ordered.
The example below shows that the converse of Proposition (3.25) does not hold in general.
(i) The given STOS in Example (3.26) is soft T 2 -ordered and soft T 4 -ordered, but it is not soft T 3 -ordered;
(ii) If we consider (X, τ, E, ⪯) is STOS such that E is a singleton set, then (X, τ, E, ⪯) is a topological ordered space. So Example 7 in [2] shows that a soft T 4 -ordered space is a proper extension of a soft T 3 -ordered space.
The following two problems are still open.
Problem 3.28. Is a soft T 3 -ordered space a soft T 2 -ordered space?
Problem 3.29. Is a soft T 3 -ordered space a p-soft T 3 -ordered space?
The converse of Proposition 3.30 fails. We show this in the next example.
., 6} is a soft topology on X. It can be easily verified that (X, τ, E, ⪯) is soft T 4 -ordered. In contrast, we cannot find a soft open set containing y such that x does not totally belong to it. Therefore, (X, τ, E, ⪯) fails to satisfy a condition of a p-soft T 1 -ordered space. Thus, (X, τ, E, ⪯) is not p-soft T 4 -ordered.
Theorem 3.32. Every soft compatibly ordered subspace (Y , τ Y , E, ⪯ Y ) of a soft regularly ordered space (X, τ, E, ⪯) is soft regularly ordered.
is upper soft regularly ordered. In a similar manner it can be proved that (Y , τ Y , E, ⪯ Y ) is lower soft regularly ordered. Hence, (Y , τ Y , E, ⪯ Y ) is soft regularly ordered.
Corollary 3.33. Every soft compatibly ordered subspace
The proof of the next proposition is easy and thus it is omitted.
Theorem 3.35. The finite product of soft T i -ordered spaces is soft T i -ordered for i = 0, 1, 2, 3.
Proof. We only prove the theorem in the case of i = 2, and the other cases can be proved similarly.
Assume that (X, τ 1 , E 1 , ⪯ 1 ) and (Y , τ 2 , E 2 , ⪯ 2 ) are soft T 2 -ordered spaces and let (X × Y , τ, E, ⪯) be their soft ordered product space. Let (x 1 , y 1 ) ⪯̸ (x 2 , y 2 ) ∈ X × Y. Then x 1 ⪯̸ 1 x 2 or y 1 ⪯̸ 2 y 2 . Without loss of generality, say x 1 ⪯̸ 1 x 2 . Since (X, τ 1 , E 1 , ⪯ 1 ) is soft T 2 -ordered, there are an increasing soft neighborhood W E1 of x 1 and a decreasing soft neighborhood V E1 of x 2 such that x 2 ∉ W E1 and x 1 ∉ V E1 , which are disjoint. Therefore, W E1 ×̃ Y is an increasing soft neighborhood of (x 1 , y 1 ) and V E1 ×̃ Y is a decreasing soft neighborhood of (x 2 , y 2 ) such that (x 2 , y 2 ) ∉ W E1 ×̃ Y and (x 1 , y 1 ) ∉ V E1 ×̃ Y. Hence, the product space is soft T 2 -ordered.
Proof. We only prove the theorem in the cases of i = 2, 4, and the other cases can be proved similarly. (i) Let f ϕ : (X, τ, A, ⪯ 1 ) → (Y , θ, B, ⪯ 2 ) be an ordered embedding soft homeomorphism map such that (X, τ, A, ⪯ 1 ) is soft T 2 -ordered. Suppose that x ⪯̸ 2 y ∈ Y. Then P x β ⪯̸ 2 P y β for each β ∈ B. Since f ϕ is bijective, there are P a α and P b α in X̃ such that f ϕ (P a α ) = P x β and f ϕ (P b α ) = P y β , and since f ϕ is an ordered embedding, P a α ⪯̸ 1 P b α . So a ⪯̸ 1 b. By hypothesis, we have an increasing soft neighborhood V E of a and a decreasing soft neighborhood W E of b. Since f ϕ is a bijective soft open map, f ϕ (V E ) and f ϕ (W E ) are disjoint soft neighborhoods of x and y, respectively. From Theorem (2.21), we obtain that f ϕ (V E ) is increasing and f ϕ (W E ) is decreasing. Hence, the proof is complete.
(ii) Let f ϕ : (X, τ, A, ⪯ 1 ) → (Y , θ, B, ⪯ 2 ). A similar proof can be given for the decreasing case. Proof. Suppose that F 1 E and F 2 E are two disjoint soft closed sets such that F 1 E is decreasing and F 2 E is increasing. Then F 2 E ⊆̃ F c 1 E . Since (X, τ, E, ⪯) is soft compact, F 2 E is soft compact, and since (X, τ, E, ⪯) is soft regularly ordered, there is an increasing soft neighborhood of F 2 E . Now, suppose that there exists an element x ∈ G E with x ∈ d[(G E ) c ]. Then there exists an element y ∈ (G E ) c such that x ⪯ y. This means that y ∈ G E . But this contradicts the disjointness of G E and (G E ) c . Thus, (X, τ, E, ⪯) is soft normally ordered.
To show that the converse of the above theorem and corollary fail, we give the following example. Obviously, (X, τ, E, ⪯) is soft normally ordered and soft compact. Also, for every increasing (resp. decreasing) soft compact subset of (X, τ, E, ⪯) and every decreasing (
Strong ordered soft separation axioms
The first aim of this section is to define strong ordered soft separation axioms, namely strong soft T i -ordered spaces (i = 0, 1, 2, 3, 4), by using monotone soft open sets in place of monotone soft neighborhoods. The second aim is to provide some examples to illustrate the relationships between these axioms and their relationships with soft T i -ordered spaces. The third aim is to discuss their main properties and provide some results that associate soft compactness with some of the initiated strong ordered soft separation axioms.
The following example explains the difference between soft open sets and soft neighborhoods in terms of increasing and decreasing.
(i) Every monotone soft open set containing an element x is a monotone soft neighborhood of x. (ii) Every monotone soft open set containing a soft set H E is a monotone soft neighborhood of H E .
Proof. Let G E be a monotone soft open set containing an element x. Then x ∈ G E ⊆ G E . Therefore, G E is a monotone soft neighborhood of x. Also, if G E is a monotone soft open set containing a soft set H E . Then H E ⊆ G E ⊆ G E . Therefore, G E is a monotone soft neighborhood of H E . Example (4.1) demonstrates that the converse of the above proposition fails. To show that the converse of the above corollary fails, we give the following example. To prove that (iii)→(i), let x ⪯̸ y ∈ X. Since (X, τ, E, ⪯) is strong soft T 0 -ordered, then it is strong lower soft T 1 -ordered or strong upper soft T 1 -ordered. Say, it is strong upper soft T 1 -ordered. It follows, by the above corollary, that (i(x)) E is an increasing soft closed set. Since y ∉ (i(x)) E and (X, τ, E, ⪯) is strong soft regularly ordered, then there exist disjoint soft open sets W E and V E containing (i(x)) E and y, respectively, such that W E is increasing and V E is decreasing. Hence, the proof is complete. Proof. The proof follows directly from the definitions of strong soft T i -ordered and soft T i -spaces.
Remark 4.25.
To confirm that the converse of the above proposition fails, we consider the case where E is a singleton, for which the examples introduced in [2] suffice. Also, taking E to be a singleton, Example 3 in [2] shows that the concepts of strong soft T i -ordered and soft T i -spaces (i = 3, 4) are independent of each other.
In conclusion, we give Figure 1 to illustrate the relationships among some types of ordered soft separation axioms.
Conclusion and future work
By combining a partial order relation and a topology on a non-empty set, Nachbin [1] defined the topological ordered space. Similarly, Al-shami et al. [29] defined the soft topological ordered space. Studying soft separation axioms via soft topological spaces is a significant topic because they help establish a wider family of spaces which can be easily applied to classify the objects under study. We explained in the last paragraph of the introduction the reasons for conducting many studies of soft separation axioms, and the variety of such studies becomes even greater for ordered soft separation axioms. Throughout this work, we use the notions of monotone soft neighborhoods and monotone soft open sets to present soft T i -ordered and strong soft T i -ordered spaces, respectively, for i = 0, 1, 2, 3, 4. These two types are formulated with respect to ordinary points. We establish several results, such as that strong soft T i -ordered spaces are strictly finer than soft T i -ordered spaces, and support this result with a number of interesting examples. We also discuss the relationships which associate soft T i -ordered (strong soft T i -ordered) spaces with p-soft T i -ordered spaces and soft T i -spaces. In Theorem (4.8), we give a condition under which p-soft T 1 -ordered and strong soft T 1 -ordered spaces are equivalent. At the end of Section (3) and Section (4), we present a number of results that associate soft compactness with some of the initiated ordered soft separation axioms. Some open problems on the relationship between strong soft T i -ordered and soft T i -ordered spaces (i = 2, 3, 4) are posed.
To extend this study, one can generalize the initiated concepts on supra soft topological spaces [40]. All these results will provide a base for researchers who want to work in the soft ordered topology field and will help to establish a general framework for applications in practical fields.
Inclusion and Intellectual Disabilities: A Cross Cultural Review of Descriptions
The benefits of inclusive practices for students with intellectual disabilities have been demonstrated in several countries; however, large-scale inclusive practices remain elusive. Having a clear understanding of how researchers define the terms inclusion and intellectual disability would support more cross-cultural collaboration and facilitate the generalization of practices. Addressed in this paper is the question of what themes, if any, exist in conceptualizing inclusion and intellectual disability across the peer-reviewed research of six countries, three of which have been identified as highly inclusive and three that have been identified as minimally inclusive. These findings may be used to further research into barriers and opportunities for inclusive practices for students with intellectual disabilities.
Introduction
An argument has been made for the importance of inclusive practices in education and creating positive postsecondary outcomes for individuals and the larger community in terms of economic opportunities, quality of life, and safeguarding basic human rights (World Health Organization [WHO], 2011). The United Nations' Convention on the Rights of Persons with Disabilities (CRPD; United Nations, 2006) detailed the basic human rights all people should have and provided suggestions for policy and practice to achieve these goals by 2015. The CRPD has been adopted by 161 countries with the express goal of reaffirming all people are entitled to human rights. Disability is recognized as a culturally constructed experience, so inclusion in daily community experiences with nondisabled peers is an integral part of building sustainable practices and policies. Yet, around the world, millions of children with disabilities remain segregated or not included in schools at all (Richler, 2017).
Overview
In this paper, we focus on students with intellectual disabilities (IDs) as defined by the American Association on Intellectual and Developmental Disabilities (AAIDD). The AAIDD (2019) defines ID as "a disability characterized by significant limitations in both intellectual functioning and in adaptive behavior, which covers many everyday social and practical skills. This disability originates before the age of 18" (para. 1). Approximately 1-2% of the population have an ID (McKenzie, Milton, Smith, & Ouellette-Kuntz, 2016). In the United States, compared to people without disabilities and those with other disabilities, people with IDs have worse economic, social, and quality of life outcomes (Bouck, 2012). They also have been consistently segregated in school (Kurth, Morningstar, & Kozleski, 2014) despite research on inclusive practices indicating better in-school and postschool outcomes for students with IDs (White & Weiner, 2004).
Because the CRPD is a legally binding international treaty with a supervisory body and implementation mechanisms, the definitions it uses have significant potential to create widespread and sustainable change. While each country, state, and even school will have a different context, if researchers clearly describe foundational definitions, such as what is meant by students with educational needs and inclusion, then an implementation framework would support scaling up at an international level. Until all people with disabilities, including those with IDs, are active and equal members of school communities, the goals of the CRPD remain unfulfilled. We use the construct of inclusion to mean all students, including those with IDs, are active members of the school and classroom community working toward the same goals as their peers without disabilities and have the possibility of those goals being achieved with appropriate accommodations and support.
Constructing Disability
The construction of disability has and continues to evolve (Buntinx & Schalock, 2010). The medical model holds disability as a purely biological construct that impairs a person. While some progress has been made in psychological and medical professions in taking into account the lived experiences of disability, many countries' educational systems remain focused on solely a biological definition of disability (Sabatello, 2014). The result of such a medical model of disability is multifold, including viewing people with disabilities as passive recipients of aid, focusing on disability as something that should be cured-and if not cured then pitied, and aggregating disability experiences into an abstract "normal" experience that rarely mirrors lived experiences. Instead, what disability means depends in part on individual variables such as socioeconomic status, nationality, race, and gender. Conflating all experiences of a medical label into one aggregate experience may further marginalize individuals who have intersectional identities.
The social model of disability, on the other hand, attempts to take into account not only individual variables such as socioeconomic status and nationality but also the person with the disability as the central impetus of action and experience. The barriers that exist are not in the person but are a result of environmental and cultural inflexibility that conceptualizes a mythical normal and builds around that phantasmal original (Butler, 1999). The social model of disability does not deny a biological aspect to disability; rather, it acknowledges the experience of disability as going beyond the body to include social, financial, spiritual, educational, ecological, and other systems and experiences. While the CRPD allows for a wide range of disability constructions through its definition of disability as "those who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others" (United Nations, 2006, para. 2), without an understanding of the social role of disability, the goals of the CRPD are unattainable. The AAIDD's (2019) definition of intellectual disability, which takes into account experiences and barriers outside of the individual, is better aligned to the goals of the CRPD than a strictly medical definition (Weller, 2011).
Using the social model of disability, it would be expected that definitions of disability vary by context and country. While this is true, the lack of common definitions of disability has been reported as a challenge throughout the literature (Bolderson, Mabbett, Hvinden, & van Oorschot, 2002). A comparative analysis for the European Commission outlined the problems with differing definitions of disability (Bolderson et al., 2002). The authors found each country in the European Union had varying definitions of disability, which often focused on aid and financial assistance received. The authors also argued the lack of commonality surrounding disability created problems for individuals who moved from one country to the next and also for doing any comparative work to inform policy (Bolderson et al., 2002).
Inclusion
Similar to disability, there is no one universally accepted definition of inclusion as it relates to education, though most researchers agree inclusion is more than merely sitting in the same classroom as one's peers (Nes, Demo, & Ianes, 2018). The act of inclusion involves acceptance, belonging, and an active and equitable role in the community. It is the belief all students have the right to an education equal to that of their peers. According to UNICEF (2013) in the State of the World's Children report, "Inclusive education entails providing meaningful learning opportunities to all students within the regular school system. It allows children with and without disabilities to attend the same age-appropriate classes at the local school, with additional, individually tailored supports as needed" (p. 7). This definition aligns with other international organizations that promote inclusion, such as the United Nations, the Index of Inclusion, and Inclusion International.
The CRPD outlines the objective that people with disabilities have equal rights "to live in the community, with choices equal to others, and shall take effective and appropriate measures to facilitate full enjoyment by persons with disabilities of this right and their full inclusion and participation in the community" (United Nations, 2006, para. 1). Inclusion in educational systems is a key driver for inclusion into the rest of the community. Studies have shown inclusion in general education classrooms with appropriate supports and services leads to better postsecondary outcomes than in segregated settings, so this may be a way to support an equitable opportunity for all people (Test et al., 2009). When schools segregate students based on academic ability or disability labels, they inadvertently set up a hierarchy of power later reflected in the larger society.
Research has shown when schools plan for all learners and make the content and environment accessible to all students, students with and without disabilities have improved academic outcomes. Conversely, when students with disabilities (SWDs) are in segregated settings, their opportunities to learn are hampered, and they have less positive postschool outcomes (Test et al., 2009). Additionally, in inclusive settings, students learn human variation is a natural expectation, a foundation that may support equity across the lifespan (UNICEF, 2013).
Researchers have shown inclusion improves academic performance in both literacy and mathematics for SWDs, including students with IDs (Peetsma, Vergeer, Roeleveld, & Karsten, 2001;Ryndak, Morrison, & Sommerstein, 1999). Students educated in inclusive classrooms spend more time on academic standards and have increased engagement on academics when compared to their peers in segregated settings (Wehmeyer, Lattin, Lapp-Rincker, & Agran, 2003). In addition, research indicates students in inclusive settings have access to higher quality teaching practices and increased rigor and expectations (Hunt & Farron-Davis, 1992). Furthermore, inclusion has been linked to increased attendance and overall health of SWDs (Dessemontet, Bless, & Morin, 2012).
When SWDs are taught in the general education context with their peers, they are provided positive social and behavioral role models so they can learn social and behavioral skills that occur in a natural setting. This promotes both explicit and incidental learning, which has been shown to increase social skills and positive behavior (McDonnell, Mathot-Buckner, Thorson, & Fister, 2001; McGregor & Vogelsberg, 1998; Odom et al., 2004). When inclusion occurs in primary and secondary schools, it often results in inclusion after graduation. Brown et al. (1986) found students who were educated in the general education context were also more likely than their peers in segregated settings to be employed after graduation. In fact, White and Weiner (2004) found inclusion was the number one predictor of employment postgraduation for students with IDs. Inclusion was a stronger predictor of employment than intelligence, behavior, or disability. Furthermore, it has been found that inclusion increases independence postgraduation (Blackorby, Hancock, & Siegel, 1993; White & Weiner, 2004). Increased employment and independence have been linked to increased quality of life for individuals with disabilities, including students with extensive support needs (Ryndak, Ward, Alper, Montgomery, & Storch, 2010). In-school and postschool outcomes are improved when all students are provided the opportunity to learn alongside their peers. These outcomes support economic growth and stability, which will strengthen the larger society.
Because variations in the definition of inclusion exist, international comparisons of inclusive education may be an extremely arduous task. When researching inclusion, it was sometimes difficult to determine what inclusion referred to in that setting and research context. Furthermore, we focused on students with IDs, a population often excluded from formal education (Richler, 2017). Compounding the issue, many international articles do not define the student population, or they use the broad terms students with disabilities or students with educational needs, which makes it difficult, if not impossible, to determine if students with IDs are included in the study.
Implementation Science
While research has consistently shown positive outcomes for all students, creating, sustaining, and scaling up inclusive educational systems remains an elusive goal. To scale up inclusive practices, it would be helpful if researchers, advocates, and educators could pool their knowledge. However, there are differing understandings of disability labels and inclusion across the world (Taub, Foster, Orlando, & Ryndak, 2017), making it difficult to use lessons learned in one context to inform instructional methods and systems change work in another setting. Implementation science is a methodology and framework for translating research into sustainable and systemic policies and practices (Learning Collaborative for Implementation Science in Global Brain Disorders, 2016) and a possible methodology for systematically promoting inclusive practices. The process includes understanding the specific drivers and context in which the intervention is being rolled out, consistently using data to evaluate and refine implementation, and using this process for continued refinement and scaling up.
Significant drivers of inclusive practices in CRPD are equity and economic growth. Some researchers and policymakers argue, when working toward change, "equity is not a by-product but an essential element-a value-of thoughtfully considered intervention design, learning agendas, and applied data collection and evaluation and research" (Farrow & Morrison, 2019, p. 5). Inclusive education is an equity issue; indeed it may be the equity issue. Currently, UNESCO reports 90% of children with disabilities in the developing world do not attend school (Richler, 2017). Each country and state would have individualized drivers, levers, and barriers that necessitate consideration for implementation, improvement, and reproduction. These individualized aspects do not eliminate the possibility of international cooperative learning.
Educators, policymakers, families, and researchers need to learn from others' successes and barriers to facilitate effective educational systems. While each context has specific barriers to and levers for change, lessons may be learned across contexts. During research studies, clearly categorizing context and participants sets the stage for more unified learning. While a common definition of intellectual disability and inclusion may not be necessary across all countries, to learn from each other, a clear understanding of the terms and goals is required.
This research began with an initial question of whether there was a correlation between highly inclusive countries and those with a high quality of life for people with IDs. A literature search using the University of North Carolina at Greensboro library online database was conducted to determine if there were international rankings of countries that included people with disabilities in schools, with a specific focus on identifying countries with high and low rates of school inclusion for students with IDs. Next, a Google search was conducted to identify other potential ranking sources. Another set of searches was conducted on quality of life indicators for people with IDs (economic standing, happiness, friendship). Quality of life and inclusion rankings from the WHO, UNESCO, the World Bank, and the World Bank Group and Gallup Poll were reviewed and compared.
There was limited agreement across sources for where countries ranked in terms of inclusion levels and quality of life data for people with disabilities. Some common issues making the initial research question ineffective were aggregated data for all types of disabilities, differing definitions of common terms (such as intellectual disability and inclusion), and a lack of detailed data on quality of life for people with IDs, all of which resulted in often conflicting pictures of a country's inclusion levels and/or quality of life for people with IDs. Ultimately, the World Report on Disability rankings of delivery of education in specific European countries (WHO, 2011) were used to identify and match countries with high and low inclusive educational practices because the data were clearest on location of service delivery (separate school/separate class/inclusive classes). As a result, we addressed a more focused research question of what, if any, themes existed in conceptualizing inclusion and intellectual disability across the peer-reviewed research of six countries, three of which we identified as highly inclusive and three we identified as minimally inclusive.
Methodology
Six paired countries were identified based on population, geography (island vs. mainland), and inclusion levels, with one pair having relatively high levels of inclusion and the other having relatively low levels of inclusion. The list of countries was limited and thus near-population matches could not always be made. High levels of inclusion were determined based on Figure 7.3 in the World Report on Disability (WHO, 2011). Spain's population of 46 million had approximately 83% of SWDs in inclusive classes and the remaining 17% in segregated schools, and Spain was paired with Germany. Germany's 82.79 million population had almost the exact inverse inclusion rates with only 17% of SWDs included and 83% in segregated schools. Portugal and Belgium were paired due to similar population levels (10.31 million and 11.35 million, respectively). Portugal was identified as having 85% of SWDs in inclusive classes, 5% in segregated classes in typical schools, and 10% in segregated schools. In the chart, Belgium was divided into Flanders and Wallonia; however, for the purposes of this research, they were viewed as a single entity. The data from the World Report on Disability (WHO, 2011) were averaged as 91% of SWDs in separate schools and 9% in inclusive settings. The smaller population country with low inclusion rates was Latvia with 1.9 million people and approximately 18% inclusion placements, 12% of SWDs in segregated classes in typical schools, and the remaining 70% in segregated schools. There were two small population countries with high inclusion rates: Iceland and Norway. Iceland had 338,349 people, while Norway had 5.25 million. Finally, Norway was chosen over Iceland even though the population difference between the countries was larger due to additional variables in play with an island country. Norway had approximately 84% of SWDs in inclusive classrooms, 13% in segregated classes in a typical school, and 3% in segregated schools. Norway was paired with Latvia. 
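The pairing procedure described above can be sketched in a few lines of code. This is purely illustrative: the country names and population figures are taken from the text, while the variable names and the sort-and-zip matching are our own.

```python
# Illustrative sketch of the country matching described above: within each
# inclusion group, countries are sorted by population and matched
# largest-to-largest. Figures are populations in millions, from the text.
high_inclusion = {"Spain": 46.0, "Portugal": 10.31, "Norway": 5.25}
low_inclusion = {"Germany": 82.79, "Belgium": 11.35, "Latvia": 1.9}

pairs = dict(zip(sorted(high_inclusion, key=high_inclusion.get, reverse=True),
                 sorted(low_inclusion, key=low_inclusion.get, reverse=True)))
print(pairs)  # {'Spain': 'Germany', 'Portugal': 'Belgium', 'Norway': 'Latvia'}
```

This reproduces the three pairs used in the study (Spain-Germany, Portugal-Belgium, Norway-Latvia), although the authors also weighed geography (island vs. mainland), which a population-only sort does not capture.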
Next, we conducted another literature review using eight online library databases, such as JSTOR, WorldCat, and ProQuest Central. Several combinations of search terms were used, including the keywords intellectual disability, teaching, school, inclusion, special education needs, education, cognitive, and each identified country's name. The search was limited to peer-reviewed articles from 1980 to 2019. An initial review of titles was used to determine if the article had the potential to be included. Articles on nonrelated topics such as genetic testing or fish hatcheries were not included. We then reviewed the abstracts to determine which studies met the criteria of including students with IDs, being about or set in inclusive primary or secondary schools, and discussing or being located in the country of interest. The remaining articles were acquired and read to ensure they matched eligibility criteria. Data were collected and entered into a database that included the country, definitions or characteristics of ID or students with special education needs (SENs), definitions or descriptions of inclusion or inclusive practices, number of students addressed, if appropriate, and additional notes on context or content.
We then used a modified hybrid approach to thematic analysis that incorporated both identifying themes important to answer the research questions while using the data to develop and uncover new themes during the analysis (Swain, 2018). We each reviewed a different set of articles and checked in several times throughout the process to compare terms used, data gathered, and to answer questions. All data were recorded in the database for future analysis.
Results
We initially identified 385 possible articles through the searches. The number of possible articles from each search was 151 from Norway, 100 from Germany, 81 from Spain, 30 from Belgium, 15 from Portugal, and one from Latvia.
After reviewing the titles and abstracts, 66 potential papers remained: 19 from Norway, 18 from Germany, 13 from Belgium, 11 from Spain, four from Portugal, and one from Latvia. We rejected articles if the abstract did not target the identified country clearly or did not include discussion on students with IDs and inclusion. Articles that identified multiple countries were evaluated separately for each country to identify pertinent data.
Next, we read each remaining article to confirm it met the criteria and to collect data on constructs of students with IDs and components of inclusion or inclusive practices. During the second reading, seven articles were inaccessible, and additional articles were discarded for the same reasons as in the abstract review. For instance, in five cases, an article in the bibliography mentioned the targeted country, but the article itself did not. Generalized papers on inclusion that mentioned no specific country and focused on philosophy or rights across the world were also excluded, leaving 19 articles for analysis. The remaining 19 articles included eight from Norway, three from Germany, three from Portugal, two from Spain, two from Belgium, and one from Latvia.
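The screening counts reported above can be tallied directly; a minimal sketch (our own variable names, with the per-country figures from the text):

```python
# Per-country article counts at the two screening stages described above.
after_title_abstract = {"Norway": 19, "Germany": 18, "Belgium": 13,
                        "Spain": 11, "Portugal": 4, "Latvia": 1}
retained_for_analysis = {"Norway": 8, "Germany": 3, "Portugal": 3,
                         "Spain": 2, "Belgium": 2, "Latvia": 1}

print(sum(after_title_abstract.values()))   # 66 articles after title/abstract review
print(sum(retained_for_analysis.values()))  # 19 articles retained for analysis
```

The per-country figures at each stage sum to the stage totals reported in the text (66 and 19).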
Defining Students with Intellectual Disabilities
Understanding that the definition of ID would vary across borders, the objective of this research was to look for common learner characteristics to identify themes related to this population. While the majority of papers referred to students with SENs, only 15% expanded on this label with a more precise description of what learner difficulties, SENs, IDs, or academic difficulties entailed. Articles from each of the countries referred to students with IDs yet never defined the criteria for ID. Two articles from Norway, on the other hand, had very clear definitions, including an article by Scharenberg, Rollett, and Bos (2019), who defined ID using operationalized boundaries from psychological assessments. Three articles from Germany provided somewhat more information about SENs than the generic label alone. Henke et al. (2017) offered a less detailed definition but added some information by defining SENs with a focus on students who have a need in a learning domain. Weiss, Markowetz, and Kiel (2018) stated, "In Germany . . . 'moderate and severe ID' is a category of education; respectively, a certain area of special needs which is related to limitations in functioning (conceptual, social, practical)" (p. 838). Pijl, Frostad, and Flem (2008) argued both the medical and social models of disability are problematic when defining SENs for their study.
Defining Inclusion
Several authors provided definitions of inclusion that explained what it was by stating what it was not. For instance, authors stated inclusion was more than being in the room and had importance beyond social skills. Authors of two of the articles used Booth and Ainscow's (2002) Index for Inclusion: Developing Learning and Participation in Schools as a rubric for what inclusion should be. Other authors used the beyond access model of inclusion by Sonnenmeier, McSheehan, and Jorgensen (2005) as the bar for inclusion. These were the only studies that included physically sharing space, being social, and learning alongside peers without disabilities as a part of the criteria for defining inclusion. In the remaining articles, authors discussed inclusion without clarifying components of the definition or providing an overarching idea of inclusion as students being in the same classroom as peers without disabilities with a sole focus on the social realm.
The authors covered peer friendships, self-determination skills, teacher and student relationships, supports needed for student involvement, making academics accessible, teachers' perceptions of inclusion, the training teachers need to implement inclusive practices, and an overarching focus on building inclusion. There was overlap in topics between the high-inclusive and low-inclusive countries. Both included information on peer supports, making academics accessible, supports needed for students, and training needed to support teachers, as well as teachers' perceptions of inclusion and student and teacher relationships. There were two topics present only in the articles from low-inclusion countries: (a) an overall conversation on building inclusive classrooms or schools and (b) the skills teachers need to implement inclusion. The one topic present only in the high-inclusive countries was a study on student self-determination.
Discussion
In an effort to build a more complete understanding of educational inclusion, with the goal of learning how various countries have implemented large-scale systemic change, the original intent of this research was to create a protocol for comparing the policies, laws, and practices of countries with high and low rates of inclusive education. The early findings indicated that, while research consistently showed inclusive practices were beneficial, many studies did not include people with low-incidence disabilities such as IDs, and each country used different definitions of both disability categories and inclusion. These basic differences in variables made it difficult to compare systems across borders. This initial investigation into differences in foundational definitions of intellectual disability and inclusion provides a starting point for researchers to develop clear protocols with explicit descriptions of these two constructs. Such protocols would contextualize local efforts and make it easier for researchers, educators, advocates, and policymakers to identify universal themes, if any, in including students with IDs as active participants in general education classrooms, alongside their peers without disabilities, as the norm rather than the outlier.
The most evident theme that emerged from the literature review was the lack of consistency found between articles and countries. In the literature, there were no common definitions of key terms, even in countries such as Germany that have a legal definition of the term intellectual disability. Without a description of the students served and a definition of inclusive education, a meaningful comparison between countries remains difficult and thus a barrier to improving and learning from other countries' practices. For example, many articles focused on the very broad term students with special education needs without explicitly defining the learner characteristics of those students included in the study, in some cases making it impossible to determine if students with IDs were included in the population of study. The definitions in the original 183 articles defined disability quite differently, with some articles including sex (female) and others including ethnicity in a larger construct of marginalization and disability.
The importance and value of recognizing disability as socially constructed does not preclude the need for researchers, educators, and policymakers to find patterns of what works to support various learner characteristics. For instance, in the United States, data are clear that students with IDs who are educated in segregated settings are less likely to be included and, upon graduation, are more likely to be unemployed, have few friends, and experience little independence (Brown et al., 1986;Butterworth et al., 2014). Without a common understanding of what learner characteristics comprise the construct of ID in the United States, it is only through disaggregating disability category data these patterns become clear; identifying the pattern allows researchers, educators, and policymakers to begin to deconstruct where barriers exist for these students. With common understandings across international studies, it would be possible to determine if there were practices or policies that support better postsecondary outcomes for these students that could be disseminated and implemented in other contexts. Having unclear understandings of learner characteristics makes it difficult to disseminate evidence-based practices across the world so each country does not have to start from scratch but instead can build from lessons learned.
Similarly, the term inclusion can vary considerably, and, in the final articles used for analysis, only one of them provided clear characteristics for what inclusion should look like (Mortier, Van Hove, & De Schauwer, 2010). Many of the articles included in the original dataset used "included" to mean all students are educated, regardless of setting. For instance, in the initial sample of papers, the focus was on including females, students from lower socioeconomic families, and students with physical disabilities. Other articles used the term "inclusion" or "included," but the study seemed to only occur in self-contained classes. Is inclusion merely sharing the same physical space? At a school level or a classroom level? Is inclusion primarily for social reasons? Or are academics just as important? We used a more comprehensive definition of inclusion that involves not only being in the same space but working with peers without disabilities on the same academic work, though it may be modified in terms of depth of knowledge and difficulty. The various definitions of inclusion may reflect larger societal beliefs about who is or is not worthy of an education, but the range of categories was a barrier to international comparisons.
Another theme that emerged when doing an initial search of datasets related to data and population. First, some countries lacked updated data on inclusion and disability, thus compounding the issue of consistency since it was unclear if progress had been made since the latest data were reported. Second, based on the report from the WHO (2011), larger countries were generally not as inclusive as smaller countries such as Iceland. This trend, along with the limited number of countries included in their dataset, made finding comparable countries challenging. For example, Spain, with a relatively high rate of inclusion and a population of 46 million, was compared to Germany, with a low rate of inclusion and a population of 82 million. Countries with larger populations face challenges smaller countries do not due to the number of students served and thus the increased number of SWDs served. As a result, we attempted to account for population by matching countries according to population; however, variations still exist.
Lastly, countries that relied on tracking systems had lower rates of inclusion. Germany, for example, places students into tracks at a young age based on perceived academic potential. Students are considered to be university bound or vocation bound and then educated accordingly. This system of tracking students invariably leads to segregation, where SWDs and those who struggle academically are placed into tracks that differ from their same-age peers. This system of tracking not only shapes a student's education but also their future life trajectory.
Why does it matter if researchers, educators, and policymakers review international literature on teaching students with IDs and inclusive practices? First, each day students are excluded from the general education classroom, they are losing opportunities to learn they cannot afford to lose. Second, as the CRPD, WHO, and UNESCO have argued, when a subgroup of the population is barred from education, their quality of life tends to be low, and their families have a loss of income due to caregiving requirements. Third, the tenets of implementation science have been identified as useful when trying to create sustainable, systematic change and improvement (Fixsen, Blase, Metz, & Van Dyke, 2015), especially for change that requires attitudinal and behavioral shifts, as it takes into account local context. However, when the research and practice reported does not clearly detail the contexts in which they are working, including in this case the learner characteristics of the students and the characteristics of what is meant by inclusion, it is difficult to move from individual change to systemic development. Thus, not only were there very few articles on the practice or theory of including students with IDs, but those we found often provided little context from which others could learn when implementing change.
Limitations
A major limitation of this study was the lack of a more comprehensive ranking of inclusion than the World Report on Disability (WHO, 2011). This list focused solely on select European nations, leaving out many countries that should inform practice. It was used because it provided a clear and common construct for further inquiry that could later be extended to other countries. An additional concern was the low number of articles found overall, with only 5.5% of those articles meeting the inclusion criteria. This limited the understanding of inclusive education in the countries selected. It is possible the keywords were too detailed, which would have excluded articles of possible interest. In addition, we relied on university databases that resulted in very few articles written in languages other than English. Since the focus was on international education, it is likely there are many articles written in other languages that would have met the criteria. Another limitation was the lack of available datasets comparing educational placements in various countries. The dataset chosen only compared 30 European countries. This significantly limited the initial selection of countries and thus the articles we found.
Future Recommendations
Researchers who clearly detail the learner characteristics of the population in their studies and who provide detailed characteristics of what inclusion means in their context would support opportunities for cross-cultural learning. Describing disability categories or characteristics and clear explanations of educational placements would greatly reduce the confusion related to differing terminology. In addition, countries that do not currently collect and disaggregate data on their population of people with disabilities need to do so. The CRPD offers tools and guidance for data collection; however, there is no one way to collect this data as long as it includes, but is not necessarily limited to, the number of people and their age ranges who have various disabilities or learner characteristics (e.g., male/female, deaf, blind, ethnicity, requiring adapted intellectual and behavior supports across multiple settings), where they are getting their education at the classroom level (e.g., general education classroom v. separate classroom) and the amount of time there, as well as common contextual expectations or practices of what that schooling entails (e.g., active participation or sitting in the back of the room with an adult other than the teacher, academics or physical education, art or music, completing the same or similar work as their peers without disabilities or significantly different work). Postsecondary data are also necessary to examine quality of life levels for individuals with disability.
Ensuring children with disabilities receive a high-quality education in an inclusive environment should be a priority of all countries. To do this, and to fulfil the goals of the CRPD and ensure equity for people with disabilities, systemic barriers to inclusion need to be removed. The measurement of that progress requires clear data collection, monitoring, and analysis to regularly inform policies and practices.
Tracking R of COVID-19: A new real-time estimation using the Kalman filter
We develop a new method for estimating the effective reproduction number of an infectious disease (R) and apply it to track the dynamics of COVID-19. The method is based on the fact that in the SIR model, R is linearly related to the growth rate of the number of infected individuals. This time-varying growth rate is estimated using the Kalman filter from data on new cases. The method is easy to implement in standard statistical software, and it performs well even when the number of infected individuals is imperfectly measured, or the infection does not follow the SIR model. Our estimates of R for COVID-19 for 124 countries across the world are provided in an interactive online dashboard, and they are used to assess the effectiveness of non-pharmaceutical interventions in a sample of 14 European countries.
A.1 SIS Model
We now show that the estimator in Eq. (3) of the main text is also obtained when the dynamics of the disease follow the SIS model. The SIS model, again in discrete time, is given by

S_t = S_{t−1} − β_t S_{t−1} I_{t−1}/N + γ I_{t−1},
I_t = I_{t−1} + β_t S_{t−1} I_{t−1}/N − γ I_{t−1}.

The only difference from the SIR model in Eq. (1) of the main text is that formerly infected individuals do not obtain immunity after recovery and instead rejoin the pool of susceptibles. As is well known, the basic reproduction number in the SIS model is the same as in the SIR model (e.g., Chowell and Brauer, 2009) and is given by R_0^{(t)} = β_t/γ. Since the law of motion for I_t in the SIS model is the same as in the SIR model, we can repeat the same steps as in the benchmark analysis to arrive at the same estimator as in Eq. (3).
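The equivalence above can be illustrated numerically. The sketch below (not the authors' code; the parameter values are illustrative) simulates a discrete-time SIS model and checks that the SIR-based estimator R̂_t = 1 + gr(I_t)/γ recovers R_t = (β/γ) S_{t−1}/N exactly, since I_t has the same law of motion as in the SIR model.

```python
# Sketch (not the authors' code): simulate a discrete-time SIS model and
# verify that the SIR-based estimator R_hat_t = 1 + gr(I_t)/gamma recovers
# R_t = (beta/gamma) * S_{t-1}/N, since I_t has the same law of motion.
N, gamma, beta = 1_000_000, 1/7, 0.4
S, I = N - 100.0, 100.0
for t in range(60):
    new_inf = beta * S * I / N
    recoveries = gamma * I
    S_prev, I_prev = S, I
    S = S - new_inf + recoveries        # recovered rejoin the susceptibles
    I = I + new_inf - recoveries
    gr = I / I_prev - 1                 # growth rate of infections
    R_hat = 1 + gr / gamma              # estimator in Eq. (3)
    R_true = (beta / gamma) * S_prev / N
    assert abs(R_hat - R_true) < 1e-9
```

The assertion holds at every step because gr(I_t) = β_t S_{t−1}/N − γ exactly in both models.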
A.2 Generalized SIR Model
In this section, we show that we can also obtain the estimator in Eq. (3) from a generalized version of the SIR model with stochastic shocks. Specifically, we consider the following generalized SIR model:

S_t = S_{t−1} − β_t S_{t−1} I_{t−1}/N − v_{1,t},
I_t = I_{t−1} + β_t S_{t−1} I_{t−1}/N − γ I_{t−1} + v_{1,t} − v_{2,t},
R_t = R_{t−1} + γ I_{t−1} + v_{2,t}.

Differently from the baseline model, we introduce random shocks v_{1,t} and v_{2,t}. The shocks are i.i.d., and the time-varying support of v_{1,t} is [0, S_{t−1} − β_t S_{t−1} I_{t−1}/N], while the support of v_{2,t} is [0, I_{t−1} + β_t S_{t−1} I_{t−1}/N]. We also assume that E_{t−1}[v_{1,t} − v_{2,t}] = 0, so that the conditional expectation E_{t−1}[I_t] coincides with the value for I_t given by the noiseless SIR model. With these modifications, the model can capture rich patterns of infectious disease dynamics. For example, "super spreader events" can be modeled either as v_{1,t} shocks or as a spike in β_t. The model can also capture richer forms of population structures than the baseline SIR model. For example, if individuals who are more infectious (e.g., those with more connections in a network model) are more likely to become infected first, that can be captured by assuming that β_t becomes lower over time.
Defining the time-varying basic reproduction number as R_0^{(t)} = β_t/γ, and R_t ≡ R_0^{(t)} S_{t−1}/N, we obtain

gr(I_t) = γ(R_t − 1) + v_t,  where v_t ≡ (v_{1,t} − v_{2,t})/I_{t−1}.

Taking expectations on both sides of the equation, we arrive at

E_{t−1}[gr(I_t)] = γ(R_t − 1).

Hence, the generalized SIR model of the present section leads to the same estimator as the baseline SIR model. Finally, we note that if γ varies deterministically over time, the equation above remains essentially unchanged, the only difference being that γ is replaced by γ_t. If γ_t follows a non-degenerate stochastic process, then the estimator for E[R_t] would need to correct for the covariance between γ_t and R_t.
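The mean-zero property of the shocks is what preserves the estimator. A one-step Monte Carlo sketch (illustrative values, not the paper's code) checks that E[gr(I_t)] = γ(R_t − 1) when the net shock v_{1,t} − v_{2,t} has mean zero:

```python
import numpy as np

# Sketch: one-step Monte Carlo check that mean-zero net shocks v1 - v2
# leave E[gr(I_t)] = gamma * (R_t - 1), so the same estimator applies.
rng = np.random.default_rng(0)
N, gamma, beta = 1e6, 1/7, 0.3
S_prev, I_prev = 0.9e6, 1e4
R_t = (beta / gamma) * S_prev / N          # R_t = R0^(t) * S_{t-1}/N
v = rng.uniform(-500, 500, size=200_000)   # net shock v1 - v2, mean zero
I_t = I_prev + beta * S_prev * I_prev / N - gamma * I_prev + v
gr = I_t / I_prev - 1                      # growth rates across replications
assert abs(gr.mean() - gamma * (R_t - 1)) < 1e-3
```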
A.3 Foundation for the Local-Level Model
When estimating R t , we use a local-level specification for the growth rate of the number of infected individuals. In this section, we show that the local-level model arises naturally in the SIR model in the early stages of an epidemic when the transmission rate follows a random walk.
Specifically, consider the generalized SIR model in Section A.2 of the Supplementary Appendix. We now specialize the process for the transmission rate β_t to be a random walk:

β_t = β_{t−1} + η_t,

with a given initial value β_0 > 0. In the early stages of the epidemic, when S_t ≈ N, we calculate that gr(I_t) ≈ β_t − γ + v_t. Defining the effective reproduction number early on in the epidemic as R_t = β_t/γ, we therefore have directly that the growth rate of I_t follows a local-level model:

gr(I_t) = γ(R_t − 1) + v_t,
R_t = R_{t−1} + η̃_t,  where η̃_t ≡ η_t/γ.

Provided that the distribution of v_t can be approximated with a normal distribution, we directly obtain the specification in Eq. (5) of the main text. Alternatively, to obtain an exact normal local-level model, we could assume that v_{1,t} = v_{2,t} = 0 (no shocks in the original model, just as in the baseline model in Eq. (1) of the main text) but that, instead of observing the true growth rate gr(I_t), we only observe gr(I_t) + ε_t, where ε_t is i.i.d. normally distributed mean-zero measurement error.
A.4 Gibbs-Sampling Algorithm
In this section we discuss how the parameters of the state-space model can be estimated with a Gibbs-sampling algorithm à la Carter and Kohn (1994). Besides being a natural robustness check on our methodology, this algorithm uses a different approach to ensure the non-negativity of R_t. To use the Kalman filter, we need to estimate σ²_ε and σ²_η in the state-space model in Eq. (5) of the main text. For the Gibbs sampler, we break the model down into conditional densities from which we can sample iteratively. The algorithm is the following:
1. Conditional on σ²_ε and σ²_η, use the Kalman filter to infer the state vector R_t;
2. Conditional on the sequence of R_t computed in the previous step, take samples of σ²_ε and σ²_η from their prior distributions;
3. Conditional on the new draws of σ²_ε and σ²_η, estimate R_t;
4. Verify that each element of R_t is positive. If yes, store the draws of σ²_ε and σ²_η. If not, discard the draws and repeat step 2;
5. Compute the Kalman smoother;
6. Iterate forward for as many replications as needed.
Finally, we contrast the estimates of R t obtained with the Gibbs sampler to our baseline estimates. We obtain a correlation of 0.85 between the two sets of estimates. Credible intervals are highly correlated as well. Overall, we conclude that our estimates are similar across different statistical estimation approaches.
A.5 SEIR Model: Monte Carlo Simulation
Our estimation method uses a structural mapping between R t and gr(I t ) derived from the basic SIR model. While we can generalize the baseline SIR model to include stochastic shocks (Section A.2 of the Supplementary Appendix), and the estimator remains valid when the disease follows an SIS model (Section A.1 of the Supplementary Appendix), the model is nevertheless restrictive. In particular, it ignores incubation periods as well as transmission during the incubation period. These features are likely especially important when modeling COVID-19.
We now perform a simulation exercise to see how our estimator of R_t performs in a richer model that accounts for these additional features. Specifically, we consider an SEIR model in which the exposed are infectious:

S_t = S_{t−1} − β S_{t−1}(I_{t−1} + ϵ E_{t−1})/N,
E_t = E_{t−1} + β S_{t−1}(I_{t−1} + ϵ E_{t−1})/N − κ E_{t−1},
I_t = I_{t−1} + κ E_{t−1} − γ I_{t−1},
R_t = R_{t−1} + γ I_{t−1}.     (A.1)

Here, E_t denotes the number of individuals that are exposed at day t, κ is the daily transition rate from exposed to infected, and ϵ ∈ [0, 1] measures the degree to which the exposed are less infectious than the infected. If ϵ = 0, the exposed are not infectious at all, and we obtain the benchmark SEIR model. If ϵ = 1, the exposed are as infectious as the infected, and the model is isomorphic to the standard SIR model. We calibrate the parameters following Wang et al. (2020), who apply the benchmark SEIR model (with ϵ = 0) to study the dynamics of COVID-19 in Wuhan. In particular, we use κ = 1/5.2 and γ = 1/18 as in Wang et al. (2020). Then, we set ϵ = 2/3, following Ferguson et al. (2020), who assume that symptomatic individuals are 50% more infectious than the asymptomatic (that is, ϵ^{−1} = 1.5). Finally, we choose β by targeting a basic reproduction number of R_0 = 2.6, again as in Wang et al. (2020). In the model above, R_0 is given by R_0 = β/γ + βϵ/κ, implying β = R_0 γκ/(κ + ϵγ). The formula yields β ≈ 0.12. Finally, we set S_0 = 11 × 10^6 (approximating the population size of Wuhan), E_0 = 0, I_0 = 1, and zero initial recovered.
The Monte Carlo design is as follows. First, we simulate the deterministic system in Eq. (A.1) using the parameters above. Then, we calculate the growth rate in the true number of infected individuals, i.e., gr(I_t) = I_t/I_{t−1} − 1. However, instead of knowing the true growth rate, the statistician is assumed to observe a noisy version of it, given by g̃r(I_t) = gr(I_t) + ε_t. Here, ε_t is an i.i.d. normal disturbance with mean zero and standard deviation of 0.10. The standard deviation of the disturbances is roughly equal to the range of the true growth rates. Hence, the amount of noise used in the simulation is fairly large. For each realization of the disturbances, we estimate R_t using our method. As in our empirical application, only data after 100 total cases have been reached are used. We investigate two values for γ_est. that are used when estimating R_t. First, we consider a situation in which the statistician uses the correct time that individuals are infectious, given by γ_est. = (γ^{−1} + κ^{−1})^{−1} ≈ 0.043, where γ and κ are the true parameter values of the SEIR model. Second, we investigate a case in which the statistician incorrectly thinks that individuals are infectious for only ten days (γ_est. = 1/10). We repeat the process for 10,000 Monte Carlo replications.

[Figure A.1. Estimates of the effective reproduction number (R_t) when the true dynamics of the disease follow an SEIR model. Two values are investigated for γ_est., the transition rate from infected to recovered used when estimating R_t: the correct value γ_est. = (γ^{−1} + κ^{−1})^{−1} ≈ 0.043 and a misspecified value γ_est. = 1/10. Average values from 10,000 Monte Carlo replications are shown. See text for more details.]
The results of the Monte Carlo simulation are shown in Figure A.1. When the statistician uses the correct number of days that an individual is infectious (that is, taking into account the incubation time), the estimates of R_t from our method are very close to their true theoretical values. That is in spite of the fact that our estimator for R_t is derived assuming that the dynamics of the disease are described by an SIR model. However, we also show that if the statistician misspecifies the number of days that an individual is infectious (assuming 10 days instead of the true number of 23.2 days), the estimates of R_t are substantially biased, especially in the early stages of the epidemic. As is to be expected, underestimating the number of days that an individual is infectious leads to a downwards bias in the estimates of R_t early on in the epidemic (when R_t > 1), and an upwards bias when the true R_t falls below one. Overall, the results imply that the new method performs well when estimating R_t even when the true dynamics of the disease do not follow the SIR model, provided that the duration of infectiousness used in the estimation is sufficiently accurate.
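The core of the experiment can be reproduced in a few lines. The sketch below (not the authors' code) simulates the deterministic SEIR system under the calibration above and applies the static estimator R̂_t = 1 + gr(I_t)/γ_est., omitting observation noise and Kalman filtering for brevity; it reproduces the accuracy of the correct γ_est. and the downward early bias of the misspecified one.

```python
import numpy as np

# Sketch (not the authors' code) of the SEIR experiment: deterministic
# simulation under the paper's calibration, then the static estimator
# R_hat = 1 + gr(I_t)/gamma_est, without observation noise or filtering.
kappa, gamma, eps, R0 = 1/5.2, 1/18, 2/3, 2.6
beta = R0 * gamma * kappa / (kappa + eps * gamma)   # approx. 0.12
N = 11e6
S, E, I = N - 1.0, 0.0, 1.0
I_path = [I]
for t in range(120):
    new_exp = beta * S * (I + eps * E) / N          # exposures from I and E
    S, E, I = (S - new_exp,
               E + new_exp - kappa * E,
               I + kappa * E - gamma * I)
    I_path.append(I)

gr = np.diff(I_path) / np.array(I_path[:-1])        # daily growth of I_t
gamma_correct = 1 / (1/gamma + 1/kappa)             # approx. 0.043
R_hat_ok  = 1 + gr / gamma_correct                  # correct duration
R_hat_bad = 1 + gr / (1/10)                         # misspecified (10 days)
```

After the initial transient, `R_hat_ok` settles near the true R_0 = 2.6 while `R_hat_bad` is biased well below it, matching the pattern in Figure A.1.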
A.6 Effects of Potential Data Issues
We now discuss the effects of various data issues on the performance of our estimator.
Reporting delays. In practice, data may be subject to significant reporting delays. For example, suppose that due to testing constraints there is a lag of ℓ days between the date that an individual becomes infected and the date on which the case is registered. In this case, the estimates of R_t would also be subject to a delay of ℓ days. If there are significant reporting delays, one may first obtain, say, one-week-ahead forecasts of new cases, and then use these forecasts to construct a time series for I_t.
Imperfect detection. A natural worry with any estimator of R_t is that it may be substantially biased if not all infected individuals are detected. Given the simplicity of our estimator, we can analytically assess the effects of imperfect detection.
Suppose that the true numbers of susceptible, infected, and recovered individuals are given by S*_t, I*_t, and R*_t, respectively. Their evolution is the same as in the SIR model in Eq. (1) of the main text. However, we only observe I_t = α_t I*_t, where α_t ≡ I_t/I*_t is the detection rate. In practice, α_t is typically less than one, although the mathematical calculation below does not require this.
With this notation, we have that

gr(I_t) = (1 + gr(α_t))(1 + gr(I*_t)) − 1 ≈ gr(α_t) + gr(I*_t),

since gr(α_t) × gr(I*_t) ≈ 0 at a daily frequency; the approximation is exact in continuous time. Using the approximation above and Eq. (2) in the main text, we therefore obtain that the bias of the estimator under imperfect detection is given by

R̂_t = R_t + gr(α_t)/γ.

We now discuss several cases of practical importance:
• Constant detection rate (α_t = α). If the detection rate is constant over time, then our estimator is unbiased, and R̂_t = R_t. Hence, for example, even if we only detect 10% of the infectives (but the fraction detected remains constant over time), the estimator remains unbiased. Note that if the number of tests increases over time, that is not inconsistent with α_t = α, given that the number of infected individuals is likely to be growing at the same time.
• Constant growth in the detection rate (gr(α_t) = g_α). If the growth rate of α_t is constant over time, then our estimate of R_t is biased upwards if g_α > 0 and downwards if g_α < 0. Note, however, that we are often mostly interested in the trend of R_t over time and whether the trend is affected by various policy interventions. The trend in R_t is estimated accurately even if g_α ≠ 0. Intuitively, constant growth in the detection rate leads to a level bias, but the slope is still estimated correctly.
• Detection rate converges over time (α_t → α). The final case of interest occurs when the detection rate converges to a constant over time. For example, if everyone is detected towards the end of the epidemic, we would have α_t → 1. Since our method uses Kalman-filtering techniques to estimate the growth rate of I_t, transient fluctuations in α_t would have a limited effect on the estimates of R_t later on in the sample. Given that we are often precisely interested in the behavior of R_t in the later stages of the epidemic (when the detection rate is likely fairly constant), our method would still yield reliable estimates.
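The first two cases can be verified directly. In the sketch below (illustrative parameter values, not from the paper), a constant detection rate leaves the estimator exact, while a steadily improving detection rate adds the level bias gr(α_t)/γ:

```python
import numpy as np

# Sketch: observed cases I_t = alpha_t * I*_t. A constant detection rate
# leaves the estimator exact; steady growth in alpha_t adds gr(alpha_t)/gamma
# as a level bias. Parameter values are illustrative.
gamma = 1/7
mu = 0.05                                # true growth rate gr(I*_t)
I_star = 1000 * (1 + mu) ** np.arange(30)
R_true = 1 + mu / gamma

I_obs = 0.1 * I_star                     # constant 10% detection
gr = I_obs[1:] / I_obs[:-1] - 1
assert np.allclose(1 + gr / gamma, R_true)   # unbiased

g_alpha = 0.02                           # detection improves 2% per day
alpha = 0.1 * (1 + g_alpha) ** np.arange(30)
gr = (alpha * I_star)[1:] / (alpha * I_star)[:-1] - 1
bias = (1 + gr / gamma) - R_true
assert np.all(bias > 0)                  # upward level bias
```

Note that the bias is constant over time (here g_α(1 + µ)/γ), so the slope of R̂_t is unaffected, as claimed above.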
To provide a quantitative illustration, we perform a small-scale Monte Carlo study. We choose the parameters of the simulation to match our empirical estimates. First, we use our empirical estimates of R for the world as a whole for the first 50 days of the sample as the true values of R. From these values of R, we calculate the true (but unobserved) values of the growth rate of the number of infected individuals as µ_t ≡ gr(I*_t) = γ(R_t − 1), using γ = 1/7. Finally, the observed growth rate, as seen by the statistician, is generated as gr(I_t) = gr(α_t)(1 + µ_t) + µ_t + ε_t. We use the empirical estimate of σ²_ε to simulate ε_t shocks (i.i.d. draws from a normal distribution with mean zero). We then apply our estimator to the generated data on gr(I_t), using the same value of γ = 1/7 to back out R.
In the simulation we consider three underdetection scenarios:
• Constant underdetection. Given our analytical results, it is immaterial what percentage of cases is detected, as long as that percentage is constant over time.
• Ramp-up in testing. Next, we consider a situation in which, due to an increase in the number of tests performed, the fraction of detected infected individuals goes up from α_0 = 0.10 to α_14 = 0.15 (an increase of 50%) over a period of two weeks. After the two weeks, the detection probability remains constant at 0.15. Again, the precise values of α_0 and α_14 are irrelevant; what matters is the growth rate in the detection rate.
• Stochastic underdetection. Finally, we suppose that the detection rate satisfies gr(α_t) = φ gr(α_{t−1}) + ν_t, where ν_t is an i.i.d. normally distributed shock. Intuitively, the detection rate is assumed to be unconditionally constant, but its growth rate is stochastic and follows an AR(1) process. We set the persistence parameter to φ = 0.75 in order to allow for fairly long-lasting deviations from the average detection rate. To ensure that the variance of gr(I_t) remains constant across simulations (for an apples-to-apples comparison), we suppose that 50% of the noise in the observed growth rate comes from variation in α_t, with the other 50% coming from the ε_t shocks. (Here, by "noise" we refer to the variation in gr(I_t) that is not solely due to the variation in µ_t, namely, gr(α_t)(1 + µ_t) + ε_t.) Denoting our estimate of the variance of the growth rate by σ̂²_µ and the variance of the irregular component by σ̂²_ε, we therefore set Var(gr(α_t)(1 + µ_t)) = Var(ε_t) = σ̂²_ε/2. We draw the initial value for gr(α_0) from its unconditional distribution.
The results of the Monte Carlo simulation are summarized in Figure A.2. As seen in the top panel, the estimated average effective reproduction numbers are fairly close to their theoretical values in all three scenarios. In the Testing Ramp-Up scenario, the estimates are biased upwards at the beginning of the sample, but the amount of bias is quantitatively relatively small. In addition, the estimates converge to those in the other two scenarios quite quickly. In all three scenarios, the estimates are able to pick up the reversal in the trend of R that happens at around time 30. The bottom panel of the figure plots the average absolute error of the estimates. As expected, the estimates are least accurate in the Stochastic Underdetection case, and mostly within 0.25-0.30 of the true R in the Constant Underdetection and Testing Ramp-Up scenarios.
Finally, in Figure A.3 we provide Monte Carlo estimates of the coverage frequency of credible intervals obtained by our estimation procedure. As seen in the graph, the credible intervals in the Constant Underdetection and Testing Ramp-Up scenarios have good coverage properties, with conservative confidence bounds. The credible bounds are narrower in the Stochastic Underdetection scenario, resulting in lower-than-nominal coverage frequency. However, the size distortion appears not too severe, especially considering the small sample size.
Imported cases. Our estimates may be biased if the fraction of cases that is imported changes over time (the previous results on imperfect detection apply to misclassification because of imported cases, too). If the source of infections is known, it is possible to correct for the issue by simply not including imported cases when constructing the time series for I t .
A.7 Estimation Details
To estimate R_t of COVID-19, we use Bayesian filtering methods. We employ the following strategy to calibrate the prior distributions. First, we estimate a local-level model for gr(I_t) using a frequentist Kalman filter with diffuse initial conditions. In particular, we estimate the following model:

y_t = µ_t + ε_t,  ε_t ~ N(0, σ²_ε),
µ_t = µ_{t−1} + η_t,  η_t ~ N(0, σ²_η),     (A.2)

where y_t = gr(I_t). The model is the same as Eq. (5) in the main text, except for a slight simplification in notation.
The procedure yields maximum likelihood estimates of σ²_ε (variance of the irregular component) and the signal-to-noise ratio q ≡ σ²_η/σ²_ε for each country in the sample (with σ²_η denoting the variance of the level component). We then use the distribution of σ̂²_ε and q̂ across countries to calibrate the priors for the precision of the irregular component (1/σ²_ε) and the signal-to-noise ratio (q). To ensure that the priors are not too "dogmatic," we inflate the variance of the estimates by a factor of 3 when calibrating the prior distributions. We use a gamma prior for both the signal-to-noise ratio and the precision of the irregular component, and we calibrate the parameters of the gamma distribution by matching the expected value and variance of the gamma-distributed random variables to their sample counterparts. Finally, we use a fairly uninformative normal prior for the initial value of the smoothed growth rate. The resulting priors are given in Table A.1.
Intuitively, these priors shrink the estimates of the precision and signal-to-noise ratio for each country towards their grand mean (average across countries). Such Bayesian shrinkage ensures that the parameter estimates are well behaved even though the sample size for many countries is fairly small, and the data are often noisy. We use the Stan programming language (Gelman, Lee, and Guo, 2015) to specify and estimate the Bayesian model. In particular, we use the pystan interface to call Stan from Python.

[Table A.1. Priors used in the Bayesian estimation of R_t. See text for a description of how the priors for the precision of the irregular component (1/σ²_ε) and the signal-to-noise ratio (q ≡ σ²_η/σ²_ε) are calibrated based on cross-country frequentist estimates.]
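The frequentist first stage can be sketched as a textbook Kalman filter for the local-level model. The code below is a minimal illustration (not the paper's implementation): it treats the two variances as known rather than estimating them by maximum likelihood, and the simulated data are synthetic.

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, m0=0.0, P0=1e6):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
    Returns the filtered means m_t (nowcasts of the growth rate)."""
    m, P = m0, P0                          # near-diffuse initial state
    out = []
    for yt in y:
        P_pred = P + sigma2_eta            # predict the level variance
        K = P_pred / (P_pred + sigma2_eps) # Kalman gain
        m = m + K * (yt - m)               # update with observed growth rate
        P = (1 - K) * P_pred
        out.append(m)
    return np.array(out)

# Toy check on simulated data; filtered growth maps into R_hat = 1 + m_t/gamma.
gamma = 1/7
rng = np.random.default_rng(1)
mu_true = 0.1 + np.cumsum(rng.normal(0, 0.005, 100))  # random-walk level
y = mu_true + rng.normal(0, 0.05, 100)                # noisy growth rates
m = local_level_filter(y, sigma2_eps=0.05**2, sigma2_eta=0.005**2)
R_hat = 1 + m / gamma
```

The filtered nowcasts track the latent growth rate far more closely than the raw observations, which is exactly the smoothing that the estimator of R_t relies on.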
A.8 Empirical Validation
In this section, we perform two empirical validation exercises to check the performance of our estimates in practice.
Since our estimates are based on data on new cases, they may be misleading if new cases are subject to significant measurement problems. To help assuage this concern, we now perform the following exercise. We ask whether current values of R t help predict future growth in deaths. Since deaths are likely to be measured more accurately, this exercise provides a test of whether our estimates contain meaningful information and are not contaminated by data problems.
Formally, we consider the following regression:

gr(d_{i,t+1}) = α_i + β R̂_{i,t} + u_{i,t},

where i denotes a particular country, t indexes calendar weeks, and d denotes new deaths. Although our original data are daily, we aggregate to a weekly frequency; otherwise, measures of the growth rate of new deaths are too noisy. In addition, we only include weeks after the cumulative number of COVID-19 deaths has reached 50. After imposing these sample restrictions, we are left with 270 country-week observations across 68 countries. Given that we have panel data, we can include country fixed effects α_i to account for time-invariant unobserved heterogeneity (such as differences in average age, a key correlate of COVID-19 mortality (Verity et al., 2020), or family structures). The relationship given above is predicted by the baseline SIR model. Letting CFR = d_t/I_{t−ℓ} denote the case fatality rate (assumed to be constant over time), with ℓ standing for the average time between becoming infected and death, we have that gr(d_t) = gr(I_{t−ℓ}), yielding the regression equation above. The relationship is shown in Figure A.4. In the scatter plot, both variables are residualized to remove country fixed effects.

[Figure A.4. Relationship between current estimates of the effective reproduction number (R_t) and the growth rate of the number of new deaths in one week (corr. = 0.61). The data are aggregated to a weekly frequency. Both variables are residualized to subtract country fixed effects by performing the within transformation. Only data after the cumulative number of deaths reaches 50 are included in the scatter plot. We include all countries in the Johns Hopkins database for which we have at least 20 observations after the outbreak. We remove data for the week of 2020-04-13 to 2020-04-19 in China, which contains a large number of deaths that were previously unrecognized.]
We observe a strong positive relationship between the value of R_t this week and the growth in deaths one week later (corr. = 0.61). In Figure A.5, we demonstrate that there is also a positive correlation (corr. = 0.48) between R_t and deaths two weeks later. We note that while the average medical duration from the onset of symptoms to death for COVID-19 is longer than two weeks (around 18 days), the duration from reported cases to deaths is likely to be substantially shorter because of reporting delays. For example, Hortaçsu, Liu, and Schwieg (2020) assume that new cases of COVID-19 are reported with a lag of 8 days in their baseline calculations (5 days for symptoms to appear, consistent with the evidence from Lauer et al. (2020) and Park et al. (2020), as people are unlikely to be tested without exhibiting symptoms, and an additional 3 days to capture delays in obtaining test results, based on anecdotal reports from the US). Since deaths are likely reported in a timely manner, if new cases are reported with a lag of 8 days, we would expect an average duration of around 10 days (≈1.43 weeks) between reported cases and reported deaths.

[Figure A.6. Relationship between current estimates of the effective reproduction number (R_t) and the value of the movement index two weeks ago (first principal component of the six movement categories in Google (2020)). The data are aggregated to a weekly frequency. Both variables are residualized to subtract country fixed effects by performing the within transformation. We include all countries in the Johns Hopkins database for which we have at least 20 observations after the outbreak.]
As a second validation check, we ask whether our estimates of R t are correlated with past movement data, as it should be if the estimates are meaningful. For information on movement, we use aggregated smartphone location data collected by Google and published in their "COVID-19 Community Mobility Reports" (Google, 2020). Google provides data on percentage changes in movement for six types of places: (i) groceries and pharmacies; (ii) parks; (iii) transit stations; (iv) retail and recreation; (v) residential; and (vi) workplaces. Since the six categories are strongly correlated, we take the first principal component of the six categories (the first principal component explains 83.03% of the total variance in the data). We refer to the first principal component as the "Mobility Index." As before, we only consider weeks after the cumulative number of confirmed COVID-19 cases in the country reaches 100. After imposing these restrictions, we are left with 792 country-week observations over 100 countries. Note that the number of countries is less than 124 because Google does not provide mobility data for all countries in our original sample. As shown in Figure A.6, current estimates of R t are strongly correlated with the value of the mobility index two weeks ago (corr. = 0.63).
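The within transformation used in these validation checks is straightforward to implement. The sketch below uses synthetic data with illustrative column names (not the paper's dataset): demean both variables by country, then correlate the residuals.

```python
import numpy as np
import pandas as pd

# Sketch of the within transformation behind Figures A.4-A.6: demean both
# variables by country, then correlate the residuals. All data here are
# synthetic; column names are illustrative, not from the paper's dataset.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "country": np.repeat(["A", "B", "C"], 10),
    "R": rng.normal([1.5, 1.2, 2.0], 0.2, size=(10, 3)).T.ravel(),
})
df["gr_deaths_lead1"] = 0.8 * df["R"] + rng.normal(0, 0.1, 30)

r_resid = df["R"] - df.groupby("country")["R"].transform("mean")
d_resid = (df["gr_deaths_lead1"]
           - df.groupby("country")["gr_deaths_lead1"].transform("mean"))
corr = np.corrcoef(r_resid, d_resid)[0, 1]   # within-country correlation
```

Subtracting the country means is numerically equivalent to including country fixed effects in the regression, which is why the scatter plots are built from these residuals.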
For both validation exercises performed in the present section, we include all countries for which we have at least 20 observations after the onset of the epidemic (100 cumulative cases of COVID-19 reached) and which satisfy any additional sample restrictions, as outlined above. If we narrow the sample down to countries with more and higher-quality data, such as the sample of European countries analyzed in the main text, the correlations generally become substantially stronger. Hence, we consider the tests of the present section to be conservative.
Figure legend: Bayesian Smoother; Classical Smoother; Bayesian Filter; Classical Filter.
Notes: Estimated effective reproduction number (R) for China: filtered and smoothed estimates, using both Bayesian and classical estimation procedures. The Bayesian estimates are given by our baseline estimation procedure, as explained in the text. The classical estimates are obtained by maximum likelihood estimation (with diffuse initial conditions). The smoothed estimates use information from the full sample, while the filtered estimates at time t only use information up to time t. 65% credible bounds shown by the shaded areas.
A.11 Power Analysis
In this section, we study the statistical power of the event-study analysis in the main text using a Monte Carlo simulation.
We now describe the design of the power study. Intuitively, we simulate data using a stochastic process that is calibrated to match the properties of the observed data. We then simulate a sharp drop in the effective reproduction number-say, because of a lockdown. We apply our estimator to the simulated data and ask how often this abrupt change is detected by the estimation procedure.
To simplify the notation, we use the parametrization in Eq. (A.2). Optimal nowcasts from the local-level model in the steady state can be written as in Muth (1960) and Shephard (2015, Section 3.4), where q is the signal-to-noise ratio and ω is the steady-state Kalman gain. Hence, nowcast errors, m t − µ t , follow an AR(1) process with autoregressive coefficient 1 − ω. Given that the shocks ε t and η t are uncorrelated, the variance of nowcast errors follows as in Eq. (A.4).

The design of the power analysis is as follows:

1. We set γ = 1/7 and calibrate the remaining parameters of the data-generating process (q and σ 2 ε ) using the median values of the empirical estimates in the main text. The resulting parameter values are given in Table A.2.

2. We simulate µ t = γ(R t − 1). We initially set µ 0 = 1/7, implying an effective reproduction number of 2. At time 1, we simulate an abrupt decline in R t by setting µ 1 = 1/14, yielding a new effective reproduction number of 1.5, a decline of 25%. For 2 ≤ t ≤ 14, we simulate µ t as a random walk, as in Eq. (A.2).

3. We simulate the observed growth rate of the number of infected individuals, y t = gr(I t ), as y t = µ t + ε t , where ε t is an i.i.d. normal random variable with mean zero and variance σ 2 ε .

4. We simulate the nowcasts m t . We draw the initial nowcast m 0 from a normal distribution with mean 1/7 and variance given in Eq. (A.4), and simulate further values of m t by the recursion in Eq. (A.3). The estimated value of R t is then given by our estimator.

5. We repeat steps 2-4 14 times, to simulate data for 14 "countries", as in the empirical application, and obtain estimates of the effective reproduction number by averaging across the 14 "countries".

6. We repeat steps 2-5 for 10,000 Monte Carlo replications.

Notes to Table A.2: Parameter values used in the power analysis. The parameter values for q (the signal-to-noise ratio) and σ 2 ε (the variance of the irregular component) are given by the median estimates from the 14 European countries considered in the empirical analysis in the main text. The mean duration of infectiousness is assumed to be γ −1 = 7, and the Kalman gain ω is calculated from Eq. (A.3).
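A minimal Python sketch of one replication of this simulation is shown below. The numerical values of q and σ ε are illustrative placeholders rather than the calibrated values from Table A.2, the initial-nowcast variance is suppressed for brevity, and the steady-state gain formula is the standard local-level result (Muth, 1960):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values; the paper calibrates q and sigma_eps
# to the median empirical estimates (Table A.2).
gamma = 1 / 7              # inverse mean duration of infectiousness
sigma_eps = 0.1            # std. dev. of the irregular component (assumed)
q = 0.05                   # signal-to-noise ratio (assumed)
sigma_eta = np.sqrt(q) * sigma_eps   # std. dev. of the random-walk shock
# Steady-state Kalman gain of the local-level model
omega = (-q + np.sqrt(q**2 + 4 * q)) / 2

def simulate_country(T=14):
    """One simulated 'country': R drops abruptly from 2.0 to 1.5 at t = 1."""
    mu = np.empty(T + 1)
    mu[0] = gamma * (2.0 - 1)        # R_0 = 2.0
    mu[1] = gamma * (1.5 - 1)        # abrupt 25% decline to R = 1.5
    for t in range(2, T + 1):        # random walk thereafter
        mu[t] = mu[t - 1] + sigma_eta * rng.standard_normal()
    y = mu + sigma_eps * rng.standard_normal(T + 1)   # observed growth rates
    m = np.empty(T + 1)
    m[0] = mu[0]                     # initial nowcast (initialization simplified)
    for t in range(1, T + 1):        # steady-state Kalman recursion
        m[t] = m[t - 1] + omega * (y[t] - m[t - 1])
    return 1 + m / gamma             # estimated R_t, from mu = gamma * (R - 1)

# Average the estimates across 14 simulated "countries"
R_hat = np.mean([simulate_country() for _ in range(14)], axis=0)
```

Repeating the averaging step across many Monte Carlo replications and recording how often the averaged estimate falls significantly below its pre-change level yields the detection rates reported in Figure A.9.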
The results of the power analysis are shown in Figure A.9. We observe that in 95% of the simulations, the change in R t is detected as soon as two days after the drop in R t . Hence, the analysis in the main text appears sufficiently powerful to detect moderate changes in R t . The key reason why the analysis has high statistical power, even though the signal-to-noise ratio is quite low (see Table A.2), is that data from multiple countries are used to obtain cross-country averages. This feature of the estimation procedure reduces estimation error substantially. While the signal-to-noise ratio is fairly low, we also note that the weight placed on data that are more than one week old is only (1 − ω)^7 ≈ 15.2%. Hence, one week after the change in R t , the estimates of R t are based primarily on data received after the change in R t .
The power analysis in the current section is arguably conservative. Specifically, we assume that after the abrupt decline, R t follows a random walk rather than staying fixed at the new level. As a result, as time goes on, the estimates of R t become more "spread out" across simulations, as is visible towards the end of Figure A.9.

Notes to Figure A.9 (horizontal axis: days after change in R): Power study of the statistical analysis in the main text (effects of non-pharmaceutical interventions on R t , the effective reproduction number). We simulate an abrupt change in R t from 2.0 to 1.5 using a data-generating process that is calibrated to match our empirical estimates in the main text. We then apply our estimator to the simulated data and ask how often the change is detected by the estimation procedure. The solid line gives the average estimate of R t , while the shaded areas denote 65% and 95% of simulations (in particular, the shaded area for 65% of simulations is given by the 17.5 and 82.5 percentiles of the estimated R t across simulations, and the shaded area for 95% of simulations is given by the 2.5 and 97.5 percentiles). 10,000 Monte Carlo replications used.
A.12 Comparison With EpiEstim Estimates
In this section, we compare our estimates of R for COVID-19 with estimates obtained using the method of Cori et al. (2013); see also Thompson et al. (2019). The method proposed by Cori et al. (2013), arguably the most widely used approach to estimating the effective reproduction number of an infectious disease, is implemented in the popular R package EpiEstim. First, we download estimates of R obtained using EpiEstim provided by Xihong Lin's Group in the Department of Biostatistics at the Harvard Chan School of Public Health (http://metrics.covid19-analysis.org/). These estimates, just as ours, are obtained using data from the Johns Hopkins CSSE repository. The parametrization used in the EpiEstim estimation assumes a time window of 7 days and a gamma-distributed serial interval with a mean of 5.2 days and a standard deviation of 5.1 days. Next, we merge these estimates with our full sample of estimates. We restrict attention to countries for which we have at least 20 contemporaneous estimates from each of the two methods. That leaves us with 108 countries and, on average, 42.14 time-series observations per country. Finally, we calculate the Pearson correlation coefficient between the two estimates of R for each country.
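The per-country comparison described above can be sketched in a few lines of pandas. The data frame layout, column names, and values below are illustrative assumptions, not the authors' actual data or code:

```python
import pandas as pd

# Toy stand-in for the merged country-week panel of R estimates.
df = pd.DataFrame({
    "country":    ["A"] * 4 + ["B"] * 4,
    "R_ours":     [2.0, 1.6, 1.3, 1.1, 2.4, 1.9, 1.5, 1.2],
    "R_epiestim": [2.1, 1.7, 1.2, 1.0, 2.2, 2.0, 1.4, 1.3],
})

# Restrict to countries with enough contemporaneous estimates from
# both methods (the paper requires >= 20; relaxed here for toy data).
counts = df.groupby("country").size()
df = df[df["country"].isin(counts[counts >= 4].index)]

# Pearson correlation between the two sets of estimates, per country
corr = {c: g["R_ours"].corr(g["R_epiestim"])
        for c, g in df.groupby("country")}

mean_corr = sum(corr.values()) / len(corr)
```

Summarizing the resulting per-country coefficients (mean, median, interquartile range) gives the statistics reported in Table A.3.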
The results of this exercise are given in Table A.3. The two sets of estimates are highly correlated, with the average correlation coefficient equal to 0.80 (median: 0.89). The interquartile range is 0.78-0.95. These findings suggest that the two estimates are highly consistent with each other, despite very different estimation methods and underlying assumptions.

Notes to the event-study figures: each graph plots the estimated effective reproduction number (R t ) one week before and three weeks after the respective intervention (public events banned; case-based measures introduced; school closures ordered; social distancing encouraged) in a country. The original sample consists of 14 European countries studied by Flaxman et al. (2020). For each event-study graph, we restrict the sample to countries for which data on R t is available for the whole event window. Heteroskedasticity-robust confidence bounds are shown by the shaded areas.

Notes to the regression table: results of panel-data regressions of the (log of) effective reproduction number (R t ) on indicator variables that are equal to 1 after the introduction of a non-pharmaceutical intervention (NPI) and 0 before the introduction. The regressions are similar to those in Table 2 in the main text except that the intervention variables are included separately (one at a time). See the main text for more details.
A.15 GATHER Checklist
Objectives and funding

1. Define the indicator(s), populations (including age, sex, and geographic entities), and time period(s) for which estimates were made. (Reported on pages 3, 5.)
2. List the funding sources for the work. (Page 13.)

Data inputs

For all data inputs from multiple sources that are synthesized as part of the study:

3. Describe how the data were identified and how the data were accessed. (Pages 3, 5.)
4. Specify the inclusion and exclusion criteria. Identify all ad hoc exclusions.
5. Provide information on all included data sources and their main characteristics. For each data source used, report reference information or contact name/institution, population represented, data collection method, year(s) of data collection, sex and age range, diagnostic criteria or measurement method, and sample size, as relevant. (Pages 3, 8.)
6. Identify and describe any categories of input data that have potentially important biases (e.g., based on characteristics listed in item 5). (Pages A8, A9.)

For data inputs that contribute to the analysis but were not synthesized as part of the study:

7. Describe and give sources for any other data inputs.

For all data inputs:

8. Provide all data inputs in a file format from which data can be efficiently extracted (e.g., a spreadsheet rather than a PDF), including all relevant metadata listed in item 5. For any data inputs that cannot be shared because of ethical or legal reasons, such as third-party ownership, provide a contact name or the name of the institution that retains the right to the data.

Data analysis

9. Provide a conceptual overview of the data analysis method. A diagram may be helpful. (Pages 3, 4, A13.)
10. Provide a detailed description of all steps of the analysis, including mathematical formulae. This description should cover, as relevant, data cleaning, data preprocessing, data adjustments and weighting of data sources, and mathematical or statistical model(s).
11. Describe how candidate models were evaluated and how the final model(s) were selected.
12. Provide the results of an evaluation of model performance, if done, as well as the results of any relevant sensitivity analysis.
13. Describe methods for calculating uncertainty of the estimates. State which sources of uncertainty were, and were not, accounted for in the uncertainty analysis.
14. State how analytic or statistical source code used to generate estimates can be accessed. (Page 2.)
15. Provide published estimates in a file format from which data can be efficiently extracted. (Page 2.)
16. Report a quantitative measure of the uncertainty of the estimates (e.g., uncertainty intervals). (Pages 6, 8, 10.)
17. Interpret results in light of existing evidence. If updating a previous set of estimates, describe the reasons for changes in estimates. (Pages 6, 7, 9.)
18. Discuss limitations of the estimates. Include a discussion of any modelling assumptions or data limitations that affect interpretation of the estimates. (Pages 11, 12.)
Notes: GATHER checklist (gather-statement.org) to facilitate the evaluation and replication of our empirical analysis. References to numbers in the Supplementary Appendix are prefixed with an "A." Hence, for example, "A15" refers to page 15 in the Supplementary Appendix, while "15" refers to page 15 in the main text.
Research on High-Precision Time Measurement and Analysis Method Based on Oscilloscope
High-precision time measurement technology is of great significance. Testing and analyzing time with high precision is necessary in engineering practice such as telecommunications and chip design, in theoretical research such as atomic physics experiments, and in space technology such as satellite positioning and radar pulses. With the help of an oscilloscope, time and delay information can be obtained with high accuracy, provided the measured signal is acquired with high fidelity. In this paper, aiming at how to use an oscilloscope for high-precision time testing and analysis, several key factors affecting the accuracy of oscilloscope time tests are analyzed, and reference methods and test examples are provided for high-precision time testing.
Introduction
With the rapid development of electronic technology, signal speeds are getting higher and higher, and any small timing change in a high-speed signal can have a large impact on the entire system. High-precision time testing has become key to the success of electronic systems. In high-speed data transmission systems, the time intervals of signals, the delays between signals, and signal jitter must all be strictly controlled; only accurate and effective testing and analysis of these time parameters allow the system to operate normally and improve system performance and stability. This article focuses on how to use the oscilloscope to perform high-precision time testing. It first discusses several key factors that affect the accuracy of oscilloscope time tests, analyzes how to evaluate time test accuracy, and finally gives examples of high-precision time testing.
Analysis of factors affecting high-precision time testing
To use the oscilloscope for high-precision time testing, you first need to evaluate whether the oscilloscope's technical specifications meet the time test requirements. High-precision time testing is affected by many factors, including the oscilloscope's bandwidth in multi-channel sampling mode, multi-channel time deviation elimination, multi-channel sampling rate, interpolation error, multi-channel high-speed acquisition memory, and the trigger system.
Analysis of Factors Influencing Multi-channel Time Deviation on Measurement Error
When the oscilloscope is performing multi-channel delay test, due to different channel delays or cable and probe delays, etc., there will be a channel time deviation of the test system. During testing, attention should be paid to using the multi-channel time deviation elimination (Deskew) function, and the time deviation should be corrected in units of less than 1 picosecond according to the test requirements.
Analysis of Factors Influencing Sampling Rate on Measurement Error
A high sampling rate not only avoids signal aliasing and yields high-fidelity signal waveforms, but is also the key to obtaining high-precision time information. As the sampling rate decreases and the sampling interval becomes larger, the accuracy of the time test decreases, as shown in the following figure. For multi-channel delay tests, the oscilloscope should support high-speed ADC sampling independently on each channel, so as to avoid the impact of a reduced sampling rate on the test error in multi-channel mode.
Analysis of Factors Affecting Measurement Errors by Real-time Interpolation Acquisition Mode
The interpolation error is the error caused by linear interpolation between actual voltage samples. It can be reduced by using the Sin(x)/x sinusoidal interpolation algorithm in the oscilloscope together with the oscilloscope's full vertical dynamic range. Real-time interpolation inserts mathematically computed points between two actual data sampling points; the inserted points improve the accuracy of time measurement at faster time-base settings and make the waveform closer to the real one. Real-time interpolation techniques include linear interpolation and sinusoidal Sin(x)/x interpolation. In the figure below, the left panel shows a sine wave reconstructed using Sin(x)/x interpolation, and the right panel shows the poorer sine wave reconstructed using linear interpolation. As can be seen from the figure, sinusoidal Sin(x)/x interpolation is closer to the real waveform, so the time test accuracy is higher. Therefore, when using the oscilloscope for a high-precision time test, interpolation is used together with high-speed acquisition memory to place multiple computed samples within the 20 ps interval between ADC samples, resulting in higher time resolution; the minimum time interval is 0.2 ps.
Analysis of the factors affecting the measurement error of high-speed acquisition memory
For an oscilloscope, sampling rate × acquisition time = acquisition memory. To test high-precision, long-duration interval signals, high-speed acquisition memory is required in conjunction with high-speed sampling. For example, to capture a communication signal with a 1.2 GHz carrier (such as BPSK) over 1 ms, the memory length required at the maximum sampling rate is 50 GS/s × 1 ms = 50 M samples. Examining the storage design of modern oscilloscopes, a silicon-germanium (SiGe) integrated acquisition front end is used, and each channel has an independent high-speed memory, which ensures that the maximum sampling rate and storage length are supported at the same time, making it possible to meet high-precision time testing.
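The memory requirement above follows from a one-line calculation; the figures used here are the ones quoted in the text:

```python
# Required acquisition memory = sampling rate x acquisition time.
# Example from the text: capturing a 1.2 GHz-carrier signal (e.g. BPSK)
# for 1 ms at the oscilloscope's maximum sampling rate of 50 GS/s.
sampling_rate = 50e9       # samples per second (50 GS/s)
acquisition_time = 1e-3    # seconds (1 ms)

required_samples = sampling_rate * acquisition_time
print(f"{required_samples / 1e6:.0f} M samples")  # 50 M samples
```

The same relation can be inverted to find the longest capture time a given memory depth supports at a given sampling rate.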
Analysis of Factors Influencing Trigger Capability on Measurement Error
When multiple acquisitions are required to obtain time information, the oscilloscope's own trigger jitter accumulates on the signal under test and affects the time test results. The trigger jitter of high-performance oscilloscopes is generally about 1 ps, which is sufficient for high-precision testing. For high-speed RF communication signals, another important indicator is the trigger bandwidth: if it is insufficient, the signal under test cannot be synchronized. Therefore, the trigger bandwidth of all trigger functions of the oscilloscope (including edge and the various advanced triggers) needs to be high enough to ensure stable acquisition of the signal under test, making the time test more accurate.
Design of high precision time test method
Through the above analysis, to carry out high-precision time testing, one first needs to confirm that the basic specifications of the selected oscilloscope meet the test requirements, paying special attention to the oscilloscope's time-base stability. The time base is a principal specification of an oscilloscope: in the sampling system, the stability of the timing components directly affects the accuracy of timing measurements. If there is an error in the time base, then measurements based on that time base will have an equal or greater error. After selecting the oscilloscope, the following test methods are designed to achieve the best test accuracy.
Oscilloscope bandwidth selection
Choose a high-performance oscilloscope that provides sufficient bandwidth while using multiple channels, so that the signal under test can be acquired without distortion. Sometimes a high-bandwidth oscilloscope is needed to test low-frequency signals; the background noise of the oscilloscope will then also affect the high-precision time test to some extent, because the oscilloscope is a broadband receiver and the larger the bandwidth, the greater the noise. In high-performance oscilloscope settings, the applied bandwidth range can be selected to match the signal bandwidth, achieving the best time test accuracy.
Time deviation and interpolation error correction
Before performing a multi-channel delay test, adjust the oscilloscope's Deskew to eliminate the time deviation between the channels of the test system. Make full use of the vertical dynamic range of the oscilloscope so that the input signal amplitude reaches full scale, and set the sampling rate to the highest supported rate. When higher time accuracy is required, with the multi-channel ADC hardware running at its highest sampling rate, enabling the oscilloscope's Sin(x)/x sinusoidal real-time interpolation further improves the time test accuracy.
Trigger settings
Set the oscilloscope trigger so that the waveform under test is displayed stably. If the test exceeds a time interval of milliseconds, the oscilloscope's "Trigger Delay" function can be used, that is, delaying acquisition by a fixed time after triggering. For long time-interval measurements, long-term time-base stability is the main factor affecting high-precision time testing. To improve the test accuracy, the oscilloscope's standard 10 MHz external reference clock input can be connected to an external high-stability time base, such as a cesium clock or a rubidium clock. For short-delay testing, long-term stability does not greatly affect the time accuracy of the test; a long memory at a high sampling rate can be used to collect waveforms for direct testing.
Test case
Taking a satellite communication test as an example: in satellite communication, it is required to test the jitter between the synchronization signal and a fixed phase-reversal point of the radio-frequency communication signal after a delay of 100 ms. The synchronization signal is a pulse with a sub-nanosecond rise time; the RF communication signal is a QPSK-modulated signal with a carrier frequency of 5 GHz. The following figure shows the two waveforms of the measured signal: the yellow waveform is the synchronization signal and the purple waveform is the RF signal. The signal is tested with a high-precision oscilloscope and a high-stability time reference. The oscilloscope bandwidth is 20 GHz; when four channels are used simultaneously, it supports a 50 GS/s real-time sampling rate, the maximum high-speed acquisition memory of each channel is 200 M, and it offers rich analysis and display functions for broadband signals.
Connect the synchronization signal and the RF signal to channel 1 and channel 2 of the oscilloscope respectively. Channel 1 serves as the trigger channel, i.e. the oscilloscope triggers on the synchronization signal; channel 2 acquires the RF communication signal, configured with the settings described above so that the time test accuracy is highest. Because the jitter between the synchronization signal and the RF communication signal must be tested after a delay of 100 ms, the oscilloscope's Trigger Delay function is used: trigger on channel 1, and after a fixed 100 ms delay, acquire the channel 2 RF signal waveform to find the phase-reversal point; then, using cursors, statistically measure the jitter relative to channel 1 to obtain the test result, as shown in Figure 4. For long-duration, accurate delay testing, the long-term stability of the time-base system is very important. For this application, an external high-stability time base (a rubidium clock) is connected to the oscilloscope's standard 10 MHz external reference clock input to improve the time measurement accuracy. With the time-base stability of the test oscilloscope reaching no more than 1×10⁻¹⁰, the time accuracy obtained when testing 100 ms time information can reach the 11.2 ps level, thus meeting the requirements of high-precision time testing.
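As a rough consistency check (an illustration, not the authors' calculation), the time-base contribution implied by the quoted stability over the 100 ms delay can be computed directly:

```python
# Time-base error accumulated over a long delay is approximately
# delay x fractional stability of the reference.
delay = 100e-3                  # 100 ms delay between trigger and test point
fractional_stability = 1e-10    # quoted stability of the time-base system

timebase_error = delay * fractional_stability
print(f"{timebase_error * 1e12:.0f} ps")   # 10 ps
```

The resulting 10 ps time-base contribution is of the same order as the 11.2 ps overall accuracy reported in the text, which suggests the time-base system dominates the error budget for this long-delay measurement.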
Conclusion
Faced with various high-precision time testing needs, we should first choose appropriate testing tools and methods according to the actual application and index requirements. A high-performance oscilloscope is a key tool for high-precision time tests. Before performing a high-precision time test, one needs to understand the key oscilloscope specifications and test methods that affect the measurement accuracy, and the impact of different test parameters on the test results. High-precision time testing requires evaluating the overall performance of the oscilloscope, such as multi-channel bandwidth, multi-channel time deviation elimination, multi-channel sampling rate, the real-time interpolation acquisition mode, the trigger system, and the acquisition memory length required to sustain a high sampling rate, in order to complete high-precision time testing and analysis.
Multimodal, Digital Artefacts as Learning Tools in a University Subject-Specific English Language Course
This paper explores the practice of using multimodal, digital assessment tasks assigned to students of an English for Architects and Civil Engineers course at a university in Germany. Students were tasked with creating multimodal video compositions and interviewed about the processes behind composing their artefacts. The goal was to interrogate to what extent multimodal assessment tasks such as these can promote the communication of technical concepts, facilitate nuanced opportunities for language development and develop the students as social agents. The artefacts were examined through the lens of Systemic Functional Semiotics, drawing particularly upon the Genre and Multimodality framework (Bateman et al., 2017; Bateman & Schmidt-Borcherding, 2018) and a recent approach to analysing multimodal artefacts developed by Turney & Jones (2021).
Introduction
The overt emphasis on digital, multimodal communication in the applied disciplines of architecture and civil engineering is not always reflected in related subject-specific English language courses. This is especially the case in Germany, where multimodal literacy has been neglected in favour of a text-centric approach to language education (Wilke, 2012). There is a pressing need to develop curriculum and assessment tasks in Teaching English to Speakers of other Languages (TESOL) education that better reflect the demands placed on English language students, both during their disciplinary studies and within their future workplaces. Additionally, the recent changes to the Council of Europe's Common European Framework of Reference for Languages (CEFR) have asked educators to substantially reimagine "… the user/learner as a social agent" (Council of Europe, 2020) and measure new competencies around mediation and plurilingualism. Further, educational policy around the world has increasingly emphasised the importance of digital literacy for 21st century social and civic participation (Lamb et al., 2017), especially with the recent move towards online education necessitated by the spread of COVID-19. The revised CEFR frames language use in terms of "communicative modes", of which the four skills form only a part. These modes of communication are: reception (listening, reading); production (speaking, writing); interaction (a social skill); and a new mode, mediation. It is this last mode that is the most complex, involving as it does all the other modes, and it is also of most relevance to this paper. According to the CEFR, mediation is an "…interpretation or reformulation of a source text", very similar to "resemiotization", where meaning making shifts from context to context, from practice to practice, or from one stage of a practice to the next (Iedema, 2003, p. 41). Mediation, and its relationship to multimodal literacy, will be explored in more detail in 4.3 of the Results section.
It will be suggested that multimodal assessment tasks are an excellent way to expand upon this area of language acquisition and provide educators with a way of satisfying the descriptors included within it. The multimodal assessment task explored in this paper involved the production of a 3-5-minute video composition (VC) explaining a concept from architecture or civil engineering to a nonspecialist audience using a variety of modes. This task was worth 20% of the students' overall grade.
Theoretical background
Multimodality theory allows us to explore the ways in which the connections and combinations between modes can unsettle existing practices, forge new connections and animate new meanings. In this sense, multimodality is closely tied to semiotics, in particular, to social semiotics (Hodge & Kress, 1988). This approach draws on Halliday's work on systemic functional linguistics (SFL), which sees language as a social action constituting culture (Halliday, 1994). Multimodal artefacts are those which communicate through a variety of modes simultaneously (Jewitt, 2005), and the lens of multimodality can help us to understand the complexities at play within them. Examples of modes include, but are not limited to, writing, images, music and architecture. At their essence, modes are shaped by culture to produce different kinds of knowledges, enact different social relationships and perform identities. However, the boundaries between what constitutes a mode can be somewhat diffuse, for example, the mode of "image" can usefully be broken down into image types, such as a photograph or a charcoal sketch. Indeed, even within a photograph there are different communicative elements, such as the choice of black & white or colour, and further still, within the realm of colour, communicative choices such as brightness, saturation and so on can be made.
For clarity, and in accordance with the work of Bateman et al. (2017), this paper conceptualises a mode as limited by the affordances of its materiality. They write, "The reach of a semiotic mode will usually be a refinement and extension of what its material carrier affords" (Bateman et al., 2017, p. 119); that is, the definitional boundaries of a mode are determined by the opportunities and constraints of its material context -such as which senses are engaged, whether and how time is involved, and so on -and also in terms of the discourse community within which it functions. It can be reasonably assumed that the student-creators of the video compositions (VCs) reported on here belong to one shared discourse community, narrowing the scope of available modes. Once the mode has been identified, it is then useful to think of modes as having different 'modal resources' (Bezemer & Kress, 2008), that is, within a mode like 'image', modal resources such as framing, composition and colour affect the meanings that are created and connoted (Kress and van Leeuwen, 1996). Jewitt et al. (2001, p. 27, emphasis original) argue that social semiotics allows us to see "…the process of learning as a dynamic process of sign-making". Small wonder, then, that in the past twenty years in particular, multimodal assessment tasks have flourished in classrooms around the world. Scholars have found that multimodal assessment tasks can enhance student creativity and agency (McGinnis, 2007), provide opportunities for increased levels of student engagement (Pandya et al., 2018) and better prepare learners for the future (Hafner, 2015). Although the two concepts are distinct (Alvermann, 2017), multimodal literacy has some overlap with the theory of "multiliteracies", first proposed by the New London Group (1996) and further developed by Cope and Kalantzis (2009), who emphasised the sociohistorical context of literacy. 
While the creation of multimodal texts may not be a new phenomenon, the increasing ubiquity of digital technologies has changed social practices (Lotherington & Jenson, 2011) and further democratised knowledge, with an increasing emphasis on collaboration and participation (Knobel & Lankshear, 2014).
Literature review
Digital, multimodal projects have been popular at all levels of education for at least the past twenty years, whether at primary (e.g. Burn & Parker, 2001), secondary (e.g. Nash, 2018) or tertiary levels (e.g. Nielsen et al., 2016). They have also been explicitly included in curricula all around the world, such as in Australia (ACARA, 2015), the United States (Lapp & Fisher, 2011) and Europe (EUMade4All, 2019). As such, it is no surprise that TESOL educators have also embraced multimodal literacy at all levels, including in primary (e.g. Grapin, 2019) and secondary education (e.g. Huang, 2019). From a tertiary TESOL perspective, Oldakowski (2014) argues that multimodal assessments deepen comprehension and promote engagement, while Zacchi (2016) asserts that multimodal meaning making is an effective way of traversing cultural differences in an increasingly globalized world. Jiang and Luk (2016), writing from a Chinese context, found that such multimodal assessment tasks increase students' "motivational capacity", with interviews indicating an increased sense of curiosity and cooperation, among other qualities. In Taiwan, Lee (2014) claims that multimodal learning practices enhance students' motivation and self-confidence.
Nevertheless, digital, multimodal assessment tasks remain the exception rather than the rule within most TESOL curricula. Writing not only from the tertiary perspective but across their work with young children, adolescents, and adults, Early et al. (2015) suggest that multimodality is "on the margins… in the TESOL community" (p. 450-1) and that we may need to rethink our course design and rewrite our textbooks if we are to wholly incorporate multimodality into language education. Lotherington & Jenson (2011) further this claim, arguing that in all L2 teaching contexts, teachers have been reluctant to embrace or even acknowledge multimodal literacy, favouring instead the 'flat literacies' of paper-based assessment.
Case context
The cases within the study were drawn from two courses of "English for Architects and Civil Engineers A". This was a 4-credit point, 14-week course taught in the winter semester 2019-2020 at a university in Germany. It was designed for students with an English language level of intermediate to advanced (B2-C1 according to the CEFR scale), and the courses attracted both undergraduate and postgraduate students (n = 38), 32 of whom were majoring in civil engineering, with only six students majoring in architecture. This bias was likely a consequence of the civil engineering students needing a B2 level of English to graduate, while the architecture students had no such requirement. For 20% of their grade, students were tasked with producing a 3-5 minute video composition (VC) explaining a concept from architecture or civil engineering to a non-specialised audience using a variety of modes. The number of modes could vary, resulting in dynamic, standalone, two-dimensional artefacts (the material parameters of these artefacts will be explored in more detail in 4.1 of the Results section). Students were asked to upload the artefacts to the sharing platform Moodle before they were screened in class, where they were encouraged to lead discussions around the VCs, offering feedback and asking questions of their fellow student creators.
Students in both courses were also invited to participate in this research project, which had two phases of data collection: the artefacts themselves were collected, and after semester's end participants were interviewed about the processes of creating the artefacts and their perspectives as audience members. Of the 38 students in the two classes, 17 agreed to participate in the first phase of the study (artefact collection), with 7 of those also agreeing to the second phase (the interview). The semi-structured interviews lasted between 19:56 and 46:11 minutes and involved a pre-prepared interview protocol. There were seven general questions relating to the processes behind designing the artefacts as well as a number of questions developed in response to specific elements of the VCs. There were also four additional general questions relating to their experience as audience members. The data were then analysed inductively, using NVivo to develop themes and code the responses.
Case study: "Human comfort in relation to architectural spaces"
The case reported on in this paper comprises a video composition and the information provided in two post-task interviews, one with the student composer (Student C) and one with a fellow-student audience member (Student A). This artefact was selected as it was considered to exemplify the task, as well as including six modes with varying affordances: text (both written and spoken), image (hand-drawn sketches, cartoons and photographs), music, various typographical and layout elements, film and gesture. Student C is a C1 level (advanced) English language learner in an Architecture track program. Her artefact is 5:15 minutes and her interview lasted 36:02 mins. Student A is a B2 level (intermediate) language learner in a Civil Engineering program, and his interview lasted 46:11 minutes, of which 9:17 minutes were devoted to responses to Student C's artefact. The remainder of Student A's interview is not relevant to this paper and is not included in the data drawn upon here.
RQ1: To what extent can digital, multimodal assessment tasks promote the communication of technical concepts?
Measuring the impact of these video compositions on students' understanding of technical concepts is far beyond the scope of this qualitative study. However, one way of approaching this question is by examining the artefact through a lens developed by Bateman and Schmidt-Borcherding (2018) in their quantitative study of the effectiveness of educational videos in terms of learner uptake and engagement. After analysing the results of a knowledge test and an engagement survey, they suggest that a successful educational video should establish clear expectations and avoid sensory overload. The more successful videos "…prepare their audiences for their messages audiovisually and then use this preparation for presenting new information" (Bateman & Schmidt-Borcherding, 2018, p. 4), a process they divide into the two 'discourse units' of 'scaffolding' and 'development'. Units that scaffold information prepare audiences for what to expect later in the video, and units that develop information "elaborate or extend what has been introduced previously" (Bateman & Schmidt-Borcherding, 2018, p. 11).
In order to perform a fine-grained analysis of these very complex artefacts, they argue that the constraints and affordances of the media used must first be recognised in order to identify what is intentional and what is a result of a limitation of the medium (as this in turn restricts which semiotic modes can be employed). It is therefore important to identify the parameters of the physical situation in which VCs take place, what Bateman, et al. (2017, p. 96) term "…the 'canvas' that meaning is inscribed on…". According to their definitions, a video composition changes over time and is therefore "dynamic"; it is also viewed rather than participated in, making it "observational", and this is done through a computer screen, rendering it "two-dimensional". Further, the artefact itself cannot change, making it "immutable", and it can be rewatched for a limited time (until the course content is removed from Moodle), making it "partially transient". To analyse such a dynamic, 2D, immutable, observational, partially transient artefact, the artefact must be broken down into smaller, trackable units of meaning. The term "presentational micro-event" (PMEs) is useful here, as it facilitates looking at the artefacts as a collection of "…unit(s) of meaningful behaviour that may be distributed across several coordinated sensory channels…" (Bateman & Schmidt-Borcherding, 2018, p. 6).
Figure 1 A Visual Representation of Some Presentational Micro-events (PMEs) Sharing Meaning Intersemiotically
This is perhaps best illustrated with an example. In Figure 1, a section of the artefact is presented. In this image, five still frames of the video are presented above a text of the accompanying narration. The VC has been broken down into five PMEs, and it is possible to see here how meaning is shared across modes, or sensory channels. Three noteworthy moments occur: at 2:02 minutes, when a photograph of a highway appears on screen with the narration of the term "street space"; at 2:28 minutes, when the architectural term "dominance" is accompanied by a sketch of Zaha Hadid's Heydar Aliyev Centre in Azerbaijan; and again at 2:30 minutes, when a sketch of a terrace house co-occurs with the narrated term "adaptation". The example at 2:28 minutes should perhaps be unpacked: this is a good example of how an image can co-instantiate the technical field of discourse as it draws upon a shared repertoire of architectural knowledge. For the initiated, the Heydar Aliyev Centre is one of the most recognisable of Hadid's buildings. It carries with it some of the meaning of the term "dominance", because Hadid's designs are synonymous with the kind of architecture that ignores the context of its surroundings and has "…neither respect nor reference to its locality" (Bayley cited in Fairs, 2015). Student C follows this PME with a sketch of a terrace house at 2:30 minutes to co-instantiate the meaning of "adapting to the cityscape", shown in synchrony with the narrated term, "adaption". In this way she invites her audience to unpack the technical nominalisations of "dominance" and "adaption" by showing us a visual example of each.
These can also be considered "grammatical metaphors", as they package complex processes as single elements within the clause (Macnaught et al., 2013); that is, the activities of "dominating a cityscape" or "adapting to a cityscape" are bundled into technical terms of considerable complexity ('dominance', 'adaption'), especially for English language learners. As such, Student C has chosen to depict buildings that exemplify the processes of dominating or adapting to the cityscape in order to help the audience "make sense" of these technical terms by sharing the "work" of making meaning intersemiotically.
Student C helps her audience understand her technical concepts in other ways as well. She frequently uses parallelisms and redundancies across the narration and the images, or the "audio and visual sensory channels", to support her audience's understanding. As can be seen in Figure 1 at 2:06 minutes, the terms of the text of the narration in colour coincide with the moment the bullet point of written text appears on screen. Student C has taken the time to temporally coordinate her visual information in order to support her audience's conceptual understanding: when she says "balanced" in the narration, the bullet point "balanced relationship" appears; when she says "connection", the bullet point 'connecting with the surrounding' appears; and when she says "to enter into" the bullet point "dialogue with other buildings" appears. This attention to detail shows not only that she has harnessed the affordances of the medium effectively and appropriately, but that she has given considerable thought to its pedagogic potential and her relationship to the audience.
Further, returning to Bateman and Schmidt-Borcherding's discourse functions of "scaffolding" and "development", Student C has placed scaffolding segments at regular and appropriate intervals throughout the artefact. They can be found at four intervals: at 0:35 minutes, when she introduces a taxonomy of the elements to consider when designing interior space; again at 1:34 when she taxonomises exterior space in the same manner; at 2:06 (see Figure 1), when she relates the building to its environment; and again at 2:39 minutes, when she categorises the ways in which individuals experience the built environment. This strengthens her participation in the genre of a 'system explanation', which will be elaborated upon below. It also helps the audience anticipate the meanings to come and focus their attention upon the most salient concepts.
She also makes meaning intersemiotically by repeating a sketch of a face throughout the VC. This sketch occurs at the beginning and end of the artefact, and, crucially, re-occurs when new information is being scaffolded as a "visual reminder" for the audience to focus their attention. The image also remains in the background of the artefact as a sort of visual "anchor", albeit at varying degrees of magnification. You can see it clearly at 2:04 minutes, for example (Figure 1), but it is also present in the background at 4:27 (Figure 2), although the magnification transforms it into an indistinguishable blur of pixels. It works as an almost subliminal cohesive device, guiding the audience into unfamiliar conceptual territory while retaining a "familiar face". The fact that this is a hand-drawn sketch by the participant herself of a singer she enjoys speaks to the rich vein of interpersonal meaning that is present in the artefact but beyond the scope of this paper. It is worth noting, however, that eye-tracking research suggests that faces on screen, however small, attract and fixate the gaze (Wang & Antonenko, 2017, cited in Bateman & Schmidt-Borcherding, 2018), and it could be said that the sketch was included in an attempt to hold the audience's attention.

Bateman and Schmidt-Borcherding (2018) suggest the constraints and affordances of the media employed (the "canvas") should be identified in order to separate intentional from inadvertent meaning making. After this, the artefact can be broken down into smaller units of meaning, termed "presentational micro-events" (PMEs), to track how meaning is made. Such an approach illuminates how Student C consistently shares the semiotic labour across modes to help her audience understand the technical concepts she communicates.
For example, certain technical terms employed in the narration are elaborated visually in the form of sketches, and she frequently uses parallelisms and redundancies to support her audience's understanding. A successful educational video should also establish clear expectations and avoid sensory overload. According to Bateman and Schmidt-Borcherding (2018), this can be achieved through two processes they term "scaffolding" and "development". Scaffolding, that is, preparing the audience for what to expect, is observable in Student C's VC when she categorises elements of space and the built environment. She then 'develops' these taxonomies by elaborating upon what has already been introduced. All of these elements suggest that digital, multimodal assessment tasks such as this one can very effectively promote the communication of technical concepts.
RQ2: To what extent can digital, multimodal assessment tasks facilitate more nuanced opportunities for meaning making?
In order to understand meaning making, we need to situate the artefact in the context of the culture, and one of the best ways to do this is to identify which genre it is participating in. Genres can be seen as 'recurrent configurations of meaning' (Rose & Martin, 2012, p. 53) which make sense to the discourse communities that comprise the culture. As such, it is important that the students correctly identify and reproduce the genre required by the task. In order to better determine multimodal task fulfilment, Turney & Jones (2021) have developed a method of analysing tertiary-level, student-generated, educational videos. Drawing upon the Genre and Multimodality model (Bateman et al. 2017), they suggest first identifying the genre of the artefact before examining the media used and then exploring how the artefact unfolds intersemiotically, realizing configurations of register variables (patternings of field, tenor and mode). The medium and some of the intersemiotic meanings realised in the artefact have been briefly touched upon in the previous section, but it is worth unpacking the VC in terms of its genre. In the Martinian systemic functional perspective, genre is situated in the stratified context plane, departing from a strictly Hallidayan perspective which associates genre with mode (Martin, 2009). For an artefact to participate in a genre, it must have a characteristic structure and observable stages and phases contributing to the achievement of its social purpose (Derewianka & Jones, 2016). It is also realised in terms of three simultaneously occurring parameters: the field, tenor and mode of discourse. While field is concerned with the topic of the text and its representation of the world, tenor focuses on the relationships between the participants, and mode is concerned with the text type and its organisation (Martin, 2009). 
Identifying and understanding the boundaries of genre is essential to teaching and assessment, especially in a TESOL context, where the differences between genres may be more difficult for second language learners to identify and reproduce.
The task question explicitly asked students to "explain a concept from the fields of architecture or civil engineering…", and unsurprisingly, a great number of language features present in the narration of Student C's VC were typical of the "explanation" genre. Looking firstly at how she constructs her field of discourse, there are a marked number of generalised participants (e.g., "interior and exterior spaces") and nominalized abstract concepts (e.g., dominance, adaption) as well as causal relationships (e.g., "These experiences are triggered by the senses…") and a considerable amount of technical and specialized vocabulary (e.g., "form dimensioning"). It is harder to ascertain which of the explanation genres the artefact is participating in, as it does not unproblematically conform to any one. It can, however, be read as a system explanation, the explanation genre concerned with the relationships and interactions between different parts of a system. This genre typically begins by identifying the Phenomenon, describing the System, explaining the interaction between the Components of the System and concluding with a Generalisation (Derewianka & Jones, 2016, p. 205-6). These elements are broadly observable in Student C's VC, with "the interior", "the public space", "the architect" and "the human comfort" functioning as the Components. In this way, the interaction between the Components reflects the student's own sequential process through the narration: Interaction One is between the interior/exterior and human senses, or as she terms it, the "first step" of cognitive appraisal; Interaction Two occurs between the interior/exterior and human emotions ("the affective reaction"), and Interaction Three is between the interior/exterior and human aesthetics, or what she calls the third process of "aesthetic reaction". 
The interaction between the levels is less explicitly realised, but can be seen in her conclusion (4:38-5:13 mins), where she narrates, "the relationship between space, human and content is what gives the place its expression".
Figure 2 Intersemiotic Meaning Making with Film, Image and Text
Having identified the genre, the question remains as to what extent multimodal assessment tasks such as this one can facilitate more nuanced opportunities for meaning making. Perhaps the most effective method of exploring this is to identify where meaning is made with notable depth or subtlety. Figure 2 depicts one such section of the artefact. Much like in Figure 1, the still frames in Figure 2 were taken from the VC between 4:11 and 4:37 minutes, the text of the narration is printed underneath, and the section is divided into four PMEs. What makes this section different to the one shown in Figure 1 is the use of video, shot by a friend of the participant at her request. Two separate videos were recorded, one at 4:11-4:18 minutes and another at 4:21-4:26 minutes, both of which share the meaning of the narration in subtle and interesting ways. In the first video, a hand-held camera pans upwards, simulating the craning of a neck as Student C narrates "… a church or mosque where you feel delightful, peaceful". Neck craning both connotes and is a physical response to awe, and this gesture corroborates the meaning made in her narration. Similarly, a second film, also suggestive of a first-person experience with the use of a hand-held camera at head height, carries some of the meaning of the narration at 4:21 minutes ("… radiates a feeling of coldness, such as a basement"), with the jolting movement of someone walking into a dark, narrow space. Student C is sharing the work of making meaning across modes and is also elaborating upon the experiences she describes, inviting her audience to share in her experiences and affiliate around her values (Knight, 2010): feeling "peaceful" and "delightful" as they "gaze" upwards at a light-filled dome in a mosque, and feeling "cold" in the dark basement.
Exploring the interpersonal patternings instantiated in the tenor of this artefact is beyond the scope of this paper, but such fine-grained attention to detail also guides the audience to engage with and comprehend the key concepts explained in her artefact and emphasised in her title "Human comfort in architectural spaces". Although some additional information is provided in the narration that is not manifested visually, her most salient ideational meanings are almost unfailingly supported across more than one mode simultaneously. Whenever she presents key ideas, for example, she both narrates and visually displays the key points on screen (see Figure 2 at 4:27 minutes).
Similarly, the information presented visually supports the meanings being made in the audio text, even if this is not always performed flawlessly. When asked about her inclusion of what appears to be a cartoonish drawing of David Bowie at 4:27 minutes (see Figure 2), alongside drawings of a man in a suit and a woman with a flower crown, Student C commented in her interview data that "…they're all different, so I just wanted to include that or stress that". For the student creator, these animations expand upon the meanings made in her narration ("…the aesthetic reaction, which is taste dependent and different from each person. Does the room correspond to my style?"), and so what at first appears unnecessary or distracting, upon closer examination is yet another example of her semiotic decision making. Such "representation" produces a sign that is focused on Student C's interest, rather than "...the assumed interest of the recipient of the sign" (Kress, 2010, p. 71). Although the cartoons she chose may not be as transparent in their meanings as other elements in the artefact, they are nevertheless examples of motivated representation rather than meanings made inadvertently.
Summary of RQ2:
(To what extent do digital, multimodal assessment tasks facilitate more nuanced opportunities for meaning making?) For meaning to be made effectively, the artefact should be appropriately situated within the context of the culture. TESOL students in particular often struggle with identifying and participating in genres and reproducing the stages and phases that contribute to their structures. This multimodal assessment task facilitates opportunities for effective meaning making in a broad sense by necessitating that the artefact participates in one of the explanation genres, as Student C's does. The task also facilitates more nuanced opportunities for meaning making, as evidenced by the rich and subtle meanings made by, for example, certain camera movements in her filmed segments to variously communicate awe and claustrophobia.
RQ3: To what extent do digital, multimodal assessment tasks develop the students as social agents?
As mentioned in section 2.1 of the Background, the Council of Europe has redesigned their CEFR assessment criteria in "...a move away from the matrix of four skills…" and towards "…real-life language use" (Council of Europe, 2020, p. 33). They have not only added a new competence, plurilingualism, but have also expanded on the four skills to add two new "communicative modes" (see Figure 3), of which one, mediation, is of particular relevance here. Mediation emphasises "…the constant movement between the individual and social level in language learning, mainly through its vision of the user/learner as a social agent" (Council of Europe, 2020, p. 36). It also bears a striking resemblance to Iedema's "resemiotization", the process of transforming meaning across contexts and practices (Iedema, 2003). The theoretical background for this conceptual shift is based, however, on the work of Vygotsky (1978) and sociocultural theory, as well as the ecological model (van Lier, 2000) and complexity theories (Piccardo, 2015, all cited in Council of Europe, 2016).
Figure 3
The Relationship Between Reception, Production, Interaction and Mediation (Council of Europe, 2020, p. 34)

Multimodal artefacts such as the one explored here position the learner as a social agent in screenings and uploadings of their work. The skills demonstrated throughout this task also bear a striking resemblance to some of the updated descriptors included in the appendix of the latest companion volume of the CEFR. This could be of benefit to TESOL educators wishing to accredit their students with CEFR certification. Examples of how multimodal assessment tasks could meet CEFR descriptors are provided in Figure 4. Multimodal assessment tasks such as this one could also be adapted to groupwork, which would incorporate some of the other new mediation competences, such as "Managing interaction" and "Collaborating to construct meaning".
The Council of Europe has added eighteen new competences, with detailed descriptors provided for learners from beginner (A1) to advanced (C2) levels (Council of Europe, 2020). Of these eighteen, only the eight competences most relevant to multimodal literacy are shown here (Figure 4), along with one corresponding descriptor for both B2 and C1 levels. In order to demonstrate how closely aligned some of the new descriptors are with many of the learning outcomes of this assessment task, it is worth looking at one competence in closer detail, along with two of its related descriptors at both C1 and B2 levels. One example of demonstrating mediation competence at C1 level is that students should be able to "…explain (in Language B) the relevance of specific information found in a particular section of a long, complex text (in Language A)", while at B2 level, they should be able to "…interpret and describe reliably (in Language B) detailed information contained in complex diagrams, charts and other visually organised information (with text in Language A)" (see Figure 4). This dovetails beautifully with the task requirements of the assessment explored here and is richly demonstrated in the artefacts. When students rephrase, or "mediate", the academic, German language of their lectures and textbooks into "everyday" English, they also fulfil the descriptors for both spoken and written mediation competence. This is visible not only in the labels and text boxes of their VCs, but also in the scripts they compose to prepare for their narration: six of the seven students interviewed reported that they wrote a text to read aloud in advance of their audio narration.
Further, within the category, "mediation strategies", there are a further two sub-categories of particular relevance here: "strategies to explain a new concept and strategies to simplify a text" (see Figure 5). The descriptors here are remarkably relevant to this project: for example, students at C1 level are expected to be able to "...explain technical terminology and difficult concepts when communicating with non-experts about matters within their own field of specialisation". Similarly, at B2 level, learners should be able to "...explain technical topics within their field, using suitably non-technical language for a recipient who does not have specialist knowledge" (see Figure 5). The simplifying strategies are also highly pertinent: at C1 level, students are expected to "...make complex, challenging content more accessible by explaining difficult aspects more explicitly…", while at B2 level, students should "...make concepts on subjects in their fields of interest more accessible by giving concrete examples…" (see Figure 5).
Figure 4
Some of the Mediation Descriptors Satisfied by this Assessment Task (Council of Europe, 2020, p. 198-241; p. 119-122)

Summary of RQ3: (To what extent do digital, multimodal assessment tasks develop the students as social agents?) The Council of Europe has redesigned their CEFR assessment criteria to include plurilingualism and mediation, and the skills demonstrated throughout this multimodal assessment task are very closely aligned with a great number of the related descriptors. For example, students should be able to "… interpret and describe reliably (in Language B) detailed information contained in complex diagrams, charts and other visually organised information (with text in Language A)" as well as "...explain technical terminology and difficult concepts when communicating with non-experts about matters within their own field of specialisation" (Council of Europe, 2020, p. 119-122). In this sense, digital, multimodal assessment tasks such as this one contribute considerably to developing the students as social agents through "...the constant movement between the individual and social level in language learning" (Council of Europe, 2020, p. 36), as well as in the screenings and uploadings of their work.
Conclusion
Despite the ubiquity of multimodal communication, the skills involved are largely neglected in tertiary TESOL classrooms in Germany. The Council of Europe has attempted to address this with new competences, claiming that "…tasks in the language classroom should involve communicative language activities… that also occur in the real world" (Council of Europe, 2020, p. 32). However, a reluctance to move away from the 'four skills' persists. This case study is an attempt to demonstrate the usefulness of multimodal assessment tasks by examining the results of one in close detail. Returning again to the research questions posed by this paper, it has been demonstrated that this task can very effectively promote the communication of technical concepts (RQ1) and provide opportunities for nuanced meaning making in English (RQ2) while simultaneously developing the students as social agents (RQ3). More broadly, tasks such as these can prepare students for their disciplinary studies and the job markets of the future, while also helping them become CEFR accredited, especially in terms of their mediation skills.
"Education",
"Linguistics",
"Computer Science"
] |
LOCATION AND DISTANCE OF FARMERS TO AGRICULTURAL EXTENSION SERVICE: IMPLICATION FOR AGRICULTURAL DEVELOPMENT IN OYO
The study investigated the location and distance covered by farmers to agricultural extension services/units among farmers in Oyo State. Furthermore, it tried to look at the implication for farmers' agricultural production. A multistage random sampling procedure was used to select 320 farmers from four agricultural zones (Ibadan/Ibarapa, Ogbomoso, Oyo and Saki) of the Oyo State Agricultural Development Programme (OYSADEP). Farmers were selected from 8 local government areas and from 124 villages. Both descriptive and inferential statistics were used to analyse the results from the study. Findings revealed that agricultural extension agents were within the reach of farmers, as 79.1% of the farmers indicated that agricultural extension agents were their major source of agricultural information and also provided advisory services (77.8%). The mean distance covered by farmers to extension units was 17.8 km, but bad road networks (77.5%) and a low extension-farmer ratio (64.1%) were some of the major constraints identified by farmers as affecting extension service delivery. Regression analysis between distance of farmers to extension and other production incentives showed a positive relationship (p<0.00) with income alone. Therefore, it is recommended that the government improve road conditions and also invest funds to support the Agricultural Development Programme (ADP) system.
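The distance-income regression reported in the abstract can be sketched as a simple ordinary-least-squares fit. This is a minimal illustration only: the sample size matches the study (n = 320), but the distances, incomes, slope and noise level below are invented assumptions, not the study's data, so the fitted coefficients carry no empirical meaning.

```python
import random

# Hypothetical data: distance to the extension unit (km) and farm income.
# All numbers are invented for demonstration; they are NOT the study's data.
random.seed(0)
n = 320                                         # sample size matching the study
distance_km = [random.uniform(1, 40) for _ in range(n)]
income = [50_000 + 800 * d + random.gauss(0, 5_000) for d in distance_km]

# Ordinary least squares for income ~ b0 + b1 * distance:
#   b1 = cov(distance, income) / var(distance)
#   b0 = mean(income) - b1 * mean(distance)
mean_d = sum(distance_km) / n
mean_y = sum(income) / n
b1 = (sum((d - mean_d) * (y - mean_y) for d, y in zip(distance_km, income))
      / sum((d - mean_d) ** 2 for d in distance_km))
b0 = mean_y - b1 * mean_d
print(f"intercept = {b0:.1f}, slope = {b1:.1f}")  # slope estimate near the assumed 800
```

In the study itself, distance would be one of several production incentives entered into the regression; the bivariate case above only shows the mechanics of estimating the sign and size of the distance-income relationship.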
INTRODUCTION
Agricultural extension remains the most important source of information used by farmers. Extension is basically an educational function. Its job may vary considerably from country to country, but without exception it is expected to inform, advise and educate in a practical manner. Agricultural extension services are established for the purpose of changing the knowledge, skills, practices and attitudes of masses of rural people, school pupils, suppliers and buyers of agricultural products and many other institutions involved in activities affecting rural people (Fabusoro, Awotunde & Alarima, 2008; Oyegbami, 2014). At the Federal and State levels, governments continue to actively evolve policies and programmes aimed at facilitating the rapid development of the agricultural sector. One of these policies is to improve access to improved technologies for rapid increases in productivity, self-sufficiency in food and fibre production, enhanced income, and the improvement of the quality of life of the farmers.
The primary objective of both research and extension is to increase agricultural productivity and enhance farm income (Kwarteng & Towler, 1994). Attaining this objective requires communication between research and extension, such that technical production packages generated by research reach the farmers and are profitably used by them. In the past, lack of effective linkage between research and extension had been largely responsible for non-adoption of recommended practices (Oyebanji, 2000). Thus, the gaps in crop yield between those obtained by scientists on their research farms and those recorded by farmers in their fields had remained very wide (Omidiji, 1994). In recent times, many problems have been associated with the delivery of extension services, especially in Nigeria. Among these problems are finance, poor extension-farmer linkage and a large extension-farmer ratio (Omotayo, 2011). In order to close the gap, there is a need to maintain a clear line of communication between scientists, extensionists and farmers, so as to encourage a mutual exchange of information for the benefit of those involved in the generation, transfer and usage of technologies. Aliyu & Adedipe (1997) asserted that a flourishing agricultural extension system is a requirement for the socio-economic and political existence and rapid industrialisation of a country. For rapid agricultural development to take place, local input such as technology generated on a continuous basis through research and development activities, among others, must be ensured. The transfer of technology involves the transfer not only of information but also of skill, preferably in ways that encourage the development of indigenous skills. One way to transfer information is by person-to-person contact. This is done by bringing together the people who have the technology (researchers) with the people who wish to acquire it (farmers).
The most effective means of building human resource capability is through formal and informal training of the farmers and the extension workers. In this age of information technology, not a day goes by without hearing of new information technologies that can make decision-making and programming tasks easier and more efficient. Proximity to service centres is regarded as an important factor in access and usage. Therefore, this study utilises Geographic Information Systems (GIS) technology to identify the location of agricultural extension units, determine their proximity (distance) to farmers and examine the implications for agricultural production.
Specific objectives
1. Describe the production characteristics of respondents in the study area.
2. Identify and describe the services rendered by agricultural extension agents to farmers.
3. Spatially analyse the location/distribution of agricultural extension units and determine the distance covered by farmers to these units/services.
4. Determine the effect of the distance of farmers to ADPs on farmers' crop production.
5. Examine farmers' perceived constraints to the services of agricultural extension.
Research Hypothesis
Ho1: Distance of farmers to agricultural extension service/units has no significant relationship with yield of selected crops.
METHODOLOGY
The study was carried out in Oyo State. Oyo State is one of the 36 states of the Federal Republic of Nigeria. It is located in the south-west geo-political zone and has an equatorial climate with dry and wet seasons and relatively high humidity. The study population consists of farmers involved in maize, cassava and yam production.
Sampling was based on the four agricultural zones of the state ADP. These are the Ibadan/Ibarapa, Ogbomoso, Oyo, and Saki zones. A multistage random sampling technique was used. Two Local Government Areas (LGAs) were randomly selected from each zone to give a total of eight LGAs. The LGAs selected were Ido, Ibarapa Central, Surulere, Ogbomoso South, Oyo West, Iseyin, Atisbo and Orelope. Five percent (5%) of the villages in each LGA were randomly sampled for effective data management, giving a total of 124 villages. Three farmers from each village were also randomly sampled, giving a total of 372 respondents. Only 320 questionnaires were completed and used for analysis.
S. Afr. J. Agric. Ext. Oyegbami. Vol. 46, No. 2, 2018: 14-23. DOI: http://dx.doi.org/10.17159/2413-3221/2018 (License: CC BY 4.0)
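The multistage selection described above (zones → LGAs → 5% of villages → three farmers per village) can be sketched in code. The zone, LGA and village names below are synthetic placeholders, not the study's actual sampling frame:

```python
import random

def multistage_sample(zones, lgas_per_zone=2, village_frac=0.05,
                      farmers_per_village=3, seed=42):
    """Sketch of the paper's multistage random sampling:
    zones -> LGAs -> 5% of villages -> 3 farmers per village.
    `zones` maps zone name -> {LGA name -> list of village names}."""
    rng = random.Random(seed)
    sampled = []
    for zone, lgas in zones.items():
        # Stage 1: randomly pick LGAs within each zone.
        for lga in rng.sample(sorted(lgas), k=lgas_per_zone):
            villages = lgas[lga]
            # Stage 2: sample 5% of the villages in the LGA.
            k = max(1, round(village_frac * len(villages)))
            for village in rng.sample(villages, k=k):
                # Stage 3: sample farmers within the village
                # (here each village is assumed to list 3 candidates).
                farmers = [f"{village}-farmer{i}"
                           for i in range(1, farmers_per_village + 1)]
                sampled.extend(rng.sample(farmers, k=farmers_per_village))
    return sampled
```

With four zones, two LGAs per zone, 100 villages per LGA (so five sampled) and three farmers per village, this yields 4 × 2 × 5 × 3 = 120 respondents, mirroring the paper's 4-zone design at a smaller scale.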
Data was obtained through personal interviews conducted with the aid of an interview schedule covering the production characteristics of respondents, the services rendered by extension agents, the location of agricultural extension services/units and farmers' perceived constraints to the services of agricultural extension. Descriptive statistics such as frequency counts, percentages and means were used to analyse the collected data. Inferential statistics such as regression analysis were used to find the relationships between variables. The GIS analysis tool in ArcView 3.2a was used to develop multiple buffers around extension units/blocks, to determine their proximity (distance) to farmers and to determine the number of farmers within a 10 km buffered location. Table 1 shows the production characteristics of the respondents. The results indicated that 73.1% of the farmers cultivate less than five hectares of farm land. This implies that the majority of farmers in the study area are small-scale farmers, with the farm size being a disincentive to the use of mechanised implements and improved technologies. Also, the majority (87.5%, 90% and 61.9%) of the respondents cultivate maize, cassava and yam respectively, together with other crops like cowpea, okra, pepper and leafy vegetables.
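The 10 km buffer analysis performed in ArcView amounts to counting respondents whose GPS location falls within a given great-circle distance of an extension unit. A minimal stand-in using the haversine formula (the coordinates in the usage note are hypothetical, not the study's data):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS points (degrees)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def within_buffer(extension_unit, farmers, radius_km=10.0):
    """Count farmers whose GPS location lies inside a radius_km buffer
    around an extension unit; unit and farmers are (lat, lon) tuples."""
    return sum(1 for f in farmers
               if haversine_km(*extension_unit, *f) <= radius_km)
```

For example, with a unit at (7.5, 3.9) and farmers at (7.5, 3.9), (7.55, 3.9) and (8.5, 3.9), the first two fall inside a 10 km buffer (0 km and roughly 5.6 km away) while the third, about 111 km away, does not.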
Production Characteristics of Respondents
The results in Table 1 also indicate that about half (54.4%) of the respondents had an annual income above N200,000. This may be due to the fact that the majority of respondents practiced mixed farming and gathered income from different farm enterprises, which will definitely increase household income as well as their standard of living. Furthermore, 86.9% of farmers do not have access to credit. Lack of access to credit may result in lack of access to basic farm inputs, and this will make it virtually impossible for small-scale farmers to increase their yield and income, thereby reinforcing widespread poverty. The very few (13.1%) of the respondents who do have access to credit obtained it from sources such as friends, neighbours and cooperative groups, according to respondents' submissions. However, access to credit will remain a challenge to small-scale farming.
The prominent source of technical information for farmers was extension agents (79.1%); other sources of information included television (37.5%), friends and neighbours (32.2%) and the internet (13.1%). This implies that farmers get agricultural information from different sources. If these sources were adopted and used in the right way, they would increase farmers' knowledge about new technologies, improve their farming practices and also increase their production. More than three quarters (83.4%) of the respondents submitted that extension agents visit them once or twice a month. This indicates that there is timely delivery of information about new findings to respondents, since the extension agents are the closest to the farmers and are expected to inform and educate farmers about new technologies.
Services rendered by Agricultural Extension Agents to Farmers
The majority (96.3%) of the respondents interviewed agreed that extension/advisory services are the core service provided by extension agents (Figure 1). This is expected because extension is the major, and often the only, agency responsible for transferring new technologies and for training and educating the rural people. The mission of extension, according to Gregg, van Gastel, Asiedu, Donkoh & White (1999), is to help people, especially farmers, improve their lives through an informal educational process which puts scientific knowledge in a form which people can understand and use, and helps them focus on improving their lives, satisfying their needs and moving towards improvement.
More than half (57.5%) of the respondents interviewed ascertained that extension agents act as a guide to farmers as to how and where to procure inputs, to avoid the purchase of adulterated inputs like herbicides, pesticides and fertilizers. Input procurement and distribution is one of the services provided by extension, though the majority of the respondents submitted that most of the inputs used on their farms were usually procured from nearby markets and from previous storage (especially with seeds). Idachaba (2006) submitted that Agricultural Development Programmes (ADPs) are directly involved in the procurement and distribution of inputs (improved seeds, fertilizer, herbicides, pesticides etc.) which are supplied to them by private agro-chemical companies. The ADPs are also established to help farmers in the areas of crop production and protection. Figure 1 shows that the majority (96.9%) of the respondents interviewed submitted that training in the areas of agriculture is one of the services provided by extension agents in the study area. This is expected, since training helps farmers acquire knowledge, skill and the required attitude or behavioural change, which if applied to a specific farm situation results in better performance in terms of efficiency, effectiveness and quality output (Ajayi, 2008).
The focus of the Women in Agriculture (WIA) programme in all the ADPs was to encourage and stimulate rural women towards improving the standard of living of their families. The majority (86.5%) of the population sampled agreed that agricultural extension agents provide services that relate to women in agriculture. Banji & Okunade (2005) reported that the WIA component extension activities of the ADPs primarily focus on women's production activities within the confines of the wide diversity of economic, cultural, ethnic and religious differences within the country.
Location/distribution of Agricultural Development Programmes (ADP) and farmers.
Figure 2 is a map showing the location of the Agricultural Development Programme and that of the farmers in the study area. Buffering was done to calculate the distance covered by farmers in getting to these ADP locations. Table 3 shows the GPS-calculated distances. Almost half (46.3%) of the respondents (farmers) cover between 1-10 km to get to ADPs, while only 10% had to cover more than 40 km to get to the ADP office. This shows that the ADPs are within reach of their clientele (the farmers). This is expected because the field level extension agents (FLEA) in the Nigeria Agricultural Development Programme (ADP) are directly responsible for dissemination of extension messages to farmers within their catchment areas. Furthermore, they are the most important elements in the Training and Visit (T&V) management system of extension, as reported by Fabusoro et al. (2008). The village extension agents (VEAs) are the frontline workers responsible for the day-to-day extension delivery activities to the farmers. Farmers that cover less than 10 km (46.3%) are likely to get information faster regarding new technologies, thereby increasing their yield and income, compared to those that cover more than 40 km to get extension services, although this may not always be the case. Table 3 also shows farmers' perceived constraints to the services of agricultural extension agents. About three quarters (77.5%) of the farmers interviewed submitted that bad road networks are a major constraint to agricultural extension service delivery. This may negatively affect the movement of agricultural extension agents, and even that of the farmers.
Bad road networks have many negative impacts, among which are high transportation costs (for both the farmers and agricultural extension agents), increased market prices of farm produce and reduced adoption rates of agricultural technologies, among others.
A low extension-farmer ratio is also one of the major constraints to agricultural extension delivery, as reported by 64.1% of the respondents. According to Agbamu (2006) and Omotayo (2011), the disproportionate extension agent to farm family ratio in developing countries has led to a situation in which many farmers do not benefit from the services of agricultural extension. As agricultural extension agents strive to meet as many farm families as possible, the resultant effect will be poor extension of agricultural technologies, low popularisation of innovations and consequent low productivity, which may have a negative effect on the farmer and his family and the nation's economy in the long run.
Distance to agricultural extension and other production incentives have no influence on the yield of selected crops.
The results of the regression analysis show that 47% and 48% of the variation in the yield of maize and cassava, respectively, was determined by the explanatory variables, while the F-value of 47.88, which is significant at p<0.01, shows that the explanatory variables have a joint significant influence on the maize yield of farmers. Specifically, the results show that income has a significant positive influence on the yield of the farmers, while distance to extension service centres and to market has a non-significant negative influence on maize yield.
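As an illustration of the statistic being reported, here is a minimal single-predictor ordinary least squares fit that computes the R² ("variation explained") on synthetic data. The study itself ran a multiple regression, so this is only a sketch of the mechanics, and the numbers below are invented:

```python
def simple_ols(x, y):
    """Ordinary least squares of y on x (one predictor).
    Returns (intercept, slope, r_squared); R^2 is the 'variation
    explained' figure of the kind reported in the text (e.g. 47%)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return intercept, slope, 1 - ss_res / ss_tot
```

A multiple regression extends the same idea with a vector of predictors (income, distance to extension, distance to market), and the F-test then assesses their joint significance.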
The significant influence of farmers' income on farmers' yield implies that when farmers have higher income, they are likely to have enhanced capacity to purchase yield-enhancing inputs like fertilizer and pesticide and consequently achieve greater yield. This result implies that, regardless of distance to service centres, farmers' capability to purchase needed inputs through enhanced income is likely to have a more pronounced influence on yield. A similar pattern of influence was obtained for yam, as the income of the farmer was the only variable that had a significant influence on the yield of yam. These results have shown the significant influence of farmers' income on the potential for enhanced yield across the three crops. This invariably points to the fact that enhanced incomes provide an increased pool of funds that farmers utilise for increased and timely investment in quality inputs. The attendant effect of such opportunity is an increase in yield, which, in addition to favourable market dynamics, could lead to a greater increase in income, which can in turn lead to increased adoption of technologies and an attendant increase in the standard of living of the farmers.
CONCLUSION AND RECOMMENDATIONS
Findings from the study confirm that agricultural extension agents are the major source of agricultural information to farmers because they are primarily concerned with transferring new technologies to farmers through training and education. Furthermore, about half of the respondents cover less than 10 km to get extension services. This shows that farmers are still within reach of extension services. However, bad road networks and a low extension-farmer ratio were the major constraints identified by farmers as affecting extension service delivery. Therefore, it is recommended that government improve physical and social infrastructure like roads, electricity and water supply to boost agricultural production. In addition, the government should invest funds into supporting the ADP system, especially by employing graduates into the extension outfit of the ADPs. This will go a long way towards achieving food sufficiency and food security in the near future.
The Extended Direct Algebraic Method for Extracting Analytical Solitons Solutions to the Cubic Nonlinear Schrödinger Equation Involving Beta Derivatives in Space and Time
In the fields of nonlinear optics, quantum mechanics, condensed matter physics, wave propagation, and other nonlinear instability phenomena, the nonlinear Schrödinger equation has significant applications. In this study, the soliton solutions of the space-time fractional cubic nonlinear Schrödinger equation with Kerr law nonlinearity are investigated using an extended direct algebraic method. The solutions are found in the form of hyperbolic, trigonometric, and rational functions. Among the established solutions, some exhibit wide spectral and typical characteristics, while others are standard. Various types of well-known solitons, including kink-shape, periodic, V-shape, and singular kink-shape solitons, have been extracted here. To gain insight into the internal formation of these phenomena, the obtained solutions have been depicted in two- and three-dimensional graphs with different parameter values. The obtained solitons can be employed to explain many complicated phenomena associated with this model.
Introduction
Diverse real-world phenomena have been explained using nonlinear models, leading to the revelation of important information. Fractional nonlinear evolution equations represent an advanced class of differential equations that yield improved results. These equations help to illustrate intricate physical phenomena, attracting many researchers to work in this field due to their significant applications. Within the realm of fractional nonlinear evolution equations, the nonlinear Schrödinger equation plays a crucial role and finds applications in various areas such as quantum mechanics, optical fiber, plasma physics, fluid mechanics, biology, the dispersion of chemically reactive materials, electricity, shallow water wave phenomena, heat flow, finance, and fractal dynamics.
The relationship between the nonlinearity and dispersion components of medium solitons is uncovered, and as they travel through the medium, their undulation structure remains unaltered. The soliton solutions derived from FNLEEs have practical and commercial applications in various fields such as optical fiber technology, telecommunications, signal processing, image processing, system identification, water purification, plasma physics, medical device sterilization, chemistry, and other related domains [1,2]. Various dynamic approaches have been introduced and implemented in the literature to solve nonlinear fractional differential equations (NFDEs) and obtain analytical travelling wave solutions, for example, the exp-function method [3], the modified exp-function method [4], the inverse scattering transformation method [5,6], the Bäcklund transformation method [7], the homogeneous balance method [8,9], the Jacobi elliptic function method [10], the unified algebraic method [11], the sine-cosine method [12,13], the tanh-coth method [14,15], the improved modified extended tanh-function method [16,17], the Lie symmetry analysis method [18], the extended generalized (G′/G)-expansion method [19], the modified simple equation method [20], the generalized Kudryashov method [21,22], the sine-Gordon expansion method [23], the Riccati-Bernoulli equation method [24,25], the new extended direct algebraic method [26,27], and the new auxiliary equation method [28].
Fractional derivatives have been widely applied in diverse scientific and engineering fields, including physics, mechanics, signal processing, control systems, biomedical engineering, finance and economics, electromagnetism, and fluid mechanics. For instance, the mathematical modelling of viscoelastic food ingredients experiencing stress and relaxation can be accomplished using fractional calculus [29]. These applications showcase the adaptability and practical value of fractional derivatives across a range of scientific and engineering disciplines, enabling improved modelling capabilities and deeper comprehension of intricate phenomena.
In this article, we consider the space and time fractional cubic NLSE with the Kerr law nonlinearity with space and time fraction in the following form [30].
where U(x, t) is a complex-valued wave profile which is related to the spatial coordinate x and the temporal variable t. In addition, r, s, and z are real coefficients with fractional parameters 0 < α ≤ 1 and 0 < β ≤ 1. The cubic nonlinear Schrödinger equation involving beta derivatives in space and time is used to model certain nonlinear optical phenomena. For example, it can describe the propagation of ultrashort optical pulses in nonlinear media with anomalous dispersion.
Here, by utilizing a complex travelling wave transformation of U(x, t), where i = √−1, and assuming α = β for the beta fractional derivatives, Equation (1) is transformed into the following form. The model has been investigated in the previous literature using various methods, including Nucci's reduction method and the simplest equation method [31], the fractional Riccati expansion method [32], the fractional mapping expansion method [33], the (G′/G)-expansion method [34], and the Adomian decomposition method [35].
To the best of our current knowledge, the extended direct algebraic method has not yet been applied to the model represented by Equation (1) to evaluate soliton solutions. The application of this method extends to various fields of nonlinear science, including mathematical physics, quantum physics, and engineering. Here, the extended direct algebraic method is modified and implemented on the nonlinear space-time fractional model in Equation (1). By doing so, advanced, fresh, and wide-ranging soliton solutions are obtained. In this study, our primary focus is to establish advanced and widely applicable soliton solutions for the space-time fractional cubic nonlinear Schrödinger equation using the recommended method. The obtained soliton solutions exhibit wave-like behavior and are expressed in trigonometric, hyperbolic, and exponential forms. This research will also provide valuable insight into the internal formation of the travelling wave phenomena by depicting the obtained solutions in two- and three-dimensional graphs with different parameter values. Furthermore, the soliton solutions derived from this study will also contribute to the interpretation of complex phenomena associated with this particular space-time fractional model.
This article organizes its contents as follows: Section 2 presents the properties of the beta derivative. The algorithm of the proposed method is explained in Section 3. Section 4 provides a mathematical analysis. In Section 5, graphical representations and discussion are presented. The comparison scheme is outlined in Section 6, and finally, Section 7 concludes the article.
Definition of Beta Derivative and Its Properties
Several definitions of fractional derivatives, such as the Riemann-Liouville, the modified Riemann-Liouville, the Caputo, the Caputo-Fabrizio, the conformable fractional derivative, and the Atangana-Baleanu derivatives, have been developed recently by many researchers [36,37]. Most fractional derivatives do not agree with the well-known properties of classical calculus, such as the chain rule, the Leibnitz rule, and the property that the derivative of a constant is zero. Atangana et al. [38] introduced a new, crucial and progressive definition of fractional derivative called the beta derivative, which follows the fundamental properties of classical calculus.
Definition 1: Let α ∈ (0, 1] and let the function h = h(x) : [a, ∞) → R; then the beta derivative of order α with respect to x is defined as follows [39]:

D_x^α h(x) = lim_{ε→0} [h(x + ε(x + 1/Γ(α))^(1−α)) − h(x)] / ε,

where Γ is the gamma function, and D_x^α h(x) = (d/dx)h(x) for α = 1. Properties: If h(x) and u(x) are α-order differentiable for all x > 0, and d_1, d_2 are real constants, then the beta derivative encompasses the following properties [39]: it is linear, D_x^α(d_1 h(x) + d_2 u(x)) = d_1 D_x^α h(x) + d_2 D_x^α u(x); it obeys the product and quotient rules; and it satisfies the chain-rule reduction D_x^α h(x) = (x + 1/Γ(α))^(1−α) dh(x)/dx. By using these properties of the beta derivative, fractional differential equations simply turn into ordinary differential equations. As of now, the beta derivative has not been found to have any limitations, and it fulfills all the properties associated with integer-order derivatives. Furthermore, it exhibits the property of yielding a derivative of zero for constant functions [40-42]. The beta derivative is a non-local derivative that exhibits its distinctiveness when applied to functions that embody the entire characteristic of the function itself. It serves as a generalized version of the Caputo and Riemann-Liouville derivatives. In comparison to other derivatives, the beta derivative offers greater flexibility and can accurately model complex systems. Its applications are widespread, ranging from electrochemical systems and complex geometries to modelling electromagnetic waves in dielectric media and cancer treatment [43-45]. Numerous scientific studies have reported the utilization of the beta derivative in diverse fields, further enhancing its appeal and prompting its application to real-world problems [40,41,46,47].
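The chain-rule property of the beta derivative can be checked numerically: a small-ε evaluation of the limit definition should agree with the closed form (x + 1/Γ(α))^(1−α) h′(x), and the derivative of a constant should vanish. A small sketch (the step size eps is an illustrative choice):

```python
import math

def beta_derivative(h, x, alpha, eps=1e-6):
    """Beta derivative of h at x via the limit definition:
    D^a h(x) = lim_{e->0} [h(x + e*(x + 1/Gamma(a))**(1-a)) - h(x)] / e."""
    step = eps * (x + 1.0 / math.gamma(alpha)) ** (1.0 - alpha)
    return (h(x + step) - h(x)) / eps

def beta_derivative_closed(hprime, x, alpha):
    """Closed form implied by the chain rule:
    D^a h(x) = (x + 1/Gamma(a))**(1-a) * h'(x)."""
    return (x + 1.0 / math.gamma(alpha)) ** (1.0 - alpha) * hprime(x)
```

For h = sin, the two expressions agree to finite-difference accuracy, and for a constant function the beta derivative is zero, as stated above.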
Algorithm of the Extended Direct Algebraic Method
In this section, we present the extended direct algebraic method as an effective technique. This method enables us to obtain fresh and wide-ranging analytical solutions for model (1). By employing this technique, fractional partial differential equations can be transformed into ordinary differential equations, simplifying the calculation process. The algorithm is narrated below.
Step 1: Let the general form of the fractional-order nonlinear evolution equation be as given, where F is a polynomial of u(x, t) and its derivatives, D_t^α is the fractional derivative of α-order, u(x, t) is the travelling wave variable, and subscripts denote partial derivatives.
Let us hypothesize a travelling wave solution in which u(ξ) is a function of ξ. In Equation (5), v and κ are, respectively, the velocity and soliton frequency, ω is the wave number, and θ is the soliton phase component.
Inserting the above transformation into Equation (3), we obtain the following ordinary differential equation of integer order, where H is the polynomial of the function u(ξ), and the prime denotes the derivative with respect to ξ.
According to the new extended algebraic method, the solution of Equation (6) can be expressed in the form of Equation (7), where c_j (0 ≤ j ≤ N) are constant coefficients to be evaluated later and H(ξ) satisfies the auxiliary ordinary differential equation (8), in which the prime denotes the derivative with respect to ξ, and µ, γ, λ are constant coefficients. The general (adequate) solutions of Equation (8) are given in [27]. By substituting Equations (7) and (8) into Equation (6), we obtain a polynomial in H(ξ). By extracting the coefficients of the different powers of H^j(ξ), where j = 0, 1, 2, . . ., and setting them equal to zero, we obtain a system of algebraic equations in the various parameters c_j (j = 0, 1, 2, . . .), µ, γ, λ, ω, and κ. By solving these algebraic equations, we can determine the values of the unknown parameters. Substituting these values of the parameters along with Equation (8) into Equation (7), as the broad-spectrum solutions of Equation (8) are known, we obtain new and more general solutions.
For several values of µ, γ, λ and their correlation, Equation (8) gives disparate general solutions of NLSEs.
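The auxiliary equation (8) is not written out explicitly here; assuming it takes the common Riccati form H′(ξ) = µ + γH + λH² used in extended direct algebraic methods, the case γ = 0, µ > 0, λ < 0 admits the kink-generating solution H(ξ) = √(−µ/λ) tanh(√(−µλ) ξ), which can be verified numerically:

```python
import math

def H(xi, mu, lam):
    """Hypothesized tanh solution of H' = mu + lam*H**2 (gamma = 0),
    valid for mu > 0 and lam < 0 -- the kink-type branch."""
    A = math.sqrt(-mu / lam)   # amplitude
    B = math.sqrt(-mu * lam)   # inverse width
    return A * math.tanh(B * xi)

def residual(xi, mu, lam, h=1e-6):
    """|H'(xi) - (mu + lam*H(xi)**2)| via a central difference;
    should be ~0 if H really solves the assumed Riccati equation."""
    dH = (H(xi + h, mu, lam) - H(xi - h, mu, lam)) / (2 * h)
    return abs(dH - (mu + lam * H(xi, mu, lam) ** 2))
```

Substituting H back shows why tanh-type H produce the kink-shape solitons reported later: u(ξ) = c_0 + c_1 H(ξ) inherits the tanh profile.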
Mathematical Analysis
In this section, we study the space and time fractional cubic NLSE to find more general and standard exact wave solutions using the extended direct algebraic method, and we discuss the mathematical analysis of the wave solutions. The fractional transformation in Equation (5) converts Equation (2) into the following ordinary differential equation, comprising both real and imaginary parts.
Now, balancing the highest-order derivative against the highest power of the nonlinear term in Equation (9), we obtain N = 1. Therefore, the solution of Equation (9) is of the form given by Equation (10). By substituting the results from (8) into Equation (9) along with Equation (10), we obtain a polynomial equation in H(ξ), where 0 ≤ j ≤ N. Setting the coefficients of like powers of H^j(ξ) to zero, we obtain a set of algebraic equations in c_0, c_1, µ, γ, λ. Solving this set of algebraic equations with the software Mathematica, we obtain the values of the parameters, where µ, γ, λ and r, s, k, z are free parameters. Now, embedding the values of (12) into (11) together with the hypothesis of the auxiliary equation for different conditions, we establish the travelling wave solutions of (1), which are given below.
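The balance number N = 1 quoted above follows from the standard homogeneous balance rule: with u ~ H^N and H′ quadratic in H, an m-th derivative contributes degree N + m and a p-th power term degree pN, so N = m/(p − 1). A one-line sketch:

```python
from fractions import Fraction

def balance_number(deriv_order, power):
    """Homogeneous balance: equate the degree of the highest-order
    derivative term, N + deriv_order, with that of the nonlinear term,
    power * N, and solve for N = deriv_order / (power - 1)."""
    return Fraction(deriv_order, power - 1)
```

For the cubic NLSE, balancing u″ against |u|²u gives balance_number(2, 3) = 1, which is why the expansion (10) stops at c_1 H(ξ).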
Case 9: When µ = 0 and γ = 0, we obtain solutions subject to certain conditions on the parameters z and s. The soliton solutions obtained in this study are diverse and novel, originating from the general solutions.
Physical Significance and Explanations
In this section, the attained soliton solutions of the space and time fractional cubic NLSE are presented in Figures 1-5, and the nature of these solitons is discussed for several values of the unknown parameters using the software Mathematica.
The accomplished solutions comprise two parts: the real part and the imaginary part. The solutions provide various types of solitons, such as kink-shape solitons, singular kink-shape solitons, V-shape solitons, periodic solitons, flat kink-shape solitons, and anti-singular kink-shape solitons. The wave velocity and wave number have significant effects on the travelling wave profile.
The solution u 6 exhibits a kink-shaped soliton for the modulus part with velocity v = −1.443, as depicted in Figure 1.
The solution u 27 exhibits a singular bell-shape soliton for the modulus part with velocity v = 4.1 for the values α = 0.9, k = 3, as depicted in Figure 4a; when the value of α is decreased to 0.55 and 0.25 with k = 1.68, the wave velocity becomes v = 2.79, and these graphical representations are provided in Figure 4b,c. The 3D portraits are shown within the interval 0 ≤ x ≤ 5 and 0 ≤ t ≤ 5, and the 2D portraits are shown at t = 1 with the arbitrary parameter values s = 2, γ = 0, r = 1.8, µ = 1.
Taking other various values of the free parameters, this model provides the same types of soliton solutions; repeated solitons have been neglected here, and the soliton profiles depend on the fractional order, wave velocity, wave number, and other wave variables.
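The dependence of a kink profile on the fractional order, wave velocity, and wave number can be visualised with a short script. The tanh form below is only an illustrative kink shape in the conformable travelling-wave variable ξ = x^α/α − v·t^α/α, not one of the paper's exact solutions; the amplitude A and steepness k are hypothetical parameters.

```python
import numpy as np

def kink_profile(x, t, A=1.0, k=1.0, v=-1.443, alpha=0.9):
    """Illustrative kink-shaped profile in the conformable fractional
    travelling-wave variable xi = x**alpha/alpha - v*t**alpha/alpha.
    A (amplitude) and k (steepness) are hypothetical parameters."""
    xi = x**alpha / alpha - v * t**alpha / alpha
    return A * np.tanh(k * xi)

x = np.linspace(0.01, 5.0, 200)   # 0 <= x <= 5, as in the 3D portraits
u = kink_profile(x, t=1.0)        # 2D slice at t = 1
```

Sweeping `alpha` or `v` in this sketch reproduces qualitatively how the front position and steepness of the kink change with the fractional order and wave velocity.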
Comparison
We compare the results for the space-time fractional cubic nonlinear Schrödinger equation obtained through the extended algebraic equation method with the solutions of Abdelwahed et al. [30]. It is noticed that a few of the attained results are analogous to results established earlier by several approaches, and some of them are fresh.
Solution Using the Simplest Equation Method
The Attained Solutions q(x, t) = −
In the table above, we compare the solutions obtained in this paper with the previous study. Hashemi et al. [30] gave more than two solutions, which are not homologous with the attained results.
Conclusions
The extended direct algebraic method has been used to derive novel exact analytical soliton solutions of the cubic nonlinear Schrödinger equation with fractional space-time terms. In this study, Kerr-law nonlinearities are utilized, which arise from the nonharmonic motion of bound electrons when light pulses propagate in optical fibers. All solutions are expressed in terms of trigonometric and hyperbolic functions. The computations and graphical representations of the solutions are carried out using the Wolfram Mathematica software. The graphical representations of these solutions help us to visualize and understand the internal features of the system more accurately. Among these solutions, some are new and have not been reported previously in the literature.
Figure 1. Three-dimensional and two-dimensional plots of the kink-shape soliton solution of u 6.
Fractal Fract. 2023, 7, x FOR PEER REVIEW 9 of 14
Figure 2. The 3D and 2D plots of the periodic travelling wave solution u 6.
Figure 3. The 3D and 2D plots of the V-shape soliton solution u 26.
Figure 4. The 3D and 2D plots of the singular bell-shape soliton solution corresponding to u 27.
Figure 5. The 3D and 2D plots of the soliton solution corresponding to u 35. | 3,725.4 | 2023-05-25T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Control Oriented Model of a Variable Geometry Turbocharger in an Engine with Two EGR Loops
Abstract — Control Oriented Model of a Variable Geometry Turbocharger in an Engine with Two EGR Loops — In order to make modern Diesel engines cleaner and more fuel efficient, their air system architecture becomes more and more complex. The control strategies of these systems must take account of the multiple component interactions, with minimal calibration effort required. In this context, model-based techniques are very attractive. In this paper, we propose a control-oriented model of a variable geometry turbocharger in an architecture with two Exhaust Gas Recirculation (EGR) loops: High Pressure (HP) and Low Pressure (LP). This model is implemented in a basic control strategy and evaluated experimentally during tests with LP or HP EGR.
The results show that the choice of EGR circuit has a high influence on the turbocharger actuator position, but that this effect is well taken into account in the proposed model.
NOMENCLATURE
Nomenclature. Comp and turb stand for compressor and turbine, respectively.
Motivation
In the automotive industry, the necessary reduction of pollutant emissions drives a drastic evolution of engines, in particular Diesel engines. Exhaust Gas Recirculation (EGR) and turbocharging have been the major evolutions of Diesel engines in the recent past. They make it possible to increase the quantity of burned gas in the intake manifold, which helps reduce NOx production during combustion. Two types of EGR systems have been investigated: High Pressure (HP) and Low Pressure (LP) systems, named after their position on the air system with respect to the turbocharger. When both systems are combined, the operating conditions of the turbocharger are highly dependent on the use of either of these two systems. This must be taken into account in the turbocharger control strategy, which determines the Variable Geometry Turbocharger (VGT) position corresponding to a required pressure at the outlet of the compressor. When considering the global system, the problem is multivariable and highly nonlinear, with many interactions between the different subsystems. For this kind of problem, an interesting solution consists in model-based techniques, which make it possible to decouple the subsystems and therefore simplify the problem. The global control structure used in our case is described in [1], which also details the strategies for the control of each EGR loop and for the estimation of the intake manifold composition. The present paper aims at complementing this work. It describes the design of a control-oriented model of the turbocharger adapted to an engine architecture with two EGR loops. Experimental results are provided to validate the different assumptions. On this topic, not many publications can be found. In [2], the same engine setup is considered, but no details are given about the turbocharger control. The approach proposed here was used before with simpler air system configurations. In [3], it was applied to gasoline engines fitted with fixed geometry turbines. The application to variable geometry turbines was first tried in [4], where it was validated mostly outside the EGR operating area. The purpose of this paper is to extend this approach, and to show that the proposed model can easily be adapted to a system with two EGR circuits.
System Description
The engine considered in this paper is a four-cylinder turbocharged Diesel engine shown in Figure 1. Without Exhaust Gas Recirculation, fresh air is aspirated into the engine through the compressor, which increases the air density. The air-fuel mixture is burnt in the cylinders, where the combustion results in the production of mechanical torque. At the exhaust of the system, the turbine converts part of the gas enthalpy into mechanical power on the turbocharger shaft, whose dynamics are the consequence of the balance between the compressor and turbine powers. Two EGR circuits can be used:
- High Pressure EGR: gases from the exhaust manifold are diverted to the intake manifold. The EGR dynamics are fast, at the price of acting as a discharge for the turbocharger, i.e. less energy is provided to the turbocharger;
- Low Pressure EGR: gases are taken downstream of the particulate filter and diverted to the upstream of the compressor. In contrast to the high pressure EGR, these dynamics are much slower, but the turbocharger receives all the energy from the exhaust gases.
Control Input
The turbocharger is equipped with guide vanes whose angles are adjusted via an actuator, denoted u vgt. This affects both the angle of the gas flow on the turbine blades and the turbine effective flow area. By these means, it is possible to maintain a high boost pressure even at low engine speed, and to improve the dynamic performance of the system.
Measurements
The sensors available on the system are the following:
- engine speed N e,
- intake manifold pressure and temperature, P dc and T dc,
- compressor upstream pressure and temperature, P uc and T uc,
- compressor air flow D c.
The other variables will be estimated from the measured variables.
Figure 1. LTC-Diesel engine including a variable geometry turbocharger, a cooled High-Pressure EGR loop and a cooled Low-Pressure EGR loop. Sensors are the engine speed, the intake manifold pressure and temperature (P dc and T dc), the compressor upstream pressure and temperature (P uc and T uc), and the compressor air flow (D c).
Model Objective
First of all, the objective of the model development described here is to provide a basis for the design of a control strategy. The difficulty of this task consists in keeping the right level of complexity. Two main criteria will be considered:
- The model has to represent only the main dynamics governing the evolution of the system, in order to minimize the number of states in the controller. The fast dynamics are neglected;
- In an engine, the evolution of a turbocharger depends on the conditions at its boundaries: pressures and temperatures upstream and downstream of the compressor and turbine, and the gas mass flow through these components. This is particularly important for the architecture with two EGR loops considered in this paper. These influences must be represented as far as possible, using measurements when available or estimations otherwise.
As a consequence, some assumptions will have to be made. They will be justified by a comparison between experimental test data and the results of the model.
TURBOCHARGED ENGINE MODEL
Most of the equations governing the behavior of the turbocharger can be found in other publications (see for example [5][6][7]). The novelty of the approach presented here lies in the simplifications proposed further on and the control strategy designed from the simplified model.
Turbocharger Modeling
The turbocharger is composed of a turbine driven by the exhaust gas and connected via a common shaft to the compressor, which compresses the air in the intake. The rotational speed of the turbocharger shaft N t can be derived from a balance between the turbine power P t and the compressor power P c:
\frac{d}{dt}\left(\frac{1}{2} J_t N_t^2\right) = P_t - P_c
where J t is the inertia of the turbocharger.
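The power balance above can be integrated numerically as a sketch. The powers, inertia, time step, and initial speed below are hypothetical values chosen only to illustrate the spin-up behaviour.

```python
def shaft_speed_step(N_t, P_t, P_c, J_t=3e-5, dt=1e-4):
    """One explicit-Euler step of d/dt( (1/2)*J_t*N_t**2 ) = P_t - P_c,
    i.e. dN_t/dt = (P_t - P_c) / (J_t * N_t)."""
    return N_t + dt * (P_t - P_c) / (J_t * N_t)

# toy transient: turbine power exceeds compressor power, so the shaft spins up
N = 5000.0                       # rad/s, hypothetical initial speed
for _ in range(1000):            # 0.1 s of simulated time
    N = shaft_speed_step(N, P_t=2000.0, P_c=1500.0)
```

The shaft reaches equilibrium only when the turbine and compressor powers balance, which is exactly the quasi-steady condition exploited later in the model reduction.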
Compressor
In order to derive an equation for the compressor power, the first law of thermodynamics is applied. It states that (neglecting heat losses) the compressor power is related to the mass flow through the compressor D c and the total change of enthalpy by P c = D c c p (T dc − T uc). The compressor efficiency is introduced as the ratio between isentropic and actual compression powers. The compressor power then reads
P_c = \frac{D_c c_p T_{uc}}{\eta_c} \left( \Pi_c^{(\gamma-1)/\gamma} - 1 \right)
where η c is the compressor efficiency, Π c = P dc / P uc the compressor pressure ratio, and γ the specific heat ratio. The compressor speed, flow, pressure ratio and efficiency are linked. Different representations can be found in the literature, among which a commonly used one consists in mapping the pressure ratio and efficiency against flow and speed. These maps are extrapolated from data measured during characterization tests. Several extrapolation methods have been proposed (for example [8]). In order to take the variations of the upstream compressor conditions into account, the flow and speed are corrected with respect to the upstream pressure P uc and temperature T uc. The compressor pressure ratio corresponding to the system studied here is represented in Figure 2.
Turbine
Similarly, the turbine power is related to the mass flow through the turbine D t and the total change of enthalpy. This results in
P_t = D_t c_p T_{ut} \eta_t \left( 1 - \Pi_t^{-(\gamma-1)/\gamma} \right)
where η t is the turbine efficiency, T dt and P dt are the temperature and pressure after the turbine, P ut the exhaust manifold pressure, Π t = P ut / P dt is the turbine pressure ratio and γ the specific heat ratio. In this case, the corrected turbine flow D t,cor and the isentropic efficiency η t are mapped versus the pressure ratio across the turbine, the corrected turbocharger shaft speed N t,cor, and the VGT actuator position u vgt. As for the compressor maps, different methods have been proposed to obtain these maps from test data (see [5]). This can be rearranged in the following form, for more commodity:
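The turbine-side counterpart can be sketched the same way; again c_p, γ and the operating point are hypothetical values for illustration.

```python
def turbine_power(D_t, T_ut, Pi_t, eta_t, c_p=1005.0, gamma=1.4):
    """P_t = D_t * c_p * T_ut * eta_t * (1 - Pi_t**(-(gamma-1)/gamma)),
    with Pi_t = P_ut / P_dt the turbine pressure ratio."""
    return D_t * c_p * T_ut * eta_t * (1.0 - Pi_t ** (-(gamma - 1.0) / gamma))

P_t = turbine_power(D_t=0.05, T_ut=900.0, Pi_t=2.0, eta_t=0.65)
```

Comparing this with the compressor helper makes the power balance on the shaft directly computable for a given operating point.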
Engine Modeling
Conventionally (see [9] for example), we assume that the aspirated flow D asp can be computed as
D_{asp} = \eta_v \frac{P_{dc}}{R T_{dc}} V_{cyl} \frac{N_e}{2}
where V cyl is the cylinder volume and η v is the volumetric efficiency. Classically, η v is experimentally derived and given by a look-up table η v (P dc, N e).
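Assuming the engine speed is given in rev/min and one intake event occurs every two revolutions (four-stroke engine), the speed-density relation can be sketched as follows. The cylinder volume, gas constant, and operating point are hypothetical.

```python
def aspirated_flow(P_dc, T_dc, N_e_rpm, eta_v, V_cyl=2.0e-3, R=287.0):
    """Speed-density estimate for a four-stroke engine:
    D_asp = eta_v * (P_dc / (R * T_dc)) * V_cyl * N_e / 120,
    with N_e in rev/min (one intake event every two revolutions)."""
    rho_dc = P_dc / (R * T_dc)           # intake manifold gas density
    return eta_v * rho_dc * V_cyl * N_e_rpm / 120.0

D_asp = aspirated_flow(P_dc=1.5e5, T_dc=320.0, N_e_rpm=2000.0, eta_v=0.9)
```

In practice the constant `eta_v` above would be replaced by an interpolation in the look-up table η v (P dc, N e) mentioned in the text.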
Intake and Exhaust Modeling
We consider the exhaust and intake manifolds as fixed volumes for which the thermodynamic states (pressure, temperature, and composition) are assumed to be homogeneous. The entire volume between the compressor and the engine can be lumped into a single volume. The mass balance in this volume and in the exhaust manifold leads to
Summary
Gathering Equations (1-8) leads to the following dynamics, where α t groups the turbine-side terms P dt, c p, T ut, η t and √T ut. This model takes account of parameters external to the turbocharger itself: the temperatures upstream of the compressor and turbine, and the pressure downstream of the turbine. However, it contains five states. Since the ultimate purpose of this work is to design a model-based control law, further simplifications have to be undertaken. Different types of assumptions will be made and verified experimentally.
MODEL REDUCTION
The first type of assumptions concerns the dynamics. The second type concerns the steady-state dependencies. The purpose is to keep only the relevant dynamics of the system, and parameters that can be measured or estimated from the available sensors.
Dynamic Simplification: Model Simplification by Singular Perturbation
The fifth-order nonlinear system (9) accurately describes the dynamics of the system. However, one can notice that the turbocharger speed is much slower than the pressure dynamics. Indeed, typically we have V ut /(R T ut ) ≈ 5e−9, V dc /(R T dc ) ≈ 5e−8 and J t = 3e−5. This suggests simplifying these dynamics with a singular perturbation method [10]. Let ε be a scalar that represents all the small parameters to be neglected. The reference dynamics (9) then has the form of the standard singularly perturbed system, with the fast dynamics written as ε ż 2 = ψ(z 1 , z 2 , ε), where z 1 = N t and z 2 = (P uc , P dc , P ut , P dt )^T. In other words, we split the slow z 1 -dynamics (the power balance) and the fast z 2 -dynamics (the mass balances).
The equation ψ(z 1 , z 2 , 0) = 0 has a unique root of interest, z 2 = h(z 1 ). To ensure the validity of the simplification, we can check the uniform stability of the Jacobian of ψ, as shown in [11] [Assumption 3.2, p. 11]. These dynamics are stable, as shown in [3]. From [10] [Th. 11.1], the following proposition holds. Proposition 1: Consider the singularly perturbed system (10) and z 2 = h(z 1 ) the isolated root of ψ(z 1 , z 2 , 0) = 0. There exists a positive constant ε* > 0 such that for all ε < ε*, (10) possesses a unique trajectory z 1 (t, ε), z 2 (t, ε). Thus, the new reference system writes as a first-order nonlinear dynamics together with an algebraic equation, the steady-state solution of the intake and exhaust dynamics.
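The singular-perturbation argument can be illustrated on a toy two-state system: after a short boundary layer, the fast state settles onto the quasi-steady-state root z 2 = h(z 1 ), so the reduced first-order model tracks the full one. The dynamics, time scales, and coefficients below are hypothetical, not the engine model.

```python
def simulate(eps=1e-3, dt=1e-5, T=0.05):
    """Toy singularly perturbed system:
        dz1/dt       = -z1 + z2          (slow, power-balance analogue)
        eps * dz2/dt = -z2 + 0.5 * z1    (fast, pressure-dynamics analogue)
    The quasi-steady-state root of the fast equation is z2 = h(z1) = 0.5*z1."""
    z1, z2 = 1.0, 0.0
    for _ in range(int(T / dt)):
        z1 += dt * (-z1 + z2)
        z2 += (dt / eps) * (-z2 + 0.5 * z1)
    return z1, z2

z1, z2 = simulate()
```

Even though z2 starts far from the manifold, it converges to h(z1) within a few multiples of eps, while z1 has barely moved, which is exactly the time-scale separation the reduction exploits.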
High Pressure Flow Simplification
Contrary to the turbocharger, the high pressure EGR loop has a direct control, i.e. the EGR valve directly controls this flow. Again, a simplification is made in order to substitute the high pressure EGR flow by its reference value, which is directly linked to the intake pressure. We introduce the variable δ HP to characterize the choice of EGR circuit: δ HP is equal to 1 when HP EGR is used, and 0 otherwise. With this notation, we have: where X int (resp. X exh) is the intake (resp. exhaust) burned gas ratio.
Turbine Flow Simplification
The turbine can be considered as a restriction on the exhaust gas flow. However, the standard equation for compressible flow across an orifice cannot be applied directly in this case. Modified versions of this equation have been proposed which fit the experimental results better, based on various assumptions (see [7]). Most of them neglect the influence of the turbine speed. The formula kept in the present case is given below, the justification being that it shows a good correlation with the characterization data (see Fig. 3).
where ψ vgt (u vgt ) is equivalent to an effective area (represented in Fig. 4).
Figure 4. Function ψ vgt with respect to the control input u vgt.
Correlation between the Turbocharger Speed and the Intake Pressure
For given engine operating conditions, the turbocharger speed and the intake pressure are strongly correlated. It is therefore interesting to consider the combination of (3) and (6). The corrected compressor flow depends on the compressor pressure ratio, the engine speed and the operating conditions. This expression is remarkable since it shows a direct dependency between the compressor pressure ratio and the engine speed. The influence of the intake temperature is of second order and will be neglected. As experimentally represented in Figure 5, we can estimate the turbocharger speed square N t ² linearly w.r.t. the compressor pressure ratio Π c.
Figure 5. Experimental results at steady state. Variation of the turbocharger speed square N t ² w.r.t. the compressor pressure ratio Π c.
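The affine relation between N t ² and Π c can be identified by ordinary least squares, as one would do with the steady-state measurements of Figure 5. The synthetic data below use hypothetical slope and offset values purely for illustration.

```python
import numpy as np

# synthetic steady-state points: N_t^2 assumed affine in Pi_c at fixed engine speed
rng = np.random.default_rng(0)
Pi_c = np.linspace(1.2, 2.6, 30)
true_slope, true_offset = 4.0e8, -3.0e8        # hypothetical values
Nt2 = true_slope * Pi_c + true_offset + rng.normal(0.0, 1.0e6, Pi_c.size)

slope, offset = np.polyfit(Pi_c, Nt2, 1)       # least-squares line N_t^2 = a*Pi_c + b
```

With such a fit in hand, the turbocharger speed can be eliminated from the model, which is the substitution used in the reference system below.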
Steady State Assumptions
The system dynamics depend on many different variables that are physically related to the engine operating conditions (engine speed, volumetric efficiency) or the environment (compressor upstream pressure and temperature, turbine downstream pressure). Since they are external to the turbocharger, we will make the assumption that they depend only on the operating point of the engine. They can either be measured or estimated based on steady-state maps. The only remaining unknown terms in the system of Equations (12) are η c and η t. Since they vary in small proportions over the engine operating range, we will also consider that they can be mapped as functions of the engine operating conditions. It is difficult to validate this in transients, since it is not possible to measure the efficiencies in that case. The correct behavior of the control law designed from these assumptions will validate them a posteriori.
Reference System
The linear correlation between compressor pressure ratio and turbocharger kinetic energy considerably simplifies the studied system.The state variable can be chosen as the compressor pressure ratio, and the turbocharger speed does not appear any more in the model equations.
The reference system is written below, where {α i }, i ∈ [1,4], depend on the engine operating conditions. The first equation of system (14) represents the balance between compressor and turbine mechanical power, giving the dynamics of the system. The second equation represents the mass conservation in the exhaust manifold, the dynamics being neglected.
The functions φ turb , ψ vgt , ψ c and ψ t are nonlinear but invertible.This property is very important and will be used when designing the control law.
The coefficients α i can be computed from sensors available on the engine.Variable β represents the gas mass flow through the compressor and through the turbine.Only this variable depends on the EGR loop choice.
ONLINE MODEL VALIDATION
The purpose of this paper is not to describe the design of a control strategy, already presented in [12]. However, since model (14) is intended for control, the validation can be made online after implementing it in a control structure. In this section we briefly describe the control structure chosen for the validation, and then we show the experimental results.
Control Implementation
Model (14) was inverted and implemented in a control strategy. A possible implementation structure was proposed in [4]; other solutions would be possible. The model validity does not depend on the control structure, but we chose in this work to keep a similar implementation. The basic structure of the strategy is represented in Figure 6. Controller C consists of a linear PI controller with proportional and integral gains μ p and μ i. Its output is added to a feedforward term and transformed into an actuator setpoint, where subscript sp stands for set-point. Controller C follows from the first two lines of (15), whereas the last two lines are the inversion of (14). Variable z is the state of the controller. In steady state, when the intake manifold pressure is controlled to the setpoint, an indication of the accuracy of model (14) is given by the relative importance of C with respect to the feedforward term α 2 βψ c (Π c,sp ).
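A minimal sketch of the PI-plus-feedforward structure, exercised on a toy first-order plant with a deliberately biased feedforward so that the integrator must absorb the model error. The gains, plant, and feedforward value are hypothetical, not the calibrated quantities of the paper.

```python
class BoostController:
    """PI corrector plus model-based feedforward: the feedforward carries the
    steady-state load, the integrator absorbs the residual model error.
    Gains, plant and feedforward value below are hypothetical."""

    def __init__(self, mu_p=0.5, mu_i=2.0, dt=0.01):
        self.mu_p, self.mu_i, self.dt = mu_p, mu_i, dt
        self.z = 0.0                      # integrator state

    def update(self, Pi_sp, Pi_c, feedforward):
        err = Pi_sp - Pi_c
        self.z += self.dt * err
        return feedforward + self.mu_p * err + self.mu_i * self.z

# toy first-order plant d(Pi_c)/dt = -Pi_c + u, feedforward deliberately biased
ctl, Pi_c = BoostController(), 1.0
for _ in range(5000):                     # 50 s of simulated time
    u = ctl.update(Pi_sp=1.8, Pi_c=Pi_c, feedforward=1.6)
    Pi_c += ctl.dt * (-Pi_c + u)
```

At steady state the integral term settles at exactly the feedforward bias, which mirrors the diagnostic used below: a small corrector relative to the feedforward indicates an accurate model.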
Experimental Results
Load transient tests at constant speed have been performed in HP or LP EGR configurations. The results are reported in Figures 7, 8, 9 and 10, which show respectively the intake manifold pressure, the intake manifold burned gas rate, the turbocharger actuator (VGT) position and the controller output compared to the feedforward term. In each figure, a test performed with HP EGR is compared with a test in LP EGR mode. The intake manifold conditions (composition and pressure) are controlled to the same values (Fig. 7 and 8) (1), but the VGT has to be actuated at very different positions (Fig. 9) due to the differences in mass flow through the turbine and compressor. However, the correction necessary from controller C is similar in each mode and stays at low levels compared to the feedforward term (Fig. 10). The operating conditions of the turbocharger highly depend on the flow through the compressor and turbine, and hence on the choice of EGR circuit. This is why the VGT is positioned at very different levels. However, the feedforward term accounts well for these effects. This validates the model and the assumptions made in its design.
The model provides a feedforward structure that takes into account the interaction of the turbocharger with the other components, in particular the EGR circuit. The calibration of the turbocharger controller is independent of the choice of EGR loop, which dramatically reduces the calibration effort required.
CONCLUSION
This paper describes the development of a turbocharger model in a Diesel engine fitted with LP and HP EGR loops, and its reduction in order to provide a control oriented model.This work complements previous works already published or submitted ( [3,4] and [1]).Experimental validation results are presented, justifying the assumptions made in the model reduction process.
The turbocharger control strategies designed from this model, combined with an adequate EGR control and estimation, provide a solid basis for the management of modern Diesel engines. In particular, the calibration of a complex air system can be a daunting task. In the proposed structure, the effort required is greatly reduced thanks to an adequate physical representation of the system.
Figure 2. Compressor map. Compressor pressure ratio Π c w.r.t. its corrected flow D c,cor and its corrected speed N c,cor. The blue crosses show the characterization measurements in both the HP and LP EGR loops.
Figure 3. Comparison between the measured corrected turbine flow D t √T ut / P dt and its simplified modeling for several values of the VGT position.
Figure 7. Intake manifold pressure with HP or LP EGR.
Figure 8. Estimated intake manifold BGR with HP or LP EGR.
Figure 9. Turbocharger actuator with HP or LP EGR.
Figure 10. Controller terms with HP or LP EGR. I: integral term represented by C in (15); ff: feedforward term equal to α 2 βψ c (Π c,sp ) in (15). Note that the presented controller terms are in the unit of the power balance (in Watt). | 4,859.4 | 2011-07-01T00:00:00.000 | [
"Engineering"
] |
Calculated magnetic exchange interactions in the van der Waals layered magnet CrSBr
Intrinsic van der Waals layered magnets have attracted much attention, especially the air-stable semiconductor CrSBr. Herein, we carry out a comprehensive investigation of both bulk and monolayer CrSBr using the first-principles linear-response method. Through the calculation of the magnetic exchange interactions, it is confirmed that the ground state of bulk CrSBr is A-type antiferromagnetic, while there are five sizable intralayer exchange interactions with small magnetic frustration, which results in a relatively high magnetic transition temperature for both bulk and monolayer CrSBr. Moreover, significant electron doping and strain effects are demonstrated, with a further increased Curie temperature for monolayer CrSBr, as well as an antiferromagnetic-to-ferromagnetic phase transition for bulk CrSBr. We also calculate the magnon spectra using linear spin-wave theory. These features of CrSBr can help clarify the microscopic magnetic mechanism and promote applications in spintronics.
In recent years, the ternary chromium thiohalide compound CrSBr has been extensively studied [19-32]. CrSBr is a 2D magnetic material with a van der Waals (vdW) layered structure along the c axis [22]. Scanning tunneling spectroscopy and photoluminescence studies indicate that CrSBr is a semiconductor with an electronic gap of 1.5 eV [22]. Bulk CrSBr is reported to have triaxial anisotropy, with an easy-to-magnetize b axis, an intermediate a axis and a hard-to-magnetize c axis [22]. Bulk CrSBr has a high antiferromagnetic ordering temperature T N = 132 K [22], and many theoretical works predict that monolayer CrSBr has an even higher ferromagnetic ordering temperature [5, 19-21]. Driven by these theoretical predictions, Lee et al measured the T C of monolayer CrSBr at 146 K using the second harmonic generation technique [23]. Many theoretical studies have investigated the magnetic interactions of undoped CrSBr and estimated the magnetic transition temperature from these interactions [5, 19-21, 29, 33]. However, most of them use a cluster approximation and map total energy differences onto a model Hamiltonian, which prevents a detailed study of the long-range properties of the exchange interactions. These models generally consider three nearest-neighbor interactions, and some studies consider only two. Interestingly, the recent magnon dispersions of CrSBr, measured by inelastic neutron scattering, suggest a Heisenberg exchange model with seven nearest in-plane exchanges [31]. Therefore, in order to accurately study the magnetic interactions of CrSBr, we adopt the first-principles linear-response (FPLR) method [34,35].
In this work, using density functional theory (DFT) calculations, we systematically study the electronic and magnetic properties of both bulk and monolayer CrSBr. Our calculations show that bulk (monolayer) CrSBr is a semiconductor with a band gap of 1.42 eV (1.47 eV), in good agreement with the experimental results [22]. Using the FPLR method, we calculate the magnetic exchange constants. There are five sizable intralayer magnetic exchange terms with small frustration. Although the calculated interlayer interaction J z1 of bulk CrSBr is very weak, it is indeed antiferromagnetic, consistent with the experimental results [22]. Based on the calculated exchange constants, we estimate the magnetic transition temperature of bulk (monolayer) CrSBr at 178 K (211 K). In addition, we study the effect of electron doping and strain on the exchange interactions in CrSBr, and find that both strategies can increase the T C of monolayer CrSBr. There is an antiferromagnetic-to-ferromagnetic phase transition for doped bulk CrSBr, which is confirmed by both FPLR calculations and direct total-energy calculations. We also calculate the magnon spectra using linear spin-wave theory.
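As a minimal illustration of a linear spin-wave calculation, the magnon dispersion of a nearest-neighbour ferromagnetic Heisenberg chain can be evaluated in a few lines. This one-dimensional toy, which borrows the J 1 value quoted later and S = 3/2, is emphatically not the full seven-exchange CrSBr Hamiltonian used in the paper.

```python
import numpy as np

def magnon_dispersion(q, J=-2.31e-3, S=1.5):
    """Linear spin-wave dispersion of a nearest-neighbour ferromagnetic
    Heisenberg chain, H = sum_<ij> J S_i.S_j with J < 0 (same sign
    convention as in the text): omega(q) = 2*S*|J|*(1 - cos q), in eV.
    One-dimensional toy model, not the full CrSBr exchange Hamiltonian."""
    return 2.0 * S * abs(J) * (1.0 - np.cos(q))

q = np.linspace(0.0, np.pi, 100)   # wave vector across half the Brillouin zone
omega = magnon_dispersion(q)
```

The dispersion is gapless at q = 0 (the ferromagnetic Goldstone mode) and quadratic at small q; the full 2D calculation replaces the single cosine by a sum over all seven in-plane exchange vectors.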
Method
The electronic band structure calculations have been carried out using the full-potential linearized augmented plane-wave method as implemented in the WIEN2K package [36]. For the exchange-correlation potential, the generalized gradient approximation (GGA) is used. The vdW interactions in 2D materials can exhibit large many-body effects [37,38]. To better take into account the interlayer vdW forces, there are many approaches in use, including the vdW density functional (vdW-DF) [39], the DFT-D method [40], and the many-body dispersion method [41,42]. For bulk CrSBr, we adopt the vdW-DF in the form of optB88-vdW [43,44] for structure-related calculations. GGA + U calculations are also performed to include the effect of the Coulomb repulsion in the Cr-3d orbital [45]. Here, we use the values U = 4 eV and J = 1 eV, which have been widely used in previous theoretical works [5,20,28,29]. Using the second-order variational procedure, we include the spin-orbit coupling interaction [46]. Based on the experimental lattice constants a = 3.50 Å, b = 4.76 Å, and c = 7.96 Å [22], we optimize the internal atomic coordinates for bulk CrSBr. The crystal structure of monolayer CrSBr is fully optimized, with a vacuum space of 15 Å to avoid interactions with neighboring layers. The phonon spectrum is calculated using the PHONOPY code [47]. The basis functions were expanded to R mt × K max = 7, where R mt is the smallest of the muffin-tin sphere radii and K max is the largest reciprocal lattice vector used in the plane-wave expansion. The 13 × 9 × 6, 13 × 9 × 3, and 13 × 9 × 1 k-point meshes are used for the primitive cell, 1 × 1 × 2 supercell, and slab calculations, respectively. The self-consistent calculations are considered converged when the difference in the total energy of the crystal does not exceed 0.01 mRy between consecutive steps. In order to obtain an accurate value of the total energy, the convergence criterion for the energy difference is changed to 0.0001 mRy.
The exchange constants J are the basis for understanding the magnetic properties. Here, we use the FPLR method to calculate the exchange interactions, which is based on a combination of the magnetic force theorem [34] and the linear response method [35]. We assume a rigid rotation of the atomic spins at sites R + τ and R′ + τ′ of the lattice (here R are the lattice translations and τ are the atoms in the basis). The exchange constant J is then given as the second variation of the total energy induced by the rotation of the atomic spins at sites R + τ and R′ + τ′ [35], where σ is the Pauli matrix and B is the effective local magnetic field; ϵ is the one-electron energy and ψ is the corresponding wave function. This method directly computes the lattice Fourier transform J(q) of the exchange interaction J(R l ), so it is easy to calculate the exact long-range exchange interactions. This technique has been successfully used to evaluate magnetic interactions in a variety of materials [34,35,48-55].
Bulk CrSBr
The crystal structure of bulk CrSBr belongs to space group Pmmn. The lattice constants are a = 3.50 Å, b = 4.76 Å, and c = 7.96 Å [22]. There are two Cr atoms in each cell. As shown in figure 1, each Cr atom is surrounded by four S atoms and two Br atoms, forming a distorted octahedron. The CrS 4 Br 2 octahedra are connected by S-Br edge-sharing along the a axis, S-S edge-sharing along the ab direction and S corner-sharing along the b axis to form the 2D lattice. Magnetic measurements on single crystals indicate that the magnetic structure of CrSBr is A-type antiferromagnetic, where Cr atoms couple antiferromagnetically along the c axis (see figure 1(a)) [22]. We first perform GGA + U calculations based on the ferromagnetic (FM) configuration. The calculated magnetic moment on the Cr atom is 2.88 µ B , consistent with the high-spin state S = 3/2. As shown in figure 1, we depict the main magnetic interactions. With the FPLR method, we estimate the magnetic exchange constants for bond lengths less than 8 Å and give them in table 1. Among them, J 1 , J 2 , and J 3 are the main magnetic interactions, and they are all ferromagnetic. These three exchange interactions determine the ferromagnetic order within the layer. On the other hand, the interlayer first-nearest-neighbor interaction J z1 , although three orders of magnitude weaker than the intralayer interactions, is antiferromagnetic, which causes antiferromagnetism to replace ferromagnetism as the ground state.
Based on the ground state magnetic structure determined above, we perform GGA + U calculations using the 1 × 1 × 2 supercell, and show the band structure of bulk CrSBr in figure 2(a). Our calculations show that CrSBr is a semiconductor with a band gap of 1.42 eV, which is in good agreement with the experimental results (1.5 ± 0.2 eV) [22]. The calculated magnetic moment on the Cr atom is 2.88 µ B , the same as the magnetic moment calculated with the FM order. The band structures of monolayer CrSBr are similar to those of bulk CrSBr and will be discussed below. The total energy of the antiferromagnetic (AFM) state is about 34 µeV f.u. −1 lower than that of the FM state by direct total energy calculations, confirming the ground state deduced from the calculated magnetic interactions.
Using the FPLR method, we also calculate the exchange interactions for the A-type AFM structure. We find that the values of the exchange constants for the different magnetic configurations are almost the same (the difference is less than 0.005 meV). The magnetic exchange constants fitted in the experimental work are also presented in table 1 for comparison [31]. Our J 1 = −2.31 meV, J 2 = −3.51 meV, and J 3 = −1.40 meV are consistent with the fitting results (J 1 = −1.90 meV, J 2 = −3.38 meV, J 3 = −1.67 meV) of the neutron-scattering measurements [31]. Moreover, J 4 and J 5 are very weak, but J 6 and J 7 cannot be neglected, which is consistent with the experiment [31]. In particular, J 6 is antiferromagnetic, which results in small frustration. As shown in table 1, our J 6 (0.38 meV) is very close to the fitted J 6 = 0.37 meV, while the fitted J 7 (−0.29 meV) is about twice our J 7 (−0.136 meV). Based on the calculated magnetic exchange constants, we calculate the magnetic transition temperature using mean-field approximation theory [56]. T N is estimated to be 178 K, which is somewhat larger than the experimental value (132 K) [22]. Since mean-field theory often overestimates magnetic transition temperatures, the exchange constants we calculated are considered to agree with the experimental results.
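The order of magnitude of such a mean-field estimate is easy to reproduce. The sketch below assumes the classical mean-field relation k_B T ≈ (2/3) Σ_j z_j |J_j| for unit spins, and the coordination numbers z1 = 2, z2 = 4, z3 = 2 are illustrative assumptions (they are not stated in the text); the exact prefactor depends on the spin normalization convention and on which couplings are included.

```python
# Mean-field estimate of a magnetic transition temperature from exchange
# constants.  Convention (an assumption, not from the paper): classical
# unit spins, k_B * T_mf = (2/3) * sum_j z_j * |J_j|.
K_B_MEV = 0.08617  # Boltzmann constant in meV/K

def mean_field_temperature(couplings):
    """couplings: list of (J in meV, coordination number z) pairs."""
    j0 = sum(abs(j) * z for j, z in couplings)  # effective exchange field, meV
    return (2.0 / 3.0) * j0 / K_B_MEV           # temperature in K

# Main intralayer couplings of bulk CrSBr quoted in the text (meV);
# the coordination numbers are hypothetical.
bulk_j = [(-2.31, 2), (-3.51, 4), (-1.40, 2)]
print(f"mean-field T ~ {mean_field_temperature(bulk_j):.0f} K")
```

With these illustrative inputs the estimate lands at roughly 166 K, the same order as the 178 K quoted above; the residual difference reflects the neglected longer-range couplings and the convention-dependent prefactor.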
It is worth noting that although the CrSBr system has a global inversion center, most Cr-Cr bonds do not have inversion symmetry, and therefore Dzyaloshinskii-Moriya (DM) interactions exist. Using the FPLR approach, we also calculate the DM interactions. D 1 is estimated to be 0.07 meV parallel to the b axis, and D 3 is estimated to be 0.18 meV parallel to the a axis. D 2 should be zero because its bond has an inversion center. The DM interactions of CrSBr are so weak (less than 0.8 meV) that they are hard to identify from the measured spin-wave spectra [31]. It is also worth mentioning that the calculated value of the magnetic anisotropy energy is less than 0.1 meV, so we ignore it here.
Monolayer CrSBr
Monolayer CrSBr is phase-stable and exhibits ferromagnetic order below 146 K [23]. Based on our optimized structure (a = 3.545 Å, and b = 4.733 Å), the phonon dispersions of monolayer CrSBr along high symmetry lines are calculated by using the PHONOPY code. As shown in figure 3, there are no imaginary frequencies in the phonon dispersions, suggesting that the structure of monolayer CrSBr is dynamically stable. Similarly, the magnetic exchange interactions of monolayer CrSBr are also calculated using the FPLR method, as displayed in table 2. We find that the exchange constants of monolayer CrSBr change only slightly compared with those of bulk CrSBr. J 1 , J 2 , and J 3 are all ferromagnetic, and there is no frustration between them, which results in the high T C of monolayer CrSBr. The J 1 (−2.94 meV) in monolayer CrSBr is stronger than that (−2.31 meV) in bulk CrSBr. J 2 = −3.70 meV, which is approximately equal to −3.51 meV in bulk CrSBr. We have J 3 = −1.98 meV, which is about one and a half times the −1.40 meV in bulk CrSBr. Also, J 6 is still antiferromagnetic, and the values of J 6 and J 7 in monolayer CrSBr are slightly smaller than those in bulk CrSBr.
Using the mean-field approximation theory [56], we also estimate the magnetic transition temperature of monolayer CrSBr, which is 211 K. The calculated magnetic transition temperature of monolayer CrSBr is slightly larger than that (178 K) of bulk CrSBr, which is consistent with experimental results [23].
Electron doping
In order to test the doping dependence, we perform a series of electron doping calculations using the virtual crystal approximation. The number of doped electrons per unit cell varies from 0.1 to 0.7. The exchange constants under electron doping are calculated by the FPLR method. The main exchange constants of bulk and monolayer CrSBr as a function of the electron doping level are shown in figure 4.
For bulk CrSBr, J 1 and J 3 increase significantly with the increase of the doping level, while J 2 increases slightly at first and then decreases slightly. When the electron doping level of bulk CrSBr reaches 0.7 e/cell, J 1 becomes larger than J 2 . It is worth mentioning that when the electron doping level is above 0.1 e/cell, the interlayer interaction J z1 of bulk CrSBr changes from antiferromagnetic to ferromagnetic. As a result, the magnetic ground state changes from antiferromagnetic to ferromagnetic. To clarify the magnetic ground state of doped bulk CrSBr, we also compare the total energies of the FM and AFM states. When the electron doping level is 0.1 e/cell, the total energy of the FM state is about 15 µeV f.u. −1 lower than that of the AFM state, confirming the phase transition deduced from the calculated magnetic interactions.
Similarly, for monolayer CrSBr, J 1 and J 3 increase significantly with the increase of doping, while J 2 increases slightly at first and then decreases slightly. J 1 becomes larger than J 2 when the electron doping level reaches 0.5 e/cell for monolayer CrSBr. It can be expected that the magnetic transition temperature of monolayer CrSBr will increase with the increase of doped electrons. While increasing the electron doping level from 0 to 0.7 e/cell, T C of monolayer CrSBr increases from 211 K to 237 K.
Strain effect
The introduction of additional charge may cause lattice distortion [57], and theoretical studies of monolayer CrSBr show that strain can change the magnetic properties [29]. Therefore, we also check the effect of strain on the exchange constants. The main exchange constants of bulk and monolayer CrSBr as a function of strain are calculated using FPLR method, as displayed in figure 5.
For the intralayer interactions of bulk CrSBr, J 1 is strongly reduced by a compressive strain along the a axis, while J 2 and J 3 are significantly enhanced by a compressive strain along the b axis. The effects of strain on the magnetic exchange constants of monolayer CrSBr (see figures 5(g) and (h)) are similar to those of bulk CrSBr, and are in agreement with theoretical calculations for the monolayer reported in the literature [29]. As shown in figure 5(f), the interlayer interactions J z1 and J z2 are enhanced with the decrease of the interlayer spacing. Remarkably, the AFM interaction J z1 is greatly enhanced by a compressive strain along the a axis, and changes from AFM to FM under a tensile strain along the a (or b) axis or a compressive strain along the c axis. When the level of compressive strain along the b axis reaches 5%, T C of monolayer CrSBr increases from 211 K to 251 K. If we further increase the compressive strain level, the T C will decrease.
Conclusions
In conclusion, we present a comprehensive investigation of both bulk and monolayer CrSBr by using DFT calculations. The magnetic exchange constants are calculated using the FPLR method. The strongest terms, J 1 , J 2 , and J 3 , are ferromagnetic interactions without frustration between them, which leads to a high magnetic transition temperature for both bulk and monolayer CrSBr. In addition, J 4 and J 5 are very weak, but J 6 and J 7 cannot be neglected, and J 6 is antiferromagnetic, which leads to weak frustration in this compound. On the other hand, although the interlayer interaction J z1 of bulk CrSBr is a very weak antiferromagnetic coupling, it determines the magnetic ground state of A-type antiferromagnetism. Moreover, we have demonstrated the effect of electron doping and strain on the magnetic properties of both bulk and monolayer CrSBr. Both strategies are found to increase T C of monolayer CrSBr and to induce an antiferromagnetic-to-ferromagnetic phase transition in bulk CrSBr. This work demonstrates accurate calculations of the magnetic exchange constants of CrSBr, which will be helpful for a deeper understanding of its electronic and magnetic properties as well as for promoting its applications in spintronics.
Data availability statement
The data that support the findings of this study are available upon request from the authors.
Waste Wood Particles from Primary Wood Processing as a Filler of Insulation PUR Foams
A significant part of the work carried out so far in the field of production of biocomposite polyurethane foams (PUR) with the use of various types of lignocellulosic fillers mainly concerns rigid PUR foams with a closed-cell structure. In this work, the possibility of using waste wood particles (WP) from primary wood processing as a filler for PUR foams with an open-cell structure was investigated. For this purpose, a wood particle fraction of 0.315–1.25 mm was added to the foam in concentrations of 0, 5, 10, 15 and 20%. The foaming course of the modified PUR foams (PUR-WP) was characterized on the basis of the duration of the process’ successive stages and the maximum foaming temperature. In order to explain the observed phenomena, the cellular structure was characterized using microscopic analysis (SEM and light microscopy). Computed tomography was also applied to determine the distribution of wood particles in the PUR-WP materials. It was observed that the addition of WP to the open-cell PUR foam influences the kinetics of the foaming process of the PUR-WP composition as well as its morphology, density, compressive strength and thermal properties. The performed tests showed that the addition of WP in the amount of 10% leads to an increase in the PUR foam’s compressive strength by 30% (parallel to the foam’s growth direction) and reduces the thermal conductivity coefficient by 10%.
Introduction
In recent years, people's environmental awareness has been increasing, which has led to the search for solutions that allow the use of technologically processed by-products. Due to the increasing development of the wood industry, waste generation is a common problem. Two by-products of wood processing are dust and wood particles. Despite the fact that research is carried out with the use of wood dust in various materials, this material is still a nuisance waste. Nowadays, the most popular composite containing wood (of any form) is the wood plastic composite (WPC) [1]. Research concerning the application of WP was also conducted in order to enhance the properties of thermoplastic starch [2]; as a component in adhesive mixtures for 3D printing [3]; in concrete as a partial replacement for sand [4]; and in the production of new polyurethane foams from liquefied wood powder [5]. Wood waste can also be applied as a potential filler for loose-fill building insulation [6].
PUR represents a wide class of polymeric materials [7,8]. Polyurethane foams account for 2/3 of the world's production of polyurethanes, and because of their numerous applications in the form of rigid, semi-rigid and flexible foams, they are continuously highly ranked among all available foams [9]. PUR foams are the product of the addition polymerization of polyols and polyisocyanates. Catalysts, surfactants and foaming agents are also used during the production of PUR. These foams may differ in composition, density, color and mechanical properties. There are also studies where fillers were used to lower the cost and increase mechanical properties, e.g., the modulus and strength or density [10]. This paper is a continuation of the previous research conducted by the authors concerning the possibility of using by-products from wood processing in order to manufacture materials with improved properties, which are used, e.g., in construction and in the production of interior design elements [30][31][32].
Materials
A two-component foam system for the production of open-cell polyurethane thermal insulation PUREX-WG 2017 (Polychem System, Poznań, Poland) was used in the research. One of the components was a polyol (A component). The isocyanate component (B component) was polymeric methylenediphenyl-4,4′-diisocyanate consisting of 31.14% free isocyanate groups (NCO).
Wood particles (WP) representing a dimensional fraction of 0.315-1.25 mm were used as a filler (Figure 1). WP were obtained as a result of sorting sawdust intended for the production of chipboard. The moisture content of WP ranged between 0.2% and 0.5%. Figure 2 presents their fractional composition. The largest shares were observed for the fractions of 0.315 and 0.630 mm. The wood particles were added to the foam at the concentrations of 0, 5, 10, 15 and 20% determined according to a weight ratio. The amount of filler was determined based on preliminary studies and a literature review on the manufacture of biocomposite PUR foams [6,17,34,35].
Synthesis of PUR Composite Foams
The course of foaming of the modified PUR foams was characterized on the basis of the times of the successive stages of this process and the maximum foaming temperature. For this purpose, the foam components were mixed in a weight ratio in accordance with the manufacturer's recommendations, i.e., A:B = 100:100. The reaction mixture was prepared by mixing the appropriate amounts of wood particles with component B and then adding component A (Figure 3). The reaction mixture was stirred with a low-speed mechanical stirrer at 1200 rpm for 10 s, at a temperature of 23 °C, and then poured into a form with internal dimensions of 250 × 250 × 130 mm³. After that, the PUR-WP composites were allowed to grow and were left at room temperature for 24 h. Each foam variant was prepared in two replicates. The obtained foams were cut with a band saw (Holzstar, Hallstadt, Germany) into specimens of the dimensions necessary for testing their properties.
Kinetics of PUR Foaming
The influence of the wood filler on the foaming process of PUR foams was determined by measuring the following times:
• start of growth - the time when the volume of the reaction mixture started to increase;
• gelling - the time after which it was possible to remove the so-called "polyurethane thread";
• growth - the time after which the maximum foam growth was achieved;
• tack-free - the time measured until the foam solidified completely.
The foaming temperatures were measured with a thermocouple immersed in the reaction mixture. The temperature was always read after the foam growth was completed. At the end, the average of 5 individual measurements was evaluated.
Characterization of PUR Sample
The density of the neat PUR foam and the PUR-WP composites, defined as the ratio of the sample mass to its volume, was determined in accordance with the PN-EN ISO 845 standard. Samples with dimensions of 50 × 50 × 50 mm³ were used. The samples were measured with a thickness gauge with an accuracy of 0.01 mm and weighed on an analytical balance with an accuracy of 0.001 g.
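The apparent density determination described above reduces to a unit conversion. The helper below is a hypothetical sketch (not from the paper) that converts a sample mass in grams and cube dimensions in millimetres into kg/m³:

```python
def apparent_density(mass_g, dims_mm):
    """Apparent density in kg/m^3 from mass (g) and (w, d, h) dimensions (mm)."""
    w, d, h = dims_mm
    volume_m3 = (w * d * h) * 1e-9    # mm^3 -> m^3
    return (mass_g * 1e-3) / volume_m3  # g -> kg, then divide by volume

# A 50 x 50 x 50 mm^3 cube of the neat foam at the reported 20 kg/m^3
# would weigh 2.5 g:
print(round(apparent_density(2.5, (50, 50, 50)), 6))  # -> 20.0
```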
The compressive strength in the direction parallel to the foam's growth was investigated in accordance with the recommendations of the EN 826 standard using a Tinius Olsen H10KT testing machine (Tinius Olsen Ltd., Salfords, UK). The test covered foam samples with dimensions of 50 × 50 × 50 mm³, which were compressed in the direction of foam growth at a rate of 5 mm/min using a 250 N load cell. The compressive strength (σ10%) was defined as the maximum compressive stress achieved when the relative deformation was less than 10% (based on the initial cross-sectional area of the specimen). The mean compressive strength was evaluated based on 7 individual measurements.
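As a worked example of the stress calculation implied above (a sketch under the stated geometry, not the authors' code): the compressive stress is the measured force divided by the initial 50 × 50 mm² cross-section of the sample.

```python
def compressive_stress_kpa(force_n, side_mm=50.0):
    """Compressive stress in kPa from force (N) over a square initial cross-section."""
    area_m2 = (side_mm * 1e-3) ** 2  # mm -> m, then square
    return force_n / area_m2 / 1e3   # Pa -> kPa

# The 250 N load cell therefore corresponds to a maximum stress of:
print(round(compressive_stress_kpa(250.0), 6))  # -> 100.0 (kPa)
```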
The thermal conductivity coefficient was determined using a heat flux density sensor (ALMEMO 117, Ahlborn, Holzkirchen, Germany) with plate dimensions of 100 × 30 × 3 mm³. The average values of the thermal conductivity coefficient were evaluated on the basis of 5 individual measurements. A detailed description of the research method was presented in the previous works by the authors [36,37].
In order to determine the cellular structure, a scanning electron microscope (SEM) and a light microscope were used. SEM analyses were carried out with the use of an SU3500 Hitachi microscope. The images of the sample plane were prepared using a computer image analyzer equipped with a stereoscopic optical microscope (Motic SMZ-168, Hongkong, China) and a camera (Moticam 5.0, Barcelona, Spain). The image of the structure was transferred by a camera to the monitor screen, and then pictures were taken using the Motic Images Plus 3.0 program (Hongkong, China). The cell sizes of pure PUR foam and PUR-WP composite foams were determined using the same equipment.
Samples with dimensions of 50 × 50 × 50 mm³ were collected in order to evaluate the dispersion of WP in the PUR foam using computed tomography, a type of X-ray imaging that allows cross-sections of the examined object to be obtained. Scanning was performed with a Hyperion X9Pro tomograph, with objects scanned at a resolution of 0.3 mm at a lamp voltage of 90 kV.
The test results of the PUR foams with wood particle addition were analyzed statistically using STATISTICA software v.13.1 (StatSoft Inc., Tulsa, OK, USA). Mean values of the parameters were compared in a one-factor analysis of variance; the post hoc Tukey's test allowed us to distinguish homogeneous groups of mean values for each parameter at p = 0.05.
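The one-factor analysis of variance used above can be sketched without the STATISTICA package. The pure-Python function below computes the classical one-way ANOVA F statistic (between-group mean square over within-group mean square); the group data are hypothetical, and the Tukey post hoc grouping reported in the paper would additionally require the studentized range distribution.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of samples (lists of floats)."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares, df = N - k
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = len(groups) - 1, n_total - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Hypothetical readings for three foam variants:
groups = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0]]
print(round(one_way_anova_f(groups), 6))  # -> 13.0
```

A large F relative to the F(k−1, N−k) critical value indicates that at least one group mean differs, after which a post hoc test such as Tukey's identifies which pairs differ.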
The Impact of the WP Filler on PUR Foam Manufacture
The parameters characterizing the foaming process of the tested foams are presented in Figures 4 and 5. It can be concluded that the addition of wood particles to the open-cell PUR foam influences the kinetics of the foaming process of the PUR-WP composition. It was observed that the addition of this type of filler accelerates the onset of foam growth. This phenomenon is particularly evident in the case of the PUR-WP compositions containing 10% and 15% of particles. In the case of these variants, a reduction in the starting time by approx. 29% was observed. These statistically significant differences were confirmed by the post hoc test. Statistical analysis allowed for the identification of four different groups of average foam expansion start times for the tested variants. However, mixing the foam components with wood particles also had an adverse effect: it extended the growth and gelling times of the foams. With the maximum addition of wood particles, the growth time was extended by 23%, and the gelling time by 33%. The extension of these times is the effect of slowing down the exothermic reaction, which is also evidenced by the decrease in the maximum foaming temperature of the foams. As shown in Figure 5, the addition of wood particles to the components of the foamed PUR system caused a decrease in the foaming temperature from 95 to 82 °C. According to the literature, the reduction in the maximum foaming temperature of PUR foams results from the reduced amount of heat generated during the reactions occurring in the latent and growth stages, which is used at the stage of foam stabilization and maturation. In addition, as shown in previous studies, organic fillers introduced into PUR foams can absorb some of the heat generated during synthesis, thus lowering the foaming temperature [19,38].
An insufficient amount of heat could slow down the cross-linking reaction and, consequently, extend the time required to reach the tack-free state, which is also confirmed by the data presented in Figure 4 [9]. Moreover, the presence of the wood particles and their relatively large dimensions undoubtedly limited both the growth of the foam cells and the susceptibility of the composition to the foaming process. According to Strąkowska et al. [24], the presence of fillers also limits the mobility of the polymer and the speed of the polymerization reaction. Moreover, as reported by Członka et al. [39], hydroxyl groups present in lignocellulosic fillers can react with highly reactive isocyanate groups. This affects the stoichiometry of the system and reduces the number of isocyanate groups capable of reacting with water, which results in a reduction in the amount of CO2 released.
Figure 5. Values of the maximum foaming temperature of the PUR-WP composition depending on wood particle content.

Density, Thermal Conductivity and Microstructure of PUR Foams
As expected, the addition of the wood particles causes a significant increase in the density of the PUR-WP composite. The apparent density of pure PUR foam was 20 kg/m³ (Figure 6). The introduction of 5% wood filler particles into its structure caused a slight increase in its density; however, the Tukey HSD test did not confirm statistically significant differences. The mean density values obtained for these two variants belong to the same homogeneous group. A significant increase in the density of the produced foams was noticed only when larger amounts of wood particles were used, i.e., 10% and more. With their addition in the amount of 20%, the average apparent density of the foam was 34.6 kg/m³, i.e., it increased in comparison to pure foam by as much as 73%.
Along with the increase in the density of the tested foams, statistically significant changes in their thermal insulation, determined by the thermal conductivity coefficient λ, were also observed. This is confirmed by the results of the post hoc test and the homogeneous groups of mean λ values distinguished on its basis. Moreover, the test probability level was significantly lower than the assumed level of statistical significance, i.e., <0.05. As follows from the data presented in Figure 6, the use of wood particles as a filler for PUR foam at an amount of up to 10% leads to a reduction in the average λ value by approx. 10%, which proves the improvement of the thermal insulation of this type of foam. Unfortunately, a further increase in the wood particle content results in a gradual increase in the value of λ. In the case of the 15% addition of wood filler, the value of λ was lower than the coefficient of the pure foam, but higher than that of the foams with 10% of wood particles. It should be noted that even with the maximum concentration of wood particles used in the tests (i.e., 20%), the value of the thermal conductivity coefficient was at a level comparable to that of pure PUR foam. Similar observations were made by Tao et al. [34]. The authors noted a decrease in the value of the coefficient λ by as much as 50%, while a higher amount of lignocellulosic fibers (up to 20 php) resulted in a gradual increase in thermal conductivity to a level exceeding that of pure PUR foam.

This method of shaping the thermal insulation properties of the tested PUR-WP compositions may result mainly from disturbances in the cell structure of the foams. This might be manifested by changes in the distribution of cell sizes. As shown in the literature, such changes have a significant impact on the insulation properties and mechanical strength of PUR foams. As shown in Figure 7, pure PUR foam is characterized by a uniform cell size distribution, mainly within the range of 150-450 µm. The mean cell size in the range of the highest frequency is 224 µm. Introducing a relatively small amount of wood particles (i.e., 5 wt.%) to the system reduces the number of cells in the size range of 200-250 µm. At the same time, it increases the number of cells above this range, i.e., within the range of 300-450 µm. However, the mean cell size with the highest frequency is still in the range of 200-250 µm. A further increase in the proportion of wood shavings in the PUR-WP composition up to 10% by weight results in a shift in the mean cell size from 250 to 300 µm (average cell size 276 µm). However, taking into account the photos taken with a light microscope and SEM (Figures 8 and 9), it can be concluded that despite the noted changes in the cell size distribution, the structure of the foams with 5% and 10% of wood filler added is relatively well-developed, which allows for high thermal insulation parameters. The insulating properties of the wood filler itself are likely to be important as well. Wood itself is an excellent insulator and also has a heat capacity greater than that of the PUR foam.
A further increase in the amount of wood filler to 15% and 20% causes a significant disturbance of the foam structure and the formation of larger and more irregular pores. In these variants, the cell structure of the composite foams is disturbed by the presence of cells sized above 450 µm. This is particularly visible in the case of variants containing 20% of wood particles (Figures 7 and 8), although a collapse of the cellular structure of the foam is visible even with a filler content of 15%. The presence of larger cells and damage to the structure of the foams result in increased air permeability, which in turn increases heat transfer and thus reduces the thermal insulation of this type of foam [6,34]. Similar observations in the case of the modification of closed-cell PUR foam with various types of lignocellulosic fillers were made by Strąkowska et al. and Członka et al. [24,39]. Additionally, as shown in the literature, such disturbances in the morphology of the PUR foam due to the introduction of an organic filler, such as straw particles, may result from poor interfacial adhesion between the polymer matrix and the filler surface, which disrupts the foaming process and, as a result, the structure of modified PUR foams [40]. Additionally, according to Sung et al. [41], during the cell structure formation of PUR foams, the interaction between the filler surface and the polymer matrix can determine the final average cell sizes. The higher the hydrophilicity of the filler surface, as in the case of wood filler, the larger the cell sizes in the microstructure of foams. Moreover, the filler particles may constitute a nucleating agent, changing the nucleation pattern from homogeneous to heterogeneous and reducing the nucleation energy; for this reason, smaller cells are formed in the foam structure [42,43]. In our research, the formation of small cells, about 100-150 µm in size, was also noted.
This can also be confirmed by the analysis of the cell size distribution and the SEM images (Figures 7 and 9), which also prove a significant differentiation of the cell size distribution of PUR-WP composite foams. The formation of this type of cell may result from the attachment of filler particles to the foam cells, which leads to damage and weakening of the foam microstructure, and thus lowers its strength.
Analyzing the structure of the produced foams, attention should also be paid to the dispersion of wood particles. As a rule, fillers with smaller particle sizes (e.g., nano-scale) tend to agglomerate, which also interferes with the foaming process and the morphology of PUR foams. The filler of relatively large dimensions and irregular shape used in this research allows for a high degree of dispersion of its particles in the polyurethane matrix. This is evidenced by the 3D images of foams with 5% and 20% addition of wood particles, made with the use of the computed tomography technique and presented in Figure 10.
Compressive Strength
The results of the structure and thermal insulation properties of the produced foams correspond with the results of the analysis of the compressive strength measurements. As shown in Figures 11 and 12, the maximum increase in the mean value of compressive strength was recorded for the composition with a 10% share of wood particles in the foam: an increase in σ10% by approx. 30% in comparison with the control PUR foam. With a 15% addition of wood filler, a tendency towards a reduction in compressive strength can be noticed, but it should be emphasized that the Tukey HSD analysis did not confirm statistically significant differences between the compositions containing 10% and 15% of wood particles (the same homogeneous group b). A statistically significant decrease in compressive strength was noticed only for the composition containing 20% wood particles; it should be emphasized that in this case, the compressive strength was comparable to that of the control samples (the same homogeneous group a). The increase in the compressive strength of compositions containing up to 10% wood filler can be explained by an increase in the apparent density of the tested foams. At the same time, the foam still has a well-developed cell structure, which is also confirmed by the analysis of the cell size distribution of the tested foams and the photos in Figures 7-9.
Despite a significant increase in the apparent density of foams, a further increase in the amount of wood particles reduced the strength of PUR-WP. Such shaping of the compressive strength of the tested compositions proves that this parameter is influenced not only by the apparent density of the foam, but also by its structure. This is confirmed by the research of other authors [38,39,44] and by the additionally estimated specific compressive strength of the produced PUR-WP compositions, defined as the ratio of the compressive strength σ10% to the density of the tested foams [39]. As demonstrated by the analysis, the use of waste wood particles of such large dimensions as PUR foam filler increases the specific compressive strength for additions of up to 5%; above this amount, the value of this parameter gradually decreases. This is due to the fact that foams with 5% of wood filler have a homogeneous structure with a narrow range of cell size distribution (like pure foam). The proper filler dispersion combined with a well-formed foam structure probably facilitates the transfer of strain under compressive load and thus increases its strength. As mentioned earlier, higher filler additions result in a broadening of the cell size distribution and the presence of numerous structural disorders (especially at 15% and 20% WP addition). Despite the increase in density, these foams are characterized by lower compressive strength.
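The specific compressive strength discussed here is a simple ratio. As a minimal sketch of that calculation, with hypothetical placeholder values (not the measured densities or σ10% values from this study):

```python
# Specific compressive strength = sigma_10% / apparent density [39].
# The numbers below are hypothetical placeholders, not measurements.
def specific_strength(sigma_10_kpa: float, density_kg_m3: float) -> float:
    """Return specific compressive strength in kPa/(kg/m^3)."""
    return sigma_10_kpa / density_kg_m3

pure = specific_strength(200.0, 40.0)    # hypothetical pure foam
filled = specific_strength(230.0, 50.0)  # hypothetical 20% WP foam
# A denser filled foam can still show a LOWER specific strength if its
# cell structure is disturbed, which is the trend described in the text.
```

The point of normalizing by density is exactly what this comparison illustrates: a raw strength increase can disappear once the extra mass is accounted for.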
Figure 12. Compression specific strength of PUR-WP composition depending on wood particle content.
Conclusions
This article presents the influence of wood particles as a filler in PUR foams. The WP was added in different amounts, i.e., 0, 5, 10, 15 and 20%, to PUR, and measurements of apparent density, cellular morphology, mechanical properties and thermal properties were conducted. As expected, the addition of the wood particles causes an increase in the density of the PUR-WP composite; a significant increase in the density of the produced foams was noticed only when larger amounts of wood particles were used, i.e., 10% and more. Along with the increase in the density of the tested foams, statistically significant changes in their thermal insulation were observed. The addition of wood filler in the amount of 10% improves the insulation properties of PUR foam, which is manifested by a decrease in the value of the thermal conductivity coefficient by 10%. It should be noted that even with the maximum amount of wood particles used in the tests (i.e., 20%), the value of the thermal conductivity coefficient was at a level comparable to that of pure PUR foam. Moreover, the results of the compressive strength in the direction parallel to the foam's growth showed that the addition of 10% WP to the foam led to an increase in σ10% by approx. 30% in comparison with the control PUR foam. The increase in the compressive strength of compositions containing up to 10% can be explained by an increase in the apparent density of the tested foams. Thus, the conducted studies indicated the possibility of using wood waste as fillers for PUR foams with an open-cell structure. Such composite foams with 10 wt.% of waste wood particles from primary wood processing can be used as thermal insulation of open-diffusion building partitions in modern prefabricated buildings.
Variation in atomistic structure due to annealing at diamond/silicon heterointerfaces fabricated by surface activated bonding
Chemical composition around diamond/silicon heterointerfaces fabricated by surface activated bonding (SAB) at room temperature is examined by energy-dispersive X-ray spectroscopy under scanning transmission electron microscopy. Iron impurities segregate just on the bonding interfaces, while oxygen impurities segregate off the bonding interfaces in the silicon side by 3–4 nm. Oxygen atoms would segregate so as to avoid the amorphous compound with silicon and carbon atoms, self-organized at the bonding interfaces in the SAB process. When the bonding interfaces are annealed at 1000 °C, the amorphous compound converts into cubic silicon carbide (c-SiC), and nano-voids 5–15 nm in size are formed at the region between silicon and c-SiC, at which the oxygen density is high before annealing. The nano-voids can act as the gettering sites in which metal impurities are preferentially agglomerated, and the impurity gettering would help to improve the electronic properties of the bonding interfaces by annealing.
Introduction
Diamond crystals have superior physical properties for high power, high frequency, and low-loss electronic devices, 1) for which the figures of merit are extremely high. 2) They have a wide bandgap, high carrier mobility, 3) high saturation velocity, 4) and the highest electrical breakdown field strength, 5) which are suitable for power electronics. 6,7) They also have high thermal conductivity, 8) which is suitable for superior heat spreaders of power devices for next-generation applications beyond 5G, 9) as demonstrated in heterointerfaces between diamond and gallium nitride (GaN) 10,11) or silicon (Si). 12,13) Although high-quality monocrystalline diamond wafers are commercially available, 14,15) their surface size is rather small (<15 × 15 mm²) and they are still expensive for device manufacturing. A possible solution is bonding diamond chips with an inexpensive Si wafer, which would be effective for developing electronics applications. 16,17) In order to design high-performance power devices, the thermal stability of diamond/Si bonding interfaces is an important issue, because the interfaces are exposed to high temperatures during device fabrication and operation processes. Diamond/Si bonding interfaces with high crystallinity, free from adhesive intermediate layers, can be fabricated by fusion bonding at elevated temperatures above 1150°C 18) and by hydrophilic bonding at about 200°C. 19,20) However, the interfaces would be cracked at elevated temperatures due to a large thermal expansion coefficient mismatch, and therefore the mechanical and thermal properties of the interfaces would be degraded via structural modification. Similar interfaces can be fabricated using adhesive metallic layers, 21) which could suppress the structural modification at elevated temperatures.
However, the metallic layers can act as an impurity source, and the electronic properties of the interfaces could be degraded via the migration of metallic impurities at elevated temperatures.
Recently, it has been shown that diamond/Si heterointerfaces fabricated by surface activated bonding (SAB) at room temperature (RT) are not cracked even after 1000°C annealing, 17) via some kind of stress relaxation. 22) This mechanical stability against thermal processes is attributed to a composition-graded amorphous layer with carbon and silicon atoms self-organized in the bonding process. 23) According to the model, the amorphous compound is crystallized, forming cubic silicon carbide (c-SiC), during 1000°C annealing, and the crystallized layer would suppress the concentration of thermal stresses at the interface and consequently keep the interface crack-free in thermal processes. 23) Meanwhile, the impacts of compositional modification around the interfaces by annealing, including impurity diffusion, on electronic properties have not been fully examined, even though diamond devices fabricated on the interfaces operate well. 16) In the present work, we have examined the chemical composition around the interfaces and discussed how the impacts of impurity atoms can be suppressed.
Experimental methods
A diamond/Si heterointerface was fabricated by SAB with a square wafer (4 mm × 4 mm × 0.65 mm in size) of high-pressure, high-temperature synthetic Ib type (100) monocrystalline diamond and a rectangular wafer (12 mm × 10 mm × 0.52 mm) of (100) n-Si (2.6 × 10¹⁶ mm⁻³). 17) The wafers were activated at RT in a high vacuum (below 5 × 10⁻⁵ Pa) using an argon (Ar) atom beam with a current of 1.65 mA at an applied voltage of 1.6 kV. The activated wafers were then pressed against each other for bonding immediately after the activation process. The exact location of the bonding interface was determined with iron (Fe) impurities introduced intentionally in the surface activation process. 24) A part of the bonding interface was annealed at 1000°C for 12 h in a nitrogen gas ambient. 16) Specimens with the interface for scanning transmission electron microscopy (STEM) were fabricated using the following steps. A thick foil (more than a few micrometers thick) with the diamond/Si heterointerface was cut out by using a conventional focused ion beam (FIB) system equipped with a high-resolution scanning electron microscope (SEM) (FEI, Helios NanoLab600i), 25) and mounted on a lift-out grid for STEM. The surface normal for the Si side was 〈110〉, while that for the diamond side was 〈100〉. Then, the foil was thinned by FIB milling operated at low temperatures (LT-FIB), since conventional FIB milling operated at RT (RT-FIB) easily introduces structural defects at SAB-fabricated interfaces. 26,27) The foil was thinned to about 100 nm thick by LT-FIB milling with a cold stage operated at −150°C (IZUMI-TECH, IZU-TSCS004) to suppress structural modification in the FIB processes. 24,27-29) The chemical composition around the interface was examined by energy-dispersive X-ray spectroscopy (EDX) under STEM, using a JEOL JEM-ARM200F analytical microscope. The impurity detection limit was about 0.1 at%.
The atomic arrangement around the interface was examined by high-angle annular dark-field (HAADF-) STEM using a JEM-ARM200F microscope with an atomic resolution of about 0.12 nm. Around the as-bonded interface, Fe, Ar, and oxygen impurities are detected, and no other impurity atom is detected. It is known that Fe atoms are introduced just on the activated surfaces during the surface activation process, and they slightly diffuse during the bonding process, presumably via the transient enhanced effect. 24) The half-width at half-maximum (HWHM) of the Fe distribution, estimated to be 1 nm, is independent of the specimen thickness, even though the spatial resolution of STEM-EDX would depend on the specimen thickness via the spread of the electron beam. This suggests that the spatial resolution of our STEM-EDX is less than 1 nm, and the estimated resolution is consistent with the previous report. 24) As indicated with the green curve in Fig. 1(f), Fe atoms can diffuse by 2-3 nm into the Si side from the bonding interface. This diffusion length is almost the same as that estimated for the SAB-fabricated Si/GaAs 24) and Si/Si 27) interfaces, suggesting the same diffusion mechanism in the bonding process. Similarly, the density of Ar atoms, which are inevitably introduced during the surface activation process, is maximum at the bonding interface, and the Ar density profile across the interface has almost the same morphology as the Fe density profile [the pink curve in Fig. 1(f)]. These results would support the transient enhanced diffusion model, in which the diffusion lengths of those impurities are dominated by the diffusivity of the point defects assisting the impurity diffusion. 29) On the other hand, the oxygen density peaks off the bonding interface in the Si side by 3-4 nm [the yellow curve in Fig. 1(f)]. Since oxygen atoms can be observed in any SAB-fabricated interface, they would be introduced from the residual gas molecules containing oxygen, such as water and oxygen molecules, in the vacuum chamber.
At similar SAB-fabricated interfaces such as GaAs/Si, 32) diamond/GaN, 33) diamond/aluminum, 34) and diamond/copper, 25) the oxygen density peaks just on the bonding interfaces. Meanwhile, the oxygen density at the SAB-fabricated diamond/Si interface seems to have a negative correlation with the excess number of carbon atoms in the Si side [the blue curve in Fig. 1(f)], which is related to an amorphous layer with carbon and Si atoms self-organized in the SAB process. 23) Since the solubility of oxygen in silicon carbide is rather low, 35) oxygen atoms would be kicked out from the bonding interface via the self-organization of the amorphous compound with carbon and Si atoms.
1000°C annealed interfaces
When the bonding interface is annealed at 1000°C, spherical defects about 5-15 nm in size are formed in the Si side [Fig. 2(a)]. The defects are located inside the Si matrix just on the interface between Si and c-SiC [Fig. 2(b)], which is introduced via the crystallization of the amorphous compound with carbon and Si atoms. At bright spherical defects in the HAADF-STEM images, Fe atoms are observed and the number density of Si atoms is slightly decreased [see Figs. 2(b), 2(c), and 2(e)]. No Fe agglomerate is observed in the region free from nano-voids, within the detection limit of our STEM-EDX [Fig. 2(e)]. These results suggest that Fe atoms would agglomerate inside nano-voids and/or Fe agglomerates are nucleated near nano-voids. Therefore, nano-voids would act as gettering sites for Fe atoms. It has been shown that the electronic properties of SAB-fabricated Si/diamond heterojunction diodes, such as the ideality factor, reverse-bias current, and barrier height at the Si/diamond bonding interfaces, are improved by post-bonding annealing, presumably via the reduction of interface states that are formed during the SAB processes. 36) Isolated metal atoms including Fe, as well as point defects introduced during the surface activation process, can induce defect levels around the bonding interfaces. Meanwhile, metal agglomerates of moderate size do not affect the electronic properties so much. 37) Gettering of metallic impurities including Fe into nano-voids, as well as the recovery of crystallinity around the bonding interface, 23) would help to reduce the interface states.
Oxygen atoms also impact the electronic properties when they form precipitates (such as oxides) and defect clusters (such as oxygen-vacancy agglomerates) by annealing, while they are electronically inactive when they are isolated. The impacts would be small at the annealed diamond/Si interfaces, since oxygen atoms do not segregate around the interfaces, including nano-voids [Fig. 2(f)]. Figure 2(g) shows that Ar atoms can segregate at nano-voids, as well as at the c-SiC layer, in which the crystallinity is still decreased. 23) Similar Ar segregation into a damaged layer at a SAB-fabricated interface has been reported. 38) It is hypothesized that inert Ar atoms would segregate so as to fill the vacant spaces at nano-voids and at the vacancy defects introduced during the surface activation process. Agglomerates of inert Ar atoms would not impact the electronic properties, while they may impact the thermal properties of the interfaces.
Finally, we briefly discuss the formation process of nano-voids. Similar nano-voids acting as gettering sites for Fe and Ar atoms are formed at Si/Si homointerfaces fabricated by SAB under the same bonding conditions (Fig. 3). Unlike at the diamond/Si heterointerfaces, these nano-voids are formed just on the bonding interface, at which the oxygen density is maximum. The results in Figs. 2 and 3 suggest that nano-voids are formed in regions in which a number of oxygen atoms exist. It is hypothesized that oxygen would assist the void formation, by producing joint vacancy-oxygen agglomerates (oxide particles) and by trapping vacancies into VO₂ clusters, as proposed for Czochralski-grown Si ingots. 39) Moreover, the nano-voids at the diamond/Si interfaces (5-15 nm in size) are much larger than those at the Si/Si interfaces (less than 5 nm in size). It is known that voids are self-organized at Si/SiC interfaces during the carbonization of Si crystals, 40) and the volume of the voids per unit area is proportional to the oxygen density. 41) Although the formation mechanism is still controversial, the void formation would be correlated with the growth of c-SiC and oxygen impurities.
Conclusions
Chemical composition around SAB-fabricated diamond/Si heterointerfaces was examined by STEM-EDX combined with LT-FIB. Fe and Ar impurities segregate just on the bonding interfaces, while oxygen impurities segregate off the bonding interfaces in the Si side by 3-4 nm, so as to avoid the amorphous compound with carbon and Si atoms introduced at the bonding interfaces. After 1000°C annealing, nano-voids are formed in the region where the oxygen density is high before annealing. They would act as gettering sites for metal impurities, and the impurity gettering would help to improve the electronic properties of the interfaces by annealing.
Sample Selection Based on Active Learning for Short-Term Wind Speed Prediction
Abstract: Wind speed prediction is the key to wind power prediction, which is very important for guaranteeing the security and stability of the power system. Due to dramatic changes in wind speed, high-frequency sampling is needed to describe the wind, so a large number of samples are generated, which affects modeling time and accuracy. Therefore, two novel active learning methods with sample selection are proposed for short-term wind speed prediction. The main objective of active learning is to minimize the number of training samples while ensuring the prediction accuracy. In order to verify the validity of the proposed methods, the results of support vector regression (SVR) and artificial neural network (ANN) models with different training sets are compared. The experimental data are from a wind farm in Jiangsu Province. The simulation results show that the two novel active learning methods can effectively select typical samples. While reducing the number of training samples, the prediction performance remains almost the same or is slightly improved.
Introduction
Energy is the basic industry of the national economy, which plays an important role in guaranteeing the sustained development of the economy and the improvement of people's lives. The shortage of fossil energy and its pollution has become the bottleneck of sustainable social and economic development. Sustainability transitions are necessary and long-term processes, which shift socio-technical systems to more sustainable modes of production and consumption. Better transitions can be achieved by adopting effective support policies of renewable energy and making concrete efforts to improve energy efficiency [1,2].
Wind energy is an important renewable energy source with the advantages of large reserves and wide distribution. Small-scale wind turbines are easy to transport and install. They are suitable for remote areas, mountainous areas, and islands [3,4]. As the cleanest source of renewable energy, wind power is rapidly becoming a potential and viable alternative energy source to burning fossil fuels. However, wind power generation has high volatility and randomness. The grid may experience voltage fluctuation, or even off-grid events, with large-scale wind power integration [5]. An accurate wind power prediction is therefore necessary. Short-term wind speed prediction is the key to the safety and scheduling optimization of power systems [6].
In the prior literature, wind speed prediction methods are often divided into three categories based on different mechanisms: physical methods [7], time series methods [8], and machine learning methods [9,10]. The estimation of the wind speed can be considered as a nonlinear regression problem; therefore, machine learning methods are frequently adopted for short-term wind speed prediction with accurate results [11,12]. In [13], three forecasting techniques were compared: autoregressive moving average with generalized autoregressive conditional heteroskedasticity (ARMA-GARCH), artificial neural network (ANN), and support vector regression (SVR). The results showed that the SVR and ANN, with superior nonlinear fitting, obtained better forecasting accuracy. In [14], ANN was used to predict wind speed, with particle swarm optimization used to select input parameters to achieve the desired results. For machine learning methods, parameter optimization is a problem that needs to be studied. In [15], SVR combined with feature selection was used for wind speed prediction. It validated that SVR was suitable for short-term wind prediction, and that the performance of an SVR model could be improved by adding relevant input features. Machine learning makes data-driven decisions or predictions by establishing a model from sample inputs. The forecasting performance therefore also depends on the quantity and the quality of the sample inputs used to train the regression and classification model.
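Casting wind speed estimation as a regression problem, as described above, amounts to building lagged input vectors from the series. A dependency-light sketch of that framing is below; the synthetic series, lag count, and the plain least-squares autoregression are all stand-ins (not the Jiangsu wind farm data or the paper's SVR/ANN models):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(300)
# Synthetic "wind speed" series (m/s): a daily cycle plus noise, NOT real data.
speed = 8.0 + 2.0 * np.sin(2 * np.pi * t / 48) + rng.normal(0.0, 0.3, t.size)

def make_lagged(series, n_lags=4):
    """Use the n_lags previous values as regression features for each target."""
    X = np.column_stack([series[i:i - n_lags] for i in range(n_lags)])
    return X, series[n_lags:]

X, y = make_lagged(speed)
X1 = np.column_stack([X, np.ones(len(X))])              # add a bias column
w, *_ = np.linalg.lstsq(X1[:200], y[:200], rcond=None)  # "train" on first 200
mae = np.abs(X1[200:] @ w - y[200:]).mean()             # held-out error
```

Any regressor (SVR, ANN, or otherwise) can be dropped into the same lagged-feature framing; the least-squares fit here only keeps the sketch self-contained.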
Active learning, as a special case of semi-supervised machine learning, is used to deal with sample selection. New samples are added to compensate for deficiencies in the existing samples. It selectively queries some useful information to obtain the desired outputs at new data points. In statistics, it is also called optimal experimental design [16]. Active learning methods have often been successfully applied to classification problems [17,18]. In the work of Douak et al. [19], the active learning method was first used for wind speed prediction. The results showed that, due to the ability to filter out training samples, active learning could outperform full samples in some cases. However, the active learning method used in that article was based on sample information only; no model information was added. Based on this, two novel active learning algorithms coupled with model information are proposed in this work for short-term wind speed forecasting. The motivation of this paper is to use the active learning method to predict short-term wind speed by optimizing the training sample sets, which can reduce the complexity of the model and ensure model accuracy. The main contents of the present work are to: (1) select the training samples by using two novel active learning methods, and (2) develop the prediction model for wind speed using ANN and SVR and compare the two active learning methods.
The remainder of this paper is organized as follows. Section 2 presents two novel active learning methods for forecasting wind speed. The experimental data and forecasting indexes are presented in Section 3. The results and performance analysis are discussed in Section 4. Finally, conclusions are drawn in Section 5.
Active Learning
The samples play an active role in the active learning process. The active learning method usually restricts the input area, then aims at sampling in input areas with less redundant information. The samples that are most conducive to improving the performance of the training model are selected. The quality of the training sets can be improved by active learning. The active learning mechanism is generally realized by the "query" approach [20-22]. Firstly, select the initial training sample sets, then learn by some approach and add useful learning samples to the training sample sets. The training sample sets are obtained through continuous learning and optimization.
Euclidean Distance and Error (EDE-AL)
The first active learning approach (EDE-AL) is proposed by inserting samples that are distant from the current training samples, and removing samples by forecasting error. The Euclidean distances Ed_l = {Ed_{l,t}} (t = 1, 2, ..., n) between each sample x_l (l = n+1, n+2, ..., n+m) of the learning subset U_i (i = 1, 2, ..., k) and the n current training samples x_t (t = 1, 2, ..., n) are computed as follows:

Ed_{l,t} = ||x_l − x_t||_2.

After that, for each learning sample x_l (l = n+1, n+2, ..., n+m), the corresponding minimum distance value is considered as the addition criterion:

f_ED(l) = −min_t {Ed_{l,t}}.

However, a single distance criterion cannot reflect the validity of samples well. The forecasting errors of the newly added samples are calculated, and the samples with lower forecasting errors are removed from the training set.
The strategy selects the samples with the largest difference from the current training samples, and avoids choosing samples that are not useful for the model. The flow chart of the Euclidean distance combined with forecasting error algorithm is shown in Figure 1 and summarized as follows:
Step (1) Define the initial training samples x_t (t = 1, 2, ..., n) and the learning subset U_i (i = 1, 2, ..., k);
Step (2) Compute the Euclidean distances Ed_l = {Ed_{l,t}} (t = 1, 2, ..., n) from the n current training samples for each sample x_l (l = n+1, n+2, ..., n+m) of the learning subset;
Step (3) Define the sample similarity as f_ED(l) = −min{Ed_l};
Step (4) Label and insert the N most distant samples into the training set and update the forecasting model;
Step (5) Calculate the forecasting errors of the N new training samples and remove samples with errors less than the threshold ξ;
Step (6) Re-establish the model to predict the next learning subset until the iteration stops.
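As a rough sketch, one EDE-AL round (Steps 2-5 above) can be written as follows; the function name, the array layout, and the stand-in `predict` callback are this sketch's assumptions, not code from the paper:

```python
import numpy as np

def ede_al_round(train_x, train_y, pool_x, pool_y, predict, n_add, xi):
    """One EDE-AL round (sketch): add the n_add pool samples farthest from
    the current training set, then drop those whose error is below xi."""
    # Ed_l: Euclidean distances from each pool sample to every training sample
    dists = np.linalg.norm(pool_x[:, None, :] - train_x[None, :, :], axis=2)
    # similarity f_ED(l) = -min{Ed_l}; most distant = largest minimum distance
    min_dist = dists.min(axis=1)
    order = np.argsort(-min_dist)[:n_add]          # the N most distant samples
    cand_x, cand_y = pool_x[order], pool_y[order]
    # remove newly added samples whose forecasting error is below threshold xi
    err = np.abs(predict(cand_x) - cand_y)
    keep = err >= xi
    return (np.vstack([train_x, cand_x[keep]]),
            np.concatenate([train_y, cand_y[keep]]))
```

In practice `predict` would be the current ANN or SVR forecaster; here any callable of the same shape works.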
Support Vector Regression (SVR-AL)
In machine learning, the SVR algorithm is a supervised learning model used for regression analysis. The objective of SVR is to maximize the margin of separation and to minimize the misclassification error [23]. SVR defines a loss function that ignores errors situated within a certain distance of the true value, often called the ε-insensitive loss function [24]. Figure 2 shows an example of a one-dimensional linear regression function with an ε-insensitive band.
So, the SVR optimization problem [25] is as follows:

min_w (1/2)||w||² + C Σ_{i=1}^{n} L_ε(y_i, f(x_i)),

where w is the weight vector, C is a constant, and L_ε is the ε-insensitive loss function

L_ε(y, f(x)) = 0 if |y − f(x)| ≤ ε, and |y − f(x)| − ε otherwise.
The above problem can be rewritten by introducing slack variables ξ_i, ξ_i* (i = 1, ..., n) to measure the deviation of samples outside the ε-insensitive zone. Thus, SVR is formulated as minimization of

min (1/2)||w||² + C Σ_{i=1}^{n} (ξ_i + ξ_i*)
s.t. y_i − ⟨w, x_i⟩ − b ≤ ε + ξ_i, ⟨w, x_i⟩ + b − y_i ≤ ε + ξ_i*, ξ_i, ξ_i* ≥ 0, i = 1, ..., n.

Introducing the Lagrange multipliers α_i, α_i*, η_i, and η_i*, the corresponding Lagrangian function can be written down, which in turn leads to the dual optimization problem

min (1/2) Σ_i Σ_j (α_i − α_i*)(α_j − α_j*)⟨x_i, x_j⟩ + ε Σ_i (α_i + α_i*) − Σ_i y_i (α_i − α_i*)
s.t. Σ_i (α_i − α_i*) = 0, 0 ≤ α_i, α_i* ≤ C.

Introducing the kernel function K(x_i, x_j), the inner product ⟨x_i, x_j⟩ above is replaced by K(x_i, x_j).

Uncertainty sampling is the main strategy of active learning methods. Geometrically, samples with errors outside the ε-insensitive band carry great uncertainty and are important to the final design of the model. The proposed strategy selects the samples carrying this uncertainty information. The flow chart of the SVR-AL algorithm is shown in Figure 3 and summarized as follows:
Step (1) Define the initial training samples x_t (t = 1, 2, ..., n) and the learning subset U_i (i = 1, 2, ..., k);
Step (2) Establish an ε-SVR model using the training samples, and calculate the model error of each sample x_l (l = n+1, n+2, ..., n+m) of the learning subset;
Step (3) Label and insert the samples with model errors outside the ε-insensitive band into the training set;
Step (4) Update the training set and re-establish the ε-SVR model to predict the next learning subset until the iteration stops.
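A minimal sketch of the selection rule in Steps 2-3 follows; any fitted regressor can stand in for the ε-SVR here, and the function name and `predict` callback are illustrative, not the authors' code:

```python
import numpy as np

def svr_al_round(train_x, train_y, pool_x, pool_y, predict, eps):
    """One SVR-AL round (sketch): label and insert the pool samples whose
    model error lies outside the eps-insensitive band."""
    resid = np.abs(predict(pool_x) - pool_y)
    outside = resid > eps          # uncertain samples lie outside the band
    return (np.concatenate([train_x, pool_x[outside]]),
            np.concatenate([train_y, pool_y[outside]]))
```

After each round the ε-SVR would be refitted on the enlarged training set before the next learning subset is scored.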
Wind Speed Data Sets
The wind speed data were collected from a wind farm in Jiangsu Province, China. The acquisition equipment is a low-wind-speed FD-77 wind turbine with 1.5 MW rated power. The turbine has three blades with a diameter of 77 m and a swept area of 4657 m². The collected wind information included real-time wind direction and speed, 5-min average wind speed, and standard deviation.
The 30-min average wind speed was calculated and used in the experiment, covering 1 June 2011 to 30 July 2011. There were 2729 groups of data. The first 2000 data were used as the training set and the remaining 729 data for testing. The typical samples were selected from the training set; therefore, the training set was divided into an initial training set and learning subsets. The first 100 data formed the initial training set and each subsequent block of 100 samples formed a learning subset. The final training set was used to train the models for short-term wind speed prediction, and the testing set was used to compare the performance of the two active learning strategies. Figure 4 displays the wind speed time series. Table 1 shows the descriptive statistics of the different wind speed datasets.
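The split described above can be sketched as follows (the `speeds` array is a placeholder for the actual 30-min series):

```python
import numpy as np

# 2729 half-hourly averages: first 2000 for training, last 729 for testing
speeds = np.arange(2729.0)              # placeholder for the wind speed series

train, test = speeds[:2000], speeds[2000:]
initial = train[:100]                   # initial training set
# nineteen learning subsets of 100 samples each
subsets = [train[i:i + 100] for i in range(100, 2000, 100)]
```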
Model Selection
Model selection chooses a statistical model structure from a given data set. If the input dimension is too small, the input information is insufficient and prediction accuracy will be reduced; if the input information is redundant, the overly complex prediction model will also reduce prediction accuracy [26]. The criterion-function method determines the degree of approximation to the original data based on the residual value. The Bayesian information criterion (BIC) is a criterion for model selection, and the model with the lowest BIC is preferred. In this paper, the BIC was used to determine the model input dimension. The autocorrelation function (ACF) and partial autocorrelation function (PACF) were also used to identify the input dimension: for an AR(p) process the PACF is zero at lag p + 1 and greater, so the appropriate lag is the one beyond which the partial autocorrelations are all zero. According to the PACF and the BIC criterion (Figure 5), the input dimension of the model was 3.
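A hedged sketch of lag selection by BIC for an autoregressive input: the AR least-squares fit and the BIC form m·log(RSS/m) + p·log(m) are standard choices, not necessarily the exact procedure used in the paper.

```python
import numpy as np

def select_lag_bic(x, max_lag=6):
    """Fit AR(p) by least squares for p = 1..max_lag and pick the lag
    (input dimension) that minimizes the BIC."""
    n = len(x)
    bics = []
    for p in range(1, max_lag + 1):
        # lagged design matrix: row t is [x_{t-1}, ..., x_{t-p}]
        X = np.column_stack([x[p - j: n - j] for j in range(1, p + 1)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ coef) ** 2))
        m = len(y)
        bics.append(m * np.log(rss / m) + p * np.log(m))
    return int(np.argmin(bics)) + 1, bics
```

The selected lag would then fix the number of model inputs (3 in this paper).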
Prediction Models
The ANN and SVR were used to develop the prediction models for short-term wind speed. The multilayer perceptron (MLP) is one of the most popular ANN algorithms [27]. In this study, an MLP was used with an input layer, one hidden layer, and an output layer: 3 input nodes, 6 hidden-layer nodes, and 1 output node. The transfer function on the hidden layer was a sigmoid function and the training algorithm was Levenberg-Marquardt.
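With an input dimension of 3, each model maps the previous three observations to the next one. A small sketch of building such (input, target) pairs from the series (names are illustrative):

```python
import numpy as np

def make_lagged_pairs(series, lag=3):
    """Build (input, target) pairs: each target y_t is predicted from the
    previous `lag` values of the series."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[t - lag:t] for t in range(lag, len(series))])
    y = series[lag:]
    return X, y
```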
The SVR is a popular non-linear modeling tool that maps the input data into a high-dimensional feature space via a kernel [28]. In this study, a radial basis function (RBF) kernel was used for SVR, and a gradient optimization method was used to determine two important parameters: the penalty coefficient and the width of the RBF kernel.
The models were evaluated using the following criteria. The main variables in this paper are listed in Table 2.
(1) root mean square error: RMSE = sqrt( (1/M) Σ_{t=1}^{M} (y_t − ŷ_t)² )
(2) mean absolute error: MAE = (1/M) Σ_{t=1}^{M} |y_t − ŷ_t|
(3) mean absolute percentage error: MAPE = (100%/M) Σ_{t=1}^{M} |y_t − ŷ_t| / y_t
where y_t and ŷ_t are the measured and predicted wind speed, respectively, at time t, and M is the number of test data.
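The three criteria can be sketched directly; the percent form of MAPE and the assumption of no zero wind speed values in the denominator are this sketch's choices:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(yhat))))

def mape(y, yhat):
    """Mean absolute percentage error (assumes no zero measurements)."""
    y, yhat = np.asarray(y, dtype=float), np.asarray(yhat, dtype=float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)
```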
Results and Discussion
In order to verify the effectiveness of the two proposed active learning methods, a random selection of a similar number of samples was used for comparison. The models were used for 1-step ahead (30 min) and 4-step ahead (2 h) wind speed prediction. The prediction results for different models and different training sets are shown in Tables 3 and 4. It can be seen that the number of training samples was roughly halved by the two proposed active learning methods, while performance remained similar to that of the model trained on all samples. Comparing the different models, the persistence model performed worst and the SVR model was more suitable for wind speed prediction than the ANN model. Table 3 shows the results of the 1-step ahead (30 min) prediction; the RMSE with all training samples was the best, meaning that the model could be trained more adequately with all samples. However, the ANN model with the typical samples selected by EDE-AL achieved a similar RMSE and relatively better MAPE and MAE. At the same time, the MAE and MAPE of the SVR model with the SVR-AL sample set were the lowest. Both active learning methods performed better than a similar number of randomly selected training samples, and similarly to or only slightly worse than the full training set, while the numbers of training samples were reduced by about 60%. The two active learning methods behaved differently when combined with different models. In conclusion, both methods significantly reduced the training samples while maintaining model accuracy.
Table 4 shows the results of the 4-step (2 h) ahead prediction. The RMSE, MAE, and MAPE were poorer than for 1-step ahead prediction. Compared to the full training set, the numbers of training samples selected by EDE-AL and SVR-AL were reduced by 34 percent. Meanwhile, the two active learning methods outperformed the random method. The performance discrepancy between the two active learning methods was not obvious.
Figures 6 and 7 show the 1-step ahead prediction results of the two active learning methods combined with SVR models for short-term wind speed.
For EDE-AL, two parameters need to be determined. The larger N is, the more samples are labeled and added; the larger ξ is, the more samples are removed. When the forecasting error of a sample was less than half the RMSE of the full training set, we regarded the sample as useless for the model. Therefore, ξ was chosen to be half the RMSE of the full training set and N was varied. From Figure 6, it can be seen that the changes of RMSE and MAPE were moderate. When the number of training samples was 847, the MAPE was minimal but the RMSE was relatively large; therefore, the point at 680 samples was selected, with relatively small values of both RMSE and MAPE.
For SVR-AL, the number of additional samples gradually increases as ε becomes smaller. From Figure 7, it can be seen that the RMSE gradually decreased as the number of samples increased. However, the MAPE for 600-800 samples was significantly lower than that of the full training set. The samples outside the ε-insensitive zone generally fluctuate greatly; due to the addition of these samples, the marginal samples were predicted better. More intermediate samples were added as ε became smaller; therefore, the MAPE decreased first and then increased.
Figures 8 and 9 show the 4-step ahead prediction results of the two active learning methods combined with SVR models for short-term wind speed. Compared with the 1-step ahead results, the 4-step ahead performance was poorer, while the trends of RMSE and MAPE were consistent.
Conclusions
Active learning was used to select samples for short-term wind speed prediction in this study. Starting from an initial training set, the proposed methods selected typical samples from a large pool. Two novel active learning methods that use model information to label and add samples were proposed. The ANN and SVR models combined with the two methods were investigated for 1-step (30 min) and 4-step (2 h) ahead wind speed prediction. The results showed that EDE-AL and SVR-AL performed better than the random approach. Compared with the full training set, the number of selected samples was significantly reduced while model accuracy was maintained.
Figure 1. The flow chart of the active learning approach by Euclidean distance and error (EDE-AL).
Figure 2. One-dimensional linear regression with an ε-insensitive band.
Figure 3. The flow chart of the active learning approach by support vector regression (SVR-AL).
Figure 4. The wind speed data from the Jiangsu wind farm.
Figure 5. The result of input dimension selection.
Figure 6. 1-step ahead prediction results of the support vector regression (SVR) model by EDE-AL with ξ = 0.54 and different N values.
Figure 9. 4-step ahead prediction results of the SVR model by SVR-AL with different ε values.
Figure 7. 1-step ahead prediction results of the SVR model by SVR-AL with different ε values.
Table 1. Descriptive statistics of wind speed datasets (m/s).
Table 2. The description of the main variables.
Table 3. 1-step ahead (30 min) prediction of short-term wind speed with different sample sets. ANN is artificial neural network; SVR is support vector regression.
Table 4. 4-step ahead (2 h) prediction of short-term wind speed with different sample sets.
Low-threshold CMOS Rectifier Design for Energy Harvesting in Biomedical Sensors
The power transfer efficiency of energy harvesting systems depends strongly on the power conditioning circuits, especially rectifiers. The voltage drop across the rectifier and its leakage current can drastically influence the efficiency, and hybrid energy harvesters impose even more severe constraints on the rectifier. Low-Vth transistors and a bulk regulation technique are used in this work to mitigate the voltage drop and leakage current, respectively. It is shown that bulk regulation stops the leakage current through the body of the PMOS transistor. A near-zero-threshold cross-connected CMOS rectifier is presented, implemented in the standard 180 nm UMC technology, and experimental analyses are carried out to evaluate the circuit performance.
INTRODUCTION
The increasing demand for biomedical implants is triggered by numerous factors such as reducing healthcare costs, enhancing quality of human life, and understanding human anatomy and physiology [8]. Supplying electrical power to implants, regardless of their function and specifications, is the most stringent constraint on implant functionality. Research on energy harvesting systems is targeted significantly at finding autonomous energy supplies as an alternative to batteries in low-power electronic devices such as wireless sensor networks and biomedical implants. Energy harvesting systems are categorized based on the transduction mechanisms into inertial mass vibrations, near/far field wireless transmission, photo-voltaic cells, etc. The efficiency of energy harvesters generally depends on the transducer as well as its following power conditioning circuit. The transducer part is application dependent and its optimization depends on a very wide set of parameters such as inductive or mechanical couplings. However, the power conditioning components are all in the electrical domain, consisting of a resonance circuit, rectifier, regulator and converter [6]. Regardless of the transduction mechanism, the output electrical energy is in the form of alternating current or voltage, so rectifiers are needed to generate a useful direct current (DC) supply for electronic circuits. Rectifiers are therefore the most essential part of power conditioning in energy harvesters. The development of hybrid structures that combine distinctive attributes of individual energy harvesting techniques is the recent challenge in the design of high-efficiency energy harvesters [3,5]. For example, piezoelectric harvesters have a higher efficiency at high frequencies, while electromagnetic induction generates higher energy levels at low frequencies; combining these two techniques can lead to a broadband energy harvester. In addition, radio frequency (RF) energy harvesters are implemented by electrical
components only, without any mechanical interaction, which gives a higher reliability compared with inertial counterparts, though the inertial energy harvesters can generate higher output power than the RF counterparts. Therefore, in a hybrid system the RF energy scavengers could be used as a backup solution. A hybrid piezoelectric-RF energy harvester is presented in [5], where the piezoelectric output voltage is used to bias the rectifier up to the conduction point, in order to facilitate a more efficient conversion for weak RF inputs.
In this work the rectifier specifications due to each energy harvesting technique are discussed and a highly efficient rectifier is implemented in CMOS. The conditions imposed on the rectifier by each energy harvesting technique are discussed in Section 2, Section 3 introduces the proposed rectifier, and experimental results are presented in Section 4.
CMOS BRIDGE RECTIFIERS
In conventional rectifier applications, Schottky diodes offer superior performance: their lower threshold voltage and reverse leakage result in higher conversion efficiency. However, it is impossible to implement Schottky diodes in standard CMOS processes. The popularity of CMOS rectifiers is due to their integration capability with the other electronics, including sensor readout circuits, transceivers, etc. Diode-connected CMOS transistors can be used as rectifier elements with a minimum voltage drop of 1 Vth. This translates to 2 Vth in a full bridge rectifier, which is more than the typical peak voltage induced in the secondary coil/antenna of RF energy harvesters. In an improved bridge rectifier for low-frequency applications, i.e. inertial energy harvesters, transistors are employed in a switch configuration instead of diode-connected, as illustrated in Figure 2 [1]. The minimum input voltage is reduced to 1 Vth, which saves significant power in micro-energy harvesting. This circuit is also used for RF energy harvesting as the differential-drive CMOS rectifier [2] or complementary CMOS switch rectifier [7].
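The threshold-drop argument can be illustrated with a toy calculation; the idealized model (peak output = input peak minus the summed device drops, clipped at zero) and the 0.45 V value used for Vth are this sketch's assumptions, not measured circuit data:

```python
def bridge_peak_output(v_peak, vth, n_drops):
    """Peak DC output of an idealized rectifier: the input peak minus the
    total threshold drop of the n_drops conducting devices, clipped at 0."""
    return max(abs(v_peak) - n_drops * vth, 0.0)
```

For a weak RF input below 2*Vth, the diode-connected full bridge delivers nothing, while the switch (cross-connected) topology, losing only ~1*Vth, still conducts.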
Zero-threshold NMOS transistors are used in [9] to implement a voltage rectifier-doubler. In CMOS fabrication technology, making adjacent complementary transistors requires an n-well/p-well structure, as illustrated in Figure 3, which precludes a zero-threshold p-n junction. Therefore, only one type of zero-Vth transistor (NMOS) is available in CMOS technology, although both types are needed to implement the differentially driven bridge rectifier in Figure 2. A silicon-on-sapphire (SOS) fabrication technology is used in [7] to implement complementary near-zero-Vth transistors; however, non-standard processes result in extra fabrication costs.
BULK-REGULATED LOW-THRESHOLD (VTH) RECTIFIER
As mentioned in Section 1, hybrid energy harvesters are highly desirable for efficiency and reliability enhancement. However, hybrid transduction imposes different constraints on rectifier circuits. Several studies in recent years have aimed to improve the efficiency of rectifiers, but the reported figures-of-merit fail to thoroughly describe rectifier performance for hybrid applications. For example, the efficiency can be degraded either by reverse leakage due to high-voltage outputs from piezoelectric transducers, or by low-voltage outputs from electromagnetic transducers that are unable to overcome the rectifier potential barrier. In addition, the high-frequency outputs from an RF transmitter might not be converted efficiently due to the rectifier frequency response. In most studies of the differentially driven CMOS rectifier, leakage currents are assumed to be negligible; however, simulation results show that in the high-voltage regime, the leakage can be even larger than the load current. In order to stop the leakage current, a bulk regulation technique is employed here, as shown in Figure 4.
Low-Vth devices are used in this design in order to strike a compromise between normal and zero-Vth transistors. Low-Vth CMOS transistors are available in standard fabrication technologies through low doping levels in the well regions.
The NMOS body (P-substrate) in Figure 4 is connected to ground, which avoids any leakage current through the body. The PMOS body (N-well) should be connected to the most positive voltage in the circuit. The bulk regulation transistors (MB1-MB4) switch the body of the PMOS transistors to the highest voltage, which can be either the output voltage or the input voltage. The simulation results for the cross-connected rectifier with normal transistors and with low-Vth transistors are compared with the bulk-regulated rectifier circuit, as illustrated in Figure 5. In this simulation the output load is a 10 kΩ resistor in parallel with a 100 nF capacitor. The negligible difference in the steady-state load voltages is due to the different impedances seen from the rectifier inputs. The aspect ratios are the same for each transistor in the corresponding circuits; the difference is due to the unequal impurity doping between normal and low-Vth transistors as well as the added bulk regulation transistors. The bulk current of PMOS transistor M3, which is activated in the same input cycle in the three different circuits, is monitored in Figure 6. The bulk leakage current spikes in the rectifier with normal transistors (M.b) and with low-Vth transistors without bulk regulation (MLV.b) are much larger than the load current (Vout/RL). However, the bulk leakage current in the bulk-regulated circuit (MLVBR.b) is zero.
EXPERIMENTS
The proposed circuit is implemented in UMC 0.18 µm CMOS technology and tested with different loads. The efficiency of the proposed circuit was shown in a previous work to be higher than that of Schottky diodes [4]. However, adding the bulk regulation transistors influences the large-signal frequency response of the circuit. The large-signal frequency response of the rectifier is measured by applying sinusoidal inputs and varying the frequency. As illustrated in Figure 7, the power transfer drops significantly above a certain frequency (1 MHz in this case), mainly due to the wide aspect ratio of the bulk regulation transistors. The characteristic of the bulk-regulated rectifier for several load resistors is measured using the X-Y mode of the oscilloscope. As illustrated in Figure 8, the slope of the curve in the conduction region varies with the load, and the minimum voltage required to turn on the rectifier also depends on the load resistor.
CONCLUSION
Hybrid energy harvesting can boost the efficiency and reliability of autonomous sensors in applications such as biomedical implants. However, hybrid transduction mechanisms impose stringent constraints on the power conditioning circuits, especially the rectifier. A CMOS rectifier with bulk regulation is designed using low-Vth transistors to mitigate leakage currents and the voltage drop across the rectifier. The proposed circuit is implemented in 0.18 µm UMC CMOS technology and successfully characterized.
* Dr. Mohammadi, Dr. Redoute and Dr. Yuce are with the Biomedical Integrated Circuits and Sensors Research Lab in the Department of Electrical and Computer Systems Engineering at Monash University, Clayton (3800), VIC, Australia.
Figure 3: Cross section of a standard CMOS transistor.
Figure 5: Transient and steady-state load voltage of CMOS rectifiers with normal and low-Vth transistors, with and without bulk regulation.
Figure 6: Transient and steady-state bulk current of CMOS rectifiers with normal and low-Vth transistors, with and without bulk regulation.
Figure 7: The large-signal frequency response of the CMOS rectifier.
Figure 8: The rectifier input-output transfer characteristic for several ohmic loads.
"Engineering",
"Computer Science"
] |
Pre-Seismic Irregularities during the 2020 Samos (Greece) Earthquake (M = 6.9) as Investigated from Multi-Parameter Approach by Ground and Space-Based Techniques
We present a comprehensive analysis of pre-seismic anomalies as computed from ground- and space-based techniques during the recent Samos (Greece) earthquake of 30 October 2020, with magnitude M = 6.9. We proceed with a multi-parametric approach where pre-seismic irregularities are investigated in the stratosphere, ionosphere, and magnetosphere. We use the convenient methods of the acoustic and electromagnetic channels of the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) mechanism by investigating Atmospheric Gravity Waves (AGW), the magnetic field, electron density, Total Electron Content (TEC), and energetic particle precipitation in the inner radiation belt. We incorporate two ground-based IGS GPS stations, DYNG (Greece) and IZMI (Turkey), for computing the TEC and observe a significant enhancement in daily TEC variation around one week before the earthquake. For the space-based observations, we use multiple parameters recorded by Low Earth Orbit (LEO) satellites. For the AGW, we use SABER/TIMED satellite data and compute the potential energy of stratospheric AGW from the atmospheric temperature profile. The maximum potential energy of such AGW is observed around six days before the earthquake. A similar AGW signature is also observed through wavelet analysis of the fluctuations in TEC values. We observe significant energetic particle precipitation in the inner radiation belt over the earthquake epicenter, consistent with the conventional concept of an ionospheric-magnetospheric coupling mechanism, using an NOAA satellite. We first eliminate the particle count rates (CR) due to possible geomagnetic storms and the South Atlantic Anomaly (SAA) by a proper choice of magnetic field B values. After removal of the statistical background CRs, we observe a significant enhancement of CR four and ten days before the mainshock.
We use Swarm satellite outcomes to check the magnetic field and electron density profiles over the earthquake preparation region. We observe a significant enhancement in electron density one day before the earthquake. The parameters studied here show an overall pre-seismic anomaly from ten days to one day before the earthquake.
Introduction
The seismic hazards and their preparation mechanism are extremely complex. The overall mechanism and its possible outcomes not only involve huge outbursts of mechanical energy but also a wide range of physical and chemical processes attached to the lithosphere, atmosphere, and ionosphere [1]. Several studies have reported that the processes of preparation for earthquakes (EQs) are confined within a preparation or critical zone [2,3]. Electro-kinetic phenomena, emission of radioactive particles, thermal irregularities, emission of electromagnetic signals, and many other physical processes have been identified as short-term precursory phenomena of impending EQs. Numerous studies have already established pre-, co-, and post-seismic anomalies using different parameters [1,4-20]. It is found that the pre-EQ processes are nonlinear and anisotropic and can depend on multiple parameters. Based on this idea, a coupling mechanism was established to understand the pre-seismic process that begins well before the mainshock, known as the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) mechanism [1]. This coupling (LAIC) mechanism is detected through various channels such as thermal [1], acoustic [21,22], and electromagnetic [23]. Many ground- and space-based instruments are used to detect these disturbances [1,24,25]. The anomalies are observed using ground-based observations with Very Low Frequency (VLF)/Low Frequency (LF) radio waves, the critical frequency of the F2 layer (f0F2), Total Electron Content (TEC), Ultra Low Frequency (ULF) emissions, etc. [26-34]. In parallel, many space-based techniques have been used to investigate the LAIC mechanisms. Many satellite observations provide the atmospheric ion concentration, electron density, magnetic field components, etc. 
Using Detection of Electromagnetic Emissions Transmitted from EQ Regions (DEMETER) and CHAllenging Minisatellite Payload (CHAMP) satellite data, Ryu et al. [35] computed the electron temperature, electron concentration, ion concentration, and ion temperature to study the perturbation in the ionosphere due to the Honshu EQ that occurred in Japan in September 2004, and reported an ion temperature enhancement one week before the mainshock. Ryu et al. [36] showed the Equatorial Ionospheric Anomaly (EIA) one month before the Wenchuan EQ (12 May 2008) using DEMETER satellite data. They concluded that this kind of anomaly occurred due to an external electric field, generated from the EQ epicenter, that perturbs the pre-existing E × B drift. There have been several reports on investigations of pre-seismic anomalies based on multi-parametric approaches in recent years. Pulinets et al. [37] used the GPS (Global Positioning System)-TEC, Outgoing Longwave Infrared Radiation (OLR), Atmospheric Chemical Potential (ACP), and the b value from the Gutenberg-Richter Frequency Magnitude Relationship (FMR) simultaneously. They observed a depletion in the cross-correlation coefficient of GPS-TEC one day before, an OLR anomaly 19 days before, anomalies in the ACP distribution 10 days before, and a decrement in the b value for the Kamchatka EQ of 20 March 2016. De Santis et al. [38] used four parameters, specifically skin temperature, methane, total column water vapor, and aerosol optical thickness. They observed increases in all of these chosen parameters before the California EQ of 5 July 2019. They also observed a disturbance related to this EQ in Swarm satellite-A data on 3 June 2019, in the Y component of the magnetic field variation. This anomaly was observed during the nighttime period. They also observed disturbances in the electron density profile from the same track. In Chetia et al. 
[39], anomalies in three parameters were reported in a study of the ionospheric perturbation related to the Kokrajhar, Assam EQ of 12 September 2018. The examined parameters were TEC, geomagnetic total field intensity, and soil radon, and significant anomalies were noticed in all parameters before the corresponding EQ. Recently, Piersanti et al. [40] presented extensive research on this multi-parametric approach using the Magnetospheric-Ionospheric-Lithospheric Coupling (MILC) model and justified their various parameters for the 2018 Bayan EQ. In order to make an accurate comparison between the co-seismic and pre-seismic observations, they first separated the data set into co-seismic and pre-seismic observations and then analyzed each of four parameters: atmospheric oscillations (AGW), ionospheric plasma (TEC), electric field perturbations, and magnetospheric FLR eigenfrequencies. In this manuscript, we use ground- and space-based observations of some well-established parameters, such as TEC, Atmospheric Gravity Waves (AGW), energetic particle bursts in the inner radiation belt, magnetic field intensity, and the ionospheric electron density profile, during the 2020 Samos earthquake (EQ) (M = 6.9, epicenter (37.9001° N, 26.8057° E)) in Greece on 30 October 2020.
The ionospheric disturbances can be observed by using a dual-frequency GPS receiver to understand the behavior of TEC in the ionosphere. The total number of free thermal electrons in a column between the GPS satellite and the receiver is known as TEC; it is also known as columnar electron density. As the ionosphere is a dispersive medium for electromagnetic waves, it introduces a time delay in radio signals during the propagation of GPS satellite signals [41], which can be determined by comparing the two GPS signal frequencies. Absolute TEC values were calculated using dual-frequency GPS receivers in this study. Numerous researchers have already established that TEC can be used as a seismic precursor parameter. The electron density variation before the 1999 M = 7.6 Chi-Chi EQ was first reported by Liu et al. [27] using TEC processed from GPS receivers. They found a decrease of TEC during the evening hours on days 1, 3, and 4 preceding this EQ. In this paper, we utilize the method described in [28], who confirmed this outcome through a statistical analysis of global ionosphere map (GIM) TEC during the 20 M ≥ 6.0 EQs in Taiwan from September 1999 to December 2002. Since then, comparable work has been reported using GIM to examine TEC anomalies before large EQs, and numerous notable results have been found. A statistical investigation of 17 M ≥ 6.3 EQs during the ten-year period from 1 May 1998 to 30 April 2008 was presented by Liu et al. [42] using GIM TEC. They found that the TEC decreases 3-5 days before the EQ over the epicenter. Seismo-ionospheric precursors in the form of anomalous decreases in TEC five days preceding the 26 December 2004 M = 9.3 Sumatra-Andaman EQ are reported in [43]. There are also remarkable observations around both the epicenter and its conjugate point during the 2008 M = 8.0 Wenchuan EQ [44-47]. Akhoonzadeh et al. 
[48] revealed that statistically significant positive and negative anomalies in both DEMETER and TEC data, appearing during days 1-5 preceding all of the studied EQs under quiet geomagnetic conditions, can be regarded as seismo-ionospheric precursors. The TEC over the epicenter or its conjugate point reaches its maximum values on the day before the EQ in the mid-latitude region, while the northern crest of the equatorial ionization anomaly (EIA) moves poleward [49]. There are various direct observations of seismo-ionospheric anomalies using GPS-TEC and in situ electron density from orbiting satellites [50-52]. An enhancement of TEC over the epicenter was observed 1 day before the 12 January 2010 M = 7 Haiti EQ. Tao et al. [53] studied the ionospheric variations over the epicenter of the 17 July 2006 M = 7.7 south of Java EQ. Using GPS-TEC and DEMETER plasma data, they showed that seismo-ionospheric anomalies in the GPS-TEC and in situ plasma density occur at practically the same times over the epicenter. Vita et al. [54] showed the identification of ionospheric GPS-TEC anomalies before EQs in Sumatra between 2007 and 2012 using a correlation technique. Moreover, there are additional works on TEC variation during EQs in the Indian-subcontinent region. Kumar et al. [55] examined the ionospheric TEC for the Tamenglong EQ (M = 6.7, 3 January 2016) using data from the stations at Lhasa, China, Hyderabad, India (17.410° N, 78.55° E), and Patumwan, Thailand (13.730° N, 100.53° E). They found a significant enhancement during the five days before the EQ and a decrease of TEC after the mainshock. Sharma et al. [56] proposed an ionospheric TEC model to study the variation of TEC and also found that the TEC values were lower 13-14 days before the first two EQs they studied. 
Generally speaking, these examinations have revealed that low TEC values following a few high TEC values are well associated with seismic events in the Himalayan region.
Traveling ionospheric disturbances (TIDs) are created by EQs and tsunamis. Komjathy et al. [57] identified "normal" TIDs across Japan around 5 h following the 2011 Tohoku EQ. The variety of observed TEC perturbations can be predicted using JPL's ongoing Global Assimilative Ionospheric Model (GAIM) system, which includes wave-created gravity waves, auroral rotation, regular TIDs, and tropical fluctuations. Amin et al. [58] reported that lightning-induced AGW can be readily detected from GPS-TEC by applying a filtering method to it. After computation of the fluctuations, spectral analysis revealed wave-like structures. They studied the lightning over South Africa during 2012 and noticed a connection between ionospheric anomalies and unusual lightning events; this phenomenon may generate gravity waves that travel up to the F region and modulate the ionospheric electron density. Oikonomou et al. [59] made a broad examination of ionospheric TEC precursors associated with the M = 7.8 Nepal and M = 8.3 Chile EQs in 2015 based on spectral and statistical analysis and discovered wave-like structures before the EQs. It was shown that peculiar TEC patterns appeared from a few days to 2-3 h prior to the events and lasted up to 8 h, but an extended TEC wave-like pattern with periods of 20 or 2-5 min was also recognized, which could be correlated with the approaching seismic events.
The formation of the AGW is furthermore expected as a fundamental part of the precursory phenomena of seismic events. The principal acting agent of the acoustic channel of LAIC is the AGW, which can appear on account of atmospheric oscillations close to the epicentral zone of the corresponding EQ; this effect propagates in the vertical direction and perturbs the ionosphere. Different frontal systems and wind currents are the primary mechanisms for creating Gravity Waves (GWs). These GWs are the fundamental mechanism for transporting energy from the lower atmosphere to the upper stratosphere and mesosphere. The AGW is considered one of the most important waves in the atmosphere because of its strong effect on local and global atmospheric structures. These oscillations can be produced by changes in ground movement, temperature, and pressure. Ground-based measurements have been carried out to acquire wind and temperature data to examine AGW activity [60]. AGW is produced a few days before the EQ and propagates in the vertical direction [14,61,62]. During EQ preparation, variations in temperature, conductivity, and pressure result in AGW generation in the atmosphere.
The hypothesis of AGW excitation before an EQ has been reported by [63-65]. Murayama et al. [66] observed that the variation of AGW energies is directly related to the jet stream, using middle and upper atmosphere (MU) radar observational data in Japan. AGW development and its relationship with convection during the Indian southwest monsoon were studied by [67,68] using the Mesosphere-Stratosphere-Troposphere (MST) radar at Gadanki, India. Tsuda et al. [69] used GPS temperature profiles to investigate the global distribution of potential energy over mid-latitudes and showed that the potential energy is greater in the winter seasons. Korepanov et al. [14], with the help of surface atmospheric pressure and magnetic field data, concluded that AGW can be a significant parameter in seismo-ionospheric studies. Zhang et al. [70] and Yang et al. [71] used the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) satellite temperature profile to examine the potential energy (E_p) related to AGW. Nakamura et al. [72] performed a similar assessment and attempted to find the corresponding seismogenic effects for certain EQs. For the 2004 Niigata-Chuetsu EQ (M = 6.8), wavelet analysis of those parameters showed fluctuation periods of 10 to 100 min, which is in the range of AGW. Yang et al. [21] first reported that the AGW hypothesis can be utilized as an EQ precursor by using the ERA5 temperature profile for the 2016 Kumamoto EQ. They noticed anomalies in the AGW activity 4-6 days before the EQ. Subsequently, Yang et al. [22] reported a similar hypothesis for a different, oceanic event, the 2011 Tohoku EQ, using both the SABER and ERA5 temperature profiles, and compared the outcomes against the Kumamoto EQ ones. 
In our previous study [73], we showed that a geomagnetic storm occurring around the EQ does not contaminate the AGW computed from the SABER temperature profile, and we verified this result against the AGW obtained from the VLF signal.
Past studies reflect that perturbations at ionospheric and magnetospheric altitudes are induced by seismic events. These perturbations occur within the EQ preparation zone before the seismic event [33]. VLF electromagnetic emissions are known to be created a few hours before large EQs in or above their epicentral region [74]. When such an emission travels through the atmosphere and the ionosphere, it may disturb the paths of high-energy particles trapped in the Van Allen Radiation Belt (VAB). This causes the high-energy particles to descend to the altitude of Low Earth Orbit (LEO) satellites, which is below the VAB. If the mirror points of the particles are higher than 100 km, the high-energy particles drift across the magnetic field lines (L-shells) of the Earth [75]. This unusual disturbance involves the entire perturbed L-shell and the time frame of the particles' longitudinal drift. When an LEO satellite crosses the perturbed L-shell, an onboard particle detector registers a sharp enhancement in particle counts. In this situation, the L-shell of the particle bursts agrees with the L-parameter of the local disturbance in the radiation belt. Thus, the investigation of particle bursts provides an opportunity to determine the zone of the radiation-belt particle disturbance and subsequently to search for the signature of a seismic event [76-78]. Several attempts have been made to investigate the enhancement of energetic particle counts due to seismic events. In 1980, the first evidence of EQ-induced PBs was given by [79,80], who showed short-term enhancements in the particle counts in the inner radiation belt due to EQs. The conclusion was based on results from the MARIA-1 experiment [81].
Fidani et al. [82] and Battiston et al. [83] presented results on particle fluctuations related to seismic events, using National Oceanic and Atmospheric Administration (NOAA) satellite data for the detection of particle counts. NOAA and the National Aeronautics and Space Administration (NASA) jointly developed the Polar Operational Environmental Satellites (POES). The system comprises pairs of satellites, which guarantee that each part of the Earth is observed at least twice every 12 h from around 800 km altitude. Beginning with the NOAA-15 satellite in 1998, an upgraded version of the Space Environment Monitor (SEM-2) has been flown. The SEM-2 contains two sets of instruments that detect high-energy charged particles close to the Earth. It distinguishes and monitors the flux of energetic particles and electrons into the atmosphere and the flux of the particles at the altitude of the satellite. Obara et al. [84] showed electron flux enhancement due to a geomagnetic storm using an NOAA satellite. Soraas et al. [85] performed a similar analysis using NOAA satellite data, but for the computation of proton flux enhancement related to a geomagnetic storm. Particle bursts due to seismic activity are based on the phenomenon of the ionospheric-magnetospheric coupling process, a detailed description of which has been given by Walt et al. [86]. Bortnik et al. [87] found fluctuations in the VLF/LF signal in the same time frame in which Ref. [86] detected the particle bursts in the radiation belt. Further investigations have been conducted by the China Seismo-Electromagnetic Satellite (CSES) and ARINA experiments, which show seismogenic particle bursts [88,89]. Fidani et al. [90] used 11 years of 0° telescope data from the NOAA-15 satellite to study the correlation between energetic particle bursts and EQs. They reported the need for specific conditions to ensure the validity of the derived correlations. 
This relationship is conditioned by the occurrence of several EQs and occasional increases of PBs, which can interfere with each other. Recently, Chakraborty et al. [23] showed significant enhancements of particle counts before two types of EQs (land and ocean).
In the first 2.5 years of the Swarm satellite mission, De Santis et al. [91] studied seismic anomalies before twelve major EQs. They used a total of 60 days of Swarm satellite data (one month before and one month after the EQ day) for their investigation. For tracking the anomaly, they confined the investigation area to a circle centered on the EQ epicenter. They observed fluctuations in the magnetic field and electron density before all 12 EQs, and also observed that the anomalies follow a linear relation with the magnitude of the EQ. In the following year, Marchetti et al. [92] reported another case study of EQ anomalies using Swarm satellite data, covering all the EQs that occurred in Central Italy from 2016 to 2017. They also found anomalies in the Y (East) component of the magnetic field before the EQs.
In this paper, we adopt a multi-parametric approach to detect the pre-seismic anomalies in the atmosphere and ionosphere for the 2020 Samos EQ. This large EQ, having a magnitude of M = 6.9, occurred in the Aegean Sea, off the coast of Samos Island (Greece), close to the Greece-Turkey border at 11:51 UTC on 30 October 2020, with a depth of ∼21 km according to the USGS (United States Geological Survey). In this approach, we have tried to find seismic anomalies in two different channels of LAIC, namely the acoustic and electromagnetic channels. For the observations, we have used both ground- and space-based instruments. Firstly, for the ground-based observation, we compute the ionospheric TEC anomalies using GPS observations. For the space-based observation, we analyze AGW activity using SABER satellite data. We also compute the AGW from GPS-TEC to verify the SABER outcomes. The small-scale fluctuations in TEC are extracted with a fitting model, and wave-like structures are obtained from wavelet analysis. For the ionospheric-magnetospheric coupling process, we examine the perturbation in the variation of particle counts obtained from the NOAA-15 satellite. Using the magnetic field and plasma density information from the Swarm satellite, we proceed with a methodology similar to that described in De Santis et al. [91], with some necessary modifications. In the next section, we present the methodologies we adopt in this manuscript. In Section 3, we present our results, and, finally, in Section 4, we present our conclusions.
Methodology
In this manuscript, we investigate the multi-parametric approach for the 2020 Samos EQ, a very strong EQ that took place off the coast of the northern part of Samos Island (Greece): M = 6.9, epicenter (37.9001° N, 26.8057° E), focal depth = 12 km, time of occurrence 11:51:57 UTC [93,94]. For the ground-based observation, we utilize two GPS-IGS stations: (i) DYNG (38.078° N, 23.93° E) in Greece, and (ii) IZMI (38.39° N, 27.082° E) in Turkey, which are close to the EQ epicenter. The locations of the GPS-IGS stations (blue squares), the EQ epicenter (red disk), the EQ preparation zone (EPZ) (blue circle), and the critical zone (CZ) (red circle) are shown in Figure 1. The track of the Swarm satellite is also marked with a black line. The distances of the epicenter from DYNG and IZMI are 251 km and 58 km, respectively. The GPS-RINEX observation and navigation files are taken from the IGS NASA archive (https://cddis.gsfc.nasa.gov/gnss/information/daily, accessed on 22 November 2020). These GPS-RINEX data are fed into the "GOPI SEEMALA" software [95], which computes the Vertical Total Electron Content (VTEC) using Equations (1) and (2) [96-98].
It is well known that the VTEC can be expressed as [41]:

VTEC = (STEC − TEC_cal)/M(α),    (1)

where STEC is the Slant Total Electron Content, TEC_cal = b_s + b_R + b_RX, b_s is the satellite bias, b_R is the receiver bias, and b_RX is the receiver inter-channel bias. The mapping function M(α) is obtained by the same methodology as used by Mannucci et al. [96,97] and Langley et al. [98] as follows:

M(α) = 1/sin β = [1 − (R_e sin α/(R_e + h_min))²]^(−1/2).    (2)

Here, R_e is the radius of the Earth and h_min is the altitude of the Ionospheric Pierce Point (IPP), taken as 350 km [99]. α and β are the zenith angle at the receiver site and the elevation angle at the IPP, respectively. All of the bias corrections and the calibration of TEC are made using the methodology of Seemala and Valladares [95], developed and freely distributed by the Institute for Scientific Research, Boston College, MA, USA, to compute the STEC and VTEC. It is mandatory to monitor the geomagnetic conditions during the investigation of any seismogenic anomalies in order to eliminate possible contamination. We gather A_p average data directly from the World Data Center for Geomagnetism, Kyoto (http://wdc.kugi.kyoto-u.ac.jp/, accessed on 22 November 2020). In Figure 2, we present the variation of the D_st, K_p, A_p (3 h average and daily average), Sudden Ionospheric Disturbance (SID), and interplanetary magnetic field (IMF-Bz) during the period from 17 October to 4 November 2020. It is evident that, during that time period, the minimum D_st value was −36 nT and the maximum daily sum K_p value was 45 (<50). It is well established that IMF-Bz and associated variables (D_st, K_p, A_p, etc.) are used to indicate the presence of solar-geomagnetic storms. Malik et al. [100] and Ayomide and Emmanuel [101] have analyzed many strong geomagnetic storms with high IMF-Bz values. They confirmed that a day with IMF-Bz within the limit of −10 to +10 nT can be considered a solar quiet day. They observed a significant enhancement in TEC variation during those strong storms when the interplanetary magnetic field (IMF-Bz) crossed that quiet range. 
There are also many publications [102-109] where it has been widely found that a moderate IMF-Bz (within the above-mentioned limit) does not have a significant effect on TEC variation.
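The slant-to-vertical conversion of Equations (1) and (2) can be sketched in a few lines. This is an illustrative sketch only, not the GOPI SEEMALA implementation: the 350 km pierce-point altitude and the single-layer mapping function follow the text, while the function names and the lumped scalar bias term are our own assumptions.

```python
import numpy as np

R_E = 6371.0    # mean Earth radius, km
H_MIN = 350.0   # ionospheric pierce point (IPP) altitude, km (as in the text)

def mapping_function(alpha_deg):
    """Single-layer mapping function M(alpha).
    alpha_deg: zenith angle at the receiver, in degrees.
    M = 1/sin(beta), where beta is the elevation angle at the IPP."""
    a = np.radians(alpha_deg)
    sin_z_ipp = R_E * np.sin(a) / (R_E + H_MIN)  # sine of zenith angle at the IPP
    return 1.0 / np.sqrt(1.0 - sin_z_ipp**2)

def vtec_from_stec(stec, alpha_deg, tec_cal=0.0):
    """VTEC = (STEC - TEC_cal) / M(alpha); tec_cal lumps the satellite,
    receiver, and inter-channel biases into one scalar (a simplification)."""
    return (stec - tec_cal) / mapping_function(alpha_deg)
```

For a satellite at zenith (α = 0°) the mapping function is 1 and VTEC equals the bias-corrected STEC; at lower elevations M(α) grows above 1, so VTEC is always smaller than the corrected STEC.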
Computation of Ionospheric TEC from GPS-IGS Station
To study the anomalies in TEC, we compute the median X of the TEC values for the 15 days preceding the EQ and the related interquartile range IQR. The upper bound (UB) and the lower bound (LB) at a specific time (UT) are given by:

UB = X + 1.5 × IQR,    (3)

LB = X − 1.5 × IQR.    (4)

We use a technique similar to that of [28] to calculate the anomalies in VTEC. Assuming a normal distribution with mean (µ) and standard deviation (σ) for the VTEC, the expected values of X and IQR are µ and 1.34σ, respectively [110]. A VTEC value that crosses either the lower or the upper bound therefore does so with an 80-85% confidence level and is flagged as an anomaly. After calculating the upper and lower bounds, we quantify the anomaly as the enhancement above the upper bound or the decrement below the lower bound, respectively [52]:

∆VTEC = VTEC − UB, if VTEC > UB,    (5)

∆VTEC = LB − VTEC, if VTEC < LB.    (6)
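The median/IQR screening above can be sketched as follows; this is a minimal sketch, in which the 15-day-by-epoch array layout and the function name are our assumptions.

```python
import numpy as np

def tec_anomaly(vtec_history, vtec_today, k=1.5):
    """vtec_history: (15 days x n_epochs) array of past VTEC at fixed UT epochs.
    Returns (upper, lower): positive values where today's VTEC breaches the
    median +/- k*IQR bounds, zero elsewhere."""
    med = np.median(vtec_history, axis=0)
    q1, q3 = np.percentile(vtec_history, [25, 75], axis=0)
    iqr = q3 - q1
    ub, lb = med + k * iqr, med - k * iqr
    upper = np.where(vtec_today > ub, vtec_today - ub, 0.0)  # enhancement above UB
    lower = np.where(vtec_today < lb, lb - vtec_today, 0.0)  # decrement below LB
    return upper, lower
```

Because the bounds are built per UT epoch, the diurnal TEC cycle is removed automatically and only day-to-day departures are flagged.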
Computation of AGWs from the SABER Temperature Profile
SABER is an instrument on the TIMED satellite, which was launched in December 2001 into an orbit of 625 km altitude. The period of the satellite is 1.7 h, with an inclination of 74.1° [111]. SABER recorded its first observations in January 2002. It records the temperature over a height coverage of 20-100 km by utilizing the wavelength range from 1.27 to 17 µm. In the northward-viewing mode, the latitudinal coverage of SABER is 50° S-82° N, and in the southward-viewing mode it is 50° N-82° S. The viewing mode of SABER alternates every 60 days. Remsberg et al. [112] have described the technique for the extraction of temperature from the SABER satellite.
Preusse et al. [113], Preusse et al. [114], and Fetzer et al. [115] have effectively established the strategy to compute the AGW. The extraction of AGW from temperature profiles has been achieved by several techniques in the previous few decades [21,22,71,73,116-122]. In the following, we summarize the methodology applied for the extraction of the AGW from the SABER temperature profile. We gather the altitude variation of the temperature profile for the elevation range 20 to 100 km for the region of our investigation (around the EQ epicenter) from the SABER archive (http://saber.gats-inc.com/, accessed on 22 November 2020) and take the logarithm of the acquired individual temperature profiles. A third-order polynomial is then fitted to the logarithmic temperature profile and, to get the residual temperature, we subtract the fitted profile from the original one. As AGW has a wavelength longer than 4 km, a 4 km boxcar filter is applied to the residuals of the individual profiles to eliminate the other small-scale waves. After such filtration, the filtered data are added back to the fitted profile, and this gives the final profile. The antilogarithm of the final profiles gives the least-squares fit (LSF), which is utilized to acquire the daily zonal mean temperature and the zonal wave components 1-5. The background temperature (T₀) is obtained from the summation of all wave components 0-5. The background temperature profiles are subtracted from the original temperature profile to get the perturbation temperature (T′). The obtained background temperature profiles are used in Equation (7) to get the Brunt-Väisälä frequency (N) for a particular profile:

N² = (g/T₀)(∂T₀/∂z + g/c_p),    (7)

where z is the altitude, and c_p is the specific heat at constant pressure. 
The potential energy (E_p) associated with the AGW can be estimated for individual temperature profiles by putting the perturbed and background temperatures into Equation (8):

E_p = (1/2)(g/N)² ⟨(T′/T₀)²⟩,    (8)

where g is the acceleration due to gravity, and N is the Brunt-Väisälä frequency. This technique was utilized in our past work to study the activity of AGW during the EQ that happened on 3 January 2016 in Imphal, India [73], and in investigations of AGW activity during numerous EQs. The strategy was originally utilized by [119] to extract the GW from the SOFIE temperature profile in order to examine the activity of AGW during different Sudden Stratospheric Warming (SSW) events. To compute the AGW excitation during the EQ, we choose the period from 17 October to 4 November 2020. The AGW is computed for a region of 30°-50° N (latitude range) and 10°-40° E (longitude range), which mostly covers the EQ epicenter, the EPZ (∼1023 km), and the CZ (∼200 km).
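The E_p computation can be illustrated with a deliberately simplified single-profile sketch: here the background T₀ is taken as a cubic fit to ln T only (the full method also uses the daily zonal mean and wave components 0-5 plus a 4 km boxcar filter, which we skip), and the stratospheric constants g and c_p are assumed values.

```python
import numpy as np

G, C_P = 9.5, 1004.0  # m/s^2 (stratospheric value, assumed) and J/(kg K)

def agw_potential_energy(z, T):
    """Simplified sketch of Ep = (1/2)(g/N)^2 <(T'/T0)^2> for one profile.
    z: altitude in metres, T: temperature in kelvin."""
    zs = (z - z.mean()) / z.std()              # scale z for a stable polyfit
    coeffs = np.polyfit(zs, np.log(T), 3)      # cubic fit to ln T
    T0 = np.exp(np.polyval(coeffs, zs))        # background temperature T0
    Tp = T - T0                                # perturbation temperature T'
    dT0dz = np.gradient(T0, z)                 # dT0/dz in K/m
    N2 = (G / T0) * (dT0dz + G / C_P)          # Brunt-Vaisala frequency squared
    N2 = np.clip(N2, 1e-6, None)               # keep the profile statically stable
    return 0.5 * np.mean((G**2 / N2) * (Tp / T0) ** 2)  # Ep in J/kg
```

A smooth profile yields a near-zero E_p, while superposing a wave-like temperature perturbation raises it, which is the signature the method looks for.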
AGW Detection from GPS-TEC
Alternatively, we use the GPS-TEC information to get the AGW excitation, which appears as a TID with a period between 60 and 120 min. Fundamentally, the perturbations made by the TIDs are small-scale fluctuations that are essentially produced by EQ and meteorological events. These small-scale fluctuations can be captured by GPS signals during propagation. Even over a short distance, the occurrence of a TID often leads to large gradients in the TEC.
To detect this, we use a fitting method known as "Savitzky-Golay filtering" (sgolayfilt) [123,124] (Equation (9)). Savitzky et al. [123] showed that the weighting coefficients for a smoothing operation may be determined by a set of integers (C_{−n}, C_{−(n−1)}, …, C_{n−1}, C_n). The utilization of these weighting coefficients, known as convolution integers, is precisely equivalent to fitting the data to a polynomial, and is computationally more efficient and faster. The Savitzky-Golay technique therefore provides the smoothed data point (x_i)_s using the following equation:

(x_i)_s = (Σ_{j=−n}^{n} C_j x_{i+j}) / N,    (9)

where N = Σ_{j=−n}^{n} C_j is the normalizing factor. The deviation dVTEC is obtained by subtracting the modeled profile from the original data. To obtain the modeled VTEC_f profile, we utilize sgolayfilt with polynomial order 5 and a 90 min time window in MATLAB. We use the filtering in a large time window to remove the additional noise in the dVTEC. The small-scale fluctuations are computed by taking the difference between the observed and fitted values, as shown in Equation (10) [58]:

dVTEC = VTEC_obs − VTEC_f.    (10)

After getting the small-scale fluctuations (dVTEC), a spectral analysis of dVTEC is performed to examine the possible wave-like structures associated with it. We perform a wavelet scalogram analysis of dVTEC by using the complex Morlet continuous wavelet in MATLAB [125]. Given the computation of the scalogram, we investigate in more detail the wave activity in the period range of 15 min to 1 h (i.e., MSTIDs) and of 30 min to 3 h (i.e., LSTIDs). The maximum power of the spectrum (MPS) represents a proxy for the typical spectral amplitude for a given measurement path in the individual range. Wave-like structures within the Cone of Influence (COI) are signatures of the AGWs [58].
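The dVTEC extraction can also be reproduced outside MATLAB. The sketch below uses SciPy's savgol_filter as a stand-in for sgolayfilt, keeping the 90 min window and polynomial order 5 from the text; the 30 s sampling cadence and the function name are our assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def small_scale_fluct(vtec, sample_sec=30.0, window_min=90.0, order=5):
    """dVTEC = observed VTEC minus a Savitzky-Golay model (Equation (10) style).
    The window is converted from minutes to an odd number of samples."""
    n = int(round(window_min * 60.0 / sample_sec)) | 1  # force an odd window length
    fitted = savgol_filter(vtec, window_length=n, polyorder=order)  # VTEC_f
    return vtec - fitted                                            # dVTEC
```

Oscillations much slower than the window (e.g., the diurnal trend) pass into the fitted model and are removed, while TID-scale fluctuations of tens of minutes survive in dVTEC and can then be fed to the wavelet analysis.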
Computation Process of Energetic Particle Bursts
It is well known that a large EQ greatly affects the radiation belt energetic particle counts and, for this reason, we have assembled NOAA satellite data. The NOAA-15 satellite provides both processed and raw data, and we use these datasets for a period of 26 days, from 15 October 2020 to 9 November 2020. For the particle counts calculation, our first step is to build a new dataset on a daily basis. Each dataset contains time in milliseconds, latitude in degrees, longitude in degrees, MEPED (Medium Energy Proton Electron Detector) electron channel information (electron count rates, i.e., CRs), IGRF magnetic field (B), MEPED telescope pitch angle (α), and L values (McIlwain L-parameter). We average these datasets every 8 s. To eliminate the cumulative sum in the newly made datasets, we difference the energy values and construct new energy channels for our further computation. These new energy channels are 30-100 keV, 100-300 keV, and >300 keV; likewise, we exclude the energy values that are zero. We build a three-dimensional matrix with L, α, and B values, where the L value is binned from 0.9 to 2.2, and, using this matrix, we count the high-energy particles included in each shell distinguished by the NOAA-15 satellite. The next step of this computation is to define the condition under which the count rates are considered particle bursts (PBs). According to previous theory, the 8 s CRs are consistent with a Poisson distribution [82]. Thus, to identify non-Poissonian fluctuations with 99% probability, the defined threshold is the 4σ value; the energetic particle counts which exceed this value are considered PBs. The seismic-induced PBs are then selected by imposing the condition |∆L| ≤ 0.1, where ∆L is defined as the difference between the L values related to the EQ and to the particle bursts.
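The burst-selection logic just described can be sketched as follows (Python for illustration, not the actual processing chain; the count rates, L values, and L_eq below are invented toy numbers, and σ is taken over the day's full CR profile):

```python
import statistics

def detect_particle_bursts(count_rates, l_values, l_eq, n_sigma=4.0, dl_max=0.1):
    """Flag 8 s averaged CRs exceeding mean + n_sigma * sigma (non-Poissonian
    with ~99% probability [82]); keep as seismic-induced only the bursts
    whose L-shell satisfies |L - L_eq| <= dl_max."""
    mu = statistics.mean(count_rates)
    sigma = statistics.pstdev(count_rates)
    threshold = mu + n_sigma * sigma
    bursts = [i for i, cr in enumerate(count_rates) if cr > threshold]
    seismic = [i for i in bursts if abs(l_values[i] - l_eq) <= dl_max]
    return threshold, bursts, seismic

# Toy day: quiet background with two spikes; only one sits on the EQ L-shell.
crs = [3.0] * 48 + [200.0, 205.0]
ls = [1.0] * 48 + [1.22, 1.80]
threshold, bursts, seismic = detect_particle_bursts(crs, ls, l_eq=1.2)
```

In this toy example both spikes exceed the 4σ level, but only the one within |∆L| ≤ 0.1 of the EQ-related L-shell is retained as seismic-induced.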
Analysis of Swarm Satellite Data
Swarm, the fifth mission in ESA's fleet of Earth Explorers, provides magnetic field and plasma information through its three satellites. Electron density, electron temperature, and spacecraft potential data are provided by the Langmuir probe (LP), and the TII provides ion drift and ion velocity at higher resolution. The third instrument on Swarm is the vector field magnetometer (VFM), which gives the magnetic field in vector form; this instrument also provides altitude data. Besides these three main instruments, Swarm carries a star tracker, a GPS receiver, and an accelerometer.
Magnetic Field and Plasma Data Structure in Swarm Satellite
The VFM provides magnetic field data consisting of magnitude and direction, because it is a fluxgate magnetometer with a compact spherical coil sensor, and it delivers data at two resolutions: a higher resolution with a sampling rate of 50 Hz and a lower resolution with a sampling rate of 1 Hz. The geomagnetic field values are provided in spherical coordinates in the North, East, and Center (NEC) reference frame. The ASM is based on electron spin resonance (ESR) theory using the Zeeman effect, and it is used to calibrate the VFM data. Swarm data are available on ESA Earth Online and are freely accessible (http://Swarm-diss.eo.esa.int and ftp://Swarm-diss.eo.esa.int, accessed on 22 November 2020). In our study, we use low-rate (1 Hz) VFM data: time (in UTC), latitude (in degrees), longitude (in degrees), the X-, Y-, and Z-components of the magnetic field strength, the scalar intensity (F), and also the plasma density, the local plasma temperature, and the spacecraft potential. We mainly use the electron density as the vital parameter to detect the anomaly due to EQs, and we want to verify the possible variation in this parameter related to the 2020 Samos EQ. For computation purposes, we follow three algorithms by which we can determine the unusual variations in the magnetic field components and in the electron density corresponding to our study.
I. MASS Algorithm:
Anomalies in the magnetic field components are searched for in each track of the Swarm satellite by MASS (magnetic Swarm anomaly detection by spline analysis). It follows a few steps. First, we extract the data used in the MASS algorithm from the CDF files. The extracted data file consists of additional information for SAT-A, SAT-B, and SAT-C: semi-orbital track number, data quality flag, time (in UTC), and local time (LT). From the extracted data, we create a file having seven columns (time (UTC), latitude (in degrees), longitude (in degrees), the scalar magnetic field intensity, and the X-, Y-, and Z-components of the magnetic field). For the Charlie satellite, no ASM (scalar) data are available from 5 November 2014 (7:37 p.m. UTC) due to a technical problem; thus, we use only VFM data for SAT-C. Next, we transform all the geographic latitudes into geomagnetic latitudes using IGRF-12 (https://www.ngdc.noaa.gov/IAGA/vmod/igrf.html, accessed on 22 November 2020).
It is well known that geomagnetic storms play a major role in ionospheric irregularities. Thus, to ensure that the anomalies are related to the considered EQ, we exclude geomagnetic effects from the entire calculation. To eliminate geomagnetic activity from this computation, we analyze the geomagnetic index Dst and the A p values for the time of each satellite track (Dst every hour and A p every 3 h). The analyzed area is already mentioned in the SABER methodology. We then develop different codes in MATLAB following these steps. We plot each satellite track from 0 h to 24 h and consider only those tracks that pass very close to the EQ epicenter to detect the anomaly related to this EQ. As the epicenter of the 2020 Samos EQ is 37.918° N and 26.79° E, we restrict the Swarm data to the latitude range 0° to 55° N and the longitude range 0° to 55° E and finally create an individual dataset for each hour from 0 to 24. We take the first derivative (the difference between two consecutive values) of the magnetic field components in each track to bring out more information in the data and to suppress the large-scale variation. We then use the cubic spline method as the best fitting technique to remove the remaining long-term trend along the selected track, applying the fit to the X, Y, and Z components of the magnetic field and to F. Finally, we compute the residual corresponding to the best fit and identify the anomaly related to the seismic event.
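In outline, the per-track processing can be sketched as below (Python for illustration; a centered moving average stands in for the paper's cubic-spline trend fit, and the field values are synthetic):

```python
def first_derivative(values):
    """Successive differences of a field component along the track."""
    return [b - a for a, b in zip(values, values[1:])]

def moving_average(values, window=5):
    """Long-trend estimate. The paper fits a cubic spline; this centered
    moving average is only a simple stand-in for illustration."""
    half = window // 2
    out = []
    for i in range(len(values)):
        seg = values[max(0, i - half):min(len(values), i + half + 1)]
        out.append(sum(seg) / len(seg))
    return out

def mass_residual(field_component, window=5):
    """Residual of the differenced track about its long trend."""
    diff = first_derivative(field_component)
    trend = moving_average(diff, window)
    return [d - t for d, t in zip(diff, trend)]

# Synthetic track: a smooth field leaves ~zero residual; a step-like
# disturbance produces a clear residual spike at the step.
track = [2.0 * i for i in range(20)]
track_anom = [v + (5.0 if i >= 10 else 0.0) for i, v in enumerate(track)]
res_quiet = mass_residual(track)
res_anom = mass_residual(track_anom)
```

The differencing removes the smooth background field, and the detrended residual localizes the disturbance, which is the quantity inspected for seismogenic anomalies.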
II. NeLOG Algorithm:
The electric field instrument data provide the electron density with a sampling rate of 2 Hz. Using the NeLOG program, we identify the anomalies in the electron density on a logarithmic scale. Regarding the 2020 Samos EQ, we analyze the electron density, electron temperature, and spacecraft potential data from one month before to one month after the EQ day. In the NeLOG program, we select a ±15° region around the epicenter, and the time is taken from 12:00 p.m. UT to 1:00 p.m. UT. This algorithm provides the plot of the decimal logarithm of the electron density, and this plot is compared with the geographic representation of the EPZ and CZ, with the EQ epicenter and the selected track. We only use electron density values, as they are less affected by instrumental error than the other plasma parameters. We fit the plot with a 10-degree polynomial, as it is the best fit for this variation of the logarithmic electron density. Finally, the root mean square is calculated within the above-mentioned area. To eliminate the fitting edge error, we cut the first 5° from the fitted curve. In our case, we choose track number 15, as it contains all the anomalous behavior related to the EQ. We only consider SAT-C because SAT-A and SAT-B do not provide electron density information from 24 to 26 October 2020.
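A compact sketch of this fit-and-residual step follows (Python instead of the actual implementation; a degree-3 least-squares fit replaces the paper's 10-degree polynomial to keep the normal equations well conditioned, and the latitude profile is synthetic):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations and Gaussian
    elimination with partial pivoting (low degree used here for
    conditioning; the paper uses a 10-degree fit)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs  # coeffs[i] multiplies x**i

def rms_residual(lats, log_ne, degree=3, edge_cut_deg=5.0):
    """Fit log10(Ne) vs latitude, drop the first 5 degrees to avoid the
    fitting edge error, and return the RMS of the residuals."""
    coeffs = polyfit(lats, log_ne, degree)
    fit = [sum(c * x ** i for i, c in enumerate(coeffs)) for x in lats]
    resid = [y - f for x, y, f in zip(lats, log_ne, fit)
             if x >= min(lats) + edge_cut_deg]
    return (sum(r * r for r in resid) / len(resid)) ** 0.5

# Synthetic log10(Ne) profile that is exactly cubic: the fit should
# reproduce it, so the edge-cut RMS residual is ~0.
lats = [float(x) for x in range(31)]
log_ne = [5.0 + 0.01 * x - 0.001 * x * x + 1e-4 * x ** 3 for x in lats]
rms = rms_residual(lats, log_ne, degree=3)
```

On real data the residual RMS is nonzero, and tracks whose residuals stand out against neighboring days are the candidates for seismogenic anomalies.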
III. NeSTAD Algorithm:
The Ne single track anomaly detection (NeSTAD) algorithm is used to detect the anomalous behavior of the electron density in a given interval of time. For our case, this interval is between 12:00 p.m. UTC and 1:00 p.m. UTC on 29 October 2020 (one day before the 2020 Samos EQ). This program runs with the input data of the Langmuir probe of the Swarm satellite (Swarm EFIx-PL and Plasma Preliminary). We first select the geographical range mentioned in the MASS algorithm and a similar latitude-longitude binning in the above-mentioned time interval. To remove the large-scale electron density gradients in other regions, we restrict the analysis to this geographic region to determine the anomaly in the electron density data in a single track. For every single track in the geographic range and the time range, the relative density variation ∆Ne/Ne is calculated. An outlier is simply an unusually large or small value among the rest; in statistical analyses, outliers may generate issues. We compute the outliers in the anomalous track with respect to the normal tracks, using a ±15 day period around the day of the EQ. The interquartile range (IQR) is used to set the outliers; this is the difference between the third and first quartiles (IQR = Q3 − Q1). Outliers are defined in this context as the values of ∆Ne/Ne that lie either below (Q1 − k·IQR) or above (Q3 + k·IQR). The following quantities (referred to as fences) are required to identify extreme values in the distribution's tails for k = 1.5 or 3: the lower inner fence is Q1 − 1.5·IQR, the upper inner fence is Q3 + 1.5·IQR, the lower outer fence is Q1 − 3·IQR, and the upper outer fence is Q3 + 3·IQR. A mild outlier is defined as a point beyond an inner fence on either side, and an extreme outlier is a point that lies beyond the outer fence [126]. This procedure is able to identify steep variations in the electron density time profile, behaving like a high-pass filter on the electron density data.
The track anomaly parameters are derived as follows: σ is the standard deviation of ∆Ne/Ne, % is the percentage of outliers recognized in the track, ∆_filt denotes the strength of the outliers after the filtration process, and ∆ denotes the strength of the outliers before the filtration process.
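The fence logic above is small enough to sketch directly (Python for illustration; the quartile interpolation scheme and the toy ∆Ne/Ne values are my own, hypothetical choices):

```python
def quartiles(values):
    """Q1 and Q3 via linear interpolation on the sorted sample."""
    s = sorted(values)
    def q(p):
        idx = p * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.75)

def classify_outliers(values):
    """Mild outliers lie beyond the inner fences (k = 1.5); extreme
    outliers lie beyond the outer fences (k = 3) [126]."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    inner = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    outer = (q1 - 3.0 * iqr, q3 + 3.0 * iqr)
    mild, extreme = [], []
    for v in values:
        if v < outer[0] or v > outer[1]:
            extreme.append(v)
        elif v < inner[0] or v > inner[1]:
            mild.append(v)
    return mild, extreme

# Toy relative density variations along one track (invented numbers).
dne = [0.0, 0.1, -0.1, 0.05, -0.05, 0.02, -0.02, 0.4, 0.9, 3.0]
mild, extreme = classify_outliers(dne)
```

Counting the flagged points and their strengths before and after filtering yields the σ, %, ∆_filt, and ∆ track anomaly parameters described above.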
Ionospheric Perturbation Observed from GPS-TEC
The diurnal variation of VTEC from 17 October to 4 November 2020, with the upper and lower bounds, is shown in the upper panel of Figure 3 for the DYNG station. The maximum enhancement in VTEC is observed on 29 October; in addition, similar anomalous behavior is observed on 22 and 24 October. The lower panel quantifies the fluctuations in TEC. The anomaly in TEC is found to be 2.5, 2.9, and 4.5 TECU on 22, 24, and 29 October, respectively. It is evident that the pre-seismic anomalies started eight to nine days before the EQ and reached their maximum just a day before the EQ. Figure 4 is the same as Figure 3 but for the IZMI station. For the IZMI station, the daily TEC variation crosses the upper bound on 21 and 29 October, with anomalies of around 2.5 and 3 TECU, respectively. Therefore, for both stations, the VTEC is maximum just the day before the EQ. Even though IZMI is closer to the epicenter, the maximum changes in TEC are observed at the DYNG station. It can be concluded that the anomalous TEC variation during the EQ was not influenced by the low IMF-Bz values (22 October 2020: 8 nT; 24 October 2020: 11 nT; 29 October 2020: 6 nT). Although on 24 October 2020 the IMF-Bz maximum value is slightly greater than 10 nT, it could not have had a significant impact on the TEC variation that would contaminate the seismogenic anomalies in TEC.
AGW Anomalies Observed by the SABER Satellite
After the computation of E p (see Section 2.2), a nine-column matrix is obtained for each individual temperature profile. It contains latitude, longitude, date (day of the year in UT), altitude, the original SABER temperature profile, the reconstructed fitted temperature profile, the perturbation temperature, the Brunt-Väisälä frequency, and the AGW-associated potential energy (E p ).
The time-altitude variation of E p associated with the AGW from 17 October to 4 November is shown in Figure 5. The altitude ranges from 30 to 50 km, and E p is presented by a color bar ranging from 0 to 10 J/kg. This spatio-temporal profile indicates a significant amount of E p from 23 October to 25 October at around 46 km to 48 km altitude. The EQ day is marked with a black dashed line, and it is evident that the AGW activity is significantly enhanced around six to eight days before the EQ. Figure 6 describes the spatial distribution of E p for the same time period. Here, the intensification of E p is projected on a two-dimensional map whose latitude and longitude range from 30° N to 45° N and 10° E to 40° E, with the EQ epicenter approximately at the center (magenta diamond). We present the E p values at an altitude of 47 km; this selection follows the maximum enhancement in E p . It is evident from Figure 6 that the maximum AGW activity is observed on 24 October 2020, near the EQ epicenter. The patch of AGW stays to the northeast of the epicenter, and the maximum enhancement is found within a 500 km radius of the epicenter. On 25 October, a similar but much smaller patch is observed on the opposite side of the epicenter. There is thus a strong signature of the presence of AGW as computed from the SABER satellite outcomes. There is no secondary strong peak of AGW in either the temporal or the spatial variation, and thus the observed AGW can be attributed to the studied EQ.
AGW Anomalies from GPS-TEC
To validate the outcomes of AGW from the SABER observation, we present the indirect method of AGW computation as retrieved from the GPS-TEC findings. For TEC fluctuations, the normal unperturbed ionospheric condition follows the range −0.25 ≤ dVTEC ≤ 0.25. Any perturbation for which the small-scale fluctuation dVTEC crosses this envelope is considered a seismogenic anomaly. It is evident from Figure 7 that dVTEC satisfies the above-mentioned condition on 19, 21, 27, and 29 October for the DYNG station. The maximum fluctuation in dVTEC is seen on 19 October 2020, which is plotted separately in Figure 9 for better representation. The top, middle, and bottom panels show the observed and fitted TEC profiles, dVTEC, and the scalogram, respectively. The scalogram shows a significant enhancement of wave-like structure with the period of gravity waves within the COI. For the IZMI station, the corresponding wavelets corroborate the same, with an intense wave-like structure observed on those same days. As for DYNG, the intensification is maximum on 19 October, with a period of 60 to 80 min. A sharp difference is observed in the scalogram of IZMI, where the intense patch of AGW has more temporal spread over the day. Similar to Figure 9, the overall scenario of 19 October for IZMI is shown in Figure 12.
Figure 12. Same (a-c) as Figure 9, but for IZMI station.
Figure 13 shows the daily average count rates (CRs) for 30 October 2020 (the day of the EQ). Each 3D shell reflects the CRs for which the satellite passes at least 20 times. The x-axis indicates the L values, the y-axis shows the pitch angle, and the color bar indicates the number of times the satellite passes through each 3D shell. The satellite passes ≥20 times through a cell for which the magnetic field value exceeds 22.0 µT. As mentioned in the previous section, we choose for further analysis those regions for which this magnetic field condition is satisfied.
In our investigation, we have excluded the South Atlantic Anomaly (SAA) and the external Van Allen belt (VAB) by selecting B > 22.0 µT and L < 2.2, respectively. By this procedure, we eliminate the SAA from the entire particle bursts computation, as the South Atlantic region always shows a sharp gradient in the particle count number. CRs that exceed four times the standard deviation, according to the Poisson distribution, are detected as non-Poissonian fluctuations with a probability of 99%, and these CRs are called particle burst events [82,83]. To eliminate the normal background statistical fluctuation, we set a constraint by computing the σ values of the particle counts profile. In Figure 14, we present the 8 s average of the particle counts and the computed 4σ level. The particles that cross this 4σ level are treated as particle bursts, and the remainder are viewed as normal statistical fluctuations. Here, the x-axis shows the 8 s averaged electron samples, and the y-axis shows the daily energetic particle count numbers. The blue dashed line demonstrates the 4σ level; for this EQ, its value is 3.32. Energetic particles in the radiation belt are perturbed by many other sources, so eliminating these sources is essential for the true detection of EQ-associated particle bursts. Thus, to remove the other sources that play a vital part in enhancing the high-energy particle count number, we examine the A p index values, which reliably indicate the presence of geomagnetic activity.
We use the geomagnetic A p index from the World Data Centre for Geomagnetism, Kyoto. Following the American Association of Variable Star Observers (AAVSO) database (https://www.aavso.org/sid-database, accessed on 06 July 2021), solar quiet days are characterized as those with a daily average A p < 16, A p < 25 in each of the three-hour intervals throughout the day, and SID = 0. If any of these conditions is violated on a given day, we designate that day as a solar active day (A p < 16 with A p > 25 in some interval and SID = 0, or A p < 16 with A p < 25 and SID = 1) [82,83,127]. Based on this selection, we separate the contaminated and non-contaminated PBs and effectively eliminate the geomagnetic impact on the high-energy particle count number. The solar active days are found to be days 290, 292, 297, 298, 301, 303, 306, 309, 310, 311, 312, and 313 of the year (red histogram in the upper panel of Figure 15); the rest are solar quiet days (black histogram in the upper panel of Figure 15). The monthly quiet-day average value of PBs is 30.4555, denoted by a horizontal dashed line. As mentioned earlier (see Section 2.4), by imposing the |∆L| ≤ 0.1 condition, we successfully detect the seismic-induced PBs for the studied EQ. The lower panel of Figure 15 shows that the number of PBs is zero on the day of the EQ (30 October 2020). We observe EQ-induced PBs on days 294, 300, and 305 of the year; there are a significant number of PBs ten and four days before the EQ. It is obvious that, even though a few PBs are found in this entire 26-day observation period, not all PBs are initiated by this EQ; the lower and upper panels of Figure 15 show that mismatch. The reason for the decrement (in the lower panel) in the number of PBs is the removal, in our computation, of PBs that originate from other L-shells: we only consider those PBs which are produced from L eq (the EQ-related L-shell).
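The day-classification rule described above can be sketched as a small predicate (Python for illustration; the 3-hourly A p sequences below are invented examples):

```python
def classify_day(ap_3hourly, sid):
    """Solar quiet day: daily-average Ap < 16, every 3 h Ap value < 25,
    and SID = 0; any violation marks the day as solar active [82,83,127]."""
    daily_mean = sum(ap_3hourly) / len(ap_3hourly)
    quiet = daily_mean < 16 and max(ap_3hourly) < 25 and sid == 0
    return "quiet" if quiet else "active"

# Example days (hypothetical Ap sequences, 8 three-hour values each):
d1 = classify_day([4, 5, 7, 6, 3, 5, 4, 6], sid=0)   # all conditions met
d2 = classify_day([4, 5, 7, 30, 3, 5, 4, 6], sid=0)  # one 3 h value >= 25
d3 = classify_day([4, 5, 7, 6, 3, 5, 4, 6], sid=1)   # SID event recorded
```

Only PBs falling on days classified as quiet are retained as candidate seismogenic bursts, before the |∆L| ≤ 0.1 condition is applied.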
It is also evident that, for geomagnetically quiet conditions, the EQ-induced PBs (see the lower panel of Figure 15) are fewer than the PBs for which we do not impose the ∆L condition, although the latter are also non-contaminated (see the upper panel of Figure 15).
Outcomes from the Swarm Satellite
From the MASS algorithm, Figure 16 shows that the anomaly is mainly observed in the X component of the magnetic field (first panel) for SAT-C. This disturbance occurred one day (29 October 2020) before the EQ in the geomagnetic latitude range 10° N to 12° N, while the source (EQ epicenter) is at geomagnetic latitude 32.20° N. No such anomaly is observed in the Y and Z components of the magnetic field (second and third panels). As SAT-C does not provide the scalar magnetic field values, the fourth panel remains constant. The anomalous day is geomagnetically quiet, having A p = 27, K p = 4, and D st = −28 nT. This anomalous track is detected in the afternoon at 1:00 p.m. UTC (Local Time = 15 EET), which makes the behavior of the magnetic field lines strong and unexpected, as the conditions were geomagnetically quiet. The anomalies in the electron density profile from the NeLOG algorithm are shown in Figure 17, which compares the results of the MASS algorithm (first to fourth panels) with the result of the NeLOG algorithm (fifth panel) for the same day and time as mentioned above. We observe small fluctuations around 8° N to 11° N geomagnetic latitude in the electron density profile (fifth panel). In Figure 18, we present Log 10 (N e ) (first panel), the time derivative of the electron density (second panel), and the residual value of the electron density (third panel). The electron temperature (black line) and the corresponding spacecraft potential (red line) are presented in the fourth panel. The parameter dN e /dt (time derivative of the electron density) confirms an anomaly in the latitude range 8° N to 11° N, and a similar fluctuation is observed in the electron temperature (T e ) in the same latitude range (Figure 18).
The observation through the NeSTAD analysis is shown in Figure 19 for SAT-C. Figure 19 represents the electron density variation (left panel) over the geographical latitude range from 5° N to 30° N at 13:00 UTC. In the right panel, the red curve represents the ∆Ne/Ne data before the outlier analysis, and the black curve denotes the variation after the outlier analysis. The values of the track anomaly parameters are R = 0.957, σ = 0.0085, and % = 0.0423, respectively. Figure 20 compares the dates of abnormal activity in the stratosphere, ionosphere, and magnetosphere related to the 2020 Samos EQ (Table 1), as indicated by the multi-parametric analysis technique. This figure indicates that the AGW activity in the stratosphere peaked around six days prior to the 2020 Samos EQ, whereas the TEC anomalies, observed 8, 6, and 1 day before the EQ day, occurred over a longer period of time, and the Swarm magnetic field and plasma density anomalies were observed in the ionosphere 1 day before the EQ. The magnetospheric perturbation (here the PB) caused by this event was noticed 10 and 4 days before the day of the EQ.
Discussion
This work presents a multi-parametric approach to study the pre-seismic anomalies before and during the 2020 Samos EQ that took place on 30 October 2020 in Greece. To achieve our goal, we use a group of ground- and space-based techniques and observe stratospheric, ionospheric, and magnetospheric parameters.
1.
At first, regarding ground-based observation, we use ionospheric GPS-TEC information from two IGS stations, DYNG (Greece) and IZMI (Turkey), which are close to the EQ epicenter. We compute the diurnal TEC variation and, to detect the pre-seismic anomaly in it, we use the method of statistical upper and lower bounds. The pre-seismic enhancement starts around 8-9 days before the EQ, and the maximum enhancement occurred one day before the mainshock for both the DYNG and IZMI stations. The maximum anomaly in TEC is found to be 4.5 TECU for the DYNG station, which is comparatively farther from the epicenter, while the maximum change is found to be 3 TECU for IZMI.
2.
For computing the AGW associated with the EQ, we use both direct and indirect methods. In the direct method, the AGW activity is observed through the space-based satellite SABER/TIMED. We computed the potential energy E p associated with the AGW from the atmospheric temperature profile as recorded by SABER. A significant enhancement in E p associated with the AGW is observed 6-8 days (23 to 25 October 2020) before the EQ, at an altitude range of around 46-48 km. From the spatial variation of the AGW, we observe the maximum enhancement in E p on 24 October 2020, at 47 km altitude, within a radius of 500 km to the northeast of the EQ epicenter.
3.
To validate the outcomes of SABER, we use another, indirect method, in which wave-like structures are investigated in the small-scale fluctuations from GPS-TEC using a filtration method. Wave-like structures of periodicity 65-110 min are obtained from the wavelet analysis of the small-scale fluctuations for both stations. We observe the most intense wave-like structure on 19 October 2020, for both stations. The AGW enhancement in the wavelet spectrum is concentrated for the DYNG station, whereas for the IZMI station it is scattered over the day.
In space-based observation, we use two satellites, namely NOAA-15 and Swarm, to investigate ionospheric and magnetospheric irregularities associated with the EQ.
4.
Based on the NOAA-15 satellite particle database, we computed the radiation belt energetic particle counts associated with the EQ. By eliminating the SAA and considering geomagnetically quiet conditions, we present the number of particle bursts. We observe a significant number of particle bursts 10 and 4 days before the EQ.
5.
We examine the anomalies in the magnetic field, electron temperature, and electron density profile using Swarm satellite magnetic field and plasma information. In the computation, we use the MASS, NeLOG, and NeSTAD algorithms (demonstrated in the methodology section). We observe anomalous behavior in the X component of the magnetic field using the MASS algorithm; the anomalous track number is 15 (SAT-C), and the fluctuation in the magnetic field is observed one day before the EQ. This fluctuation was observed in the afternoon period (12:00 p.m.-1:00 p.m. UT) and around −15° latitude from the epicenter. We also observe the anomaly in the time derivative of the electron density and in the electron temperature, around the same latitude and in the same period, as derived from the NeLOG algorithm. In addition, using the NeSTAD algorithm, we observe a similar anomaly in the strength of the outliers; for this case, we detect mild outliers, with a k value of 1.5, associated with this EQ, and recognize 0.0423% of outliers in the anomalous track.
Our investigation based on this multi-parametric approach re-establishes some of the major facts of the LAIC mechanism. First of all, the pre-seismic processes (anomalies in diurnal TEC, enhancement in AGW activity, generation of EQ-induced PBs, and anomalies in the magnetic field and electron density) are the key ingredients of the three channels of LAIC, which have completely different characteristics. Although all of the parameters are found to be detectable before the EQ, they differ in their precursory time profiles. For instance, the magnetic field anomaly, the rate of change of electron density, the electron temperature, the spacecraft potential, and GPS-TEC showed an anomaly for a short period before the main-shock; these parameters reach their maximum just one day before the studied EQ day. For other parameters, like the particle precipitation, the pre-seismic irregularities have an intermediate time distance from the EQ day. Most importantly, as the time period of ±15 days around the EQ is geomagnetically quiet, the anomalies observed in all the chosen parameters are plausibly due to the EQ. We also examined some other thermal parameters, such as the surface latent heat flux (SLHF), relative humidity (RH), and outgoing long-wave radiation (OLR), which are usually found to be anomalous, but we could not identify any significant indication for this EQ.
Conclusions
This manuscript deals with pre-seismic anomalies during the 2020 Samos EQ, as observed and computed from some well-known parameters using ground and satellite sources. We examine two major channels, viz. (a) acoustic and (b) electromagnetic, of the LAIC mechanism. We use the TEC, AGW, energetic particle bursts in the radiation belt, magnetic field, electron density, and electron temperature for our study. From the analytical and observational points of view, these parameters are found to provide a convincing pre-seismic signature; however, there is a significant difference in the precursory time frame among them. Overall, the parameters show pre-seismic anomalies from ten days to one day before the EQ. We follow the EPZ concept, and all the parameters are taken within that zone. It is evident from Figure 2, however, that there are a few examples of quasi-active days in between the solar quiet days, where the A p values marginally touch the upper threshold. We eliminate the possible contamination by a careful choice of such variables so as to keep only the seismogenic irregularities under solar quiet conditions, which eventually minimizes the possibility of any such contamination in the seismogenic signature.
It is well established that a wide range of pre-seismic processes perturbs the atmosphere and ionosphere over different spatio-temporal ranges. Thus, it is expected that the parameters of each domain of the LAIC (beneath the surface of the earth, the surface of the earth, the troposphere, stratosphere, ionosphere, and magnetosphere) will be excited and show intensification from their normal values over different time ranges. In addition, it is extremely important to understand the preparation mechanism of an earthquake in terms of the physical processes beneath the earth that deal with the generation of potential energy; the temporal range of such a mechanism obviously has a wide range of dependency. The propagation processes and the cause-and-effect relationships between the various pre-seismic phenomena at different altitudes are still not well understood. This could be a possible reason for our observed parameters having different pre-seismic time domains. Based on the most sensitive and convincing parameter, one can form a sufficient idea of such a precursory time frame; for this, a large number of EQs have to be studied through this process. In addition, some other significant parameters, like ULF emissions, ozone concentration, and the atmospheric conductivity profile, need to be studied additionally. This will improve the concept of the LAIC mechanism and give a better idea of solving this well-known problem of precursory phenomena of seismic hazards.
The importance and difficulties of the numerical model associated with the pre-seismic mechanism have already been mentioned. Not every physical model for LAIC has an identical hypothesis and produces similar results. As mentioned in the Introduction, Piersanti et al. [40] give highly convincing outcomes by using GPS-TEC and AGW variations. Though our outcomes for TEC and AGW are pre-seismic in nature, they differ from their findings. This is a similar problem of "difference in the time frame" and the fundamental driven force responsible for the change in TEC and AGW. Our work has a limitation of having only two IGS stations, and we are unable to generate a spatial distribution of such TEC variation. Of course, it will be a high priority to run our outcomes through their model for some other earthquake which will be done in the future. Secondly, as mentioned above, this work will also provide an opportunity to understand the internal mechanism of earthquake processes and how they get migrated to the different layers of the atmosphere producing different temporal variations for different seismogenic parameters. Thus, it will bring great motivation to understand the pathology of the inner earth, and thus this internal physical mechanism needs to be understood thoroughly. We will apply all such processes in the near future. | 15,439.2 | 2021-01-01T00:00:00.000 | [
Exploiting DNA repair pathways for tumor sensitization, mitigation of resistance, and normal tissue protection in radiotherapy
More than half of cancer patients are treated with radiotherapy, which kills tumor cells by directly and indirectly inducing DNA damage, including cytotoxic DNA double-strand breaks (DSBs). Tumor cells respond to these threats by activating a complex signaling network termed the DNA damage response (DDR). The DDR arrests the cell cycle, upregulates DNA repair, and triggers apoptosis when damage is excessive. The DDR signaling and DNA repair pathways are fertile terrain for therapeutic intervention. This review highlights strategies to improve therapeutic gain by targeting DDR and DNA repair pathways to radiosensitize tumor cells, overcome intrinsic and acquired tumor radioresistance, and protect normal tissue. Many biological and environmental factors determine tumor and normal cell responses to ionizing radiation and genotoxic chemotherapeutics. These include cell type and cell cycle phase distribution; tissue/tumor microenvironment and oxygen levels; DNA damage load and quality; DNA repair capacity; and susceptibility to apoptosis or other active or passive cell death pathways. We provide an overview of radiobiological parameters associated with X-ray, proton, and carbon ion radiotherapy; DNA repair and DNA damage signaling pathways; and other factors that regulate tumor and normal cell responses to radiation. We then focus on recent studies exploiting DSB repair pathways to enhance radiotherapy therapeutic gain.
INTRODUCTION
Ionizing radiation has been used to treat cancer for more than 120 years, and radiotherapy is widely used to treat many types of cancer. More than half of cancer patients receive radiation as monotherapy or in combination with surgery, genotoxic chemotherapy, and targeted therapy. Radiation is usually delivered with external beams, but radioactive implants (brachytherapy) are used to treat prostate, head and neck, breast, eye, and other cancers [1] . Regardless of the mode of delivery, ionizing radiation is effective because it causes cytotoxic DNA damage (i.e., it is genotoxic), and in this way it is similar to genotoxic chemotherapy. However, radiotherapy is only effective for local tumor control and isolated metastases, whereas genotoxic chemotherapy, delivered systemically, can also treat widespread metastatic disease. There is evidence that radiotherapy may be effective against distant disease, through immune-mediated, non-targeted abscopal effects, but this approach is currently limited to pre-clinical studies [2] . Radiotherapy has several benefits for patients: It is non-invasive, painless, and has low rates of severe side-effects, highlighting another difference from systemic, genotoxic chemotherapy which often causes side effects that compromise patient quality of life. Although metastatic disease is ultimately responsible for most cancer deaths, the importance of local tumor control should not be underestimated. As noted in a widely used radiation oncology textbook, "…for tumors with high metastatic potential, such as breast, prostate, and lung…improved locoregional control by radiotherapy with or without chemotherapy enhances overall [patient] survival" [3] . Among the ongoing challenges in the radiotherapy field are the adverse effects of radiation on sensitive, normal tissues adjacent to tumors, in particular brain, spinal cord, and heart. 
In contrast, systemic genotoxins cause widespread damage, in particular to proliferative normal tissues including the gastrointestinal lining and bone marrow, causing nausea and anemia, as well as to non-proliferating brain tissue, causing chemotherapy-induced cognitive impairment or "chemo-brain" [4] . For both genotoxic chemotherapeutics and radiation, there is great interest in understanding the mechanisms of intrinsic and acquired tumor cell resistance to these agents [5][6][7][8] .
The goal of radiotherapy is to completely eradicate tumor cells while sparing nearby normal tissue. The efficacy of radiotherapy has greatly improved with the development of advanced techniques for diagnostic imaging, beam-focusing, and beam-shaping [9,10] , and treatment outcomes continue to improve as combination therapeutic strategies mature [11] . Two ways that combination therapies can improve therapeutic gain are to radiosensitize tumor cells, especially those with high intrinsic or acquired radioresistance, and protect normal tissue. There are many biological parameters that modulate tumor and normal cell responses to radiation, such as cell type, cell cycle phase, tissue/tumor microenvironment, oxygen levels, DNA repair capacity, and others. We begin with a synopsis of radiation damage to cellular components; cellular responses to radiation damage; environmental and cellular factors that determine normal and tumor cell radiosensitivity; and strategies used to counter tumor radioresistance or protect normal tissue from radiation damage. We then discuss how DNA repair and DNA damage response (DDR) pathways can be exploited to radiosensitize tumor cells and protect normal tissue during radiotherapy.
IONIZING RADIATION DAMAGE TO CELLULAR COMPONENTS AND CELL RESPONSES
Genotoxic chemotherapeutics and ionizing radiation kill cells by directly or indirectly damaging DNA or interfering with DNA metabolism (DNA polymerases, topoisomerases, or chromosome segregation machinery). Ionizing radiation, whether delivered by X-rays, protons, or carbon ions, causes damage to cellular components through direct energy absorption or indirectly by ionizing water to generate reactive oxygen species (ROS), including hydroxyl radicals, superoxide, and hydrogen peroxide [12] . ROS are highly reactive and interact almost immediately with cellular components, causing oxidative and other damage to proteins, nucleic acids, and membrane components. ROS are also generated during normal cell metabolism, primarily from mitochondrial function [13,14] . Cells survive and thrive despite > 100,000 spontaneous DNA lesions/cell/day, including ~10,000 single-strand breaks and ~50 DNA double-strand breaks (DSBs) [15][16][17] .
Nearly all DNA lesions block DNA replication, although some can be bypassed by error-prone translesion DNA polymerases [18] . The ability of cells to manage this remarkable daily lesion load is a reflection of the high efficiency of DNA repair systems. That said, DNA damage can cause mutations, chromosome structural alterations, cell cycle arrest, senescence, and cell death. Among the hundreds of types of DNA lesions, DSBs are among the most cytotoxic, and the cytotoxicity of genotoxic chemicals and ionizing radiation is largely due to DSBs [19,20] . Other double-strand lesions, such as inter-strand crosslinks, are also highly cytotoxic [21] .
Cells respond to DNA damage by activating checkpoint signaling and DNA repair pathways, collectively termed the DDR. DDR promotes cell survival and suppresses cancer by promoting genome stability, but it also triggers programmed cell death when damage is excessive. Altered expression or mutation of DDR proteins predispose to cancer, determine tumor response to chemo-and radiotherapy, and underlie several congenital conditions including multiple types of Seckel syndrome, primordial dwarfism, and premature aging syndromes [22][23][24] . The DDR is a major determinant of cancer cell responses to chemo-and radiotherapy, and is thus an enticing target to augment cancer therapy [25][26][27][28][29][30] . DDR components are often defective in cancer, but because the DDR is a complex network of interacting/cross-talking pathways, cells can respond to alterations in one pathway with compensatory changes in other pathways. Compensatory pathways within the DDR network represent formidable obstacles to successful cancer treatment. A better understanding of DDR pathways can reveal synthetic lethal relationships that can be exploited to augment cancer therapy in general, and to develop personalized therapies [31][32][33][34][35] .
The DDR includes two checkpoint signaling pathways: one centered on ataxia telangiectasia mutated (ATM), a kinase that responds to DSBs, and one centered on the ataxia telangiectasia and Rad3-related (ATR) kinase, which is triggered by single-stranded DNA (ssDNA) generated by 5'-3' resection of DSB ends and by decoupling of the replication machinery from the MCM helicase at stalled replication forks [36][37][38][39] . ATM and ATR, along with DNA-PKcs, are PI3 kinase-like kinases (PIKKs) that are "early responders" to DSBs and replication stress. PIKKs phosphorylate large networks of proteins [40][41][42] , including the downstream effector kinases Chk1 and Chk2, which phosphorylate p53 and other targets to arrest the cell cycle in response to damage, promote DNA repair, and promote programmed cell death pathways when damage exceeds a threshold [43][44][45][46] [Figure 1]. The DDR thus presents two broad targets to manipulate for therapeutic gain: inhibiting DNA repair sensitizes cells to damage, and inhibiting checkpoint signaling prevents cell cycle arrest in response to damage, increasing replication stress, fork collapse to DSBs, genome instability, and cell death [20,[47][48][49][50]] .
RADIOBIOLOGICAL PROPERTIES OF THERAPEUTIC IONIZING RADIATION
Three types of external beam radiation are used to treat cancer. X-rays and protons are low linear energy transfer (LET) radiation, although proton LET varies (see below). LET is a measure of ionization density; thus, low LET X-rays (and protons, for the most part) are sparsely ionizing. This means that most X-ray lesions, including DSBs, are widely dispersed. X-rays are massless photons that interact weakly with tissue, thus the highest X-ray doses are near the skin at the entrance point. To concentrate X-ray doses within tumors, beams are intensity modulated and delivered to patients from several angles, spreading low doses to a large volume of normal tissue [3] . Protons have a small mass and a single positive charge. Proton interactions with tissue slow and eventually stop these particles at a defined depth (within a tumor), termed the Bragg peak [63] . This feature provides a clear benefit, as normal tissue beyond the tumor receives essentially no dose. Carbon ions, with high mass and six positive charges, are high LET radiation. Because of their mass, carbon ions also stop at depth and eliminate exit dose, similar to protons. However, the high mass and high charge of carbon ions produce dense ionization tracks, especially at the end of the track as particles slow and stop [64,65] .
[Figure 1. DDR signaling. Ionizing radiation and genotoxic chemotherapy create single- and double-strand DNA damage, including DSBs that activate three PIKKs: DNA-PK, ATM, and ATR. Single-strand breaks and base damage, if not repaired by base excision repair (BER), block replication, which produces ssDNA when the replisome decouples from the MCM helicase or stalled forks are cleaved to produce DSBs; these, along with frank DSBs, are resected to 3' single-stranded tails that are coated by RPA. This activates ATR to signal checkpoint responses through Chk1 and p53. Non-resected DSB ends are bound by the Ku70/Ku80 heterodimer, which recruits and activates DNA-PKcs in the DNA-PK holoenzyme; LigIV/XRCC4 ligates DNA ends to effect NHEJ. The competing HR pathway initiates with limited DSB end resection by MRE11/RAD50/NBS1 (MRN), more extensive resection by Exo1 and Dna2, and RAD51 binding to ssDNA (mediated by BRCA1, BRCA2, and other proteins) to yield the RAD51-ssDNA nucleoprotein filament that effects HR. DDR: DNA damage response; DSBs: DNA double-strand breaks]
[Figure 2 (caption, in part). Artemis is required to trim certain types of end-structures, and small gaps may be filled with polymerases μ and λ prior to LigIV/XRCC4/XLF-mediated ligation. NHEJ repair usually produces small indels (1-20 bp deletions, few-bp insertions). (Right) Resected 3' single-strand ends are coated with RPA, which is then exchanged with RAD51, mediated by BRCA2, RAD52, RAD54, and RAD51 paralogs. The RAD51 nucleoprotein filament seeks and invades a homologous donor duplex (grey). RAD51 dissociates before repair synthesis; the newly synthesized strand (red dash) is released from the donor duplex and anneals with the complementary strand on the opposite side of the DSB. A second round of repair synthesis and nick sealing completes repair. DDR: DNA damage response; DSB: DNA double-strand break; NHEJ: non-homologous end-joining; HR: homologous recombination]
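The depth-dose contrast between photons (build-up near the surface followed by exponential fall-off) and charged particles (low entrance dose, a sharp Bragg peak at the stopping depth, and essentially no exit dose) can be sketched with a toy model. The functional forms and all parameter values below are purely illustrative assumptions, not real beam physics from the cited work:

```python
import math

def xray_dose(depth_cm, mu=0.06, buildup=1.5):
    """Schematic photon depth dose (assumed shape): linear build-up over the
    first ~1.5 cm, then exponential fall-off with an assumed attenuation mu."""
    if depth_cm < buildup:
        return depth_cm / buildup
    return math.exp(-mu * (depth_cm - buildup))

def bragg_dose(depth_cm, peak_cm=15.0, width_cm=0.8, entrance=0.3):
    """Schematic charged-particle depth dose (assumed shape): low entrance
    plateau, a Gaussian-like Bragg peak at the stopping depth, no exit dose."""
    if depth_cm > peak_cm + 2 * width_cm:
        return 0.0  # beyond the stopping depth: essentially no dose
    peak = math.exp(-((depth_cm - peak_cm) ** 2) / (2 * width_cm ** 2))
    return entrance + (1 - entrance) * peak

# Compare relative doses at several depths for a tumor centered at ~15 cm
for d in [0, 5, 10, 15, 20]:
    print(f"depth {d:2d} cm: X-ray {xray_dose(d):.2f}, particle {bragg_dose(d):.2f}")
```

The sketch reproduces the qualitative point of the paragraph above: photon dose is highest proximal to the tumor and never reaches zero distally, whereas the particle beam concentrates dose at the stopping depth and spares tissue beyond it.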
X-rays, protons, and carbon ions induce the same number of DSBs per unit dose (~40 DSBs/Gy). Exposure to 1 Gy of X-rays or protons kills ~10%-20% of cells [66][67][68] . In contrast, the same dose of carbon ions kills 2-3-fold more cells; hence, the relative biological effect (RBE) of carbon ions is ~2.5. Proton LET increases somewhat in the distal region of the Bragg peak, and RBE correspondingly increases, perhaps to as high as 1.7 [65,69] . The high RBE of carbon ions reflects the fact that these ions efficiently induce clustered DSBs, defined as two or more DSBs separated by < 200 bp [64,68,70] . Clustered DSBs are repaired inefficiently and are hence more cytotoxic than isolated DSBs. Low LET X-rays and protons induce occasional clustered DSBs; it is thought that these lesions, not the more prevalent isolated DSBs, primarily determine low LET radiation cytotoxicity [64,68,[71][72][73]] . The greater cytotoxicity (RBE) of carbon ions reflects their greater efficiency at inducing clustered DSBs. NHEJ, the dominant DSB repair pathway, initiates with Ku70/Ku80 (Ku) binding to DNA ends and recruitment of DNA-PKcs [Figure 2] [51] . Ku appears to efficiently bind both large and small DNA fragments, generated by isolated and clustered DSBs, respectively. However, short fragments do not activate the DNA-PKcs kinase [74] , which has critical roles in NHEJ, HR, DDR signaling, and checkpoint activation [75] . Thus, short DNA fragments appear to be refractory to repair by NHEJ, and this may account for both the greater cytotoxicity of clustered vs. isolated DSBs and the shift from NHEJ toward HR in cells exposed to high LET radiation [64,[76][77][78][79]] . A greater dependence on HR was also observed with protons than with X-rays [80] , perhaps reflecting the higher proton LET in the Bragg peak. However, a more recent study showed minimal differences when cells were treated with X-rays vs. protons in combination with inhibitors of NHEJ or HR [81] , suggesting that additional factors determine repair pathway choice among cell types. That cells struggle to repair clustered DSBs may reflect the rarity of such lesions in nature and the lack of selective pressure to evolve repair systems for this class of complex DNA lesion.
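Cell survival and RBE figures like those above are conventionally described with the linear-quadratic (LQ) model, SF(D) = exp(-(αD + βD²)), with RBE defined as the ratio of reference-radiation dose to test-radiation dose producing the same survival. A minimal sketch of that calculation follows; the α and β values are hypothetical placeholders, not parameters fitted to the studies cited in this review:

```python
import math

def survival(dose, alpha, beta):
    """Linear-quadratic model: surviving fraction after a single dose (Gy)."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def dose_for_survival(sf_target, alpha, beta):
    """Invert the LQ model: solve alpha*D + beta*D^2 = -ln(SF) for D >= 0."""
    e = -math.log(sf_target)
    if beta == 0:
        return e / alpha
    # positive root of beta*D^2 + alpha*D - e = 0
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * e)) / (2 * beta)

# Hypothetical LQ parameters for a low-LET reference beam and a high-LET beam
xray = dict(alpha=0.15, beta=0.05)
carbon = dict(alpha=0.45, beta=0.05)

# RBE at 10% survival = reference dose / test dose for the same effect
d_ref = dose_for_survival(0.10, **xray)
d_test = dose_for_survival(0.10, **carbon)
rbe = d_ref / d_test
print(f"D(X-ray)={d_ref:.2f} Gy, D(carbon)={d_test:.2f} Gy, RBE={rbe:.2f}")
```

With a higher α for the high-LET beam, the same survival level is reached at a lower dose, so the computed RBE exceeds 1, mirroring the ~2.5 value quoted for carbon ions.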
Low and high LET radiation are distinguished in two other ways. First, low LET X-rays and protons induce ROS most efficiently in well-oxygenated tissue. At low oxygen levels, the cytotoxic effects of X-rays and protons are reduced ~3-fold, the so-called oxygen enhancement ratio (OER) [82] . Importantly, high LET carbon ions show far less reliance on oxygen (a lower OER), owing to the greater ionization potential of these high-mass/high-charge ions [82,83] . Second, radiosensitivity varies during the cell cycle. Low LET X-rays and protons show the highest cytotoxicity during G1 and M phases and ~2-fold less cytotoxicity during S-phase, termed S-phase radioresistance [84] . Interestingly, high LET carbon ions show the opposite effect: ~2-3-fold S-phase radiosensitivity relative to G1 cells (Kato, unpublished results). This suggests one mechanism by which mixed high and low LET exposures might yield synergistic cell killing [85][86][87] .
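One crude but common way to reason about the OER quantitatively is to treat hypoxia as dividing the effective dose seen by the cell: a hypoxic cell receiving physical dose D behaves roughly like an oxygenated cell receiving D/OER. The sketch below applies this to the LQ survival model; the α, β, and OER values are illustrative assumptions only (the ~3 for X-rays echoes the text, the lower value for carbon ions is a placeholder):

```python
import math

def survival(dose, alpha, beta):
    """Linear-quadratic surviving fraction after a single dose (Gy)."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def hypoxic_survival(dose, alpha, beta, oer):
    """Crude OER model (assumption): hypoxia divides the effective dose by OER."""
    return survival(dose / oer, alpha, beta)

alpha, beta = 0.15, 0.05  # hypothetical LQ parameters
for label, oer in [("X-rays (OER ~3)", 3.0), ("carbon ions (lower OER, ~1.5)", 1.5)]:
    oxic = survival(2.0, alpha, beta)
    hypoxic = hypoxic_survival(2.0, alpha, beta, oer)
    print(f"{label}: oxic SF={oxic:.2f}, hypoxic SF={hypoxic:.2f}")
```

The smaller the OER, the smaller the survival gap between oxic and hypoxic cells, which is the quantitative sense in which high LET beams mitigate hypoxic radioresistance.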
The highly damaging effects of high LET radiation initially raised concerns about the safety of carbon ions in radiotherapy [88] , but serious side effects occur no more often than with X-rays or protons [89][90][91] . This safety profile probably reflects the fact that high LET ions behave similarly to low LET X-rays and protons while traveling through (normal) tissue at high speed, gaining their high LET properties only when slowing and stopping at the end of their tracks (in tumors) [63,92] . Thus, carbon ion LET and RBE are relatively low in the entrance region and increase dramatically in the Bragg peak, and the most damaging effects are confined to the tumor volume [63,93,94] .
Cellular radiosensitivity and radioresistance
Many physical, biological, and environmental factors influence cell responses to ionizing radiation, including those that determine the level and types of damage to cell components; cell state (proliferating or quiescent, cell cycle phase); DDR signaling and DNA repair capacity; propensity for programmed cell death; cellular "memory" of past adaptive exposure; and tissue macro-and microenvironments. For example, RB status influences intrinsic radiosensitivity among individuals [95,96] , and such biomarkers can be exploited to personalize radiotherapy treatment planning [97,98] . The physical natures of ionizing radiation (photon vs. particle and large vs. small mass/charge) determine lesion spatial distributions, reparability, and cytotoxicity. Nonetheless, as noted by Willers, Xia, and colleagues [99] , "there is no absolute resistance to radiation". If enough radiation can be delivered, all tumor cells will be eradicated regardless of environmental, genetic, or metabolic factors. The practical limitation, of course, is collateral damage to normal tissue. Hence, any strategy that increases radiation dose to tumors, decreases doses to normal tissues, increases tumor-specific cytotoxic effects of radiation, or protects normal tissue from unavoidable exposure can improve therapeutic gain and/or reduce side effects.
Hypoxia
An important environmental factor that regulates DNA damage induction is oxygen level, which varies among tumor types, within different regions of a tumor, and between tumor and normal tissue. Normal tissue is well-oxygenated, but tumors are often hypoxic as they struggle to supply oxygen during their rapid growth. To a degree, tumors adapt to the hypoxic state, for example, by stabilizing HIF1a, which regulates oxygen metabolism and angiogenesis via vascular endothelial growth factor, among other effects [100] . Although certain solid tumors are frequently characterized as "hypoxic", e.g., head and neck and pancreatic cancers, it is now clear that most solid tumors have hypoxic regions. The degree of hypoxia is regulated by passive oxygen diffusion, creating somewhat stable oxygen gradients across tumor masses, and by transient effects such as altered perfusion by tumor vasculature [100] . Given the importance of oxygen for ROS production during irradiation (OER), hypoxic regions within tumors are naturally radioresistant; this is a particularly vexing problem given that normal (well-oxygenated) tissue may suffer greater ROS damage than adjacent tumors, reducing therapeutic gain. Several strategies have been proposed to mitigate hypoxia-related radioresistance, including modulation of dose fractionation, inflammatory responses, and hypoxia itself [101,102] . For example, investigators have explored hyperbaric oxygen to radiosensitize tumors, and tourniquets to promote normal tissue radioresistance, but these approaches have fallen out of favor [3] . Another idea is to mimic oxygen with agents such as nitroimidazoles, which radiosensitize hypoxic tumors. Although these are effective, clinical use has been restricted because of associated neurotoxicity [103,104] .
CELL PROLIFERATION RATES
Solid tumors comprise rapidly growing ("bulk") tumor cells and small numbers of so-called cancer stem cells (CSCs). Much of tumor sensitivity to genotoxic chemo-and radiotherapeutics reflects the fact that rapidly dividing, bulk tumor cells are more sensitive to DNA damage than most (non-dividing) normal cells. CSCs, similar to normal stem cells, divide more slowly than bulk tumor cells, hence CSCs are naturally radioresistant. Because CSCs are tumor-initiating cells that support both local tumor growth and seed distant metastases, CSC radioresistance is a significant barrier to durable chemo-and radiotherapy treatment responses [105][106][107] . Similar to CSCs, some tumor cells may be quiescent; tumor dormancy is seen locally and at metastatic sites, it can be induced by therapy, and it confers radioresistance [108] . Changing fractions of bulk, CSC, and quiescent tumor cells may cause regional variations in tumor radioresistance, complicating radiotherapy treatment planning.
Hyperthermia
The sensitizing effects of hyperthermia have long been investigated in vitro and in pre-clinical models, but the approach has not yet advanced to routine clinical practice [109] . Hyperthermia alters tissue perfusion to mitigate hypoxia, and although it has been debated whether it directly induces DNA damage, there is clear evidence that it triggers DDR signaling and suppresses DNA repair [110,111] . Ultrasound waves generate heat, and this technology is being explored to induce local hyperthermia for tumor radiosensitization [112] .
Radioprotectors and radiosensitizers
Because most radiation damage is induced indirectly through ROS, intrinsic and extrinsic modulation of cellular redox status strongly affects radioresistance. Redox mechanisms have been investigated to sensitize tumors and/or protect normal tissue during radiotherapy, including modulating NAD+, glucose, and other redox metabolic pathways; use of antioxidants (e.g., vitamins C and E) and isoflavones; use of Mn-porphyrin compounds that act as manganese superoxide dismutase (Mn-SOD) mimetics; modulating superoxide dismutase; and modifying patient exercise routines [113][114][115][116][117][118][119] . Metformin, which reduces hypoxia by reducing oxygen consumption, and melatonin, a natural hormone with antioxidant and anti-inflammatory effects, are also under investigation for radiosensitization or protection [120][121][122][123] . Radiation countermeasures are designed to protect individuals from adverse effects of accidental or intentional (i.e., dirty bomb) total-body irradiation; these strategies may also be useful for normal tissue protection during radiotherapy [124][125][126] .
Adaptive responses
Cells exposed to a low dose of radiation and then subsequently challenged with a high, cytotoxic dose show enhanced survival compared to cells that did not receive a "priming" dose. This effect, termed the adaptive response, typically refers to enhanced cell survival, but radioadaptive responses have been observed with other endpoints, including chromosome aberrations, mutation, micronuclei formation, sister chromatid exchange, delayed genome instability, and cellular transformation [127][128][129][130][131][132][133] . These radioadaptive responses are transient, usually subsiding within 24 h of the priming dose. Several regulatory proteins are known to positively or negatively influence cell survival adaptive responses to radiation, including Mn-SOD, NF-κB, p53, and NOX4, several of which act through the anti-apoptosis factor survivin [134][135][136][137][138][139] . Adaptive responses may be problematic, for example, if CT scans used to locate tumors induce tumor radioresistance [134] , but other radioadaptive effects, such as immunomodulatory responses, may prove beneficial [140,141] . These transient radioadaptive responses are distinct from two other types of tumor adaptive responses to therapy: adaptive (upregulated) mutagenesis, which accelerates tumor evolution, and modulation of tumor microenvironments, both of which can drive tumor resistance to radio- and chemotherapy [142,143] .
TARGETING DSB REPAIR TO ENHANCE RADIOTHERAPY
DSB repair is a major determinant of cellular radioresistance, and key NHEJ and HR proteins are attractive tumor radiosensitization targets. Because DNA repair and DDR systems are tightly integrated, radiosensitization can be achieved by interfering with these networks in a multitude of ways. In addition, "omics" analyses hold promise for personalizing radiotherapy doses based on radiation response profiles [144] . For additional perspectives on these topics, readers are referred to these recent reviews [31,35,97,[145][146][147][148][149] . Current experimental and therapeutic options that target DSB repair and DDR factors are listed in Table 1 and discussed in the following sections.
TARGETING NHEJ
DNA-PKcs is activated when complexed with Ku-bound DNA ends at DSBs, leading to phosphorylation of itself and other targets including Ku, RPA, and H2AX. DNA-PKcs autophosphorylation at two clusters (ABCDE and PQR, including the T2609 and S2056 residues) is critical for subsequent NHEJ steps [51,75,184] , and DNA-PKcs inhibitors are strong radiosensitizers. However, because NHEJ is active in all nucleated cells, and cells need to repair spontaneous DSBs, inhibiting NHEJ non-specifically may adversely affect normal tissues, especially those within the radiation field. In certain solid tumors, such as ovarian and liver cancers, DNA-PK activity is elevated, and this correlates with poor prognoses [185,186] . In these cases, DNA-PKcs inhibition may improve therapeutic gain. Several small molecule DNA-PKcs inhibitors, and other targeted approaches, have shown promising results in vitro and in pre-clinical models to enhance radio- and/or chemotherapy, but few have advanced to human clinical trials, due at least in part to challenges associated with cross-inhibitory effects against other PIKKs (ATM, ATR, and mTOR) or bioavailability.
NU7441 is a fairly specific DNA-PKcs inhibitor that showed promising results as a radiosensitizer against nasopharyngeal and liver cancer [150,151] , and low concentrations of NU7441 enhance radiosensitivity of lung cancer cells to both X-rays and carbon ions [152] . Targeting DNA-PKcs with NU7441 in combination with the PARP1 inhibitor rucaparib radiosensitized Ewing sarcoma cells [181] . The DNA-PKcs inhibitor VX-984 radiosensitizes glioblastoma cells in vitro and in orthotopic tumors [153] . Two recently developed small molecule DNA-PKcs inhibitors are NU5455 and AZD7648. NU5455 is a highly selective DNA-PKcs inhibitor that increases the efficacy of radiotherapy and genotoxic chemotherapy treatment of lung cancer xenografts [154] . AZD7648 is a highly selective and potent DNA-PKcs inhibitor that enhances radiotherapy of lung tumor xenografts alone and when combined with the PARP1 inhibitor olaparib; this drug is advancing to clinical trials [182] . Precise selectivity is not necessarily required: the DNA-PKcs inhibitors, LY3023414 and CC-115, cross-inhibit mTOR (another PIKK) and show promising pre-clinical results. LY3023414 has advanced to clinical trials [155,156] . In preclinical studies, selective radiosensitization of hypoxic tumors was achieved using the hypoxia-activated pro-drug BCCA621C to inhibit DNA-PKcs [157] .
Many tumors overexpress wild-type or mutant versions of the epidermal growth factor receptor (EGFR). The EGFR pathway feeds into the PI3K/AKT/mTOR pathway that drives cell cycle progression. Interestingly, EGFR pathway activation stimulates DSB repair, and this was traced, at least in part, to an interaction between AKT1 and DNA-PKcs [187] . In a parallel EGFR pathway, radioresistance of tumor cells that overexpress Rab5C, Ku70, and Ku80 was traced to Rab5C regulation of EGFR internalization and its translocation to the nucleus, where EGFR stimulates Ku70/Ku80 expression [188] . Cetuximab, a clinically useful monoclonal antibody that targets EGFR, inhibits DNA-PKcs [158] and enhanced radiotherapy in early clinical trials to treat cutaneous squamous cell carcinoma [159] . EGFR nuclear translocation is stimulated by radiation, mediated by Caveolin-1 (CAV-1), and CAV-1 knockdown radiosensitizes triple-negative breast cancer, a tumor type with no current targeted therapies and poor prognoses [189] . Mutant forms of EGFR (D746-750, L858R, and the targeted-therapy-resistant T790M mutant) confer radiosensitivity to hypoxic lung cancer cells, at least in part due to downregulation of RAD50, a member of the MRE11/RAD50/NBS1 complex that plays early end-processing and signaling roles in NHEJ and HR [190] . These results suggest that tumor EGFR status can be used to personalize radiotherapy treatment plans and their augmentation with NHEJ inhibitors. The link between EGFR and DSB repair suggests strategies to modulate tumor radiosensitivity by inhibiting NHEJ indirectly with available drugs that target the EGFR and AKT1/3 pathways [148,160] .
TARGETING HR
A key step in HR is formation of RAD51 nucleoprotein filaments that seek and invade homologous duplex DNA repair template [Figure 2]. RAD51 sub-nuclear foci are observed ~1 h after irradiation and are often interpreted as evidence of "HR activity". However, RAD51 nucleoprotein filament formation marks only the initial phase of HR; once the filament invades a donor duplex, RAD51 must dissociate to allow extension of the invading strand by repair-associated DNA polymerases [191] . Thus, RAD51 foci are markers of HR initiation, but persistent RAD51 foci may reflect failure to complete HR due to downstream HR defects [192] . Functional HR, therefore, is best assayed by directly detecting HR products. There are several types of HR assay systems, including plasmid transfection systems, integrated HR repeat substrates, and HR-mediated gene editing [193,194] . When assaying RAD51-dependent HR using linked (direct or inverted) repeats, it is important that the design detects RAD51-dependent gene conversion but not RAD51-independent single-strand annealing [62] . Plasmid transfection assays are convenient, but substrates may not be chromatinized before or during HR, and therefore may not accurately reflect the full constellation of HR functions in chromatin [195] . Similarly, gene editing involves transfection of a non-chromatinized, homologous donor DNA sequence. Plasmid and gene editing assays are useful in rapid HR screens that can be complemented by analysis of HR products in a chromosomal context.
HR is important for repair of frank DSBs, but its other critical role is repairing single-ended DSBs that arise when replication stress causes fork collapse [Figure 3] [196] . A 1-Gy dose of ionizing radiation induces ~40 frank DSBs, but hundreds of single-strand lesions that can cause "secondary DSBs" due to fork collapse [197,198] . HR is critical for repair of these one-ended DSBs because mis-repair by NHEJ necessarily involves a distant DSB end (from a different broken replication fork or a frank DSB), causing large-scale genome rearrangements including deletions, translocations, and dicentric chromosomes that can trigger cell death or genome instability through persistent breakage-fusion-bridge cycles [199] . Thus, care must be taken when interfering with HR to enhance radiotherapy, as HR is critical for maintaining genome stability in normal tissues to prevent induction of secondary cancers.
Because RAD51 plays a central role in HR, it is an attractive target for radiosensitization. The Bishop and Connell labs developed a small molecule RAD51 inhibitor, RI-1, that blocks RAD51 binding to ssDNA [200] and radiosensitizes glioma and glioblastoma cells [161,162] . New RAD51 inhibitors have been developed, including one that blocks D-loop formation (strand invasion) and HR but does not affect RAD51 binding to ssDNA or formation of radiation-induced RAD51 foci [201,202] . A recently developed antibody fragment linked to a cell-penetrating peptide blocks RAD51 DNA binding, sensitizes cells to radiation, and is synthetically lethal with PTEN defects in glioma and melanoma cells [163][164][165] . Another small molecule RAD51 inhibitor, CYT-0851, is currently in a clinical trial as monotherapy against several types of cancer [166] .
HR defects predispose to cancer, including breast, ovarian, and other cancers with defects in BRCA1, BRCA2, PALB2, MRE11, and RAD51, as well as in DDR factors that regulate HR, such as ATM [203][204][205][206] . HR proteins function as tumor suppressors by maintaining genome stability: they promote accurate DSB repair, stabilize stressed replication forks, and repair and restart collapsed replication forks [207] . PARP1 inhibitors cause replication stress by inhibiting PARP1-dependent repair of single-strand damage and by trapping PARP1 on damaged DNA, accounting for the synthetic lethality of PARP1 inhibitors in HR-deficient tumor cells [35,208] . PARP1 inhibitors are widely used in the clinical management of HR-defective breast and ovarian cancers [209,210] and are being explored as adjuncts to radiotherapy [168][169][170] . To inhibit proteins such as BRCA2, for which there are no small molecule inhibitors, genetic approaches such as siRNA knockdown offer another means to transiently induce HR defects and enhance radiosensitivity [167] . HR defects, whether intrinsic to the tumor or induced by drugs or other means, may be particularly useful when paired with high LET carbon ions, given the greater importance of HR in repair of clustered DSBs [64,[76][77][78],167] . Just as HR defects sensitize cells to radiation and genotoxic chemotherapy, therapeutic resistance to these agents, and to PARP1 inhibitors, correlates with restoration or upregulation of HR [211][212][213][214][215] . Radiosensitization of tumors with HR inhibitors may thus be most effective against cancers that upregulate HR.
TARGETING DDR SIGNALING FACTORS
The DDR is important for tumor suppression, but it also comprises important targets that mediate therapeutic resistance to radiation and chemotherapy [216]. ATM and ATR are key regulators of critical HR factors, including MRE11, NBS1, CtIP, p53, RPA, BRCA1, PALB2, H2AX, and RAD51 [37,192,217,218]. ATM, ATR, and DNA-PKcs collaborate to regulate HR, NHEJ, and DNA damage checkpoint responses [30]. Targeting these PIKKs and other DDR factors, including Chk1, Chk2, and Wee1, is a very active research area [35,97,146,148,149,219]. Some DDR inhibitors show significant toxicity, hence delivery during protracted, fractionated radiotherapy raises safety concerns; these might be mitigated by using localized drug delivery. Nonetheless, several DDR inhibitors have advanced to clinical trials, including two phase 1 trials to augment radiotherapy with the ATR inhibitors VX-970 and AZD6738 [34,220]. ATM inhibitors, including AZD1390 and AZD0156, have shown promise for radiosensitizing various solid tumors in preclinical studies, including glioblastoma, head and neck cancer, and lung cancer [34,221-223]. ATM and ATR inhibitors are also being tested for synthetic lethal effects with PARP1 inhibitors [220-222]; such combinations may also augment radiotherapy. The PI3K/AKT/mTOR pathway has well-defined roles in suppressing apoptosis and promoting cell proliferation, but it also interfaces with the DDR, promoting both HR and NHEJ [171]. PI3K/AKT/mTOR inhibitors sensitize tumor cells to PARP1 inhibitors [224,225] and to radiotherapy [172]. HPV, the causative agent of most cervical cancers, modulates the DDR to confer therapeutic resistance, and DDR inhibitors are being explored to improve cervical cancer outcomes [226]. HPV is not alone: many viruses hijack different parts of the DDR to complete their life cycles [227]. ATM, ATR, and Chk1 signaling modulates PD-L1 expression in response to DSBs induced by radiation or chemotherapeutics [141].
In preclinical studies, inhibition of ATM during radiotherapy enhanced tumor immunogenicity and tumor sensitivity to PD-L1 immune checkpoint blockade [183] . These findings highlight the pleiotropic effects of PIKK signaling networks and suggest new opportunities for combination therapy to radiosensitize tumors and exploit antitumor activity of the immune system.
SIMULTANEOUS TARGETING OF NHEJ AND HR WITH HSP90 INHIBITORS
Given the importance of DSB repair for cell survival, and the central roles of NHEJ and HR in DSB repair, simultaneously blocking these pathways can exquisitely sensitize tumors to radio- and chemotherapy. Hsp90 inhibitors have emerged as important tools for simultaneous downregulation of NHEJ and HR. Hsp90 is a protein chaperone that regulates stress responses and stabilizes proteins that drive tumor growth, and Hsp90 inhibitors are being used to treat cancer as monotherapy and to augment traditional therapies [228-230]. Although Hsp90 is not mutated in tumor cells, it has an altered conformation and higher ATPase activity than in normal cells. Hsp90 inhibitors exploit this difference to selectively affect tumor cells [174,229,231,232]. The radiosensitizing effects of Hsp90 inhibitors to low and high LET radiation have been studied for more than a decade [174-179]. The Hsp90 inhibitor 17-AAG suppresses HR [176], radiosensitizes tumor cells, and suppresses tumor growth after radiotherapy [174]. Interestingly, the greatest radiosensitization was observed with carbon ions [174], another example of how HR inhibition potentiates radiosensitization with high LET radiation. Because protein chaperones affect many cellular processes, Hsp90 inhibitors can have pleiotropic effects, and early Hsp90 inhibitors caused serious side effects including ocular degeneration [233,234]. Second- and third-generation Hsp90 inhibitors (PU-H71 and TAS-116) proved to be safer alternatives. These drugs are tumor-specific radiosensitizers that suppress both NHEJ and HR by downregulating RAD51, RAD51 foci, and DNA-PKcs Ser2056/Thr2609 phosphorylation [175,178]. TAS-116 showed promising results in a phase 1 trial as monotherapy against advanced, heavily pre-treated gastrointestinal and lung cancers, with an acceptable safety profile (e.g., no greater than grade 1 ocular disorders and nausea) and anti-tumor activity [180].
It will be interesting to test TAS-116 as an adjunct to radiotherapy, and to carbon ion radiotherapy in particular.
SUMMARY AND FUTURE PERSPECTIVES
DDR signaling, DNA repair, and DNA replication systems are tightly integrated, and they are key regulators of genome integrity, genome replication, and cell viability/proliferative capacity. This means that agents targeting DDR and DNA repair factors can be highly effective against tumors, especially when exploiting a tumor-specific synthetic lethal weakness. Unfortunately, these systems are also critical in normal cells, and DDR and DNA repair inhibitors can cause unacceptable normal tissue damage, especially if delivered systemically, reducing patient quality of life, both short- and long-term, and potentially reducing lifespan due to organ failure, accelerated tumor progression, or secondary cancers. This delicate balance is exemplified by a recent study showing that ATM counters toxic NHEJ at collapsed replication forks, an important finding because it points to new synthetic lethal approaches to treat ATM-defective tumors [235]. However, it also raises the possibility that ATM inhibition could enhance NHEJ-mediated mis-repair of single-ended DSBs during (therapy-induced) replication stress. This would destabilize the genome and may accelerate progression of surviving tumor cells or induce secondary cancers.
Once radio-modulators are proven effective in pre-clinical studies, it is important to determine safe and effective ways to administer them to patients. These will vary depending on the type of radio- or chemotherapy being augmented, the types of agents administered, the tumor location, and the organs at risk. Therapeutic efficacy can be increased, and side effects decreased, by employing multi-targeted approaches [236]. For example, the Li lab combined physical (radiation) targeting with two other targeting approaches. The first was an oncolytic adenovirus delivering an hTERT promoter-driven E1a gene for conditional replication in hTERT-positive (tumor) cells, and the second was a replication-defective adenovirus expressing shRNA to repress DNA-PKcs [237]. This downregulated NHEJ specifically in tumor cells within the (physically targeted) radiation beam. Another tumor-specific targeting approach is illustrated by recent studies targeting triple-negative breast cancer. Here, CRISPR/Cas9 designed to knock out the Lcn2 oncogene was delivered to breast cancer cells using a tumor-tropic, ICAM1 antibody-linked nanomaterial [238,239]. These and other targeting strategies can be combined to enhance a wide variety of therapeutic interventions.
The adaptive response raised concerns about improved tumor cell survival when tumors are "primed" with 5-10 mGy diagnostic CT scans to localize tumors before treatment with a 2-10 Gy "challenge" (therapeutic) dose [134]. It may be possible to invert this paradigm and exploit the adaptive response to protect normal tissue and increase therapeutic gain. This might be done, for example, by using a transverse photon (X-ray) beam to expose normal tissue above the tumor to low (mGy) doses. This could induce a transient adaptive response in at-risk normal tissue [specifically, organs at risk (OAR)], protecting this tissue from high dose radiotherapy delivered with a perpendicular beam [Figure 4A]. Such a strategy might be optimized with particle radiation, as priming doses can be delivered to just the normal tissue region that will subsequently be exposed to therapeutic doses in the entrance region, and particles also spare distal tissue [Figure 4B].
In conclusion, multi-targeted strategies that combine DNA repair and DDR-modulated tumor-specific radiosensitization, advanced photon and particle beam focusing, and radioprotection of normal tissues are a rational path to tumor cures with minimal side effects.
Acknowledgments
We thank Ryuichi Okayasu, Akira Fujimori, Tom Borak, Susan Bailey, Michael Weil, Claudia Wiese, and members of the Nickoloff and Kato labs for many helpful discussions. We thank the anonymous Reviewers for their helpful suggestions.
Authors' contributions
conception and preparation of this manuscript: Nickoloff JA, Taylor L, Sharma N, Kato TA

Figure 4. Proposed approach to protect normal tissue by stimulating radioadaptive responses. A horizontal X-ray beam delivers a priming dose that protects OAR (blue), but not the tumor (red), from subsequent high doses delivered with a vertical beam (A); a low priming dose of charged particles (left) protects OAR (blue) from subsequent high doses (right) (B). With charged particles, priming and therapeutic doses can be delivered along the same beamline since particles stop at predetermined depths. Charged particles also protect normal tissue distal to the tumor (larger blue section). OAR: organs at risk.
Human Tissue Angiotensin Converting Enzyme (ACE) Activity Is Regulated by Genetic Polymorphisms, Posttranslational Modifications, Endogenous Inhibitors and Secretion in the Serum, Lungs and Heart
Objective: Inhibitors of the angiotensin converting enzyme (ACE) are among the first-choice drugs to treat heart failure and hypertension. Moreover, an imbalance in tissue ACE/ACE2 activity is implicated in COVID-19. In the present study, we tested the relationships between circulating and tissue (lung and heart) ACE levels in humans. Methods: Serum, lung (n = 91) and heart (n = 72) tissue samples were collected from Caucasian patients undergoing lung surgery or heart transplantation. ACE I/D genotype, ACE concentration and ACE activity were determined from serum and tissue samples. Clinical parameters were also recorded. Results: A protocol for ACE extraction was developed for tissue ACE measurements. Extraction of tissue-localized ACE was optimal in a buffer containing 0.3% Triton-X-100, resulting in 260 ± 12% higher ACE activity than under detergent-free conditions. SDS or higher Triton-X-100 concentrations inhibited ACE activity. Serum ACE concentration correlated with ACE I/D genotype (II: 166 ± 143 ng/mL, n = 19; ID: 198 ± 113 ng/mL, n = 44; and DD: 258 ± 109 ng/mL, n = 28; p < 0.05), as expected. In contrast, ACE expression levels in lung tissue were approximately the same irrespective of ACE I/D genotype (II: 1423 ± 1276 ng/mg, ID: 1040 ± 712 ng/mg and DD: 930 ± 1273 ng/mg, p > 0.05) in the same patients (values are median ± IQR). Moreover, no correlations were found between circulating and lung tissue ACE concentrations and activities (Spearman's p > 0.05). In contrast, a significant correlation was identified between ACE activities in serum and heart tissue (Spearman's Rho = 0.32, p < 0.01). Finally, ACE activities in the lung and the serum were endogenously inhibited to similar degrees (i.e., to 69 ± 1% and 53 ± 2%, respectively). Conclusion: Our data suggest that circulating ACE activity correlates with left ventricular ACE, but not with lung ACE, in humans.
More specifically, ACE activity is tightly coordinated by genotype-dependent expression, endogenous inhibition and secretion mechanisms.
Introduction
The renin-angiotensin-aldosterone system (RAAS) plays a crucial role in fluid and salt homeostasis. One of the key biochemical steps within the RAAS is the conversion of the inactive angiotensin I decapeptide (AngI) to the active angiotensin II (AngII) octapeptide by the angiotensin converting enzyme (ACE). ACE was first identified in 1956 by Skeggs et al. [1], and ACE inhibitors were subsequently introduced into clinical practice. They represent a first-line therapy for a wide range of cardiovascular maladies, including hypertension [2,3] and heart failure [4]. It is important to note that AngII generation by ACE is counterbalanced by its homolog ACE2, which eliminates AngII. Therefore, the physiological level of AngII is usually determined by the balance between ACE and ACE2 activities in tissues. This balance is important in cardiovascular diseases [5-7], and also in COVID-19. Regarding the latter, ACE2 is the cellular receptor for SARS-CoV-2 [8], and it has been proposed that some symptoms of COVID-19 are mediated by a disrupted ACE/ACE2 balance [9,10].
The molecular properties of a successful ACE inhibitor generally include low lipophilicity (with the exception of fosinopril) [11]. This indicates that the primary target of these drugs is the water-soluble (circulating) form of the enzyme. Accordingly, factors affecting circulating ACE activities have been implicated in the pathomechanism of cardiovascular disease. Circulating ACE concentration is controlled by a genetic polymorphism of the ACE gene (an insertion/deletion polymorphism, I/D polymorphism) [12], which has been implicated in systolic heart failure [13].
According to a widely accepted consensus, ACE is expressed primarily by endothelial cells, particularly those of the lung [14], and is subsequently released into the circulation. However, the human heart also expresses ACE [15], suggesting that the lung is probably not the only organ contributing to circulating ACE in humans. Moreover, ACE expression levels in the kidneys and the small intestine were found to be comparable to those in the lung [16].
Another important finding was the identification of an endogenous inhibitor of circulating ACE [17], which was later identified as serum albumin [18]. Serum albumin almost fully inhibits circulating ACE activity at its physiological concentrations [18]. Accordingly, ACE activity is localized to the tissues (where the albumin concentration is low), suggesting that ACE inhibitory drugs act on tissue-localized ACE. The tissue ACE/ACE2 balance (tissue AngII production) can be modulated by expression (affected by polymorphisms of both ACE and ACE2), by shedding and potentially by interacting proteins (endogenous inhibition, similar to albumin in the serum). These factors implicate a potentially complex interplay between tissue ACE/ACE2 expression and circulating ACE/ACE2 activity in the context of both cardiovascular disease and COVID-19. In the present study, we tested the links between ACE activity and its genotype-specific expression, endogenous inhibitors and ACE secretion in clinical samples. ACE activity and ACE concentration were measured in human sera and tissue (lung and heart) samples obtained from patients undergoing lung surgery or heart transplantation. Circulating, but not lung tissue, ACE expression was regulated by the ACE I/D genetic polymorphism. In contrast, both circulating and lung tissue ACE activities were regulated by endogenous inhibition. Finally, there was a correlation between circulating and left ventricular ACE activity, but not between circulating and lung ACE activity/expression, suggestive of cardiac-specific secretion mechanisms contributing to circulating ACE activity.
Patients
This prospective study involved patients undergoing lung surgery at the clinical ward of the University of Debrecen and patients undergoing heart transplantation at the Heart and Vascular Center of Semmelweis University, Budapest. The study was authorized by the Medical Research Council of Hungary (20753-7/2018/EÜIG for patients undergoing lung surgery and ETT TUKEB 7891/2012/EKU (119/PI/12.) for patients undergoing heart transplantation). Tissue and blood samples were obtained from patients undergoing thoracic-surgical interventions (lung samples) or heart transplantation (pseudonymized explanted heart samples from the left ventricular anterior wall and blood plasma samples were obtained from the Transplantation Biobank of the Heart and Vascular Center at Semmelweis University, Budapest, Hungary). All enrolled patients gave their individual informed consent according to the Declaration of Helsinki.
Native blood samples were aliquoted for DNA isolation and subsequently frozen, or incubated at room temperature for 60 min and centrifuged at 1500× g for 15 min. The obtained serum fractions and tissue samples were stored at −70 °C until the biochemical measurements were performed. Case history, medication, comorbidities and basic cardiovascular parameters were recorded in agreement with the General Data Protection Regulation (EU GDPR 2016/679) of the European Parliament and Council. Selected patient characteristics are summarized in Table 1.
ACE I/D Genotype Determination
Patients' DNA was extracted from peripheral blood using a commercial DNA extraction kit (FlexiGene; Qiagen GmbH, Hilden, Germany). DNA fragments were amplified with polymerase chain reaction primers (forward: CTGGAGACCACTCCCACTCTTTCT and reverse: GATGTGGCCATCACATTCGTCAGAT), as done before in the laboratory [18]. After amplification, PCR products were separated by electrophoresis on 3% polyacrylamide gels and genotypes (II, ID, DD) were identified by SybrSafe staining.
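Because the I and D alleles yield PCR products of different lengths, the genotype can be read directly from the band pattern on the gel. The mapping from observed band sizes to genotype can be sketched as below; note that the fragment sizes used (~490 bp for the insertion allele, ~190 bp for the deletion allele, commonly reported for this primer pair) are an assumption, since the text does not state them.

```python
# Sketch: call ACE I/D genotype from gel band sizes (bp).
# Assumed fragment sizes (not stated in the text): insertion (I) allele
# ~490 bp, deletion (D) allele ~190 bp for this primer pair.
I_BAND, D_BAND, TOL = 490, 190, 30  # TOL: tolerance for gel size estimation

def ace_genotype(band_sizes):
    """Return 'II', 'ID' or 'DD' from a collection of observed band sizes."""
    has_i = any(abs(b - I_BAND) <= TOL for b in band_sizes)
    has_d = any(abs(b - D_BAND) <= TOL for b in band_sizes)
    if has_i and has_d:
        return "ID"
    if has_i:
        return "II"
    if has_d:
        return "DD"
    raise ValueError("no recognizable ACE I/D bands")
```

A heterozygote shows both bands, hence the two-band case maps to "ID" before the single-allele cases are considered.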
Tissue Processing for ACE Activity and Expression Measurements
Human lung and left ventricular heart tissue samples were mechanically crushed in liquid nitrogen with a pestle and mortar. Five mL of 100 mM TRIS-HCl, pH 7.0 was then added to each g of tissue (wet weight) on ice. Samples were homogenized with a tissue homogenizer (Bio-Gen PRO200, PRO Scientific, Oxford, CT, USA) and centrifuged at 16,100× g for 5 min. Supernatants were collected and kept frozen until biochemical determinations.
Procedures to Measure the Effects of Detergents
In a subset of experiments, the effects of detergents on ACE extraction from lung tissue were also tested. In these experiments, the homogenization buffer (100 mM TRIS-HCl, pH 7.0) was supplemented with the indicated concentrations of Triton-X-100, Triton-X-114 or SDS.
ACE Activity Measurements
Tissue and circulating ACE activity measurements were performed as described before [19]. In short, cleavage of the quenched fluorescent substrate Abz-FRK(Dnp)P-OH was used to measure activity in a kinetic assay. The measurement mixture contained 100 mM TRIS-HCl, pH 7.0, 50 mM NaCl, 10 µM ZnCl2, 10 µM Abz-FRK(Dnp)P-OH and the intended amount of serum/tissue sample, in addition to the detergents mentioned above. ACE activity was measured at 37 °C with a plate reader at λex = 340 nm and λem = 405 nm (NovoStar, BMG Labtech, Ortenberg, Germany). Results were accepted when the goodness of fit (r²) was at least 0.90. ACE activity was calculated from the rate of increase in fluorescence intensity (AU/min), which was converted to absolute units using a calibration curve with the Abz fluorophore.
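The activity calculation above reduces to a linear fit of fluorescence versus time, an r² acceptance check, and conversion of the slope with the Abz calibration factor. A minimal, dependency-free sketch (the 0.90 cutoff comes from the text; the calibration factor and data values are illustrative assumptions):

```python
# Sketch: ACE activity from a kinetic fluorescence trace.
# The slope (AU/min) is converted to absolute units via an assumed Abz
# calibration factor (AU per unit of product formed).

def linear_fit(x, y):
    """Ordinary least-squares slope and r^2 for paired data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    r2 = (sxy * sxy) / (sxx * syy) if syy > 0 else 1.0
    return slope, r2

def ace_activity(t_min, fluor_au, cal_au_per_unit, min_r2=0.90):
    """Activity in absolute units per min; reject noisy traces (r^2 < 0.90)."""
    slope, r2 = linear_fit(t_min, fluor_au)
    if r2 < min_r2:
        raise ValueError(f"fit rejected, r^2 = {r2:.2f}")
    return slope / cal_au_per_unit
```

For example, a trace rising by 2 AU/min with an assumed calibration factor of 0.5 AU per unit yields an activity of 4 units/min.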
ACE Concentration Measurements
ACE expression was measured by an Enzyme-Linked Immunosorbent Assay (Catalog No. DY929; R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. In short, the capture antibody was diluted to a working concentration of 80 ng/well in Dulbecco's phosphate-buffered saline (DPBS) at room temperature. The remaining binding sites were blocked with 10 mg/mL bovine serum albumin dissolved in DPBS. Human serum/lung samples were diluted 100-fold in the same buffer (10 mg/mL bovine serum albumin in DPBS) and incubated with the immobilized primary antibodies for 2 h. Capture antibody-bound ACE was labeled with a biotinylated detection antibody (20 ng/well, 2 h). Streptavidin-conjugated horseradish peroxidase (200-fold-diluted stock from the kit) was added to the wells and incubated for 30 min. Immunocomplexes were detected with a chromogenic substrate solution containing 0.3 mg/mL TMB (3,3′,5,5′-tetramethylbenzidine), 0.1 mM H2O2 and 50 mM acetic acid (incubation time was approximately 30 min). The reaction was terminated by the addition of 0.5 M HCl and evaluated by measuring the absorbance at 450 nm. ACE concentration was calculated using a calibration curve. Serum ACE concentration was given as ng/mL of serum.
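Reading concentrations off the ELISA calibration curve amounts to interpolating each sample's absorbance between the standards and multiplying by the dilution factor (100-fold here). A minimal sketch with piecewise-linear interpolation; the standard-curve values in the usage example are invented for illustration, as the kit's actual standards are not given in the text:

```python
# Sketch: ACE concentration from A450 via a standard curve.
# `standards` is a list of (absorbance, concentration) pairs; the 100-fold
# sample dilution from the text is applied to the interpolated value.

def conc_from_a450(a450, standards, dilution_factor=100):
    """Piecewise-linear interpolation on the calibration curve."""
    pts = sorted(standards)
    if not (pts[0][0] <= a450 <= pts[-1][0]):
        raise ValueError("absorbance outside calibration range; re-dilute")
    for (a0, c0), (a1, c1) in zip(pts, pts[1:]):
        if a0 <= a450 <= a1:
            frac = (a450 - a0) / (a1 - a0)
            return (c0 + frac * (c1 - c0)) * dilution_factor
```

With invented standards [(0.1, 0.0), (0.5, 1.0), (1.0, 2.5)] (A450, ng/mL), a sample reading A450 = 0.75 interpolates to 1.75 ng/mL in the well, i.e., 175 ng/mL of serum after the 100-fold dilution correction.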
Chemicals
All chemicals were from Sigma-Aldrich (St. Louis, MO, USA) if not indicated otherwise.
Statistical Analysis
Normality was checked by the Kolmogorov-Smirnov test. ANOVA was applied for values showing a normal distribution. Mann-Whitney or Kruskal-Wallis tests were used as non-parametric tests, with Dunn's multiple comparison post hoc test. A χ² test was performed to compare the different clinical and genotype subgroups. Differences were considered significant when p < 0.05. Statistical analyses were performed with GraphPad Prism, version 5.0 (GraphPad Software, Inc., San Diego, CA, USA).
Development of a Protocol for Tissue ACE Extraction
We aimed to compare circulating and tissue ACE activities and expression. First, we developed a suitable tissue extraction protocol for the lung. Detergents (Triton-X-100, Triton-X-114 and SDS) were tested in the range of 0.06-5.0 v/v%. Application of these detergents increased the yield of extracted ACE to approximately 250% even at their lowest (0.06%) concentration (Figure 1A), compared to the buffer without detergents (100%, represented by the dotted line, Figure 1A). The increase in ACE solubilization was approximately 550% at the highest (5.0 v/v%) detergent concentration (Figure 1A). Nonetheless, this gradual increase in solubilized ACE concentration was only partially paralleled by ACE activity. The activity increased at the lower detergent concentrations, reaching a maximum at 0.3 v/v% (at approximately 250%, Figure 1B), and declined at higher concentrations. Calculated specific activities suggested inhibition of ACE activity by the detergents. SDS inhibited ACE even at the lowest tested concentration, while the Triton-based detergents inhibited the enzyme at concentrations of 0.6% and higher (Figure 1C).
Solubilized ACE was collected in the supernatant and the pellets were re-processed two additional times (using the same protocol as the initial tissue ACE extraction) to determine how much ACE remained in the processed tissue. Significant ACE activity remained in the pellets without detergents, suggestive of incomplete ACE extraction (Figure 1D). ACE extraction was significantly improved in the presence of Triton-X-100 (at 0.3 v/v%) (Figure 1E).
3.1.2. ACE I/D Polymorphism as a Quantitative Trait Locus for ACE Expression in the Blood, but Not in the Lung
Serum ACE concentration was influenced by the ACE I/D polymorphism in patients undergoing pulmonary surgery (Figure 2A). Patients with II, ID and DD genotypes had circulating ACE concentrations of 166 ± 143 ng/mL, 198 ± 113 ng/mL and 258 ± 109 ng/mL, respectively, suggestive of a dominant effect of the D allele on circulating ACE concentration (values are median ± IQR). In contrast, the ACE I/D polymorphism had no effect on ACE expression in the lungs of the same patients (tissue ACE concentrations were 1423 ± 1276 ng/mg, 1040 ± 712 ng/mg and 930 ± 1273 ng/mg, respectively). The ACE D allele resulted in elevated circulating (serum) ACE activities in patients without ACE inhibitory medication (3.1 ± 1.4 U/mL, 4.0 ± 1.4 U/mL and 5.0 ± 2.5 U/mL, Figure 2B). A similar correlation was found in all patients (irrespective of ACE inhibitory medication) when the dilution was high enough to neutralize the effects of medications and endogenous ACE inhibition (8.4 ± 4.9 U/mL, 8.9 ± 4.2 U/mL and 10.3 ± 3.9 U/mL, Figure 2C). In contrast, there were no apparent links between ACE I/D genotype and lung ACE activities in the same patients (37 ± 18 U/mg, 37 ± 18 U/mg and 39 ± 15 U/mg, Figure 2B; and 156 ± 161 U/mg, 115 ± 68 U/mg and 108 ± 121 U/mg, Figure 2C). Note that the higher activities at higher dilutions (Figure 2B vs. Figure 2C) illustrate the presence of endogenous inhibitors that suppress enzyme activity under physiological (undiluted) conditions.
Missing Correlation between Serum and Lung Tissue ACE Expression/Activity
Surprisingly, no significant correlation was found between ACE concentrations in lung tissue and the circulation (Figure 3A). Similarly, there was no correlation between ACE activities in lung tissue and in blood (Figure 3B). In contrast, circulating and cardiac (left ventricular) ACE activities significantly correlated with each other (Figure 3C).
ACE Expression and Activity Are Correlated in Serum and Lung Tissue
The missing correlation between ACE levels in lung tissue and sera could, hypothetically, be explained by methodological errors. To rule out methodological inaccuracies, ACE activities were plotted as a function of ACE concentration from the same sources (sera for circulating ACE and lung tissue homogenates for tissue ACE, Figure 4). A positive linear correlation was found for sera (Figure 4A) and lung tissues (Figure 4B). The observed linear relationships (significant differences from the horizontal lines, as characterized by the low p values, and acceptable fits, as shown by the high r² values) indicated that the measurements were sufficiently accurate for both sera and lung tissue samples.
ACE Activity Is Regulated by Endogenous Inhibition in the Lung
We reported earlier that an endogenous inhibitor controls circulating ACE activities [18,20]. In the present study, we estimated this ACE inhibitory effect in parallel in the circulation (serum) and in lung tissue (Figure 5). Endogenous inhibition was confirmed by the lower apparent activities at low dilutions compared to those at high dilutions for both serum (Figure 5A) and lung tissue (Figure 5B) samples. In addition to this endogenous regulation of ACE activities, there was an apparent difference in the uninhibited specific activities (determined at the highest dilutions). The specific activity of serum ACE was 0.06 ± 0.004 U/ng, approximately half of that of lung tissue ACE (0.13 ± 0.009 U/ng, p < 0.05, Figure 5C). The level of endogenous ACE inhibition in sera was 53 ± 2% in patients without ACE inhibitory medication (Figure 5D). The level of inhibition increased to 83 ± 2% (p < 0.05, Figure 5D) in patients on ACE inhibitory medication, indicating effective medical (drug) treatment. The level of endogenous inhibition in lung tissue samples (69 ± 1%, Figure 5D) was higher than that for circulating ACE. The effect of ACE inhibitory medication was absent in lung samples (the level of inhibition was 74 ± 1% in patients on ACE inhibitory medication, with no significant difference compared to lung samples from patients without ACE inhibitory medication, Figure 5D).
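The degree of endogenous inhibition follows from comparing the apparent activity at a low (near-physiological) dilution with the uninhibited activity seen at high dilution, after normalizing both back to the undiluted sample. The paper does not spell out the formula, so the following is a plausible sketch under that assumption, with illustrative numbers:

```python
# Sketch: percent endogenous ACE inhibition from a dilution series.
# Both activities must refer to the same amount of original sample,
# i.e., measured activity multiplied by its dilution factor.

def normalized_activity(measured, dilution_factor):
    """Activity re-expressed per unit of undiluted sample."""
    return measured * dilution_factor

def percent_inhibition(apparent, uninhibited):
    """Fraction of ACE activity suppressed by endogenous inhibitors, in %."""
    return 100.0 * (1.0 - apparent / uninhibited)
```

For instance, an apparent activity of 4 U/mL at low dilution against an uninhibited 8.5 U/mL at high dilution corresponds to roughly 53% inhibition, in the range reported for serum here.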
Tissue-Localized ACE Activity Did Not Correlate with Age or Sex
There was no statistical difference in tissue-localized ACE activity in patients younger or older than 60 years, nor in females or males (Table 2).
Discussion
It is widely accepted that angiotensin converting enzyme (ACE) is primarily expressed in endothelial cells [21]. It is also believed that the primary source of circulating ACE is the lung, based on the observation that all lung capillaries express ACE, while ACE expression in other organs is only approximately 20% of that [22]. It was estimated that 75% of blood ACE originates from lung capillaries [23]. However, it was also found that additional organs, such as the small intestine and kidneys, have ACE expression levels comparable to that in the lungs [24]; moreover, the conversion of angiotensin I into angiotensin II (the physiological function of ACE) is extremely high in the human heart when compared to dog, rabbit and mouse hearts [25].
The ACE insertion/deletion (ACE I/D) genotype was suggested to be a genetic trait locus determining circulating ACE levels [12] and was associated with various cardiovascular diseases, including heart failure [13]. This suggests that the organ which provides the circulating ACE must have an ACE I/D genotype-dependent expression pattern. Indeed, tissue ACE levels showed a correlation with the genetic background in the human heart [15]. It is important to note that ACE expression is regulated not only by the genetic background but also by physiological factors, such as the redox state. In particular, it appears that ACE expression is inhibited by NO and facilitated by NOS inhibition [26]. This suggests that ACE expression is also regulated actively by endothelial function, not only passively by the number of endothelial cells. In general, the biochemical milieu is the driver of ACE enzymatic activity, acting on the cells capable of expressing ACE and its regulatory proteins.
Using this genotype-dependent expression pattern as a tracer, we tested whether the primary source of circulating ACE is the lung in humans. To do so, we tested ACE levels in serum and lung samples of the same patients in parallel, using techniques developed in our laboratory in past years [17,18,20,27,28]. Patients with the DD genotype had significantly higher circulating ACE concentrations and activities than patients with the ACE II genotype, while patients with the ACE ID genotype showed intermediate values, confirming earlier reports. However, we did not find any correlation of lung tissue ACE expression or activity with the ACE I/D genotype. This finding suggests that ACE expression in the lungs is independent of the ACE I/D genotype, and consequently, the genotype-dependent serum ACE secretion must have an alternative source.
The question is, therefore, where does circulating ACE in humans come from? It appears that the lung has the majority of ACE, but it does not contribute proportionally to circulating ACE levels. Accordingly, we found a positive correlation between circulating and cardiac (left ventricular) ACE activities. This suggests that secretion of ACE from the human heart significantly contributes to circulating ACE levels. The secretase cleavage site in somatic ACE has been mapped to Arg-1203/Ser-1204 [29]. ADAM17 (also called tumor necrosis factor-α-converting enzyme, TACE) has been proposed as a potential secretase [29], but ACE secretion seems to be unaltered in ADAM17/TACE knockout mice, suggesting alternative pathways for ACE secretion [30]. Indeed, one report suggested that the ACE secretase is different from ADAM17/TACE [31]. Nevertheless, there is an apparent consensus that the elusive ACE secretase is a membrane-bound enzyme [29,30].
A recent report on lung ACE found that tissue ACE expression decreases in lung cancer [23]. Indeed, a negative correlation between lung cancer and circulating ACE activity was shown half a century ago, in a limited number of patients [32]. That study suggested that if lung microcapillaries are lost in the tumor, then circulating ACE activities will decline. However, none of the previous studies attempted to directly test the relationship between circulating and lung tissue ACE expression. To the best of our knowledge, this report is the first to do so in a fairly large human population.
Our data suggest that the source of circulating ACE is independent of lung capillaries. In line with that, the human heart was identified as an alternative source for circulating ACE. Additional ACE-expressing and -secreting cells can also be found on the apical surface of epithelial cells in the proximal tubule of the kidney, the mucosa of the small intestine, the syncytial trophoblast of the placenta and the choroid plexus, in addition to various regions within the central nervous system [24]. Moreover, ACE was also found to be expressed by macrophages [33]. While the role of these potential ACE sources in circulating ACE levels is unknown, it is well established that the circulating ACE level increases in sarcoidosis [34]. We also confirmed elevated circulating ACE levels in patients with sarcoidosis and proposed that it can be used as a biomarker for sarcoidosis [27,28]. Using a similar approach to ours, an independent study reported genotype-dependent ACE expression in the human heart [15], in full accordance with our findings in the present study, suggestive of a relationship between serum and cardiac ACE activities.
Another finding of this study is the endogenous regulation of ACE activity by inhibitors. The first results on potential endogenous inhibitors of ACE were reported as early as 1979 [35]. Later human results also suggested the existence of endogenous ACE inhibitors in the heart [36] as well as in the serum, by identifying C-type natriuretic peptide [37]. Moreover, it was also shown that dilution can be a valuable tool to investigate the endogenous inhibition of ACE [38], suggesting that ACE is generally inhibited in rat tissues. Our previous reports on the endogenous inhibition of circulating ACE activity [17] by serum albumin [18] were confirmed in the present study. Applying the same technique, we observed a significantly higher endogenous inhibition (approximately 70%) in lung tissue than in blood. These ACE inhibitory levels were comparable in patients with and without ACE inhibitory medications, suggesting a negligible effect of the drug on tissue ACE activities. The concentration of human serum albumin is too low in the lung tissue samples to provide significant ACE inhibition [18], and thus, this implicates an alternative mechanism for ACE inhibition in the present study. These findings were in accordance with those found in the rat, suggesting at least 85% endogenous ACE inhibition in the lung [38]. Further studies are required to identify the molecular nature of the endogenous ACE inhibitor in human lung tissue.
Specific ACE activities were significantly higher in human lung tissues than in the sera of the same patients. This difference suggests that ACE processing is different in these tissues, resulting in different post-translational modifications. This finding is in accordance with the "conformational fingerprinting" introduced by Danilov et al. [39]. Nonetheless, it is important to note that AngII can also be generated by chymase. Although the human heart exhibits high ACE activity in comparison with other species [25], it appears that chymase predominates over ACE in AngII generation [25,40]. This implies that tissue-localized AngII in the human heart is not determined by the ACE/ACE2 balance, but rather by the chymase/ACE2 balance. On the other hand, ACEi medication is particularly effective in patients with heart failure with reduced ejection fraction (HFrEF), suggesting an important role for ACE in the heart. Unfortunately, chymase activity was not measured in the present study to address this important issue.
We need to acknowledge the limitations of the study. First of all, this is a single-center clinical study, selectively performed on Caucasian patients; therefore, clinical data regarding the correlation between ACEi medication and biochemical efficacy (inhibition) should be considered exploratory. This study is unbalanced with respect to the results with lung and heart samples: we were unable to provide a side-by-side characterization of lung and heart tissue-localized ACE as a result of limited tissue availability. In particular, we were unable to provide evidence for genotype-dependent ACE expression in the heart or information on the effect of ACEi medication on heart ACE activity.
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the clinical information involved.
Conflicts of Interest:
The authors declare no conflict of interest.
Savitzky-Golay filtering for Scattered Signal De-noising
Extraction of distinguishing features and the decisions of classifiers are highly influenced by low signal-to-noise ratio (SNR) levels when target identification from scattered electromagnetic waves is considered. In order to increase correct identification rates, smoothing operations, which should increase the SNR without greatly distorting the signal, are occasionally employed. However, this operation is mostly performed on the complete scattered signal via an over-complete basis. On the other hand, Savitzky-Golay filters can de-noise the signal by fitting successive sub-sets of adjacent scattered data points with a low-degree polynomial through the use of linear least squares. Thus, in this study, both the computational burden and the accuracy of Savitzky-Golay filters are compared to three well-established smoothing techniques in time domain, frequency domain and time-frequency analysis. The analyses are performed with both simulated and measured data from various conductor and dielectric targets having different sizes, geometries and material types.
Introduction
Classification of targets from scattered electromagnetic signals becomes much more difficult as SNR levels decrease. Strong noise causes problems in eliminating the effects of aspect angle and prevents the extraction of distinguishable features for the identification of different targets [1]. Previously, we proposed a target classification method that uses a structural feature set extracted from scattered signals. In those earlier works, various multi-scale approximation methods were applied for de-noising prior to feature extraction [2]. Among them, the hierarchical radial basis function network (HRBFN) topology performed best in suppressing the adverse effects of noise on scattered signals [3]. Unfortunately, the effectiveness of the structural properties of time-domain scattered signals is still highly susceptible to the level of noise and distortion. Moreover, the de-noising operations should be performed in real time to satisfy operating conditions. As opposed to the well-known wavelet multi-resolution analysis using the discrete wavelet transform [4] or the HRBFN strategy, which reconstruct the signal by adding and/or removing frequency components while processing the complete signal block, Savitzky-Golay filters can reconstruct the signal in a timely manner, using consecutive frames of the signal [5].
When a neural network (NN) based system is developed for a non-cooperative system, the system is first trained using a set of references, and the classification result is determined by the best match.
There are two key components of such systems, which are 1) the features extracted from scattered signals in order to represent the target characteristics and 2) the classifier which performs the decision [6][7]. Both of these components are highly sensitive to noise and therefore, the distortions due to low SNR should be handled carefully.
Recently, a target identification system was developed using a novel feature set [3]. The feature set mainly relies on the structural differences of the scattered signal waveforms and aims to exploit the differences in the position, amplitude, rise/fall times and number of the hills/valleys of the scattered signals. In order to obtain these features, a piece-wise linear approximation process is carried out, and once the sub-waveforms are represented by triangles, their peaks, widths and slopes are calculated together with their inter-distances. Then, HRBFN employs a number of Gaussian units to fit the valleys/hills, which are combined to create an approximation.
However, the above-mentioned operation is mostly performed on the complete scattered signal via an over-complete basis. On the other hand, Savitzky-Golay filters can de-noise the signal by fitting successive sub-sets of adjacent scattered data points with a low-degree polynomial through the use of linear least squares. Thus, in this study, both the computational burden and the accuracy of Savitzky-Golay filters are compared to three well-established smoothing techniques in time domain, frequency domain and time-frequency analysis. The analyses are performed with both simulated and measured data from various conductor and dielectric targets having different sizes, geometries and material types. For the evaluation of the proposed technique in target classification, a cross-validation learning strategy [9] is used. The results show that, down to -10 dB SNR, the proposed method can effectively be used for target classification.
Generation of Simulated Data
The simulated signals used in this study are obtained by Matlab-based simulation of Hertz and Debye potentials as in [2]. The analytical solutions of these potentials are extracted for a plane wave excitation which is linearly polarized in the x-direction and propagates in the z-direction (Figure 1). The far-field scattered responses are computed over a bandwidth from zero to 12 GHz at 873 frequency sample points with a frequency resolution of 13.75 MHz. The responses are obtained at the φ = π/2 plane, with a radial distance of 72 cm from the sphere center, for twelve different Bistatic Aspect Angles (BAA), 180° − θ = θb = 10°, 20°, …, 180°. The resulting time signals have 1024 sample points with a total time span of 5.115 ns. The noisy scattered time-domain signals at all the aspect angles stated above are obtained at SNR levels of 10, 0 and -10 dB to be used for classifier design and performance testing. The second measured target set of model aircraft contains three conducting small-scale aircraft targets: a Boeing 747, a DC-10 and a Boeing 767. The models are scaled by 1/500 for the Boeing 747 and DC-10, but by 1/600 for the Boeing 767. The body, wing and tail lengths of each target are 14.5 cm, 12.7 cm and 4.8 cm for the Boeing 747; 12.48 cm, 12.54 cm and 5 cm for the Boeing 767; and 12.7 cm, 11.4 cm and 5.25 cm for the DC-10, respectively. The measurement setup is given in Figures 2 and 3, while examples of simulated and measured scattered signals are given in Figures 4 and 5.
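Corrupting the scattered signals at prescribed SNR levels (10, 0 and -10 dB) amounts to scaling white Gaussian noise to the signal power. A minimal NumPy sketch, with a made-up waveform standing in for a real scattered response:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Corrupt a time-domain signal with white Gaussian noise scaled
    so that the resulting SNR equals snr_db (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), signal.shape)
    return signal + noise

# Illustrative stand-in waveform: 1024 samples over 5.115 ns, as in the text
t = np.linspace(0.0, 5.115e-9, 1024)
clean = np.sin(2.0 * np.pi * 2e9 * t) * np.exp(-t / 2e-9)
noisy = add_noise_at_snr(clean, snr_db=-10, rng=np.random.default_rng(1))
```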
De-Noising Scattered Signals
When the scattered signals are corrupted by noise, their waveform structures change significantly. This distortion affects identification algorithms that rely on structural time-domain properties. In order to prevent the deterioration in performance, the scattered signal should be recovered. Previously, three well-established multi-resolution techniques were compared for the task of de-noising scattered signals at low SNR conditions [8]. The differences between those three methods, namely wavelet decomposition (WD), HRBFN approximation and matching pursuit (MP), are twofold. The first is the type of basis functions: wavelet and HRBFN use a single basis function and re-use it by changing its scale, amplitude and other parameters, while MP uses multiple bases. The second is the de-noising strategy: WD performs a fine-to-coarse approximation by eliminating high frequency components step by step, while HRBFN employs a coarse-to-fine strategy by utilizing bases with increasing frequency.
On the other hand, these techniques have a property in common: they all handle the complete time-domain signal at once, requiring the whole signal to be acquired before processing. However, there might be cases in which the time span of the scattered signal is long and the de-noising procedure requires real-time analysis. In such cases, it would be inefficient to wait for the complete signal acquisition. Thus, in this study, Savitzky-Golay filters, which allow processing and de-noising of sub-frames (i.e., time intervals), are employed.
Wavelet Decomposition (WD) with Gabor Basis
Discrete WD is proven to be very effective for analyzing non-stationary signals in various applications [15]. In this application, the Gabor wavelet is chosen as the mother wavelet since it is tunable to specific frequencies and allows detection of high frequency information together with noise filtering in a single step [14]. The Gabor wavelet is composed of a complex exponential multiplied by a Gaussian function g(x) = A exp(−(x − µx)² / (2σx²)), where µx is the center, σx is the width and A is the amplitude. A Gabor wavelet function can then be defined as ψ(x) = g(x) exp(jkx), where k determines the frequency of the complex exponential. Briefly, WD-based de-noising corresponds to the application of a group of Gabor filters followed by down-sampling as the levels of the decomposition proceed.
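A minimal NumPy sketch of such a Gabor wavelet, a Gaussian envelope multiplying a complex exponential (the parameter values are arbitrary illustrations):

```python
import numpy as np

def gabor_wavelet(x, mu=0.0, sigma=1.0, k=5.0, amplitude=1.0):
    """Complex Gabor wavelet: a Gaussian envelope (center mu, width sigma,
    amplitude A) multiplying a complex exponential of frequency k."""
    gauss = amplitude * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    return gauss * np.exp(1j * k * x)

x = np.linspace(-5.0, 5.0, 501)
psi = gabor_wavelet(x, mu=0.0, sigma=1.0, k=5.0)
```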
Hierarchical Radial Basis Function Network (HRBFN)
In contrast to the fine-to-coarse approach of WD, HRBFN uses several Gaussian units having decreasing variances and amplitudes, as given in (3)
Matching Pursuit (MP)
The MP uses multiple bases from an over-complete dictionary [12], since the scattered signals may contain abrupt changes that can be represented with multiple bases, especially under low SNR conditions. Since the quality of the approximation depends on how well the signal characteristics are matched by the bases, MP offers advantages over single-basis approximations. This property of the MP might be particularly useful when the noise (i.e., unwanted information) has a narrow frequency spectrum. In such cases, MP might be considered a superior alternative to the wavelets, which are more effective in the determination of isolated discontinuities. The MP is carried out by using trigonometric, exponential, and polynomial bases, which are selected through an extensive experimentation process.
Savitzky-Golay (SG)
Savitzky-Golay is a digital smoothing filter that can be used to increase the SNR of a corrupted signal without greatly distorting it. This is achieved by fitting successive time frames (or windows) with a polynomial of pre-defined degree via linear least squares. As in the case of scattered signals, when the data points are equally spaced, an analytical solution can be derived in the form of a single set of convolution coefficients. A demonstrative application result is given in Figure 6.
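As an illustration (not the authors' implementation), SciPy ships a Savitzky-Golay filter. The sketch below smooths a synthetic noisy "hill" with 31-sample windows and a 3rd-degree polynomial, and also shows the defining property that a polynomial of degree at most the fit order passes through the filter unchanged:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1024)
clean = np.exp(-40.0 * (t - 0.5) ** 2)          # smooth hill-like waveform
noisy = clean + rng.normal(0.0, 0.2, t.size)

# Fit successive 31-sample windows with a 3rd-degree polynomial
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)

# A degree-2 polynomial is reproduced exactly by a polyorder-3 fit
poly = 1.0 + t + t ** 2
poly_out = savgol_filter(poly, window_length=31, polyorder=3)
```

Because the least-squares fit over equally spaced samples reduces to fixed convolution coefficients, the filter can be applied frame by frame as the data arrive, which is the real-time advantage discussed above.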
Application & Results
The four de-noising techniques in Section III are applied to the targets described in Section II via a time domain classification method, which extracts a feature set that collects the structural differences of the scattered signal waveforms such as the position, amplitude, rise/fall times and the number of the hills/valleys after a triangularization process [3].
During the tests, a cross-validation technique, specifically 9-fold splitting [9], is employed in order to compare the results with earlier studies. Only the signals in the testing fold are corrupted with noise, such that the classifier is trained on the features of the high-SNR scattered signals. An MLP (Multi-Layer Perceptron) network having six hidden and four output neurons is trained by back-propagation with an adaptive learning rate [10]. The training goal is chosen to be 0.001.
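The evaluation protocol above (9-fold splitting in which only the test fold is corrupted by noise) can be sketched as follows; the fold construction is generic NumPy code, and the array sizes are illustrative:

```python
import numpy as np

def nine_fold_indices(n_samples, n_folds=9, seed=0):
    """Shuffle sample indices and split them into n_folds test folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_folds)

def corrupt_test_fold_only(features, fold, noise_sd, seed=0):
    """Return a copy of the features in which only the given test fold is
    corrupted, so the classifier always trains on high-SNR data."""
    rng = np.random.default_rng(seed)
    corrupted = features.copy()
    corrupted[fold] += rng.normal(0.0, noise_sd, corrupted[fold].shape)
    return corrupted

features = np.zeros((90, 5))                       # 90 samples, 5 features
folds = nine_fold_indices(len(features))
noisy_features = corrupt_test_fold_only(features, folds[0], noise_sd=1.0)
```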
Considering the results presented in Table I, de-noising has no significant contribution at 0 dB SNR, since the triangularization process can be performed accurately when the distortions on the scattered signal do not alter its waveform. However, at the lower SNR of -10 dB, the de-noising methods increase the performance of the classification results. In all cases, the highest improvement is achieved when the HRBFN method is used, while the SG filter performs second best.
When the dielectric rods, which represent the target set with different material properties, and the small-scale aircraft models, which represent the targets having different geometries, are considered, the contribution of the SG filter becomes more apparent. Although the results show that the HRBFN is superior to the others, the SG filter results are the closest to HRBFN in improving the classification performance (Table II).
Overall, the results for all target types (i.e., targets having different sizes, geometries and dielectric properties) show that, under low SNR conditions, employing de-noising techniques prior to feature extraction and classification significantly improves the correct classification rates. Although the coarse-to-fine approach of HRBFN seems to outperform the other methods, the SG filter performs close to HRBFN and provides superior performance compared to wavelets and MP. Together with the advantages of convergent decomposition and real-time processing, SG can be a useful alternative to HRBFN, especially when the scattered signal acquisition is long.
CGMD Platform: Integrated Web Servers for the Preparation, Running, and Analysis of Coarse-Grained Molecular Dynamics Simulations
Advances in coarse-grained molecular dynamics (CGMD) simulations have extended the use of computational studies on biological macromolecules and their complexes, as well as the interactions of membrane protein and lipid complexes, at a reduced level of representation, allowing longer and larger molecular dynamics simulations. Here, we present a computational platform dedicated to the preparation, running, and analysis of CGMD simulations. The platform is built on a completely revisited version of our Martini coarsE gRained MembrAne proteIn Dynamics (MERMAID) web server and integrates it with three other dedicated services. In its current version, the platform expands the existing implementation of the Martini force field for membrane proteins to also allow the simulation of soluble proteins using the Martini and SIRAH force fields. Moreover, it offers an automated protocol for carrying out the backmapping of the coarse-grained description of the system into an atomistic one.
Introduction
Cellular processes rely on biological macromolecules and on the interactions among them, which serve as building blocks for large-scale functional complexes. These often contain many copies of the same or different biomolecules that aggregate through long-range interactions into functional suprastructures. These macromolecular complexes perform their function either in a soluble environment or embedded in the cell membrane. Furthermore, the way in which membrane proteins and their macromolecular complexes associate with and within the lipid bilayer may affect the function of the protein itself. Thus, the availability of techniques that allow a systematic study of these lipid/membrane protein systems spanning large time and space scales is fundamental for understanding their functions.
Molecular dynamics (MD) simulations have emerged as a powerful tool to study biological systems at varying lengths and timescales. Indeed, all-atom (AA) molecular dynamics simulations
Results and Discussion
In this section, we describe the different features of the platform, as well as three application cases.
CGMD Platform Architecture
The platform is organized as four different stand-alone web servers. These include: (i) the new version of MERMAID; (ii) the 'water Martini' web server dedicated to the simulation of soluble proteins using the Martini force field; (iii) a completely new web server for the preparation and running of CGMD simulations using the SIRAH force field in explicit solvent; (iv) a server dedicated to the backmapping of Martini CG representation of the protein and/or some lipids to an atomistic-level description of the system.
The workflows for the different web servers are shown in Figures 1-3. Each of them can be divided into two different stages, which include the user interface (front-end) and the data retrieval, as well as the back end of the server.
Front-End
File upload. The users can interact with a renewed web client interface to submit a protein structure. In particular, either a custom PDB structure or a structure automatically downloaded from the OPM Server (https://opm.phar.umich.edu/) [19] (for membrane proteins), or RCSB PDB for soluble proteins, can be submitted ( Figure 1). The custom PDB file can be either an experimental or a modeled structure. If the experimental structure is an NMR-derived ensemble, the first conformer is automatically chosen using an in-house script. In the case of GPCRs, the user can employ any modeling program or server (for a review, see [17]). Alternatively, we offer a direct link from our GOMoDo web server [18] for the modeling of GPCRs. The models generated using methods other than GOMoDo have to be previously aligned along the Z-axis. This can be done manually or by using web servers like the PPM server (https://opm.phar.umich.edu/ppm_server). If the submitted structure contains missing atoms, they will be automatically added using an in-house script that implements the complete_pdb function of Modeller 9.25 [20]. At this point, the user is asked to register in the Modeller webpage (https://salilab.org/modeller/) and to add the corresponding Modeller license key in the appropriate field; Interactive preparation. This interface allows the users to choose all the parameters for the simulation. The server suggests some default values, but expert users have the freedom to change them according to their needs or directly upload precompiled parameter files. Two versions of the Martini force field are offered, i.e., Martini22 and Elnedyn22. The users are then funneled through different panels. 
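The first-conformer selection for NMR-derived ensembles is performed by an in-house script; a minimal sketch of the idea (not the server's actual script) is:

```python
def first_model(pdb_lines):
    """Keep only the first MODEL of a multi-model (e.g. NMR-derived) PDB;
    files without MODEL records are returned unchanged."""
    if not any(line.startswith("MODEL") for line in pdb_lines):
        return list(pdb_lines)
    kept = []
    for line in pdb_lines:
        if line.startswith("MODEL"):
            continue                      # drop the MODEL record itself
        if line.startswith("ENDMDL"):
            break                         # first conformer is complete
        kept.append(line)
    return kept

ensemble = [
    "HEADER    NMR ensemble\n",
    "MODEL        1\n",
    "ATOM      1  N   MET A   1\n",
    "ENDMDL\n",
    "MODEL        2\n",
    "ATOM      1  N   MET A   1\n",
    "ENDMDL\n",
]
conformer = first_model(ensemble)
```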
Each panel allows the choice of a large variety of parameters, including those related to the martinize.py [21] (Version 2.6) (charges of termini and chain breaks, disulfide bridges, and position restraints) and insane.py [22] (PBC type, distance between periodic images, box dimensions, lipid type and abundance in both upper and lower leaflets, etc.) python scripts. Counterions can also be chosen (Na + , Ca 2+ , and choline (NC3 + ) as positive ions, Cl − as negative ion) to neutralize the charge of the simulated system. Moreover, expert users can directly upload their own mdp files with custom-made parameters; Membrane selection. The users can choose from among 57 already parametrized lipids to generate a custom-made membrane composition. On the other hand, the users can select one among several physiological-like membranes, where the known lipidic composition is already set up. The composition of the default membrane offered is the Golgi apparatus membrane [23]. Some of the offered membranes include: (a) Golgi membrane with a composition of: CHOL:18%, POPC:36%, POPE:21%, POSM:6%, POPS:6% and POPI:12%; (b) endoplasmic reticulum membrane: CHOL:8%, POPC:54%, POPE:20% and POPI:11%; (c) plasma membrane: CHOL:34%, POPC:23%, POPE:11%, POSM:17% and POPS:8% and (d) mitochondrial membrane: CDL:22%, POPC:37%, POPE:31% and POPI:6% [23]. Moreover, the user can "model" a customized membrane, varying the concentration and type of lipids. The choice of membrane composition can be done either for both leaflets of the membrane or considering the inner and external leaflets independently. The users have access to all the generated input and output files for each CGMD process at any time. Data can be accessed either from the bookmarked web link or directly from the server user directory by providing the required credentials in the MERMAID search bar, as indicated in the 'on-the-fly' documentation; Output and data retrieval. 
Results can be viewed and downloaded for 2 weeks (renewal possible) by bookmarking the link, or alternatively by using the corresponding IDs. The full output of the preparation can be downloaded as a compressed archive file including the input, output, and log files of all preparation and simulation steps. The downloaded files can be used to continue the CGMD simulations locally. Experienced users have the possibility of downloading the prepared system and tuning the parameters before running the simulation on their local computer. An array of trajectory analyses is available. These include the calculation of the Root Mean Square Deviation (RMSD), density, pressure, temperature, and gyration radius, among others. The corresponding plots are also visualized. This new version of MERMAID also allows for displaying the simulation run directly on the browser at three different speeds using the NGL Viewer [24].
• Back-End. After submitting all files and parameters, the web server creates a local user directory where all operations will be performed. During the initial setup of the system, the atomistic protein structure is converted into a CG representation with the help of the martinize.py python script [21].
In the case of membrane proteins, the CG structure is then embedded into a user-defined CG lipid membrane with the help of the python script insane.py [22]. Subsequently, the CG simulations are run within our servers. A typical CGMD protocol consists of four phases: Minimization run; Equilibration runs in two different ensembles, namely canonical ensemble (NVT) and isobaric-isothermal ensemble (NPT); Production run continued in an NPT ensemble; Analysis of all the trajectories produced during the simulation.
The suggested default parameters are the ones recommended by Martini developers, based on extensive testing [25]: for example, it is advisable to treat coulomb interactions using a reaction-field, as this gives slightly better results at a negligible extra computational cost. A straight cutoff can be used in Martini simulations, with a cutoff distance of 1.1 nm. Good temperature control can be achieved with the velocity rescale (V-rescale) thermostat, using a coupling time constant of at least 0.5 ps. For bilayer systems, the pressure coupling should be semi-isotropic. However, not all systems have been tested and it is recommended that the users perform their own tests.
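A hypothetical GROMACS .mdp fragment reflecting these recommendations (parameter names follow standard GROMACS syntax; the barostat choice is an assumption, since the text only specifies semi-isotropic coupling):

```ini
; Non-bonded treatment: reaction-field electrostatics with a 1.1 nm straight cutoff
cutoff-scheme   = Verlet
coulombtype     = reaction-field
rcoulomb        = 1.1
vdw-type        = cut-off
rvdw            = 1.1

; Temperature: velocity-rescale thermostat, coupling constant of at least 0.5 ps
tcoupl          = v-rescale
tau-t           = 1.0

; Pressure: semi-isotropic coupling for bilayer systems
pcoupl          = parrinello-rahman   ; assumed barostat, not specified in the text
pcoupltype      = semiisotropic
```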
• Back-End. After submitting all files and parameters, the web server creates a local user directory where all operations will be performed. During the initial setup of the system, the atomistic protein structure is converted into a CG representation with the help of the martinize.py python script [21]. In the case of membrane proteins, the CG structure is then embedded into a userdefined CG lipid membrane with the help of the python script insane.py [22]. Subsequently, the CG simulations are run within our servers. A typical CGMD protocol consists of four phases: o Analysis of all the trajectories produced during the simulation.
The suggested default parameters are the ones recommended by Martini developers, based on extensive testing [25]: for example, it is advisable to treat coulomb interactions using a reaction-field, as this gives slightly better results at a negligible extra computational cost. A straight cutoff can be used in Martini simulations, with a cutoff distance of 1.1 nm. Good temperature control can be achieved with the velocity rescale (V-rescale) thermostat, using a coupling time constant of at least 0.5 ps. For bilayer systems, the pressure coupling should be semi-isotropic. However, not all systems have been tested and it is recommended that the users perform their own tests. • Front-End. With this web server, the users have the possibility of preparing, running, and analyzing CG simulations using the SIRAH 2.2 force field [26] ( Figure 2). This feature allows for using a completely independent approach for running CGMD simulations.
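The default parameters above map directly onto a small .mdp fragment. The sketch below (not the server's actual code) writes such a fragment; the key names are standard GROMACS .mdp options, while the 310 K reference temperature and the 1.0 ps coupling constant are illustrative assumptions consistent with the "at least 0.5 ps" guidance.

```python
# Sketch: emit a minimal Martini-style .mdp fragment encoding the defaults
# described in the text: reaction-field electrostatics, 1.1 nm cutoffs,
# V-rescale thermostat, semi-isotropic pressure coupling for bilayers.
# ref-t = 310 K and tau-t = 1.0 ps are illustrative choices, not the
# platform's fixed values.

def martini_mdp(membrane=True, tau_t_ps=1.0):
    if tau_t_ps < 0.5:
        raise ValueError("Martini guidance: thermostat coupling >= 0.5 ps")
    params = {
        "cutoff-scheme": "Verlet",
        "coulombtype": "reaction-field",
        "rcoulomb": "1.1",
        "vdw-type": "cut-off",
        "rvdw": "1.1",
        "tcoupl": "v-rescale",
        "tau-t": str(tau_t_ps),
        "ref-t": "310",
        "pcoupltype": "semiisotropic" if membrane else "isotropic",
    }
    return "\n".join(f"{key:<14}= {value}" for key, value in params.items())
```

For a soluble protein in water, `martini_mdp(membrane=False)` switches the pressure coupling back to isotropic, matching the bilayer-specific advice above.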
• Front-End. With this web server, the users have the possibility of preparing, running, and analyzing CG simulations using the SIRAH 2.2 force field [26] (Figure 2). This feature allows for using a completely independent approach for running CGMD simulations.
o File upload. The users must provide a PDB file containing the correct protonation state for each residue, in order to map the SIRAH CG beads at the proper pH. Since hydrogen nomenclature is not entirely standardized across different software packages, we strongly suggest the use of the pdb2pqr web server [27] (http://server.poissonboltzmann.org/); o Notice that, if the system contains disulfide bridges, the cysteine residue names involved in disulfide bridges have to be manually edited from "CYS" to "CYX" in the PDB file provided as input. Similarly, to simulate a cysteine in the thiolate state, the residue name must be changed from "CYS" to "CYM". Protonation states of aspartates and glutamates can be set to neutral by editing the residue name to ASH or GLH, respectively; o Simulation Parameters. The server shows the precompiled MD parameters (mdp) files for the simulation. In this case, the parameters are fixed and visible in read-only mode; o Running. After the submission of the file, the user's job is queued and its status can be monitored at any time during the simulation; o Output and data retrieval. As for the MERMAID web server, the results are stored for 2 weeks. The full output of the preparation can be downloaded as a compressed archive file including the input, output, topologies, and log files of all the preparation and simulation steps. The downloaded files can be used to continue the CGMD simulations locally. The offered analysis tools are the same as for the water Martini case. The server can also display the simulation run directly on the browser at three different speeds (1×, 2×, and 5×) using NGL Viewer [24].
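The manual residue renaming described above (CYS→CYX for disulfide-bridged cysteines, CYS→CYM for thiolates, ASP→ASH / GLU→GLH for neutral acids) is mechanical enough to script. The sketch below edits the residue-name column of PDB ATOM/HETATM records; the chain and residue numbers in the usage example are hypothetical.

```python
# Sketch: apply protonation-state renamings to PDB records by rewriting the
# residue-name columns (18-20, i.e. string indices 17:20). Chain ID sits at
# column 22 (index 21) and the residue number at columns 23-26 (22:26),
# following the standard PDB fixed-column format.

def rename_residues(pdb_lines, renames):
    """renames maps (chain_id, residue_number) -> new 3-letter residue name."""
    out = []
    for line in pdb_lines:
        if line.startswith(("ATOM", "HETATM")):
            chain = line[21]
            resnum = int(line[22:26])
            new_name = renames.get((chain, resnum))
            if new_name is not None:
                line = line[:17] + f"{new_name:>3}" + line[20:]
        out.append(line)
    return out

# Hypothetical example: mark residue 12 of chain A as a disulfide cysteine.
atom = "ATOM      1  SG  CYS A  12      11.104  13.207   2.100"
fixed = rename_residues([atom], {("A", 12): "CYX"})
```

After this, `fixed[0]` carries `CYX` in the residue-name column while every other field is untouched, which is exactly the edit the server expects in the uploaded file.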
• Back-end. The protonated model is converted to CG using SIRAH Tools [28] and solvated using pre-stabilized boxes of the WatFour (WT4) CG water model and electrolytes [29]. Each run can be divided into two parts: solvation and addition of counterions, and the five molecular dynamics steps (two minimization steps, two equilibration steps, and one production step). The mdp files are displayed during the preparation. These parameters were extensively tested and should allow a smooth and fully automated preparation of the system in explicit solvent.
Backmapping Web Server
The implicit loss of resolution of CG representations is a limiting factor when trying to interpret the details of the simulations. Indeed, atomistic level details, such as specific contacts, are the key for understanding molecular recognition and specific intra/intermolecular details. The process of retrieving atomistic details from a CG representation is known as reverse transformation, inverse mapping, or backmapping. There are several different backmapping protocols that follow two different steps, i.e., generation of an atomistic structure based on the CG coordinates, and relaxation step of the generated atomistic structure [30]. Here, we implement the backward program [31] (Figure 3).
• Front-End. The backmapping procedure can be reached from an independent menu. The users can backmap a protein in water from the Martini force field to the Amber [32], Charmm36 [33] or Gromos [34] force fields. For membrane systems, the following lipids are supported: CHOL, DOPC, DOPE, DOPG, DOPS, DPPC, POPC, POPE, and POPG. Slipids force field [35] topologies are used, and consequently the associated protein is backmapped to the Amber force field [32]; • Back-end. MERMAID backmapping allows reconstructing the protein from a CG to an AA representation using the backward program [31]. The latter consists of three scripts and a number of CG-to-atomistic mapping definition files. For a description of the backmapping procedure, see [31].
Documentation
Extensive documentation for each of the steps and parameters is offered through dynamic pop-ups generated using the Bootstrap Tour tools (https://bootstraptour.com/) locally installed on our machine. This dynamic, always-accessible guide documents the users "on the fly" throughout the entire process, avoiding the need to open new pages or scroll through long manual pages.
The tutorial with examples (see Section 2.3) offers pre-calculated systems that can be easily accessed by the user at any time. These cases show all the possible calculations that can be performed through the CGMD platform.
Application Case: The Martini Force Field
As an application case, we present the preparation and running of the Rhodobacter sphaeroides translocator protein (RsTSPO), also known as the peripheral benzodiazepine receptor (PDB accession code: 4UC3). TSPO is an 18 kDa membrane protein conserved across the three domains of life [36] with a vast spectrum of roles, ranging from an environmental sensor to a functional bioregulator of apoptosis, autophagy, and inflammation, along with cholesterol and porphyrin transport [37]. Recently, we presented a CGMD study of this receptor aimed at characterizing the impact of cholesterol on the formation of multimeric assemblies [38]. In that paper, we claimed that cholesterol affects the size and rigidity of the bacterial translocator proteins to a lesser extent than the mammalian protein. Moreover, an overabundance of cholesterol causes a decrease in the number of contacts at the subunit-subunit interface of the RsTSPO and Bacillus cereus TSPO (BcTSPO) systems. For this reason, the role of the sterol could be potentially significant, although the study of these bacterial proteins is not conclusive. To illustrate the capability of this server, we offer the users the possibility of preparing, running, and analyzing the bacterial TSPO simulation. This application case has also been run, and it is freely accessible here: https://molsim.sci.univr.it/mermaid/public_html/membrane/mrstspo1/run/analysis.php. This system was prepared following the workflow presented in Figure 1 and simulated in a 70% POPG and 30% CHOL model membrane (offered by our CGMD Platform). A total of 305 Na+ and 307 Cl− ions were added to neutralize the system. A 1.5 nm distance between periodic images was applied. The simulation was conducted in four steps (minimization, NVT, NPT, and production) using default parameters. After ~20 ns, the structure was equilibrated, as shown by the RMSD plot. The radius of gyration plot reflects how the protein remained compact during the simulation.
Additional plots provided on the analysis page offer tools to assess the behavior of the simulation, which was stable in all the ensembles. Indeed, taking all the analyses together, the user obtains a system prepared to be run in a similar fashion to the one in [38]. During and after the simulation, the users are constantly informed of the status of their calculation through a progress bar positioned at the top of each analysis page, which is always accessible. The backmapping procedure was used at the end of the MD run, following the steps depicted in the workflow in Figure 3. The final backmapped .gro structure is freely accessible in our public Github repository: https://github.com/JavaScript92/CGMD_Platform. A "Demo Case" button is available on the preparation page to allow automatic upload of the PDB file and running of this application case. The reference input file can be downloaded from the same repository.
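The "equilibrated after ~20 ns" judgement above is read off the RMSD plot. A minimal, illustrative way to automate it is to parse the Gromacs .xvg output of gmx rms and find the first time from which the curve stays near its final plateau; the tolerance and window below are arbitrary examples, not values used by the platform.

```python
# Sketch: read a Gromacs .xvg time series (lines starting with '#' or '@'
# are comments/metadata) and detect a crude equilibration time: the first
# time from which all values stay within `tol` of the mean of the second
# half of the series.

def read_xvg(lines):
    data = []
    for line in lines:
        if not line.strip() or line.lstrip().startswith(("#", "@")):
            continue
        cols = line.split()
        data.append((float(cols[0]), float(cols[1])))
    return data

def equilibration_time(series, tol=0.05):
    tail = [y for _, y in series[len(series) // 2:]]
    plateau = sum(tail) / len(tail)
    t_eq = None
    for t, y in series:
        if abs(y - plateau) > tol:
            t_eq = None          # still drifting: reset the candidate
        elif t_eq is None:
            t_eq = t             # start of a candidate plateau
    return t_eq
```

On a toy RMSD trace rising toward 0.3 nm, the function returns the first sample of the flat region, mirroring the visual reading of the plot.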
Application Case: The Water Martini
As an application case, we present the preparation and running of the structure of the orthorhombic form of hen egg-white lysozyme at 1.5 Å resolution (PDB accession code: 1AKI), which is accessible here: https://molsim.sci.univr.it/mermaid/public_html/water/wusecase/run/analysis.php. This system was prepared following the workflow in Figure 1 and simulated in a 15 nm³ cubic box. 305 Na+ and 313 Cl− ions were added to neutralize the system. A 1.5 nm distance between periodic images was applied. The simulation was conducted in four steps (minimization, NVT, NPT, and production) using default parameters. After ~2 ns, the structure was equilibrated, as shown by the RMSD plot. In addition, the expected trend in the radius of gyration plot reflects how the protein remained compact during the simulation. The charts within the analysis page allowed us to assess the stability and convergence of the first 100 ns of simulation. These results indicate that the simulation is ready to begin a production run. For the sake of completeness, to provide an estimate of the time gained between an AA and a Martini CG simulation carried out on the same system (1AKI) in a water environment: the CG representation, consisting of 29,726 beads, is ~100 times faster than the AA simulation containing 90,273 atoms. Both AA and CG simulations were run using the same computational resources (one node and 14 OpenMP threads).
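The radius of gyration used above as a compactness measure is reported by gmx gyrate; for reference, it can be computed directly from particle masses and coordinates as Rg = sqrt(Σᵢ mᵢ |rᵢ − r_com|² / Σᵢ mᵢ). A toy sketch of that formula (the particles in the test are made up, not lysozyme):

```python
import math

# Sketch: mass-weighted radius of gyration for particles given as
# (mass, x, y, z) tuples. First compute the center of mass, then the
# mass-weighted mean squared distance from it.

def radius_of_gyration(particles):
    total_mass = sum(m for m, *_ in particles)
    com = [sum(m * r[k] for m, *r in particles) / total_mass for k in range(3)]
    weighted = sum(
        m * sum((r[k] - com[k]) ** 2 for k in range(3))
        for m, *r in particles
    )
    return math.sqrt(weighted / total_mass)
```

A roughly constant Rg over the trajectory, as in the plot described above, indicates that the protein neither unfolds nor collapses during the run.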
Application Case: The SIRAH Force Field
As an application case of the SIRAH force field [11], we present and run the binary complex of the restriction endonuclease HinP1I with its cognate DNA (PDB accession code: 2FL3), following the workflow in Figure 2. The results for this demo case can be freely accessed here: https://molsim.sci.univr.it/mermaid/public_html/sirah/s2FL3demo/run/analysis.php. This system was prepared as described in Section 2.1.2 and simulated in explicit solvent. The octahedron box was neutralized by adding CG K+ and Cl− ions, using the gmx genion tool, providing a physiological ionic strength of 0.15 M. The simulation was performed in five steps: two minimization runs (5000 steps each), two equilibration runs (5 and 25 ps, respectively), and one production run (100 ns). During and after the MD simulation, the users are constantly informed of the status of their simulation through the progress bar positioned at the top of each analysis page, which is always accessible during the run. Within our CGMD Platform, several analyses can be carried out, including 20 dynamic plots (temperature, pressure, potential energy, etc.), a 3D visualization for each MD step, and an "on-the-fly" 3D visualization of the production ensemble. The documentation explains how to retrieve the corresponding jobs. At the end of the simulation, a zip archive containing all the necessary molecular structures (in gro format), topologies, trajectories, and calculated properties (in xvg format) can be downloaded from the analysis page. Finally, the users can run this example by uploading the file 2fl3.pqr available here: https://github.com/JavaScript92/CGMD_Platform.
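The neutralization step above (gmx genion at 0.15 M ionic strength) amounts to simple bookkeeping: the number of ion pairs follows from the box volume and target concentration via N = c·N_A·V, plus extra single ions to offset the solute's net charge. A back-of-the-envelope sketch (the box volume and solute charge in the test are made-up examples, not the 2FL3 system's values):

```python
# Sketch: estimate how many cation/anion pairs a target ionic strength
# implies for a given box, then add single counterions to neutralize the
# solute. Volume in nm^3, concentration in mol/L; 1 nm^3 = 1e-24 L.

AVOGADRO = 6.02214076e23

def ion_counts(box_volume_nm3, conc_mol_l, solute_charge=0):
    volume_l = box_volume_nm3 * 1e-24
    pairs = round(conc_mol_l * AVOGADRO * volume_l)
    n_plus, n_minus = pairs, pairs
    if solute_charge > 0:
        n_minus += solute_charge        # extra anions for a positive solute
    else:
        n_plus += -solute_charge        # extra cations for a negative solute
    return n_plus, n_minus
```

For a 1000 nm³ box at 0.15 M this gives 90 pairs; a solute charge of −8 then adds 8 extra cations, the same logic gmx genion applies when asked to neutralize at a given concentration.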
Usage Statistics
The previous version of the MERMAID web server has been extensively used by different research groups around the world. As of today (31 October 2020), over 100 calculations have been performed on our web server, including Covid-19 proteins, ATPases, and several other membrane receptors. For the current version of the CGMD Platform, our group has run several test cases for each of the supported workflows. For illustration, some application cases are reported here: https://molsim.sci.univr.it/mermaid/page/applicationCases.php. Other test files are freely accessible from our Github repository: https://github.com/JavaScript92/CGMD_Platform.
The current version of the CGMD Platform implements: (i) a local version of Gromacs 2019.3 [39] to perform MD simulations and carry out the trajectory analysis as well as the backmapping procedure; (ii) a locally installed version of the Dictionary of Protein Secondary Structure (DSSP) [40] to get the geometrical properties of the secondary structure required for running the Martini force field version 2.2; (iii) all the programs needed for preparing the files for the Martini force field, including the martinize.py script [21] for coarse-graining the protein structures, and the insane.py script [22] for embedding the protein in the membrane; (iv) version 2.2 of the SIRAH force field along with SirahTools [28] to prepare a CG version of the solute (downloaded from http://www.sirahff.com); (v) Plotly for data plotting; (vi) the NGL viewer [24] to provide an interactive 3D molecular viewer of the molecular dynamics simulation embedded in the web page. Moreover, several Gromacs packages were employed: the gmx energy tool was used to generate the temperature, pressure, total and potential energy, volume, and enthalpy plots; the gmx density tool was used to calculate the density of all components of the system; the gmx trjconv tool was used to convert the trajectories to gro format and remove water as well as lipids from the structures displayed on the browser; gmx rms and gmx gyrate were applied for the calculation of the RMSD of the protein (the same group was used for the least-squares fit, using the output structure of the NPT equilibration as a reference) and the radius of gyration of the protein, respectively.
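The Gromacs analysis tools listed above can be chained from a script via subprocess. The sketch below only builds the argument lists; the file names are placeholders, and while -f/-s/-o are the usual gmx input/reference/output conventions, users should check them against their local Gromacs version before running anything.

```python
import subprocess  # used only when actually executing a command

# Sketch: assemble the gmx analysis invocations described in the text as
# argument lists ready for subprocess.run. File names are placeholders.

def analysis_commands(edr="prod.edr", tpr="npt.tpr", xtc="prod.xtc"):
    return {
        "energy":  ["gmx", "energy", "-f", edr, "-o", "energy.xvg"],
        "density": ["gmx", "density", "-f", xtc, "-s", tpr, "-o", "density.xvg"],
        "rmsd":    ["gmx", "rms", "-s", tpr, "-f", xtc, "-o", "rmsd.xvg"],
        "gyrate":  ["gmx", "gyrate", "-s", tpr, "-f", xtc, "-o", "gyrate.xvg"],
    }

# Executing one of them requires a local Gromacs installation, e.g.:
# subprocess.run(analysis_commands()["rmsd"], check=True)
```

Keeping the commands as lists (rather than shell strings) avoids quoting issues and makes each analysis step easy to log or rerun individually.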
Finally, since GPCRs share the same topology, the modelled proteins coming from GOMoDo are structurally aligned against the Nociceptin receptor in its inactive state, which has been used as a reference structure (RCSB PDB code: 4EA3, downloaded from the OPM server), using a locally installed version of LovoAlign [41] to help obtain a proper orientation in the membrane.
Conclusions
Understanding the mechanisms and dynamics underlying the function of large macromolecular complexes and membrane proteins embedded in their physiological lipidic environments has always been a challenge for researchers, as the time and length scales needed to gain a comprehensive understanding are very difficult to access.
CGMD techniques have been shown to be useful for gaining deeper insights into the mechanisms underlying the function of these systems at time/space-scales that cannot be studied at an atomistic resolution. Due to increasing interest and the availability of more powerful algorithms and hardware, in recent years several web servers have been developed with the aim of preparing and/or performing MD simulations at different resolution levels, e.g., CHARMM-GUI [42], CABS-flex [43], locPREFMD [44], MDWeb [45], PREFMD [46], ProBLM [47], SMOG [48], UNRES [49], Vienna-PTM [50], and the MM/CG web server [51]. In particular, the existing web servers provide a web-based graphical user interface to generate various molecular simulation systems and input files, facilitating the usage of common simulation techniques. They are widely used to prepare the input files needed to run on local clusters and, in some cases, to run short simulations. Nevertheless, they are not specialized in membrane proteins, and they allow neither the selection of different membranes nor the running of CGMD simulations. Our newly developed computational platform implements, together with the previously described features, several novel elements to aid in the preparation and running of CG simulations, such as the availability of a backmapping service and the possibility of choosing between the Martini and SIRAH CG force fields for carrying out the system preparation. Moreover, the user is constantly guided through the entire procedure by a series of dynamic help pop-ups on each of the pages. This allows a more direct and self-aware interaction with the server. In addition, the system offers a set of analyses that can be carried out on the fly to follow the evolution of fundamental features, such as temperature, pressure, and different energy terms, during the simulation.
Last but not least, it offers the possibility of choosing from among 57 different lipids for custom membrane building, or the availability of ready-to-use realistic models for the most common types of physiological membranes.
Membrane composition and membrane protein oligomerization play a biologically relevant role in cell function. The new platform could prospectively be used for efficiently setting up CGMD simulations of the same protein embedded in different lipidic environments, as well as comparative studies of membrane proteins in different multimeric states, by capitalizing on the low computational cost of CGMD.
To summarize, our novel CGMD Platform offers a user-friendly interface available to all users without any login requirement. The CGMD Platform is the only online protocol specifically designed for preparing and running CGMD simulations: it guides the user step-by-step, enormously reducing the usually time-consuming system set-up step, even for non-expert users. It supports the preparation of complex systems, such as membrane protein/lipid assemblies, easing the laborious process of building membrane assemblies while still allowing them to be completely customizable. In addition, it offers a wide spectrum of highly specialized tools for simulation analyses.
Sox enters the picture
The discovery of a gene that regulates two segmentation mechanisms in spider embryos is fueling the ongoing debate about the evolution of this crucial developmental process.
It is usually assumed that humans have little in common with arthropods, such as insects and spiders, or annelids such as worms. However, the body plans of vertebrates, arthropods and annelids share a striking feature: the body is subdivided into distinct segments (pink dot, Figure 1A), and scientists have been asking "is segmentation evolutionarily conserved?" for more than a century.
In vertebrate embryos, segments are added progressively along the body axis from the anterior (head) to the posterior (tail). This process involves oscillating patterns of gene expression in the posterior of the organism, including widely conserved genes that code for proteins like caudal and components of the well-known Notch and Wnt signaling pathways.
In the vinegar fly Drosophila melanogaster, on the other hand, the segments are formed almost simultaneously early during embryonic development following a complex cascade of gene interactions, in a process known as long-germband segmentation. Maternally-provided factors, such as caudal, induce gap genes, which subsequently control the expression of pair-rule genes and, finally, segment-polarity genes (including a Wnt homolog). The end result is the formation of a molecular pre-pattern for segmentation.
For many years these morphological and genetic differences were considered as strong evidence that segmentation does not have a common origin. However, other arthropods employ an approach called short-germband segmentation that is, at least morphologically, more similar to the approach taken by vertebrates. In short-germband segmentation a small number of anterior segments are defined more or less simultaneously by gap (or gap-like) genes, while the remaining segments are added sequentially to the posterior by the 'segment addition zone' ( Figure 1B).
A revolution in the field of evolutionary developmental biology was triggered in 2003 when researchers demonstrated that the segment addition zone in spider embryos depends on Notch signaling, as it does in vertebrates (Stollewerk et al., 2003). Additional comparative studies in spiders, insects and other arthropods revealed that short-germband segmentation with a segment addition zone depending on caudal, Notch and Wnt signaling, probably represents the ancestral mode of patterning in arthropods, whereas the genetic cascade in long-germband insects represents a derived state.
Recent studies, however, revealed even deeper similarities than previously thought among the complex genetic mechanisms underlying long- and short-germband segmentation. These included a more detailed analysis of gene expression dynamics and regulation in D. melanogaster and the flour beetle Tribolium castaneum (Clark and Akam, 2016; Clark and Peel, 2018; Zhu et al., 2017) and the discovery of oscillatory gene expression dynamics in long-germband insects (Verd et al., 2018). Now, in eLife, Alistair McGregor and co-workers at Oxford Brookes University and Cambridge University - including Christian Paese of Oxford Brookes as first author - report another piece of evidence for this deeper level of conservation of segmentation by showing that a gene called Sox21b-1 is involved in segmentation in the spider Parasteatoda tepidariorum (Paese et al., 2018). P. tepidariorum has emerged as a model system in which to study the influences of whole genome duplications on development in arthropods (Schwager et al., 2017).
Copyright Kaufholz and Turetzek. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Sox21b-1 belongs to the B group of the Sox family of transcription factors, which are present in all metazoan species. In arthropods the SoxB group is comprised of SoxNeuro, Sox21a, Sox21b and Dichaete. Although the evolution of arthropod SoxB genes is not fully resolved yet, Sox21b and Dichaete are closely related and probably arose by duplication in the last common ancestor of the arthropods, while in onychophorans (velvet worms), the sister group to arthropods, only one Dichaete/Sox21b class gene seems to be present (Janssen et al., 2018).
In the current study Paese et al. demonstrate that Sox21b underwent a second round of duplication in spiders, probably during a whole genome duplication event (Schwager et al., 2017), giving rise to two paralogs: Sox21b-1 and Sox21b-2. Using RNA interference, Paese et al. found that a knockdown of the Sox21b-1 paralog resulted in the loss of both the leg-bearing prosomal segments and the opisthosomal segments, including loss of the entire segment addition zone (Figure 1B). In severe cases, even proper formation of the germband itself was disrupted. The researchers also explored where Sox21b-1 is located in the genetic cascade that controls segmentation in the spider (Schönauer et al., 2016). They found that, in addition to acting as a gap-like gene in the prosoma, Sox21b-1 also regulates the expression of another gap-like gene and of many genes (including caudal and components of the Wnt and Notch signaling pathways) that are required to set up the segment addition zone in the opisthosoma.
Figure 1. (A) (Peel and Akam, 2003). Although vertebrates, arthropods, and annelids all have a segmented anterior-posterior body axis (pink dots), they are all closely related to phyla that are not segmented. Some conserved gene families (including Sox and Wnt) were already present before the emergence of the bilaterians (green dot) and are involved in many important developmental processes: some are also known to be involved in segmentation in various animals. However, the evolution of segmentation is still under debate. It could have evolved once in the 'urbilateria' (black dot), the last common ancestor of all bilaterians, or twice - at the base of the vertebrates and independently at the base of the protostomes (grey dots). Both of these scenarios would suggest a subsequent loss of segmentation in all closely related unsegmented phyla. The third scenario is that segmentation independently evolved three times - at the base of the vertebrates, arthropods, and annelids (pink dots). (B) Schematic representation of the germband and related adult tissues of the spider Parasteatoda tepidariorum (adapted from Paese et al., 2018). The body of a spider embryo is comprised of the prosoma (including the head (grey) and thorax (violet and indigo)) and the opisthosoma (green). The formation of the leg-bearing prosomal segments (L1-L4) depends on the gap-like functions of genes such as Distal-less (Dll), hairy (h), hunchback (hb) and, as now shown by Paese et al., Sox21b-1. The opisthosomal segments are added sequentially by a segment addition zone (SAZ, dark green) controlled by a complex gene regulatory network (dark green box), probably induced by Sox21b-1, that contains caudal (cad) and components of the Wnt (Wnt8) and Notch (Delta) signaling pathways. Ch: Cheliceral segment. Pp: Pedipalpal segment. Dl: Delta.
These findings are striking for many reasons. Sox genes are known to be involved in segmentation, and to interact with Wnt genes, in both insects and vertebrates (Clark and Peel, 2018; Javali et al., 2017; Mukherjee et al., 2000; Russell et al., 1996). Moreover, both the Sox and Wnt gene families belong to the ancient gene repertoire of all bilaterians and have experienced multiple duplication events during evolution. Dichaete is known to control the expression of pair-rule genes in D. melanogaster (Russell et al., 1996), but relatively little is known about Sox21b. More recent studies in T. castaneum demonstrated that both Dichaete and Sox21b are expressed in the segment addition zone (Clark and Peel, 2018; Janssen et al., 2018), which suggests that the role of SoxB genes in segmentation is conserved between long- and short-germband species.
Expression of SoxB and Wnt genes has also been found in the most posterior part of velvet worm embryos (Hogvall et al., 2014; Janssen et al., 2018). Together with the latest results from the spider, this provides further support for a conserved genetic basis (involving homologs of SoxB and Wnt genes) for the different segmentation modes of arthropods.
The astonishing results reported by Paese et al. highlight once again how far we are from a complete understanding of segmentation. They also underline the need for further comparative studies in various species, focusing on conserved gene families, especially after duplication events, to determine whether segmentation evolved anciently or independently in the three segmented bilaterian lineages (pink dots, Figure 1A). Techniques like RNA interference, CRISPR/Cas9 and new sequencing methods, together with an increasing number of genomes and transcriptomes available for emerging model organisms, will hopefully help to answer this question.
FEA-Based Ultrasonic Focusing Method in Anisotropic Media for Phased Array Systems
Traditional ultrasonic imaging methods have a low accuracy in the localization of defects in austenitic welds because the anisotropy and inhomogeneity of the welds distort the ultrasonic wave propagation paths. The distribution of the grain orientation in the welds influences the ultrasonic wave velocity and the ultrasonic wave propagation paths. To overcome this issue, a finite element analysis (FEA)-based ultrasonic imaging methodology for austenitic welds is proposed in this study. The proposed ultrasonic imaging method uses a wave propagation database to synthetically focus the inter-element signals recorded with a phased array system using a delay-and-sum strategy. The wave propagation database was constructed using FEA, considering the grain orientation distribution and the anisotropic elastic constants in the welds. The grain orientation was extracted from a macrograph obtained from a dissimilar metal weld specimen, after which the elastic constants were optimized using FEA with the grain orientation information. FEA was performed to calculate a full matrix of time-domain signals for all combinations of the transmitting and receiving elements in the phased array system. The proposed approach was assessed on an FEA-simulated model with an embedded defect. The simulation results proved that the newly proposed ultrasonic imaging method can be used for defect localization in austenitic welds.
Introduction
Dissimilar metal welds (DMWs) of ferritic steel and austenitic stainless steel are widely used in nuclear power plants [1], where primary water stress corrosion cracks have been found in DMW areas between the pressure vessels and piping [2]. Therefore, it is necessary to ensure the structural integrity of structures by using nondestructive evaluation (NDE). Recently, the use of ultrasonic phased array systems has drastically increased in the field of NDE. The advantage of using ultrasonic phased array systems is that they provide two-dimensional B-scan images, which can help analyze the defect sizes and locations.
The probability of defect detection is relatively low when ultrasonic nondestructive testing is applied to austenitic weldments. During the welding process, coarse columnar grains grow [3], and the microstructure becomes anisotropic. The coarse grain size causes signal scattering and energy attenuation, and the anisotropic material properties of columnar grains result in a change in grain orientation in the welds, distorting the ultrasonic wave propagation paths [4]. Because traditional basic ultrasonic phased array systems use a straight wave propagation path and time in isotropic media for imaging defects, defect localization in austenitic welds is inaccurate. Thus, for practical applications of phased array systems in welds, information on the grain orientation distribution, anisotropic material properties, and precise simulation of ultrasonic wave propagation behavior are required. As a result, many studies have been conducted to determine the distribution of grain orientation [1,3,5] and the elastic constants [6–8] in austenitic welds. In addition, several studies have applied ultrasonic array data to the NDE of austenitic welds [3,6,9–17]. However, there is no reliable and practical ultrasonic imaging method that can be applied to defects in austenitic welds.
In a previous study, the authors proposed a grain orientation prediction methodology in a nondestructive manner [5]. In the present study, a finite element analysis (FEA)-based ultrasonic imaging methodology for austenitic welds is proposed for practical applications of ultrasonic imaging. The proposed ultrasonic imaging method uses the total focusing method (TFM) and a wave propagation database, which was constructed using FEA that considered the grain orientation and anisotropic material properties of the welds. The grain orientation was extracted from a macrograph obtained from a dissimilar metal weld specimen, and the anisotropic elastic constants were iteratively optimized by minimizing the difference in ultrasonic wave propagation velocity between the test and simulation results. A full matrix of time-domain signals for all combinations of transmitting and receiving elements in the phased array system was calculated through a series of finite element analyses. Subsequently, the TFM was applied to build a defect image in the finite element (FE) model.
TFM Algorithm in Isotropic Material
The full matrix capture (FMC) approach uses the complete set of time-domain data (A-scans) from all combinations of transmitting and receiving elements. During an FMC inspection, ultrasonic waves are transmitted from one array element, and all array elements capture the reflected signals; this is repeated to cover every combination of transmitting and receiving elements. For an array system consisting of N elements, an FMC signal matrix is composed of N × N A-scan signals. The TFM imaging algorithm uses an FMC matrix [18]. Figure 1 illustrates the concept of the TFM algorithm. The position of a single point reflector within the medium is defined in terms of its x- and z-coordinates, (x, z). The wave propagation distance, d_ik, from transmitter i to the reflector and back to receiver k is calculated for each possible combination of i and k using Equation (1):

d_ik = √((x_i − x)² + z²) + √((x_k − x)² + z²)    (1)

where x_i and x_k are the x-coordinates of elements i and k on the array surface z = 0.
The propagation time, t_ij, is determined by dividing the propagation distance by the longitudinal wave velocity c in the medium, t_ij = d_ij/c. In the TFM, the ultrasonic beam is focused at every target point (x, z): the algorithm first discretizes the target region into a grid, and the signals from all elements in the array are then delayed and summed to focus on each grid point. The intensity I(x, z) at every point in the grid is expressed as

I(x, z) = | Σ_{i=1}^{N} Σ_{j=1}^{N} h_ij(t_ij) |    (2)

where h_ij is the analytical signal associated with the signal recorded by element j as element i transmits, and N is the number of elements. In general, the TFM is used to image point-like reflectors in isotropic media, in which the ultrasonic beam propagates in a straight line. However, in austenitic welds, the anisotropic material properties and the position-dependent distribution of the grain orientation cause skewing and splitting of the ultrasonic beam. Therefore, it is necessary to accurately calculate the ultrasonic beam propagation path in austenitic welds for TFM applications.

TFM Imaging in Anisotropic Material
The focusing of the ultrasonic waves is realized by means of time delays, chosen so that the transmitted pulses arrive in phase at the target region and produce high-intensity focal points. In isotropic materials, the time delays can be determined from the longitudinal wave velocity and the relative positions of elements i and j within the aperture and the target point; in anisotropic materials, however, this information is insufficient because the propagation path of the ultrasonic wave is distorted and does not remain straight. Precisely calculating the ultrasonic wave propagation path in austenitic welds is difficult because the complex grain structure and anisotropic material properties skew the ultrasonic waves. For wave propagation simulation, information on the distribution of grain orientation and on the anisotropic elastic constants is essential.

In this section, an FEA-based ultrasonic beam focusing methodology for austenitic welds to determine the time delays for the TFM is provided. Figure 2 shows the schematic procedure of TFM imaging in the welds. First, the grain orientation distribution and the anisotropic elastic constants, which are the major parameters determining the ultrasonic wave propagation behavior, are obtained (Step 1). Then, FEA is performed based on the grain orientation distribution and the elastic constants of the welds (Step 2). From the FEA result, the ultrasonic wave propagation time database is extracted (Step 3). The database contains the wave propagation time from every element within the aperture to every scanning target point in the grid for TFM imaging. Using the database, the time delays can be determined, and a TFM image can be formed in the anisotropic material.
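Before turning to the database approach, the straight-ray delay computation of the basic (isotropic) TFM described above can be sketched in a few lines. The 16-element geometry, the 0.5 mm pitch, and the 5.9 mm/µs longitudinal velocity below are illustrative assumptions, not values from the paper's setup.

```python
import numpy as np

def straight_ray_time(elem, point, c):
    """One-way propagation time from an element to a target point, assuming a
    straight ray in an isotropic medium (distance of Equation (1) divided by c)."""
    return np.hypot(point[0] - elem[0], point[1] - elem[1]) / c

# Hypothetical 16-element array on the surface z = 0, 0.5 mm pitch (assumed)
elements = [(-3.75 + 0.5 * i, 0.0) for i in range(16)]
c = 5.9                 # mm/us, assumed longitudinal velocity in steel

target = (5.25, 13.0)   # an example scanning point
# Round-trip delay t_ik for transmitter i and receiver k
t_ik = [[straight_ray_time(elements[i], target, c)
         + straight_ray_time(elements[k], target, c)
         for k in range(16)] for i in range(16)]
```

In an anisotropic weld, the straight-ray assumption behind `straight_ray_time` breaks down, which is exactly why the FEA-derived time database of Steps 1-3 is needed.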
Figure 2. Schematic procedure of the proposed TFM imaging: Step 1, measurement of the grain orientation distribution and the anisotropic elastic constants; Step 2, simulation of the wave propagation behavior; Step 3, construction of the wave propagation time database; Step 4, computation of the TFM intensity and TFM imaging.

Measurement of Grain Orientation Distribution and Anisotropic Elastic Constants
To implement the proposed FEA-based ultrasonic beam focusing methodology in typical on-site nondestructive testing, the information or database of the grain orientation distribution and the anisotropic elastic constants should be available in advance. There are two practical ways to obtain this information. The first is to predict the material properties nondestructively in advance, as the authors proposed in previous studies [5]. The second is to build a database of the material properties, measured destructively, for specimens made according to welding procedure specifications in advance. However, additional research should be conducted to minimize the difference in material properties between a nondestructive test target and the information obtained in advance, and to quantify the effect of material property uncertainties on the wave propagation behavior.
Distribution of Grain Orientation
To simulate the ultrasonic wave propagation behavior, it is necessary to model the distribution of the grain orientation, which is the macroscopic pattern of the grain orientation in austenitic welds. For the simulation, grain orientation obtained from micrographs can generally be used [3, 5,9]. In a previous study, the authors proposed a grain orientation prediction methodology based on computational mechanics and an optimization technique [5]. In this study, a micrograph was used to determine the distribution of grain orientation in austenitic welds. Figure 3a shows the schematic of the DMW specimen and the coordinate system, and Figure 3b shows the weld section of the DMW specimen fabricated for this study. The thickness, top width, and weld root of the welded zone were 30.0 mm, 39.7 mm, and 5.1 mm, respectively. The base materials were carbon steel (SA508 Gr.3) and stainless steel (STS304). This weld included a buttering part between the austenitic weld and the carbon steel parts, and the material of the welding and buttering part was alloy 152M. To model the grain orientation of the DMWs, the specimen was thoroughly etched, and the grain orientations were characterized using scanning electron microscopy (SEM). The macrographs of the weldment section were meshed with a 2 mm × 2 mm size mesh [12], after which the grain orientations were carefully marked in the macrographs (yellow lines in Figure 4a). The measured grain orientations from each meshed area were used to interpolate the grain orientations in any position. The interpolated grain orientations, as shown in Figure 4b via red lines, were used as input information for the FEA.
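The interpolation step above (orientations measured on a 2 mm × 2 mm mesh, interpolated to arbitrary positions) can be sketched with simple bilinear interpolation. The 3 × 3 angle field is hypothetical, and the sketch assumes the angles vary smoothly without wrapping past ±90°; the paper does not specify the interpolation scheme.

```python
import numpy as np

def interp_orientation(x, z, grid_x0, grid_z0, h, theta):
    """Bilinearly interpolate a grain-orientation angle (degrees) measured on a
    regular h x h grid (here h = 2 mm) at an arbitrary point (x, z).
    theta[i, j] is the measured angle at (grid_x0 + j*h, grid_z0 + i*h)."""
    fx = (x - grid_x0) / h
    fz = (z - grid_z0) / h
    j, i = int(np.floor(fx)), int(np.floor(fz))
    tx, tz = fx - j, fz - i
    return ((1 - tx) * (1 - tz) * theta[i, j] + tx * (1 - tz) * theta[i, j + 1]
            + (1 - tx) * tz * theta[i + 1, j] + tx * tz * theta[i + 1, j + 1])

# Toy 3 x 3 field of measured angles on a 2 mm grid (hypothetical values)
theta = np.array([[0.0, 10.0, 20.0],
                  [10.0, 20.0, 30.0],
                  [20.0, 30.0, 40.0]])
mid = interp_orientation(1.0, 1.0, 0.0, 0.0, 2.0, theta)  # center of first cell
```

The interpolated field plays the role of the red lines in Figure 4b: a continuous orientation input for the FEA.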
Anisotropic Elastic Constants
In the austenitic weldment, the material properties along the welding pass (y-axis) were assumed to be isotropic, while those in the other directions were assumed to be anisotropic [9,10]. The material behavior can be properly simulated with a transversely isotropic material, for which the elastic constants are expressed as follows:

⎡ C11  C12  C13   0    0    0           ⎤
⎢ C12  C11  C13   0    0    0           ⎥
⎢ C13  C13  C33   0    0    0           ⎥
⎢  0    0    0   C44   0    0           ⎥
⎢  0    0    0    0   C44   0           ⎥
⎣  0    0    0    0    0  (C11 − C12)/2 ⎦

The prediction of the elastic constants can be formulated as an inverse problem in which the objective is to find an optimal set of elastic constants with which the wave propagation behavior in austenitic welds can be simulated with minimal error. In general, the behavior is measured using a set of sensors attached to the welds. The inverse problem can be solved by a trial-and-error method: guessing the unknown information, solving the forward problem, and then updating the guess based on the forward data [19–21]. A user-defined error function describing the discrepancy between the measured wave velocity and the predicted one is minimized during the process.

The objective function is defined as the difference between the measured and calculated wave velocities, and the design variables are the five elastic constants of the transversely isotropic material: C11, C12, C13, C33, and C44. The details of the FEA are described in Section 4. Figure 5 shows the schematic of the wave propagation test setup for the measurement of the anisotropic elastic constants in the welding and buttering parts. A transmitter was placed on the top surface of the welds, and three receivers were attached to the bottom surface. The input load shown in Figure 6 was excited by a transmitter with a center frequency of 2.25 MHz, after which the response was stored by the three receivers at a sampling rate of 100 MHz. Figure 7 shows the signals stored by the receivers.

The test and simulation signals were filtered to extract the time signals containing the frequency of interest, after which the wave velocity could be clearly observed. The signals were used to optimize the elastic constants by minimizing the difference between the wave velocities of the test and the simulation. Figure 8, calculated using the optimized elastic constants, shows the measured and simulated signals normalized and filtered with a 2nd-order band-pass filter whose lower and upper limits were 2.0 and 2.5 MHz, respectively. As shown in Figure 8, which presents the responses in the base metal, weld, and buttering regions, the simulated wave propagation times from the transmitter to the receivers agreed well with the tested ones; the differences between the simulated and tested wave propagation times from the transmitter to receivers 1, 2, and 3 were −11, 1, and 0 µs, respectively.

The density of the welding and buttering parts was assumed to be 8190 kg/m³, and the elastic constants of the welding and buttering parts were optimized as follows:
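Separately from the optimized values themselves, the trial-and-error inversion can be illustrated with a one-dimensional toy version: a single constant C33 is recovered from a through-thickness arrival time using the monotone forward model t = H/√(C33/ρ). The forward model, the 30 mm thickness, and the search interval are simplifications of the actual five-constant fit against three receiver signals.

```python
import numpy as np

def forward_time(C33, rho=8190.0, H=0.030):
    """Toy forward model: through-thickness arrival time (s) of a longitudinal
    wave travelling along z, t = H / sqrt(C33 / rho), in SI units."""
    return H / np.sqrt(C33 / rho)

def fit_C33(t_measured, lo=100e9, hi=400e9, iters=60):
    """Recover C33 by bisection: the forward time decreases monotonically with
    stiffness, so a predicted time that is too long means C33 is too low.
    A sketch of the guess / solve-forward / update loop described above."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if forward_time(mid) > t_measured:
            lo = mid          # too slow -> stiffness guess too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

C_true = 250e9                       # hypothetical "true" constant (Pa)
t_obs = forward_time(C_true)         # synthetic measurement
C_fit = fit_C33(t_obs)
```

In the real procedure the "forward problem" is the full FEA of Section 4 rather than a closed-form travel time, but the update logic is the same.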
Construction of Wave Propagation Time Database
It is impossible to determine the time delays in austenitic welds using the basic TFM algorithm because the ultrasonic waves in the welds are distorted. This section describes the methodology used to construct the wave propagation time database, S, which consists of three parameters: the coordinates of the elements within the aperture, (x_i, z_i) or (x_j, z_j); the coordinates of any target scanning point, (x, z); and the wave propagation time from the elements to the scanning points, t_i or t_j. The wave propagation time can be calculated through FEA for a sufficiently rich set of phased array element positions on the surface of the DMW specimen, after which the wave propagation time can be extracted at the scanning points. The entries of the database are the coordinates of the phased array elements, the scanning points, and the corresponding wave propagation times, so the database can be expressed as

S = { ((x_i, z_i), (x, z), t_i) }.

The wave propagation behavior in the austenitic welds was calculated using elastic FEA. The FEA was performed using the implicit solver in ABAQUS [22]. Figure 3a depicts the schematic of the DMW specimen used in the simulation. Within the x-y plane, the medium appears isotropic; however, in directions outside the plane, the medium exhibits anisotropic features. The FE model was assumed to be two-dimensional, meaning that energy propagated only in the x-z plane. The mesh (Figure 9) was constructed using eight-node reduced-integration plane strain elements, CPE8R in ABAQUS. The finite element size was 0.25 mm, to ensure at least seven nodes per wavelength in the spatial domain [13]. The FEA simulation method was first verified by computing the longitudinal velocity in the transversely isotropic medium without changing the grain orientation. The longitudinal wave velocities in the x- and z-directions were calculated as V_xx = √(C11/ρ) and V_zz = √(C33/ρ), obtained from the Christoffel equation for transversely isotropic media. In a previous study [5], the velocities calculated by FEA matched the theoretical solution within 1%. The measured grain orientation distribution described in Section 3 was used as the simulation input describing the grain structures in the austenitic welds, as shown in Figure 9.
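A minimal sketch of how such a database S might be organized in code: one arrival time per (element, scanning point) pair. The element coordinates (first element at x = −5.75, pitch assumed 0.5 mm), the grid spacing, and the straight-ray placeholder times are assumptions standing in for the FEA-derived arrival times.

```python
import numpy as np

# Hypothetical 16-element aperture on the surface z = 0
elements = [(-5.75 + 0.5 * i, 0.0) for i in range(16)]
# Scanning grid for TFM imaging (mm), assumed 0.5 mm spacing
grid = [(x, z) for x in np.arange(-10, 10.5, 0.5)
               for z in np.arange(0, 20.5, 0.5)]

def placeholder_time(e, p, c=5.9):
    """Straight-ray stand-in (mm, mm/us); the real entries come from FEA."""
    return np.hypot(p[0] - e[0], p[1] - e[1]) / c

# Database S: (element index, scanning point) -> one-way propagation time
S = {(ei, p): placeholder_time(e, p)
     for ei, e in enumerate(elements) for p in grid}

def delay(i, j, p):
    """Round-trip focusing delay for transmit element i, receive element j."""
    return S[(i, p)] + S[(j, p)]
```

At imaging time, the delay for any (transmitter, receiver, grid point) triple is a pure lookup-and-add, so the expensive FEA runs are done once, offline.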
The array parameters of the linear array transducer with equispaced elements are presented in Table 1. The transmitted load from the phased array elements was simulated as a pressure signal modeled in the form of a sine function, A0/2·(sin(2πf(t − λ/4)) + 1) for 0 ≤ t ≤ λ, as shown in Figure 10, where A0 is the amplitude of the input load, f is the center frequency, and λ is the wavelength in the time domain (one period, 1/f). The acceleration data were stored at a sampling rate of 200 MHz.

Figure 10. Input load transmitted to medium.

Figure 11 shows the change in the displacement field with time after the first phased array element, located at x = −5.75, z = 0.0, was excited. Instead of pure longitudinal and shear waves, quasi-longitudinal and quasi-shear waves were generated in the anisotropic medium, as shown in Figure 11. Figure 12 shows the displacement over time at the points presented in Figure 11a.
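The one-cycle raised-sine input load can be generated directly from its formula. The 2.25 MHz center frequency below is an assumed example value (the frequency in Table 1 is not reproduced here); the 200 MHz sampling rate matches the text.

```python
import numpy as np

def input_load(f, A0=1.0, fs=200e6):
    """One-cycle raised-sine pressure pulse:
    A0/2 * (sin(2*pi*f*(t - T/4)) + 1) for 0 <= t <= T, with T = 1/f
    (the 'wavelength in the time domain'), sampled at fs = 200 MHz."""
    T = 1.0 / f
    t = np.arange(0.0, T, 1.0 / fs)
    return t, 0.5 * A0 * (np.sin(2 * np.pi * f * (t - T / 4)) + 1.0)

t, p = input_load(f=2.25e6)   # center frequency assumed for illustration
```

The t − T/4 shift makes the pulse start and end at zero pressure and peak at A0 mid-cycle, avoiding a step discontinuity at excitation onset.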
Implementation of TFM Imaging Algorithm in Anisotropic Material
Using the wave propagation time database described in the previous section, the time delays can be determined in anisotropic media, after which the TFM image can be formed. This section describes the implementation of the TFM imaging algorithm for visualizing defects in austenitic welds.
An FEA-based simulation was performed to calculate the FMC signal matrix, S(t), which represents the full matrix of time-domain impulse response signals for all transmitting and receiving combinations. The finite element model described in Section 4 was used, but the model had a notch at x = 5.25, z = 13 mm with a width of 0.5 mm and a length of 4.5 mm, which corresponded to approximately 1.5 λ, as shown in Figure 13. Each transmitting element was excited with the impulse defined in Section 4, and the response signals were calculated in the position of all the receiving elements. Thus, 256 time signals were generated for all combinations of transmitting and receiving elements (16 × 16). Figure 14 shows an example of a set of 16 A-scan signals, which were calculated for all the receiving elements (elements 1-16) when element 1 was excited. All reflected signals did not arrive in phase because of the difference in propagation distance.
Implementation of TFM Imaging Algorithm in Anisotropic Material
Using the wave propagation time database described in the previous section, the time delays can be determined in anisotropic media, after which the TFM image can be formed. This section describes the implementation of the TFM imaging algorithm for visualizing defects in austenitic welds.
An FEA-based simulation was performed to calculate the FMC signal matrix, S(t), which represents the full matrix of time-domain impulse response signals for all transmitting and receiving combinations. The finite element model described in Section 4 was used, but the model had a notch at x = 5.25, z = 13 mm with a width of 0.5 mm and a length of 4.5 mm, which corresponded to approximately 1.5 λ, as shown in Figure 13. Each transmitting element was excited with the impulse defined in Section 4, and the response signals were calculated in the position of all the receiving elements. Thus, 256 time signals were generated for all combinations of transmitting and receiving elements (16 × 16). Figure 14 shows an example of a set of 16 A-scan signals, which were calculated for all the receiving elements (elements 1-16) when element 1 was excited. All reflected signals did not arrive in phase because of the difference in propagation distance.
Implementation of TFM Imaging Algorithm in Anisotropic Material
Using the wave propagation time database described in the previous section, the time delays can be determined in anisotropic media, after which the TFM image can be formed. This section describes the implementation of the TFM imaging algorithm for visualizing defects in austenitic welds.
An FEA-based simulation was performed to calculate the FMC signal matrix, S(t), which represents the full matrix of time-domain impulse response signals for all transmitting and receiving combinations. The finite element model described in Section 4 was used, but the model had a notch at x = 5.25, z = 13 mm with a width of 0.5 mm and a length of 4.5 mm, which corresponded to approximately 1.5 λ, as shown in Figure 13. Each transmitting element was excited with the impulse defined in Section 4, and the response signals were calculated in the position of all the receiving elements. Thus, 256 time signals were generated for all combinations of transmitting and receiving elements (16 × 16). Figure 14 shows an example of a set of 16 A-scan signals, which were calculated for all the receiving elements (elements 1-16) when element 1 was excited. All reflected signals did not arrive in phase because of the difference in propagation distance. The TFM imaging algorithm is performed by first discretizing the target region (in the x-z-plane within the medium) into the grid. The FMC matrix S(t) is delayed to produce a high intensity by aligning the reflected signals, and summed to synthesize a focus on every point in the grid. The intensity of the image I(x, z) at any target scanning point can be calculated using the propagation time database of the austenitic welds. Figure 15 shows the simulated A-scan signals at elements 1-16 under the excitation of elements 1-16, and the reflected signals from the notch. In addition, Figure 15 shows the delayed reflected signals from the center of the notch using the wave propagation database proposed in this study, using the basic traditional TFM method; the wave propagation time is also represented in the detailed figure. When the wave propagation time database was used to delay the A-scan signals, the signals arrived in phase at the target scanning point (notch center). 
The TFM imaging algorithm first discretizes the target region (the x-z plane within the medium) into a grid. The FMC matrix S(t) is then delayed so that the reflected signals align to produce a high intensity, and summed to synthesize a focus at every grid point. The intensity of the image I(x, z) at any target scanning point can thus be calculated using the wave propagation time database of the austenitic weld.
Figure 15 shows the simulated A-scan signals at elements 1-16 under the excitation of elements 1-16, together with the reflected signals from the notch. It also shows the reflected signals from the notch center delayed using the wave propagation time database proposed in this study and using the basic (traditional) TFM method; the wave propagation times are indicated in the detailed view. When the proposed database was used to delay the A-scan signals, the signals arrived in phase at the target scanning point (the notch center). However, when the wave propagation time calculated by the traditional TFM method was used, the A-scan signals were not aligned at the target point, resulting in errors in the positioning of the defect.
Figure 16 illustrates how the intensity is calculated using Equation (2) at the scanning point of the notch center. In the graph, the line indicates the function h in Equation (2), and t_ij is the wave propagation time from the transmitting element to the receiving element. Figure 17 shows the TFM imaging results for the scanning area (shown in Figure 13), from the starting point x = −10 mm, z = 0 mm to the ending point x = 10 mm, z = 20 mm, for both the basic TFM and the proposed TFM; the known locations of the defects are marked by red dotted lines. The proposed database-based method accurately estimated the defect position, whereas the traditional method mislocated the defect by an error of approximately 3.2 mm. This error was caused by the incorrect wave propagation times calculated by the basic TFM algorithm.
The comparison in Figure 17 shows that the proposed TFM imaging significantly improves defect localization accuracy over basic TFM imaging, validating the usefulness of the proposed TFM imaging methodology. Therefore, given a wave propagation database for the austenitic welds, the proposed TFM imaging methodology can be applied to on-site NDE of DMWs. It should be noted, however, that the grain orientation distribution and the elastic constants should be determined in a nondestructive manner before the evaluation.
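To make the delay-and-sum operation concrete, the following sketch reproduces the basic TFM summation on a toy isotropic example; the propagation-time "database" here is simply geometric distance divided by a constant wave speed, and linear interpolation stands in for the function h in Equation (2). All names and the array/pixel geometry are illustrative assumptions, not the paper's FEA-based implementation.

```python
import numpy as np

def tfm_image(fmc, t, tof):
    """Delay-and-sum TFM intensity at a set of pixels.

    fmc : (n_el, n_el, n_t) full-matrix-capture A-scans S_ij(t)
    t   : (n_t,) sample time axis [s]
    tof : (n_el, n_pix) propagation time from each array element
          to each pixel (the wave-propagation-time database)
    """
    n_el = fmc.shape[0]
    img = np.zeros(tof.shape[1])
    for i in range(n_el):
        for j in range(n_el):
            # total travel time: transmitter i -> pixel -> receiver j
            tij = tof[i] + tof[j]
            # sample each A-scan at its delay (linear interpolation
            # plays the role of the function h in Eq. (2))
            img += np.interp(tij, t, fmc[i, j])
    return np.abs(img)

# toy example: isotropic medium, one point scatterer
c = 1500.0                                     # assumed wave speed, m/s
elems = np.array([[-5e-3, 0.0], [0.0, 0.0], [5e-3, 0.0]])
pixels = np.array([[-2e-3, 5e-3], [0.0, 5e-3], [2e-3, 5e-3],
                   [-2e-3, 10e-3], [0.0, 10e-3], [2e-3, 10e-3]])
tof = np.linalg.norm(elems[:, None] - pixels[None], axis=2) / c
t = np.arange(0.0, 25e-6, 1e-8)
scat = 4                                       # true scatterer at (0, 10 mm)
fmc = np.zeros((3, 3, t.size))
for i in range(3):
    for j in range(3):
        tc = tof[i, scat] + tof[j, scat]       # echo arrival time
        fmc[i, j] = np.exp(-((t - tc) / 0.1e-6) ** 2)
img = tfm_image(fmc, t, tof)
print(int(np.argmax(img)))                     # brightest pixel -> 4
```

With the correct propagation times the echoes sum in phase only at the scatterer's pixel, which is exactly why a wrong time database (as in the basic TFM applied to an anisotropic weld) shifts the apparent defect position.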
Conclusions
A new FEA-based ultrasonic imaging methodology for austenitic welds is proposed in this paper. The methodology comprises four steps: (1) measurement or prediction of the grain orientation distribution and the anisotropic elastic constants; (2) simulation of the wave propagation behavior; (3) construction of a wave propagation time database; and (4) computation of the TFM intensity and TFM imaging. In this study, the grain orientation distribution was measured from a macrograph of the DMW specimen, and a new method for determining the elastic constants was proposed using the measured grain orientation information and an optimization technique. The ultrasonic wave propagation behavior was calculated through FEA using the grain orientation and elastic constant information, after which the ultrasonic wave propagation time database was extracted. Finally, an FMC matrix for the phased array system was calculated through a series of finite element analyses of a simulated model containing a defect, and a TFM image was generated. The proposed TFM imaging results show a significant improvement in defect localization accuracy compared with basic TFM imaging results, validating the usefulness of the proposed TFM imaging methodology.
Conflicts of Interest:
The authors declare no conflict of interest.
A pH-responsive soluble polymer-based homogeneous system for fast and highly efficient N-glycoprotein/glycopeptide enrichment and identification by mass spectrometry
A homogeneous reaction system was developed for facile and highly efficient enrichment of biomolecules by exploiting the reversible self-assembly of a stimuli-responsive polymer.
Introduction
Stimuli-responsive polymers, or "smart" polymers, exhibit predictable and sharp changes in properties in response to small environmental changes, such as temperature, pH, ionic strength, light, or mechanical stress. These changes cause reversible self-assembly or phase separation of the polymer, which has attracted significant research attention in the synthesis of switchable pores/surfaces, biomedical imaging/diagnostics and controlled drug delivery. [1][2][3][4][5][6] However, the potential of these "smart" polymers has not been well explored in the field of analytical separation, which is extremely important for the sensitive identification and quantification of biomolecules and chemical components in biological, pharmaceutical and environmental analysis. Currently, the lack of fast and highly efficient sample enrichment/separation methods remains a major obstacle to high-throughput and sensitive analysis. 7,8 This is especially the case for biological analysis, due to the limited sample amount, low target concentration and strong background interference. 9,10 The widely adopted solid/insoluble matrix-based enrichment approaches suffer from large steric hindrance, heterogeneous reactions between the liquid-phase targets and the solid-phase ligand, and diffusion limitation at the solid-liquid interface. 11,12 These unfavorable conditions may result in limited reaction rates and poor yields. In contrast, homogeneous reactions in the liquid phase have the obvious advantages of fast mass transfer and high conversion rates and have been demonstrated to be highly useful in homogeneous catalysis and liquid-phase synthesis. 13,14 However, the lack of facile and robust target recovery approaches prevents the adoption of this approach in biological analysis, which commonly involves very small amounts of sample.
Therefore, a new enrichment matrix offering homogeneous reaction conditions, improved accessibility and facile recovery is urgently needed for biological analysis.
As one of the most important post-translational modifications [15][16][17] in eukaryotic cells and a key bio-analytical target, protein N-glycosylation plays crucial roles in various biological processes, including intercellular recognition and communication, protein folding and immune responses. [18][19][20] Aberrant protein N-glycosylation is closely associated with many major human diseases, such as inflammation, metabolic disorders, and various types of cancer, offering promising potential for disease diagnosis or prognostic monitoring. [21][22][23][24] Sensitive identification of these disease-related N-glycosylation variations may provide a unique path for the development of new diagnostic biomarkers and therapeutic drug targets. [25][26][27] Shotgun glycoproteomics analysis by mass spectrometry (MS) is currently the method of choice for large-scale and in-depth glycoprotein/glycopeptide profiling in complex biological samples. 28,29 However, the inherently low abundance of glycopeptides (approximately 1-2% of the total peptides) obtained from tryptic digests of complex protein samples makes glycopeptide enrichment a prerequisite for efficient identification. Although a variety of enrichment methods such as hydrazide chemistry, 30-32 boronic acid, [33][34][35] lectin affinity 36 and HILIC 37,38 have been developed, the enormous complexity of biological samples still makes comprehensive enrichment and sensitive identification of glycopeptides by mass spectrometry a challenging task. Despite their varying mechanisms, most of the reported glycopeptide enrichment methods rely on solid-liquid heterogeneous reactions using a solid/insoluble matrix. The interfacial mass transfer resistance and nonlinear kinetic behaviour of this reaction system, as well as the high steric hindrance of the matrix materials, are the major obstacles limiting the reaction rate and enrichment efficiency.
To solve this problem, we developed a pH-responsive soluble enrichment matrix that can be reversibly dissolved and self-assembled in aqueous solution for homogeneous reaction-based enrichment (Scheme 1). The soluble enrichment matrix is prepared by copolymerization of pH-responsive monomers with glycan-reactive moieties. The obtained linear copolymer chains form a homogeneous reaction mixture with the protein/peptide samples in aqueous solution under mildly acidic pH, which facilitates the coupling between the polymer matrix and the target glycoproteins/glycopeptides. Facile sample recovery with high efficiency can be achieved by simply lowering the system pH, which results in the rapid self-assembly of the polymer-glycoprotein/glycopeptide conjugates into large particle agglomerates that precipitate from the solution.
Therefore, only a single-step solution-phase reaction is involved in the enrichment process. Compared with conventional solid/insoluble enrichment matrices, this stimuli-responsive polymer-based soluble enrichment matrix has three advantages. First, fast enrichment with >95% efficiency can be achieved within 1 h due to the unrestricted mass transfer in the homogeneous enrichment system. Second, thanks to the densely packed, accessible glycan-reactive moieties on the linear polymer chains and the facile pH-responsive recovery, advances in the enrichment of trace amounts of glycoproteins/glycopeptides in complex biological samples can be expected. Finally, the substantially reduced steric hindrance from using flexible linear polymer chains instead of beads/particles as the enrichment matrix may facilitate the capture and subsequent enzymatic release of the target glycopeptides. As a result, we expect that this new enrichment matrix can be applied as a general approach to promote biomolecule identification.
Results and discussion
Synthesis and characterization of poly-(AA-co-hydrazide)
The preparation of the poly-(acrylic acid)-based pH-responsive polymer with hydrazide functionalization for glycoprotein/glycopeptide enrichment is shown in Scheme 2. First, poly-(acrylic acid-co-methyl acrylate) was synthesized by copolymerization of acrylic acid (AA) and methyl acrylate (MA). GPC analysis of the poly-(AA-co-MA) copolymers obtained with 1-20 h polymerization time reveals molecular weights ranging from Mn = 15,440 to 214,100 g mol−1 and Mw/Mn = 1.183-1.702 (Table S1†). Next, the pH response of poly-(AA-co-MA) of different molecular weights was evaluated. A strong pH response was observed for the soluble polymer with a molecular weight of approximately 150 kg mol−1 or higher (Fig. S1†), as the clear polymer solution immediately turns into a milky white suspension upon changing the pH from 6.0 to 2.0. Next, the methyl ester in the poly-(AA-co-MA) copolymer was converted to hydrazide by hydrazine monohydrate treatment. The poly-(AA-co-MA) copolymer before and after hydrazine treatment was characterized by FTIR (Fig. 1a) and XPS (Fig. 1b). The successful synthesis of the poly-(AA-co-MA) copolymer was confirmed by the characteristic peaks at 1730 cm−1 (C=O stretching of the carbonyl), 1385 cm−1 (COO− symmetric stretching of the carboxylate ion), 1255 cm−1 (C-O stretching of the COOH group) and 1170 cm−1 (antisymmetric stretching of the C-O-C of the ester groups) in the FTIR spectrum. After hydrazine treatment, new peaks appeared at approximately 1630 cm−1 (N-H bending), 3210 cm−1 and 3350 cm−1 (N-H stretching) and 1405 cm−1 (C-N stretching), indicating the introduction of C-NH-NH2 groups into the copolymer chain. The conversion of the methyl ester of poly-(AA-co-MA) to hydrazide was further demonstrated by X-ray photoelectron spectroscopy (XPS) analysis. As shown in Fig. 1b, the introduction of hydrazide to poly-(AA-co-MA) leads to a clear N 1s peak at 400.1 eV in the XPS spectrum.
In contrast, no corresponding peak is found for the untreated poly-(AA-co-MA). Next, the hydrazide loading of the poly-(AA-co-hydrazide) was determined using the bicinchoninic acid (BCA) titration method, which relies on the reduction of Cu2+ to Cu+ by hydrazide and the colorimetric detection of Cu+ by bicinchoninic acid at 562 nm. The standard curve of the BCA titration of the poly-(AA-co-hydrazide) copolymer is shown in Fig. S2† and the hydrazide loading was found to be 1.2 mmol g−1, more than six times higher than that of commercial crosslinked agarose-hydrazide beads. The markedly higher hydrazide loading of poly-(AA-co-hydrazide) can be explained by the fact that all of the methyl ester groups of the linear poly-(AA-co-MA) chains are theoretically accessible for hydrazide functionalization. In contrast, for solid materials such as cross-linked agarose beads (50-150 μm in diameter), only the molecules at the surface layer of the beads are exposed; the inner molecules are inaccessible for functionalization. The high hydrazide density of poly-(AA-co-hydrazide) increases its collision opportunity and binding capacity with glycoproteins in complex protein samples, making it particularly advantageous for glycosylation identification.
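A rough geometric estimate illustrates why only a small fraction of a solid bead's functional groups are accessible compared with a fully exposed linear chain. The 10 nm accessible shell depth below is an assumed figure for illustration, not a value from the paper.

```python
def surface_fraction(radius_um, shell_nm):
    """Fraction of a solid sphere's volume lying within a thin
    accessible surface shell: 1 - ((R - d)/R)**3, which is ~3d/R
    for d << R."""
    d = shell_nm * 1e-3                 # convert nm -> um
    return 1.0 - ((radius_um - d) / radius_um) ** 3

# assumed 10 nm accessible depth on a 50 um agarose bead:
# well under 0.1 % of the bead volume is reachable, consistent
# with the much lower hydrazide loading reported for beads
print(round(surface_fraction(50.0, 10.0) * 100, 3))
```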
The pH-induced self-assembly of poly-(AA-co-hydrazide)-glycoprotein conjugates
A sensitive pH response of the soluble polymer after conjugation with target molecules is a prerequisite for achieving high sample recovery in enrichment reactions. Therefore, the pH-responsive behavior of poly-(AA-co-hydrazide) after conjugation with a standard glycoprotein (RNase B) was analyzed using solutions with varying pH values to evaluate the feasibility of this strategy for glycoprotein enrichment. We found that the poly-(AA-co-hydrazide)-RNase B conjugates dissolve well in aqueous solution at neutral to mildly acidic pH and rapidly precipitate in an acidic environment. As shown in Fig. 2, the clear solution immediately turns into a milky white suspension upon changing the pH from 6.0 to 2.0 due to the large-scale self-assembly of the conjugates. The white polymer precipitates can be easily collected by gentle centrifugation for a few seconds. These results demonstrate that the conjugation of glycoproteins does not interfere with the pH-induced aggregation of the poly-(AA-co-hydrazide) copolymer chains. To further investigate the pH-dependent self-assembly and aggregation of the poly-(AA-co-hydrazide)-glycoprotein conjugates, the zeta potentials and hydrodynamic sizes of the conjugates under different pH conditions were measured by dynamic light scattering (DLS). The zeta potential shows a monotonically increasing trend as the pH decreases from 6 to 2, which can be explained by the gradual protonation of the acrylic acid moieties of the copolymer and the continuous shielding of the electrostatic charges (Fig. 3a). In the hydrodynamic size analysis (Fig. 3b), the conjugates exhibit decreasing hydrodynamic sizes as the pH decreases from 6 to 2.8 because of the pH-induced shrinking of the copolymer chains.
The minimal hydrodynamic size of approximately 30 nm is reached at pH 2.8, where the corresponding zeta potential is −6.04 mV. A further pH reduction of 0.6 leads to only an approximately 3 mV increase in zeta potential but an abrupt, roughly eightfold increase in hydrodynamic size to about 250 nm. This result can be attributed to the large-scale self-assembly and aggregation of the poly-(AA-co-hydrazide)-RNase B conjugates, because polymer micelles are highly unstable when the zeta potential lies within the range of 0 ± 5 mV. The zeta potential and hydrodynamic size analyses demonstrate that the sensitive pH response of poly-(AA-co-hydrazide) is not impaired after glycoprotein conjugation. Furthermore, the pH-induced transformation of the well-dispersed poly-(AA-co-hydrazide)-glycoprotein conjugates into aggregated clusters of approximately sub-micrometer size serves the purpose of enrichment very well. Because repeated precipitation-dissolution cycles are commonly involved in the enrichment process to remove non-specifically adsorbed proteins/peptides, the reproducibility of the pH-responsive behavior of the poly-(AA-co-hydrazide)-glycoprotein conjugates was investigated. As shown in Fig. S3,† the transparency of the solution containing dissolved or self-assembled conjugates was measured by UV absorption analysis. No obvious transparency change was found after eight precipitation-dissolution cycles, indicating the unimpaired pH responsiveness and robustness of this enrichment approach.
Reaction kinetics, efficiency and selectivity of glycoprotein/glycopeptide enrichment using poly-(AA-co-hydrazide)
After evaluating the pH responsiveness of the poly-(AA-co-hydrazide)-glycoprotein conjugates, the reaction kinetics, the conversion rate of the aldehyde-hydrazide coupling and the recovery of glycoprotein enrichment using this soluble polymer matrix were investigated. First, the oligosaccharide structures of the N-glycoprotein were oxidized to produce aldehydes. For solid/insoluble enrichment matrices (beads/particles), overnight incubation with the protein sample is required for complete glycoprotein enrichment 27 due to the limited mass transfer at the liquid-solid interface in the heterogeneous reaction system and the large steric hindrance induced by the solid enrichment matrix. To demonstrate the advantages of the soluble polymer-based homogeneous enrichment system, the reaction kinetics and the recovery of glycoprotein enrichment were studied using asialofetuin (a standard glycoprotein) and compared with the results obtained using commercial solid agarose-hydrazide beads. As shown in Fig. 4, poly-(AA-co-hydrazide) reacts faster with the glycoprotein than the solid agarose-hydrazide beads, and 96.2% glycoprotein capture is achieved via aldehyde-hydrazide coupling within 1 h. In contrast, only 55.6% enrichment conversion is reached after 1 h with the solid agarose-hydrazide beads, and at least 8 h of reaction are required to exceed 90%. The recovery of glycoprotein enrichment using soluble poly-(AA-co-hydrazide) was determined after collecting the poly-(AA-co-hydrazide)-asialofetuin conjugates via low-pH-induced precipitation. The obtained conjugates were treated with PNGase F to cleave the covalent bond between the innermost GlcNAc of the N-linked glycans coupled with poly-(AA-co-hydrazide) and the asparagine residues of the glycoproteins.
Subsequently, the released asialofetuin was characterized and quantified by SDS-PAGE, and approximately 90% sample recovery was achieved (Fig. S4†). The improved reaction rate, efficient glycoprotein coupling and high recovery may be attributed to the unrestricted mass transfer in the homogeneous-reaction-based enrichment and to the reduced steric hindrance of the linear soluble poly-(AA-co-hydrazide) matrix. Improved accessibility of PNGase F towards the enzymatic cleavage site between the N-glycans and the peptides can also be expected, and the facilitated release of the peptides may lead to enhanced detection in MS.
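As a rough illustration of the kinetic gap reported above, one can back out apparent pseudo-first-order rate constants from the single-time-point conversions. Treating the coupling as X = 1 − exp(−kt) is an assumption for illustration; the paper does not fit a kinetic model.

```python
import math

def k_first_order(conversion, t_h):
    """Apparent pseudo-first-order rate constant (1/h) from one
    (conversion, time) observation, assuming X = 1 - exp(-k t)."""
    return -math.log(1.0 - conversion) / t_h

k_poly = k_first_order(0.962, 1.0)   # soluble polymer: 96.2 % in 1 h
k_bead = k_first_order(0.556, 1.0)   # agarose beads:   55.6 % in 1 h
print(round(k_poly / k_bead, 1))     # ~4x faster apparent rate

# under the same first-order assumption, beads would reach 90 %
# conversion in -ln(0.1)/k, i.e. under 3 h; the observed ~8 h is
# consistent with the diffusion-limited, non-linear kinetics the
# text attributes to heterogeneous matrices
print(round(-math.log(0.1) / k_bead, 1))
```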
Due to the low abundance of glycopeptides in real biological samples, the ability to selectively enrich highly diluted glycopeptides from a complex solution is a key evaluation criterion for the soluble polymer-based homogeneous system. A mixture of BSA (a non-glycoprotein) and asialofetuin (a standard glycoprotein with three well-characterized N-glycosylation sites) was used to mimic a complex sample. Fig. 5 shows the MALDI-TOF-MS spectra of the mixture of tryptic digests of 100 fmol asialofetuin and 10 pmol BSA before (a) and after (b) enrichment by poly-(AA-co-hydrazide). Before enrichment, the spectrum is overwhelmed by the signals of the abundant non-glycopeptides, and the glycopeptides can hardly be detected due to the strong signal suppression by the non-glycopeptides (Fig. 5a). In contrast, after enrichment by poly-(AA-co-hydrazide), nearly all of the non-glycopeptides are removed and three glycopeptides covering all of asialofetuin's theoretical glycosites are clearly detected with high signal intensities and S/N ratios (Fig. 5b). For example, the signal intensity and S/N of the glycopeptide at 1741.8 m/z increase 29 and 325 times after enrichment, respectively. These results indicate the excellent selectivity of this method, because only negligible non-glycopeptides remain after enrichment even though their concentration was one hundred times higher than that of the glycopeptides. Further diluting the amount of asialofetuin to 1 fmol while maintaining the same molar ratio between asialofetuin and BSA still results in the successful identification of the three glycopeptides (Fig. 5c), demonstrating that high enrichment efficiency and low-fmol detection sensitivity can be reached with the soluble polymer-based enrichment system. The advantages of poly-(AA-co-hydrazide) were further demonstrated by a comparison with three other commonly used glycopeptide enrichment materials. Fig. 6 shows the MALDI-TOF-MS signal intensities of the glycopeptides of asialofetuin enriched by poly-(AA-co-hydrazide), by cross-linked agarose-hydrazide beads, by commercial HILIC materials and by agarose-bead-bound lectin (WGA). Clearly, poly-(AA-co-hydrazide) provides the strongest signal intensity among the four enrichment materials for all three glycopeptides. The other three enrichment materials resulted in only a minor enhancement of the glycopeptide signal intensities, presumably due to their relatively lower enrichment affinity/selectivity or the unfavorable reaction conditions in the solid/insoluble matrix-based enrichment.
Application of poly-(AA-co-hydrazide) for mouse brain glycopeptide enrichment and glycoprotein identification by mass spectrometry
Next, we challenged the enrichment capability of poly-(AA-co-hydrazide) with highly complex protein extracts from mouse brain. The oxidized N-glycoproteins were first coupled with poly-(AA-co-hydrazide). After repeated washing to remove non-glycoproteins, the poly-(AA-co-hydrazide)-glycoprotein conjugates were subjected to trypsin digestion and repeated washing to remove non-glycopeptides. Next, the obtained poly-(AA-co-hydrazide)-glycopeptide conjugates were treated with PNGase F to release the enriched N-glycopeptides for LC-MS analysis on an LTQ-FT mass spectrometer. In three replicates, 843, 748 and 965 N-glycopeptides and 349, 338 and 395 N-glycoproteins were identified, corresponding to 1317 non-redundant N-glycopeptides and 458 non-redundant glycoproteins (Table S2†). As shown in Fig. 7, 80.0% of the N-glycoproteins were identified in at least two replicates, demonstrating good reproducibility of this enrichment method. Compared with the 56 or 533 non-redundant N-glycopeptides identified from triplicate experiments without enrichment or with enrichment using commercial cross-linked agarose-hydrazide beads (Table S3†), the poly-(AA-co-hydrazide)-based soluble enrichment matrix shows obvious advantages for larger-scale protein glycosylation identification. We attribute the improved glycopeptide enrichment to the high hydrazide density of poly-(AA-co-hydrazide) and the favorable mass transfer in the homogeneous hydrazide-aldehyde coupling reaction, which facilitate the collisions between poly-(AA-co-hydrazide) and the target glycopeptides. Furthermore, the substantially reduced steric hindrance achieved by replacing the bulky solid/insoluble enrichment matrix with the flexible soluble polymer is particularly beneficial for enhancing the accessibility of the enzymatic cleavage sites of the matrix-immobilized glycopeptides.
Therefore, highly efficient PNGase F enzymatic release of the captured glycopeptides can be expected.
Finally, to evaluate the reliability of the N-glycosylation identification obtained by the poly-(AA-co-hydrazide)-based enrichment, the false discovery of N-glycopeptides caused by spontaneous deamidation was determined. Control experiments were conducted under conditions identical to those used in the poly-(AA-co-hydrazide)-based glycopeptide enrichment and LC-MS analysis, except that the enriched peptides were not treated with PNGase F. Peptides identified in this experiment with a 0.984 Da shift at Asn within the N-X-S/T/C (X ≠ P) motif are therefore falsely discovered N-glycopeptides. In three replicates, we found 21 falsely discovered N-glycopeptides, corresponding to a 1.46% average false-discovery rate. This result is consistent with the value reported in the literature, 39 suggesting that spontaneous deamidation is a minor issue in the poly-(AA-co-hydrazide)-based glycopeptide enrichment and will not jeopardize the reliability of the N-glycosylation assignments in this research.
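The sequon check described above (a 0.984 Da shift at Asn inside an N-X-S/T/C motif with X ≠ P) can be sketched as a simple regular-expression scan. This is an illustrative helper, not the authors' processing pipeline, and the example peptide sequences are arbitrary.

```python
import re

# deamidation (Asn -> Asp) adds this monoisotopic mass (Da); it is
# the shift used to flag formerly glycosylated Asn after PNGase F
DEAMIDATION_SHIFT = 0.98402

def glyco_sequons(peptide, motif=r"N[^P][STC]"):
    """Return 0-based positions of Asn residues sitting in an
    N-X-S/T/C sequon (X != P). A zero-width lookahead is used so
    that overlapping sequons are all reported."""
    return [m.start() for m in re.finditer(r"(?=" + motif + ")", peptide)]

print(glyco_sequons("LCPDCPLLAPLNDSR"))   # arbitrary example -> [11]
print(glyco_sequons("NPTK"))              # X == P, no sequon -> []
```

In a real search, only deamidated Asn positions that coincide with such sequon positions would count as (true or false) N-glycosite assignments.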
Conclusions
In conclusion, we developed a robust soluble polymer-based homogeneous enrichment system using pH-responsive poly-(AA-co-hydrazide). Improved enrichment reaction kinetics and enrichment efficiency of glycopeptides are achieved for standard protein and complex protein samples from animal tissue. We expect that this new enrichment approach will be widely applicable for the efficient enrichment of trace amounts of biomolecules and therefore promote bio-analysis.
Synthesis of the pH-responsive acrylic acid-methyl acrylate copolymer (PAA-co-PMA) and hydrazide functionalization
Methyl acrylate (260 mg), acrylic acid (1730 mg; MA : AA molar ratio of 1 : 8) and potassium persulfate (50 mg) were dissolved in 50 mL of degassed 50% methanol. The mixture was allowed to react under nitrogen at 50 °C for 1-20 h with vigorous stirring. The obtained poly-(AA-co-MA) polymer was precipitated and recovered by the addition of pure ethanol. Next, the purified poly-(AA-co-MA) was re-dispersed in 50% methanol. The methyl ester groups of the copolymer were converted to hydrazide by the addition of 300 mg hydrazine monohydrate, and the reaction was allowed to proceed overnight at RT under stirring. After removing the solvent by rotary evaporation and removing the residual reactants by dialysis (3000 cut-off), the obtained poly-(AA-co-hydrazide) was lyophilized and stored at 4 °C until further use. Before application in the N-glycoproteome enrichment, the poly-(AA-co-hydrazide) was first subjected to three repeated washes by the low-pH-precipitation and high-pH-dissolution cycle to remove any trace amount of polymer chains with poor pH response.
Characterization
Gel permeation chromatography (GPC) analysis was performed on a DAWN HELEOS system (Wyatt Technology, Santa Barbara, CA, USA). FTIR measurements were conducted in transmission mode using an FTS135 FTIR spectrophotometer (Bio-Rad, Hercules, CA, USA) under ambient conditions. All of the samples were ground, mixed with KBr and pressed into pellets. X-ray photoelectron spectroscopy (XPS) measurements were performed using a Kratos AMICUS system (Shimadzu, Japan) with Mg Kα radiation (12 kV) at a power of 180 W. The hydrodynamic size was measured by dynamic light scattering (DLS) at 25 °C using a Zetasizer Nano ZS (Malvern Instruments, Worcestershire, UK). The excitation light source was a 4 mW He-Ne laser at 633 nm, and the intensity of the scattered light was measured at 173°.
Protein extraction
All animal experiments were performed in compliance with the relevant regulations of the Beijing Proteome Research Center (BPRC) and were approved by the Committee for Animal Experiments of BPRC. Mouse brain tissue was excised and frozen in liquid nitrogen. The frozen brain tissue was ground and homogenized using a Polytron homogenizer in denaturing buffer containing 7 M guanidine-HCl, 10 mM EDTA and 0.5 M Tris-HCl (pH 8.5). Proteins were extracted by sonication of the homogenized brain tissue in ice-cold lysis buffer containing 50 mM ammonium bicarbonate (pH 8.2) and 8 M urea. After centrifugation at 12,000 g for 15 min at 10 °C, the supernatant was recovered and the concentration of the obtained protein extracts was determined with a Bradford assay.
Glycoprotein/glycopeptide enrichment
Typically, 10 mM sodium periodate was added to 100 μg of protein sample to oxidize the diols of the glycoproteins to aldehydes, and the sample was incubated at 4 °C in the dark for 0.5 h. After desalting, the oxidized sample was incubated with 0.1 mg of poly-(AA-co-hydrazide) in 100 μL of 25 mM ammonium bicarbonate (pH 6) for 1 h with agitation to allow aldehyde-hydrazide coupling. Next, 1% TFA was added to lower the pH to ≤2 and induce large-scale self-assembly and precipitation of the poly-(AA-co-hydrazide)-glycoprotein conjugates. The precipitated conjugates were recovered by gentle centrifugation in a micro-centrifuge. After removing the supernatant, 1% NH4OH was added to raise the pH to approximately 6 and re-dissolve the precipitates with gentle agitation. The re-dissolved conjugates were washed three times with 200 μL of 50% ACN containing 8 M urea and 1 M NaCl (pH 6), using the same low-pH-precipitation and high-pH-dissolution cycle for conjugate recovery, to remove non-specifically adsorbed non-glycoproteins. For the kinetics study and conversion rate determination of the glycoprotein enrichment reaction, the glycoprotein (RNase B) remaining in the supernatant after 5, 10, 30, 60, 120, 240, 360 and 480 min of coupling with poly-(AA-co-hydrazide) was quantified by measuring its UV absorption at 280 nm using a NanoDrop 2000c UV-Vis spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). For recovery evaluation of glycoprotein enrichment using asialofetuin, the enriched poly-(AA-co-hydrazide)-asialofetuin conjugates were dissolved in 50 μL of 25 mM ammonium bicarbonate (pH 8.0) containing 100 units of PNGase F and incubated at 37 °C overnight to release the glycoprotein. The released asialofetuin was analysed and quantified by SDS-PAGE and the results were compared with those of asialofetuin without enrichment.
For glycopeptide analysis, the poly-(AA-co-hydrazide)-glycoprotein conjugates were dissolved in 25 mL 25 mM ammonium bicarbonate containing 8 M urea for denaturation, followed by DTT reduction and IAA alkylation. After diluting the solution with 25 mM ammonium bicarbonate to reduce the urea concentration below 1 M, trypsin was introduced at a protein-to-trypsin ratio of 25:1 and the mixture was incubated at 37 °C for 16 h to digest the proteins into peptides. The non-glycopeptides generated by trypsin digestion were removed by repeated washing with 200 mL 50% ACN containing 8 M urea and 1 M NaCl (pH 6) using the same low-pH-precipitation and high-pH-dissolution cycle. The obtained poly-(AA-co-hydrazide)-glycopeptide conjugates were dissolved in 50 mL 25 mM ammonium bicarbonate (pH 8.0) containing PNGase F (100 units) and incubated at 37 °C overnight to release the glycopeptides. The obtained glycopeptides were freeze-dried and 1/3 of the re-dispersed sample was used for each LC-MS analysis.
MALDI-TOF-MS analysis
The enriched N-glycopeptides were re-dispersed in 5 mL CHCA solution (5 mg mL−1, 50% ACN, 0.1% TFA) and 1 mL of sample was spotted on the target plate and air dried. MALDI-TOF-MS analysis was performed using a 4800 MALDI-TOF-TOF analyzer (AB Sciex, USA) equipped with a Nd:YAG laser at an excitation wavelength of 355 nm. All the mass spectra (1000 laser shots for every spectrum) were acquired in positive reflection mode and analysed with Data Explorer (Version 4.5).
LC-MS analysis and data processing
The LC-MS/MS analysis was carried out on an Agilent 1100 nanoLC system coupled with a hybrid linear ion trap-7 T Fourier transform ion cyclotron resonance mass spectrometer (LTQ-FT MS). The spray voltage was set to 1.8 kV. All of the MS and MS2 spectra were acquired in data-dependent mode and the mass spectrometer was set to a full MS scan followed by ten data-dependent MS/MS scans. For data processing, all of the MS/MS spectra were searched against the UniProt database (version 201204206, 65 493 entries) using Protein Discoverer software (version 1.3, Thermo Scientific). Trypsin was chosen as the proteolytic enzyme and up to two missed cleavages were allowed. Carbamidomethyl (Cys) was set as the fixed modification and oxidation (Met) was set as the variable modification. The mass tolerance of the precursor ion was set to 20 ppm, that of the fragment ions was set to 0.8 Da and the peptide false discovery rate (FDR) was set to 1%. The localization of the N-glycosylation sites of the glycopeptides was determined by a mass shift of 0.984 Da on the N-X-S/T (X ≠ P) sequon after deamidation of the asparagine residue into aspartic acid by PNGase F de-glycosylation.
The Formal Framework for Collective Systems
Abstract: Automated reasoning is becoming crucial for information systems. Building one uniform decision support system has become too complicated; the natural approach is to divide the task and combine the results from different subsystems into one uniform answer. This is the basic idea behind the system approach, where one solution is a composition of multiple subsystems. In this paper, the main emphasis is on establishing a theoretical framework that combines various reasoning methods into a collective system. The formal abstraction of the system uses graph theory, and a discussion of possible aggregation function definitions is provided. The proposed framework is a tool for building and testing specific approaches rather than a solution in itself.
Introduction
There are various types of problems that modern expert systems aim to solve. Some of them are simple enough that one approach is sufficient to provide adequate solutions. However, there are a number of problems for which a combination of algorithms or methods is needed. In such a case, we employ a complex system that either chooses an appropriate approach based on the insights or combines the outcomes of various methods into a uniform one. In this paper, we focus on the latter, i.e., systems fitting into the category of collective intelligence. Among the many definitions of collective intelligence, two of the most often used are "the capacity of human collectives to engage in intellectual cooperation in order to create, innovate and invent" [1] and "groups of individuals acting collectively in ways that seem intelligent" [2]. A deductive system, in turn, is most commonly defined as "a set of rules R and axioms. Since axioms can be viewed as rules without premises, we assume that a deductive system is a set of rules and a procedure for derivation such that Γ ⊢ A if and only if A can be derived from Γ by rules R" [3].
The goal of collective intelligence is to provide either decision support or decision making systems. The difference between these two approaches is slight and relates to the reliability of the system. If the ultimate goal is to eliminate the human factor, then one of the options is to employ deductive systems. In that case, the reliability of the system derives from its proper configuration and the accuracy of the data provided to it. Since there is no single system that could produce an output for all types of problems, commonly a set of systems is used. In such a case, we might consider a complex system built from individual subsystems or a multi-agent system. However, utilising multiple deductive systems may lead to a situation where two or more of them return conflicting results. Such a situation is not unusual, since most deductive systems use sets of predefined rules to generate outputs. Those sets may not only have different numbers of elements but also differ in the individual rule definitions; thus, the systems lead to heterogeneous outputs. Another reason for the diversity of outputs is the fact that any human involvement creates the possibility for error to appear, and deductive systems are most commonly configured by humans. The number of rules depends on the time the designer can spend on their definition and the general aim of the system. In the case of a multitude of deductive systems, they will inevitably differ in details. Thus, for the sake of uniformity, a collective approach is introduced in this paper. The proposed solution utilises the model defined in [4], treating each of the deductive systems as a collective member.
The remainder of the paper comprises, first, a background section that presents state of the art in the field of deductive systems and collective intelligence. The next section introduces the formal definition of the collective structure for the deductive systems to utilise. Further, the properties of such combinations are distinguished and analysed. Then, we present a discussion on the proposed framework and conclude the paper.
Background
To the best of our knowledge, this paper is the first work strictly addressing collectives built from multiple deductive systems. It has to be stressed that the authors are aware of multi-agent systems; however, in most cases they do not consider using deductive methods for agent definition. Since our work connects both the collective intelligence field and deductive systems, we discuss related research in both areas.
Collective Study
Many authors have proven that approaches based on collectives are an effective method for forming accurate judgements in an uncertain environment [5][6][7]. However, it is hard to find a straightforward answer to the question "why does collective intelligence work?". One of the most popular answers is Surowiecki's explanation [8]. In his work, he proposed the following properties of a wise crowd [8]:
• Diversity: each agent should have some private information, even if it is just an eccentric interpretation of the known facts.
• Independence: agents' opinions are not determined by the opinions of those around them.
• Decentralisation of opinion: an agent can specialise in and draw on local knowledge.
• Aggregation: some mechanism exists for turning private judgements into a collective decision.
A multitude of research seems to confirm Surowiecki's work. Due to his background as a journalist, Surowiecki mainly focused on human crowds. Nevertheless, collective intelligence has proven its effectiveness for various types of agents, even artificial ones [9]. Therefore, it is used in many disciplines, e.g., in deep learning, where it is called ensembling [10]. The use of collectives allows us to achieve far more accurate results with the use of simple solutions.
For the sake of uniformity, Jodłowiec et al. [4] proposed a universal definition of a collective suitable for the identification of its features regardless of the implementation area. They assumed the collective to be a graph defined as a tuple:

C = (M, E, t)

where M is a set of collective members; E is a set of edges; and t is the collective target, which can be understood as either a pursued value or a quality.
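As an illustration, the tuple (M, E, t) can be sketched as a small data model. The class and field names below (Member, Edge, Collective) are our own illustrative choices, not part of the framework in [4]:

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    """A collective member (e.g., one deductive system)."""
    name: str

@dataclass
class Edge:
    """A directed relation between two members with an influence level in [0, 1]."""
    source: Member
    target: Member
    influence: float

@dataclass
class Collective:
    """The tuple (M, E, t): members, edges and a shared target."""
    members: list
    edges: list = field(default_factory=list)
    target: str = ""

# A two-member collective pursuing one shared target.
m1, m2 = Member("D1"), Member("D2")
c = Collective(members=[m1, m2], edges=[Edge(m1, m2, 0.5)], target="estimate-demand")
print(len(c.members))  # 2
```

The shared `target` string stands in for the common goal t that every member must pursue.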
There exist other approaches to the definition of collectives, such as [11][12][13]. However, they lack the flexibility of the aforementioned model. Thus, we decided to rely on the approach presented in [4].
The model used allows several types of measures describing the properties of a collective to be distinguished. Each of those metrics is important and provides information crucial for collective analysis. However, in the authors' opinion, the role of the aggregation function in the study of collective intelligence phenomena is highly underestimated.
The diversity of collective structures and aims forces the creation of a variety of aggregation function types. With respect to inputs and outputs, two kinds can be distinguished [14]: • Aggregation functions whose inputs are of the same type as their outputs; • Aggregation functions whose inputs are of a different type from their outputs.
Beliakov et al. [15] proposed another approach to aggregation function classification, distinguishing the following types:
• Averaging: an aggregation function f has averaging behaviour (or is averaging) if for every x it is bounded by min(x) ≤ f(x) ≤ max(x);
• Conjunctive: f has conjunctive behaviour (or is conjunctive) if for every x it is bounded by f(x) ≤ min(x);
• Disjunctive: f has disjunctive behaviour (or is disjunctive) if for every x it is bounded by f(x) ≥ max(x);
• Mixed: f is mixed if it does not belong to any of the above classes, i.e., it exhibits different types of behaviour on different parts of the domain [15].
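These behaviour classes can be checked empirically. The sketch below (our own illustration; the function `classify` is a hypothetical helper) samples random input vectors in [0, 1]^n and tests the bounds above:

```python
import math
import random

def classify(f, trials=1000, n=3, seed=0):
    """Empirically label f as averaging / conjunctive / disjunctive / mixed
    by testing the bounds min(x) <= f(x) <= max(x) on random inputs in [0,1]^n."""
    rng = random.Random(seed)
    averaging = conjunctive = disjunctive = True
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]
        y = f(x)
        if not (min(x) <= y <= max(x)):
            averaging = False
        if y > min(x):
            conjunctive = False
        if y < max(x):
            disjunctive = False
    if conjunctive:
        return "conjunctive"
    if disjunctive:
        return "disjunctive"
    if averaging:
        return "averaging"
    return "mixed"

print(classify(lambda x: sum(x) / len(x)))  # averaging
print(classify(math.prod))                  # conjunctive (product t-norm)
print(classify(max))                        # disjunctive
```

Random sampling only suggests a class rather than proving it, but it is a quick sanity check when designing a new aggregation function.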
If we consider aggregation functions in terms of the mathematical formulas used, the following families can be distinguished [15]: minimum and maximum; means and medians; ordered weighted averaging (OWA) functions; Choquet and Sugeno integrals; triangular norms and conorms; and mixed aggregation functions.
Minimum and maximum are the main aggregation functions used in fuzzy set theory and fuzzy logic [16]. This comes from the fact that they are the only two operations consistent with several set-theoretic properties, i.e., mutual distributivity. The standard definitions of minimum and maximum are:

min(x) = min_i x_i,  max(x) = max_i x_i

where x_i is a property of a collective member.
The next type of aggregation function is the mean and the median, with the arithmetic mean being the most popular one in use:

J = (1/N) Σ_{i=1}^{N} m_i

where J is the collective judgement; m_i is the judgement of collective member i; and N is the number of collective members.
It is a baseline for most of the other methods [17]. Research has shown that the unweighted average guarantees an outcome more accurate than the typical individual judgement [18,19]. Nevertheless, various authors have proposed enhancements, mainly by adding a weight to each opinion, whose value usually represents the certainty or trustworthiness of a collective member's opinion.
Thus, another type of aggregation function worth mentioning is the ordered weighted averaging (OWA) function [20]. OWA functions are also averaging aggregation functions, but they associate weights not with a particular input but with its value. They were introduced by Yager [20] and have since become very popular in the fuzzy sets community. OWA can be defined based on the sorted vector x_sorted as:

OWA_w(x) = Σ_{i=1}^{n} w_i x_i

where x_i is a subsequent value from the vector x_sorted containing individual judgements in non-increasing order x_1 ≥ x_2 ≥ ... ≥ x_n; and w_i is a subsequent value from the vector of weights w.
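A minimal sketch of an OWA function following the definition above (assuming the weights sum to one):

```python
def owa(x, w):
    """Ordered weighted average: weights attach to rank positions,
    not to particular inputs (Yager). Assumes sum(w) == 1."""
    xs = sorted(x, reverse=True)          # x_1 >= x_2 >= ... >= x_n
    return sum(wi * xi for wi, xi in zip(w, xs))

# With w = (1,0,...,0) OWA is the maximum; with w = (0,...,0,1) it is
# the minimum; with uniform weights it is the arithmetic mean.
x = [0.2, 0.9, 0.5]
print(owa(x, [1, 0, 0]))        # 0.9 (max)
print(owa(x, [0, 0, 1]))        # 0.2 (min)
print(owa(x, [1/3, 1/3, 1/3]))  # arithmetic mean
```

The three special weight vectors show how OWA interpolates between the minimum, the mean and the maximum.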
Choquet and Sugeno integrals are considered another aggregation function category [21][22][23]. They are mainly used when the estimation problem can be converted, by the use of fuzzy sets, into a Choquet or Sugeno problem [24]. The standard definition of the Choquet integral for the vector x_sorted is:

C_μ(x) = Σ_{i=1}^{n} (x_i − x_{i+1}) μ(H_i), with x_{n+1} = 0

where x_i and x_{i+1} are subsequent values from the vector x_sorted containing individuals' judgements in non-increasing order; H_i is the set of members holding the i largest judgements; and μ is a fuzzy measure.
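A sketch of the discrete Choquet integral under the common formulation in which the fuzzy measure μ is evaluated on coalitions of members ordered by their judgements (the helper names are ours):

```python
def choquet(x, mu):
    """Discrete Choquet integral of x w.r.t. a fuzzy measure mu.
    mu maps frozensets of indices to [0, 1]; mu(empty) = 0, mu(all) = 1,
    and mu must be monotone with respect to set inclusion."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # ascending by value
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])           # members with values >= x[i]
        total += (x[i] - prev) * mu(coalition)
        prev = x[i]
    return total

# With an additive measure the Choquet integral reduces to a weighted mean.
w = [0.2, 0.5, 0.3]
mu = lambda s: sum(w[i] for i in s)
print(choquet([0.4, 0.8, 0.6], mu))  # 0.2*0.4 + 0.5*0.8 + 0.3*0.6 = 0.66
```

With a non-additive μ the integral can model interaction between members, which is precisely what plain weighted means cannot express.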
Conjunctive and disjunctive functions are the so-called triangular norms and conorms, respectively (t-norms and t-conorms) [25]. An example of a conjunctive extended aggregation function is the product:

f(x) = Π_{i=1}^{n} x_i

where x_i is the i-th collective member's judgement and n is the number of collective members.
Last, but not least, is the mixed aggregation function. An example of such a function is the 3-Π function defined after [26]:

f(x) = Π_{i=1}^{n} x_i / (Π_{i=1}^{n} x_i + Π_{i=1}^{n} (1 − x_i))

where x_i is the i-th collective member's judgement and n is the number of collective members.
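The product t-norm, its dual t-conorm and the 3-Π function can be sketched as follows; the inline examples illustrate the conjunctive and disjunctive regions of the mixed function:

```python
import math

def t_norm_product(x):
    """Conjunctive: f(x) = prod(x_i) <= min(x) for inputs in [0, 1]."""
    return math.prod(x)

def t_conorm_prob_sum(x):
    """Disjunctive dual: f(x) = 1 - prod(1 - x_i) >= max(x)."""
    return 1 - math.prod(1 - xi for xi in x)

def three_pi(x):
    """Mixed '3-Pi' function: conjunctive for low inputs, disjunctive for high ones."""
    p, q = math.prod(x), math.prod(1 - xi for xi in x)
    return p / (p + q)

print(three_pi([0.9, 0.9]))  # 0.81/(0.81+0.01): above max(x), disjunctive region
print(three_pi([0.1, 0.1]))  # 0.01/(0.01+0.81): below min(x), conjunctive region
```

Two confident high judgements reinforce each other (the result exceeds either input), while two low ones reinforce downwards, which is exactly the mixed behaviour described above.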
One further idea rests on the assumption that outliers may affect the result too heavily [27]. Methods of this kind aim at removing the appropriate (unneeded) individual values. Let us take the median absolute deviation (MAD) filtering method as an example, where a judgement x_k is considered an unneeded outlier, and is removed from the set, iff:

|x_k − median(x)| > t · median_h |x_h − median(x)|

where t is a parameter that controls the sensitivity of the trimming, and k, h are dummy variables. Following the filtering, usually one of the above-mentioned methods is applied, e.g., the average aggregation function. However, any filtering has the potential to ignore strong dissenting voices. This proved to be a problem in situations of groupthink, where the process of forming a collective judgement neglects well-justified outlier opinions and is biased towards a consensus judgement, irrespective of the evidence for that judgement [14].
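A sketch of MAD filtering followed by simple averaging (the threshold t = 3 is an illustrative choice, not prescribed by the text):

```python
import statistics

def mad_filter(judgements, t=3.0):
    """Remove judgement x_k iff |x_k - median| > t * MAD, where
    MAD = median_h |x_h - median| (median absolute deviation)."""
    med = statistics.median(judgements)
    mad = statistics.median(abs(x - med) for x in judgements)
    if mad == 0:                       # degenerate case: all judgements identical
        return list(judgements)
    return [x for x in judgements if abs(x - med) <= t * mad]

judgements = [10.1, 9.8, 10.3, 10.0, 42.0]   # one wild outlier
kept = mad_filter(judgements)
print(kept)                                   # outlier removed
print(sum(kept) / len(kept))                  # trimmed mean
```

The trimmed mean sits near 10 instead of being dragged towards 42, which is the point of the filter; the groupthink caveat above applies if the "outlier" was actually right.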
Deductive Systems
Automated reasoning is one of the most promising research fields in computer science. The multitude of available approaches aim at solving a straightforward issue, i.e., how to draw conclusions from the information and data available at the moment. One class of solutions uses a black-box architecture, where the exact execution rules are independent of, and often unknown to, the operator. This approach is widespread in tasks such as automated classification, screening or prediction in image, text, sound or video processing. In general, the black-box approach is a mixture of artificial neural networks, genetic algorithms and other nondeterministic techniques. In most cases, a black-box system can be deployed quickly; however, it needs sufficient training data or periodic fine-tuning. While such systems prove computationally efficient, they almost never reach complete reliability.
Contrary to the nondeterministic approach, some systems aim at reliability in the first place. They utilise, among others, parts of the classical mathematical apparatus, i.e., deduction and induction. In this paper, we focus on the former, with particular emphasis on deductive systems, since they are one of the primary solutions for many problems, e.g., program synthesis [28]. The most basic definition sets the deductive system as "a set of rules R and axioms. Since axioms can be viewed as rules without premises, we assume that a deductive system is a set of rules and a procedure for derivation such that Γ ⊢ A if and only if A can be derived from Γ by rules R" [3].
The definition mentioned above allows the creation of multiple systems with the same set of premises. Thus, we may assume that, given a number of solutions defined with the same set of assumptions but with orthogonal sets of rules, we will receive heterogeneous outcomes. That raises a new problem, i.e., how to measure the reliability or accuracy of the outcome. Since we are focused on deductive systems, it is safe to assume that the accuracy of such a system relies on its complexity. In perfect conditions, the complexity is irrelevant, but in real life we need to take the constraints into account: maximum expected answer time, minimum accuracy, etc. Thus, taking accuracy as the primary evaluation function, we aim at producing a system which provides an accurate output with adequate resource (time) consumption. Sets of rules used in deductive systems are mainly defined by human operators rather than automatically generated. With complex tasks, it is virtually impossible to test the system against all possible situations (inputs). Thus, we need to assume that even if the system returns a correct value, it might not be the most accurate one. Instead of solving the accuracy issue by introducing more rules into the system, in this article we focus on an approach that solves the problem by combining responses from various deductive systems into a uniform one. Our solution uses a collective intelligence approach, in particular the model defined in [4], which utilises graph theory to describe the complexity of the collective structure.
Method
In this research, we focused on the two main steps needed to adopt the model defined in [4]. First, we checked whether the node definition is suitable to represent individual deductive systems. Since the framework is flexible, we are not limited to any specific system, either deductive or not. Secondly, we defined the aggregation principles to use with multiple systems and combined them into a uniform system. In the following sections, we present those steps in detail.
A Deductive System as a Collective Member
The definition of the collective member presented in [4] is a complex one. First of all, we need to make sure that a set of members, e.g., deductive systems, is a collective. That means all of its members should share the same target, as defined in Equations (14) and (15):

M = {m_1, m_2, ..., m_n} (14)

∀ i ∈ {1, ..., n} : target(m_i) = t (15)

The target might be understood as either a pursued value or a quality of any sort; it is only crucial that we can evaluate reaching it. Here, M is a set of collective members; m_i is a subsequent collective member; t is a target; i is a collective member number; and n is the number of collective members.
Once we ensure that the set of deductive systems is a collective, we may investigate their characteristics. Each collective member has an assigned type, defined in (16) as a tuple σ(M):

σ(M) = (a_1, a_2, ..., a_man) (16)

where a_i ∈ MA is an attribute characterised by a name and a type; i is the index of an attribute; man is the number of attributes; and MA is the set of all attributes.
Each collective member m ∈ M can thus be understood as a tuple of values v defined in (19); each value v ∈ V of collective member m ∈ M corresponds to the appropriate attribute a ∈ MA of σ(M) (the MemberValue mapping):

m = (v_1, v_2, ..., v_man) (19)

Considering a collective built from deductive systems, we identify the minimum set of attributes characterising them. In our study, we assumed that it is sufficient to define:
• A set of input values input;
• An output value output;
• A confidence factor CF.
Depending on the approach taken, input can be defined as one attribute a or represented by several attributes a 1 , . . . , a n . We believe that the internal member's rules should not be part of its description in the collective definition, since they are part of the deductive system configuration. The presented approach does not focus on the reasoning of individual members but rather on their aggregated outcome.
The common target is not the only special thing about the collective; it is the set of relationships among members that makes it unique. If we had only a set of unrelated members, we could investigate any kind of aggregation but only look for the statistical significance of their outcomes. Following the model from [4], the authors use the definition of relations between collective members as graph edges.
e = (m_x, m_y, rel, infl, ep), E = {e_1, ..., e_en}

where e ∈ E is an edge connecting members m_x, m_y ∈ M; en = |E| is the number of edges; rel ∈ REL is the kind of relation and REL is a set of kinds of relations; infl ∈ [0, 1] is the level of influence member m_x has on member m_y; and ep ∈ EP is an edge property. The definitions mentioned above apply to all possible collectives, but for the sake of this article, we narrow them down. The first and most obvious move is to put a constraint on the member set, allowing it to contain only deductive systems.
M ⊆ DS

where DS is the set of deductive systems.
Limiting the collective member set does not change the obligation to have a common target. In the case of a system built up from deductive systems, we assume this target to be providing a uniform answer to the query. As for the attributes, we take into account only those that might give any insight into the creation of a shared collective response; it might be wise to focus on the input each member uses for generating the response. Each deductive system is independent in its decision making. However, it can take into account the outcomes from other connected systems. This is not yet aggregation, but rather part of the individual decision making process of each member. Probably the most important aspect of collective building is the possibility to define the aggregation function based on the relations between members. As stated in (21) and (22), each edge representing the connection between two members carries information on the level of influence and the type of relationship. That information is crucial for setting up the whole system: it not only affects the analysis of individual outcomes but also allows a better definition of the aggregation function.
Collective Decision Making
The goal of the presented framework is to deliver a way of solving problems when several deductive systems infer a heterogeneous outcome. We have shown that the chosen model is capable of describing the complexity of the collective structure. The next step is to define the proper aggregation approach. Let us define the aggregation function:

y = agg(x)

where y is the collective decision for a given vector of members' decisions x; and x is the vector of members' decisions.
In this proposal, the authors strive to present a universal solution. Thus, no universal aggregation function is shown, but rather an approach to defining one for a specific case. The authors focus on three baseline families of functions: the average, the weighted average and the combined aggregation function.
Average Aggregation Function
The first type of aggregation neglects any connections between collective members, so each member's deduction equally influences the general outcome. We follow the arithmetic average function defined in (7).
Since this function does not use any information concerning relationships, we might argue it is not a full-fledged aggregation. However, such a simplification is useful for, e.g., prototyping or reducing processing time. It is particularly relevant for highly complex systems, where the proper definition of more sophisticated methods might be time-demanding. There is no simple guide to when to use this simplest approach; however, we can assume that a large number of members, a dense relationship network and low centralisation are pre-requirements.
Weighted Average Aggregation Function
Another approach involves distinguishing member opinions by a chosen factor. Unlike the simple arithmetic average, the weighted one (26) assumes that the outcome of every member has a measurable and diverse impact on the final decision of the collective:

y = Σ_{i=1}^{n} w_i x_i / Σ_{i=1}^{n} w_i (26)

where x_i is the decision of the i-th member; n is the number of members; and w_i is the weight for the i-th deductive system. This method is quite simple once we know how to obtain the vector of weights; the definition of the latter, however, is the fundamental difficulty. The values of the vector can be calculated based on the members' attributes or the properties of relationships, e.g.:
• Confidence in the inferred answer;
• Influence of interconnected deductive systems, calculated based on the incoming and outgoing edges to/from the node;
• Respect from other deductive systems, calculated as an average of the weights assigned by the deductive systems to the output generated by a given system.
Generation of the weight vector might be tedious work involving many repetitions and fine-tuning. The standard procedure would include setting the initial vector and comparing the results obtained from aggregating collective members' opinions against the expected outcome. The process might rely on expert knowledge or on any automated technique.
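A minimal sketch of the weighted average (26), here using each member's confidence factor CF as its weight (an illustrative choice among the options listed above):

```python
def weighted_aggregate(decisions, weights):
    """Weighted average: each member's decision is scaled by its weight,
    e.g. the confidence factor CF reported by that deductive system."""
    total_w = sum(weights)
    if total_w == 0:
        raise ValueError("at least one member must carry nonzero weight")
    return sum(w * x for w, x in zip(weights, decisions)) / total_w

# Three systems answer the same query; weights taken from their confidence.
decisions  = [12.0, 15.0, 14.0]
confidence = [0.9, 0.3, 0.6]
print(weighted_aggregate(decisions, confidence))  # pulled toward the confident system
```

The result lands closer to 12 than the plain mean (13.67) would, because the most confident member dominates the sum.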
Combined Aggregation Function
The proposed framework introduces a more sophisticated solution, namely the combined aggregation function. The idea is simple and relies on the usage of various measures to create a single collective response. The functions and properties to combine include averages, centralisation measures, distributions of properties or members, etc. An example of combined aggregation is:

y = (1/n) Σ_{i=1}^{n} x_i if C(x) < f, and y = Σ_{i=1}^{n} w_i x_i / Σ_{i=1}^{n} w_i otherwise (27)

where x_i is the decision of the i-th member; w_i is the weight for the i-th deductive system; n is the number of collective members; and C(x) is a centralisation measure [29] calculated as Equation (28):

C(x) = (S_max − S_avg) / S_max if S_max > 0, otherwise 0 (28)

where S_max is the maximum number of edges connected to any of the nodes in G; S_avg is the average number of edges connected to the nodes in G; and f is a centralisation threshold for a given collective. In a case where centralisation is low (lower than the given value f), each node can be treated equally; otherwise, some weighting should be introduced.
It is not only possible to use a global centralisation function; we can also rely on node-level centralisation in the collective prediction, as below:

y = Σ_{i=1}^{n} C(x_i) m_i / Σ_{i=1}^{n} C(x_i) (29)

where C(x_i) is the centralisation value for the i-th collective member and m_i is the decision of the i-th member.
The aforementioned centralisation measures are not the only possible solutions to use. Having the complexity of collective description, we can choose from a variety of functions either to define the weight system or to introduce a discrimination factor for the aggregation. Those functions can rely on members' attributes or properties of the collective as such.
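One plausible reading of the combined rule is sketched below; the concrete form of the centralisation measure ((S_max − S_avg)/S_max) and the threshold switch are our assumptions pieced together from the description of Equations (27) and (28):

```python
def centralisation(degrees):
    """Degree-based centralisation: (S_max - S_avg) / S_max, where S_max is
    the largest number of edges on any node and S_avg the average (assumed form)."""
    s_max = max(degrees)
    if s_max == 0:
        return 0.0
    return (s_max - sum(degrees) / len(degrees)) / s_max

def combined_aggregate(decisions, weights, degrees, f=0.25):
    """If centralisation is low (< f), treat every node equally (plain mean);
    otherwise fall back to the weighted average."""
    if centralisation(degrees) < f:
        return sum(decisions) / len(decisions)
    return sum(w * x for w, x in zip(weights, decisions)) / sum(weights)

decisions, weights = [10.0, 14.0, 12.0], [0.7, 0.1, 0.2]
print(combined_aggregate(decisions, weights, degrees=[2, 2, 2]))  # flat graph: mean
print(combined_aggregate(decisions, weights, degrees=[4, 1, 1]))  # hub: weighted
```

A flat relationship network falls through to the plain mean, while a hub-and-spoke structure triggers the weighted variant, matching the threshold behaviour described above.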
One of the critical factors that we need to take into account is the idea of joint answer creation in the collective of deductive systems. At one point, we could stop at a simple calculation of the answer using any arithmetic aggregation function. However, we assume that each member of the collective can use the responses of interconnected systems as input for its own calculations. Thus, we seek the state in which the collective stabilises and provides a countable output. What we need to take into account is the so-called "butterfly effect", where a small change in one system can significantly influence the response of another [30]. With infinite repetitions, this effect might be even more disastrous, causing an inability to reach consensus. To prevent such a situation, the proposed framework introduces a method based on two conditions, along with parameters that allow for its customisation, namely:
• q, a quantified value representing the smallest change in response accepted by the system;
• m, a quantified value representing the number of iterations resulting in the same outcome.
The method aims at stopping the concluding process before it gets out of the balance [31]. The first condition uses the individual deductive system outcome and checks if subsequent repetitions alter their values. The iteration of the procedure seeking the collective's consensus should stop once it meets the condition (30).
|m_i(x) − m_{i−1}(x)| < q (30)

where x is a collective member; m_i(x) is the decision of member x in iteration i; and q is the value of the stop condition.
With this definition, it is easy to connect the value of q with the overall accuracy of the collective. The accuracy of a system returning numerical outcomes is defined in terms of A, the accuracy of the system; J, the value concluded by the collective (the collective's prediction); and J*, the real value of the target information.
The second condition takes into consideration the outcome of the system as a whole. The parameter m limits the iteration based on the value of the aggregation function. At first, when the process of estimating the collective outcome starts, there is no simple way of stopping the iteration. One approach would be to stop it after a fixed number of repetitions. However, this number might be different for various systems. In the proposed method, the authors recommend relying on more sophisticated conditions. The idea is to investigate the subsequent outcomes of the aggregation function and count those that return the same value. If this number reaches the limit set by m, then it is assumed that the system reached the consensus. The only tricky part lies in checking the equality of outcomes. In this case, we can use the parameter q. It sets the margin of error and defines the accuracy of the system.
For the sake of uniformity, the proposed solution applies not only to systems focused on numeric operations. In the case of quality-based aggregation, we can omit the first condition and rely only on the second one.
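The two stopping conditions can be sketched as a single iteration driver; the step function, the plain-mean aggregation and all names are illustrative assumptions:

```python
def run_until_stable(step, x0, q=1e-3, m=3, max_iter=1000):
    """Iterate members' decisions until either (a) every member's decision
    changes by less than q between consecutive rounds, or (b) the aggregated
    outcome repeats within q for m consecutive rounds."""
    agg = lambda xs: sum(xs) / len(xs)      # plain-mean aggregation (illustrative)
    x, last_agg, same = list(x0), None, 0
    for it in range(max_iter):
        nxt = step(x)
        if all(abs(a - b) < q for a, b in zip(nxt, x)):
            return nxt, it + 1              # condition (a): members stabilised
        a = agg(nxt)
        if last_agg is not None and abs(a - last_agg) < q:
            same += 1
            if same >= m:
                return nxt, it + 1          # condition (b): aggregate stabilised
        else:
            same = 0
        last_agg = a
        x = nxt
    return x, max_iter

# Two coupled systems, each replacing its value with the pair's average.
step = lambda x: [(x[0] + x[1]) / 2, (x[0] + x[1]) / 2]
final, iters = run_until_stable(step, [0.0, 10.0])
print(final, iters)  # [5.0, 5.0] after 2 rounds
```

Condition (b) also covers quality-based systems, where only the aggregated outcome, not the individual numeric changes, can be compared between rounds.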
Example
Let us consider the example shown in Figure 1 to clarify the aforementioned idea. The example comprises three deductive systems, D1, D2 and D3, and four sources of information, S1, S2, S3 and S4. Each deductive system has access to exactly two information sources; however, the sources differ for each of them. Moreover, D1 has access to the output of D3, and D3 uses the output of D1. At first, D1 and D3 use only the information provided by the sources. In the next iterations, both deductive systems take each other's predictions into account. The systems return the numeric values v1, v2 and v3 from D1, D2 and D3, respectively. Such a configuration gives the systems orthogonal views of the current state. Since D1 and D3 might return unequal values and use each other's output for computation, repetition is needed until they reach consensus. Thus, the system conducts multiple iterations for D1 and D3 to stabilise, until one of the stopping conditions defined in Section 3.2.3 is triggered. Meeting the stopping conditions does not guarantee that the values returned by the deductive systems are the same; nevertheless, the system is expected to return one value. Thus, the framework introduces the use of aggregation functions, with which we can determine the response r. In this example, we use a simple average aggregation function, so the response r of the whole system equals:

r = (v1 + v2 + v3) / 3

Depending on the application, the aggregation function might vary; thus, the resulting value r could also be different. Additionally, the introduction of an additional information source may lead to a different response from any system. Therefore, even deterministic solutions such as deductive systems may ultimately not act deterministically at all.
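The Figure 1 scenario can be simulated under invented source readings and simple averaging rules for D1, D2 and D3 (none of the concrete numbers or update rules below come from the paper):

```python
# Hypothetical numeric readings for the four sources (not given in the paper):
S1, S2, S3, S4 = 4.0, 6.0, 8.0, 10.0

def D1(prev_d3=None):
    """D1 averages S1 and S2; from the second round on it also mixes in D3's answer."""
    base = (S1 + S2) / 2
    return base if prev_d3 is None else (base + prev_d3) / 2

def D2():
    """D2 averages S2 and S3 and ignores the other systems."""
    return (S2 + S3) / 2

def D3(prev_d1=None):
    """D3 averages S3 and S4; from the second round on it also mixes in D1's answer."""
    base = (S3 + S4) / 2
    return base if prev_d1 is None else (base + prev_d1) / 2

q = 1e-6                        # smallest accepted change (stop condition)
v1, v3 = D1(), D3()             # first round: sources only
while True:                     # iterate D1 <-> D3 until consensus
    n1, n3 = D1(v3), D3(v1)
    stable = abs(n1 - v1) < q and abs(n3 - v3) < q
    v1, v3 = n1, n3
    if stable:
        break

r = (v1 + D2() + v3) / 3        # simple average aggregation over D1, D2, D3
print(v1, D2(), v3, r)          # v1 -> 19/3, v3 -> 23/3, r -> 7.0
```

Because each update halves the distance to the fixed point, the D1/D3 loop converges geometrically and the q-based stop condition fires after a few dozen rounds at most.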
Discussion
The theoretical framework introduced in the previous section is based on the collective model from [4]. It builds on strong foundations and presents a universal model for a complex expert system. The approach is holistic, with a minimal number of constraints on implementation; therefore, universality and flexibility are the most significant advantages of the proposed solution. The authors also elaborated on the "butterfly effect", suggesting an appropriate response to it. The proposed approach is fully decentralised. Therefore, it does not suffer from problems that may occur with centralised solutions, e.g., difficulties with resource allocation planning, which quickly reach a point where the design of satisfying solutions becomes too complicated. Another advantage of the proposed approach is easy scaling.
At first glance, the proposed approach is similar to the well-known multi-agent method. In a multi-agent system (MAS) [32], agents are computational abstractions encapsulating control along with a criterion to drive that control (a task or goal). A MAS collects agents interacting (communicating, coordinating, competing, cooperating) in a computational system. In a multi-agent system, individual agents contribute to some part of the system through their private actions. Since competition is part of the core conception of multi-agent systems, there is a risk that agents in the system work at cross-purposes. For example, agents can reach sub-optimal solutions by competing for scarce resources or distributing tasks inefficiently, as they only consider their own goals. The most significant difference between the proposed solution and multi-agent systems is that each collective member solves the whole problem, not only a small part of it. The collective intelligence framework aims to promote agents' actions that lead to increasingly influential emergent behaviour of the collective, while discouraging agents from working at cross-purposes. This is a significant advantage of the proposed solution.
The universal character of the system derives from the possibility of modelling various types of systems. A solution designer must make two crucial decisions. The first concerns communication among the defined deductive systems: whether it is possible at all and, if the systems can exchange information (especially regarding answers), what degree and types of communication are allowed. The second is the choice of the aggregation function and, later, its thorough definition. The flexibility of the framework also lies in the fact that the aggregation can itself be another deductive system. The use of graph theory makes the solution intuitive; it allows the creation of complex structures comprising many individual systems that remain easy to comprehend. The authors are aware that such a universal and flexible approach might be a source of undiscovered issues; therefore, further research involving the implementation of various design patterns is needed.
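The architecture just described can be sketched in a few lines of code (a hypothetical illustration, not the paper's formalism; the class and function names below are our own): each member of the collective answers the whole query, and a separate aggregation function, which the framework allows to be another deductive system, merges the heterogeneous answers.

```python
from collections import Counter
from typing import Callable, List

class DeductiveSystem:
    """One member of the collective; it solves the WHOLE problem."""
    def __init__(self, rule: Callable[[str], str]):
        self.rule = rule

    def answer(self, query: str) -> str:
        return self.rule(query)

def majority_aggregation(answers: List[str]) -> str:
    """One possible aggregation function: plurality vote over answers."""
    return Counter(answers).most_common(1)[0][0]

class Collective:
    """A collective: members plus an aggregation of their answers."""
    def __init__(self, members: List[DeductiveSystem],
                 aggregate: Callable[[List[str]], str]):
        self.members = members
        self.aggregate = aggregate  # could itself wrap another DeductiveSystem

    def decide(self, query: str) -> str:
        # Every member answers the full query, not a sub-task of it.
        return self.aggregate([m.answer(query) for m in self.members])

collective = Collective(
    [DeductiveSystem(lambda q: "yes"),
     DeductiveSystem(lambda q: "yes"),
     DeductiveSystem(lambda q: "no")],
    majority_aggregation,
)
print(collective.decide("is the claim entailed?"))  # -> yes
```

Swapping `majority_aggregation` for a different function (or a wrapper around another `DeductiveSystem`) changes the collective's behaviour without touching its members, which is the interchangeability the conclusions emphasise.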
Conclusions
The paper introduces a novel approach to the definition of collectives comprising deductive systems. The theoretical framework presents a solution to collaborative decision making, mainly for the case when individual peers give heterogeneous answers. Minimal constraints allow a flexible and universal design of the collective; however, it is essential to define intentionally and reasonably the degree of inter-system communication and the type of aggregation function. Since the design promotes the exchange of information among peers, which any of them can use as input for their operation, the authors introduced a sophisticated stop mechanism. The proposed solution mainly aims to simplify the description of compound solutions and thus allows for better interchangeability.
Future work will mainly focus on providing various design patterns for collective systems utilising the proposed framework and proving its effectiveness.
Stable Superhydrophobic and Antimicrobial ZnO/Polytetrafluoroethylene Films via Radio Frequency (RF) Magnetron Sputtering
In this study, superhydrophobic ZnO/polytetrafluoroethylene (ZnO/PTFE) films with water droplet contact angles (CA) as high as 165° and water droplet sliding angles (SA) of <1° were prepared on glass substrates by RF magnetron sputtering. The PTFE was wrapped around the nanorods of a ZnO film, yielding superhydrophobic properties while providing excellent UV resistance compared with hexadecyltrimethoxysilane (HDTMS) hydrophobic agents. Only the upper surface of the rough ZnO film was coated with PTFE; most of the underlying coating was bare ZnO, which could make good contact with bacteria. For the Gram-negative strain E. coli, the cell viability count of the ZnO/PTFE sample (3.5 log reduction, 99.96%) was conspicuously lower than that of the ZnO/HDTMS sample (1.2 log reduction, 93.87%) under 1 h of UV illumination, showing that the ZnO/PTFE sample has a better photocatalytic property than the ZnO/HDTMS films. The ZnO/PTFE films also showed good mechanical robustness, an important consideration for their widespread real-world adoption.
Introduction
Bacterial biofilms are a serious threat to human health and can cause secondary contamination of products during their transportation, storage, sale and use [1]. Bacterial infections lead to the deaths of a large number of patients worldwide every year [2][3][4]. The use of antibacterial products is considered an effective way to prevent microbial harm and can reduce people's chances of disease and their medical expenses [5,6].
At present, there are two main research directions for antibacterial materials: one is to inhibit the formation of bacterial biofilms on the surface, and the other is to kill the bacteria present on the coating [7][8][9]. Superhydrophobic surfaces (SHS), showing low bacterial adhesion due to their self-cleaning performance, have been considered a promising strategy to limit bacterial attachment and subsequent biofilm formation [10][11][12][13]. Many methods for preparing superhydrophobic coatings have been reported in recent years, mainly of two types: changing the roughness to obtain superhydrophobicity, or reducing the surface energy through chemical modification. In our earlier work, nanostructured ZnO/HDTMS coatings with excellent superhydrophobic properties were successfully prepared by radio frequency magnetron sputtering, which has the advantages of convenience, good controllability of parameters, and easy mass production. However, Hwang et al. [14] showed that the anti-adhesion activity of superhydrophobic surfaces is short-lived and that their rough nature may actually enhance bacterial colonization over the longer term, which is detrimental to their use in healthcare or food-preparation environments.
The photocatalytic method is widely used in the field of sterilization due to its environmental protection and high efficiency [15][16][17]. Some metal elements such as silver, titanium, and zinc can absorb ultraviolet light to activate oxygen in the air or water to produce hydroxyl radicals and reactive oxygen ions that react with bacterial cells, destroying their normal structure and thereby causing them to die or lose their ability to proliferate [18][19][20][21]. The coatings, when combined with superhydrophobic and photocatalytic properties, can effectively reduce bacterial adhesion and kill adherent bacteria, but the hydrophobic agent could be simultaneously decomposed under photocatalysis. This has also been verified in this paper, and the hydrophobicity of ZnO/HDTMS decreased significantly after a few hours of exposure to UV light.
In previous research reports, almost all micro-/nano-structured surfaces of superhydrophobic antibacterial coatings have a low surface energy. Few studies have prepared coatings that are superhydrophobic, with low surface energy on the upper surface yet superhydrophilic inside, or have studied the antibacterial properties of such coatings. Because DC magnetron sputtering requires the target, which acts as an electrode, to be conductive, it is limited to metal targets or non-metallic targets with a resistivity within a certain range; RF magnetron sputtering has no such requirement and was therefore chosen to prepare the PTFE thin film. In this paper, ZnO, owing to its photocatalytic property, was used to construct the rough rod-like nanostructure of the superhydrophobic coating by RF magnetron sputtering, and PTFE, owing to its low surface energy (~18 mN/m) and excellent UV resistance, was coated onto the nanorods on the upper surface of the rough ZnO coating so that the entire ZnO/PTFE coating had excellent superhydrophobic properties. The vast majority of the underlying coating is still ZnO, which allowed it to maintain its own photocatalytic properties.
Materials
The Zn targets (99.99% purity, 60 mm diameter, 5 mm thickness) and the PTFE targets (60 mm diameter, 5 mm thickness) were purchased from Beijing HeZong Science & Technology Co., Ltd., Beijing, China. Absolute ethanol (99.5%) was obtained from Chengdu KeLong Chemical Co., Ltd., Chengdu, China. Deionized water (15.6 MΩ·cm) was used to clean the substrates. Standard microscope glass slides and the sand grains used to test the wear resistance were purchased from VWR International, Inc., Radnor, PA, USA.
Fabrication
Microscope glass slides were ultrasonically cleaned in absolute ethanol and deionized water for 10 min each and dried for 30 min in a drying oven at 90 °C before use as substrates. Zinc coatings were prepared by RF magnetron sputtering (JPGF-480, Shenyang Scientific Instruments Co., Ltd., Shenyang, China) of a Zn target. The glass slides were fixed in the deposition chamber at a distance of 10 cm from the target. The chamber was pumped to vacuum (5 × 10⁻³ Pa) before introducing argon gas. Films were deposited at a constant sputtering power (120 W) under an Ar atmosphere at 1 Pa for 15 min. The deposited films were then annealed to ZnO in a muffle furnace at 400 °C for 30 min. Upon cooling to ambient temperature, the ZnO films were fixed in the deposition chamber at a distance of 10 cm from the PTFE target. The chamber was again pumped to vacuum (5 × 10⁻³ Pa) before introducing argon gas, and the ZnO films were coated with PTFE for 2 min by RF magnetron sputtering (120 W, 1 Pa), finally yielding superhydrophobic ZnO/PTFE surfaces. The process for the fabrication of the ZnO/HDTMS sample can be found in our previous work [22].
Characterization
The wettability of the samples was evaluated using an optical contact angle meter (Drop Meter A-100P, MAIST Vision Inspection & Measurement Co., Ltd., Ningbo, China). The surface morphology and elemental composition of the superhydrophobic samples were observed by a field emission scanning electron microscope (FESEM, S-3400N, Hitachi Ltd., Tokyo, Japan) equipped with energy dispersive X-ray spectroscopy (EDS), for which samples were prepared by sputtering a thin layer of Au onto the surface. ATR-FTIR measurements were taken over a range of 700–4000 cm⁻¹ using a Perkin-Elmer Spectrum-100 (Ge crystal) equipped with a universal ATR attachment.
UV Activated Antimicrobial Test
Both Gram-positive and Gram-negative bacteria were used to assess the antimicrobial activity of the material. The protocol was adapted from that of Macdonald et al. [23]. For each test, one bacterial colony was incubated in brain heart infusion broth (BHI, Oxoid) at 37 °C with a shaking speed of 200 rpm for 18 h. The pellet was recovered by centrifugation (5000 rpm for 5 min), washed with sterilized phosphate-buffered saline (PBS, 10 mL) and re-suspended in PBS solution (10 mL); the washing process was repeated three times. The bacterial suspension was diluted 1000-fold by putting a 10 µL aliquot into 10 mL of fresh PBS to obtain an initial inoculum of approx. 10⁵–10⁶ CFU/mL. A total of 15 µL of the initial inoculum was placed on top of each specimen and covered with a sterile cover slip (22 mm × 22 mm) (VWR). The specimen was then irradiated with UV light (UVItec LI-208.BL, 2 × 8 W, 365 nm, ~0.16 mW/cm²) for up to an hour. A further set of samples was maintained in the dark for the same period as the UV irradiation. Post irradiation, each sample system was added to sterilized PBS (450 µL) and vortexed (30 s). The neat suspension (450 µL) was diluted stepwise up to 100-fold. Each ten-fold serial dilution (100 µL) was placed onto an appropriate agar (MacConkey agar for E. coli and Mannitol Salt agar for S. aureus) for viable counts. The plates were incubated aerobically at 37 °C for 24 h (E. coli) or 48 h (S. aureus). Each sample type contained two technical replicates and each test was reproduced three times.
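The dilution arithmetic in this protocol can be sanity-checked with a short sketch (our illustration, with hypothetical helper names; not the authors' code):

```python
def dilution_factor(aliquot_ul: float, diluent_ml: float) -> float:
    """Fold-dilution when an aliquot (uL) is added to a diluent volume (mL)."""
    return (diluent_ml * 1000.0 + aliquot_ul) / aliquot_ul

def cfu_per_ml(colonies: int, plated_ul: float, cumulative_dilution: float) -> float:
    """Viable count back-calculated from a plate at a known dilution."""
    return colonies / (plated_ul / 1000.0) * cumulative_dilution

# 10 uL into 10 mL of PBS is ~1000-fold, as stated in the protocol
print(round(dilution_factor(10, 10)))  # -> 1001 (~1000-fold)

# e.g. 50 colonies from 100 uL plated at a cumulative 10^3 dilution
print(cfu_per_ml(50, 100, 1000))       # -> 500000.0 CFU/mL
```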
Chemical Composition
Thin films of ZnO/PTFE were deposited on microscope slides via RF magnetron sputtering with Zn and PTFE targets. The films covered 100% of the substrate and were well adhered to the glass. Figure 1a,b show the EDS spectra of the ZnO and ZnO/PTFE films; the atomic concentrations of the Zn and O elements in the ZnO film were 50.97% and 49.03%, respectively. The superhydrophobic ZnO/PTFE film contained Zn, O, C and F, with atomic concentrations of 16.56%, 15.80%, 36.62% and 31.02%, respectively.
SEM Analysis
Superhydrophobic ZnO/PTFE thin films were successfully deposited by RF magnetron sputtering, whereby textured ZnO was sputtered onto the glass substrate, followed by a coating of PTFE to yield the superhydrophobic films. Scanning electron microscopy (SEM) of the films revealed a highly textured nanostructure at various magnifications, as shown in Figure 2. It is apparent that the untreated ZnO surface uniformly comprised popcorn-like clusters of nanoparticles, as shown in Figure 2a-c. The porous structure suggests a large surface area available for coating by the PTFE, as well as great potential for the trapping of air by sessile water droplets; both are important factors in the fabrication of superhydrophobic surfaces. Figure 2d-f show the structure of the ZnO surface coated with PTFE sputtered for 2 min under 100 W. The diameter of the nanoparticles in Figure 2f was a little larger than that in Figure 2c due to the thin PTFE film coating, which turned the superhydrophilic ZnO surface into the superhydrophobic ZnO/PTFE surface.
To describe the relationship between surface wettability and heterogeneous surfaces, Cassie and Baxter [24,25] proposed the following equation:

cos θc = (1 − f2) cos θ − f2 (1)

where f2 is the fraction of air in the composite surface, and θc and θ are the contact angles on the rough and untextured surfaces, respectively. When the contact angle θ of the untextured surface is constant, a lower f2 leads to a larger contact angle θc of the superhydrophobic surface. The contact angle of the surface increased to 165°, as shown in Figure 2d, far above that of the bare surface, and the sliding angle was greatly reduced to less than 1°, demonstrating superhydrophobic properties. The contact angle of the flat PTFE film has been shown to be 116°. The f2 value calculated for the superhydrophobic ZnO/PTFE film from Equation (1) is about 0.94, which indicates that the actual fraction of contact area between the solid surface and the water was only 0.06.
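Rearranging Equation (1) for the air fraction gives f2 = (cos θ − cos θc)/(cos θ + 1); the quoted value of ~0.94 can be reproduced numerically (a sketch for illustration; the function name is ours):

```python
import math

def air_fraction(theta_flat_deg: float, theta_rough_deg: float) -> float:
    """Cassie-Baxter air fraction f2 from flat and rough contact angles."""
    ct = math.cos(math.radians(theta_flat_deg))   # cos(theta), flat PTFE
    cc = math.cos(math.radians(theta_rough_deg))  # cos(theta_c), rough film
    return (ct - cc) / (ct + 1.0)

# Flat PTFE: 116 deg; superhydrophobic ZnO/PTFE: 165 deg
f2 = air_fraction(116, 165)
print(round(f2, 2))  # -> 0.94, i.e. a solid-liquid contact fraction of ~0.06
```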
UV Resistance
In order to test the UV resistance of the superhydrophobic ZnO/PTFE surfaces, the samples were exposed to UV light (320-420 nm, 0.9 W/m²) for 6 h at 25 °C, and the contact angle (CA) and sliding angle (SA) of the ZnO/PTFE and ZnO/HDTMS surfaces were both measured after each 1-h period, as shown in Figure 3. The wettability of the ZnO/HDTMS surfaces dropped significantly after irradiation, with the CA decreasing from 166° to 105° and the SA increasing from 1° to 37°. Meanwhile, the ZnO/PTFE surface still exhibited consistent superhydrophobicity, with a contact angle of 165° and a sliding angle < 1° after irradiation for 6 h, showing superior UV stability. This can be ascribed to the C-F bonds with high bond energy (485 kJ/mol) on the long chains of the PTFE coated onto the ZnO surface; these C-F bonds cannot be broken by the UV light (314-419 kJ/mol) [26], whereas the C-H bonds of HDTMS are easily damaged by it.
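The bond-energy argument can be checked against the photon energy of the 365 nm lamp, E = N_A·h·c/λ (a back-of-envelope sketch of our own, not a calculation from the paper):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

def photon_energy_kj_per_mol(wavelength_nm: float) -> float:
    """Molar photon energy at a given wavelength."""
    return N_A * H * C / (wavelength_nm * 1e-9) / 1000.0

e_365 = photon_energy_kj_per_mol(365)
print(round(e_365))  # -> 328 kJ/mol, well below the 485 kJ/mol C-F bond energy
print(e_365 < 485)   # -> True: 365 nm photons cannot break C-F bonds
```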
Antimicrobial Performance
The antimicrobial efficacy of the ZnO/PTFE and ZnO/HDTMS samples was quantitatively assessed using a well-developed plate count method under UV illumination. The chosen strains spanned a Gram-positive and a Gram-negative bacterium: Staphylococcus aureus 8325-4 [27] is one of the representative staphylococcal lineages refined for laboratory use, and E. coli ATCC 25922 [28], a non-diarrheagenic pathogen considered as a standard, is a commonly used clinical strain for microbiological research.
In this study, we investigated the CFU counts on the ZnO/PTFE and ZnO/HDTMS superhydrophobic surfaces and their corresponding controls in response to UV light of mild intensity. Each strain was used in a total of three trials to obtain reliable and reproducible results. Figure 4a,b show that, upon irradiation with UV light, both S. aureus and E. coli bacteria continued to grow on the bare microscope slides. This indicates that the long-wavelength UV light on its own is not capable of eradicating either bacterium. The CFU counts on bare microscope slides in both dark and illuminated conditions also suggest that the slides provide a hospitable "hotbed" for bacteria to proliferate on at room temperature.
From Figure 4a,b, the graphs reveal the same trend, with neither the ZnO/PTFE nor the ZnO/HDTMS superhydrophobic samples exhibiting significant antibacterial activity under dark conditions when tested against S. aureus and E. coli: relative to the dark control (microscope slide), the ZnO/PTFE sample showed reductions of 0.06 log (~13%) and 0.12 log (~24%), respectively, and the ZnO/HDTMS sample showed a 0.24 log (~43%) reduction for both strains. It is postulated that the trivial bactericidal activity of ZnO in both samples in the absence of a light source can be attributed to the attachment of ZnO to bacterial cell walls, which causes local dissolution of ZnO and therefore increases the concentration of Zn²⁺ ions within the bacterial cytoplasm [29].
Both the teichoic acid and the lipopolysaccharide shown in Figure 5a,b are rich in polyphosphate anions and are found in the peptidoglycan layer of Gram-positive bacteria and the outer membrane of Gram-negative bacteria, respectively. They are considered the active sites for ZnO attachment to the bacterial cell wall [30,31]. As a consequence, teichoic acid or lipoteichoic acid could facilitate ZnO dissolution via the formation of ionic salts with Zn²⁺ ions. The Zn²⁺ ions then reach the cytoplasm through the peptidoglycan layer or outer membrane via facilitator metalloproteins and thereby become cytotoxic [32].
With long-wavelength irradiation (365 nm), however, both the ZnO/PTFE and ZnO/HDTMS surfaces demonstrated remarkable antimicrobial activity, as illumination of the ZnO-containing samples enhanced their photo-activated bactericidal activity. The ZnO/PTFE sample obtained a 3.7 log reduction when tested against S. aureus, whereas the ZnO/HDTMS sample demonstrated a 0.9 log reduction for a 30 min incubation, equivalent to 99.98% and 88.86% reductions compared with their illuminated controls. For the Gram-negative strain, E. coli, the cell viability count of the ZnO/PTFE sample (3.5 log reduction, 99.96%) was conspicuously lower than that of the ZnO/HDTMS sample (1.2 log reduction, 93.87%) under 1 h of UV illumination. Both of these results show that the ZnO/PTFE sample has a greater photobactericidal property than the ZnO/HDTMS sample.
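The quoted percentages follow from the log reductions via percent = (1 − 10^(−LR)) × 100; small deviations (e.g. 93.87% vs. the ideal 93.69% for 1.2 log) arise because the paper's percentages come from raw counts. A minimal sketch with hypothetical helper names:

```python
import math

def log_to_percent(log_reduction: float) -> float:
    """Percent kill implied by a given log10 reduction."""
    return (1.0 - 10.0 ** (-log_reduction)) * 100.0

def percent_to_log(percent: float) -> float:
    """Log10 reduction implied by a given percent kill."""
    return -math.log10(1.0 - percent / 100.0)

print(round(log_to_percent(3.5), 2))    # -> 99.97 (paper: 99.96% from raw counts)
print(round(log_to_percent(3.7), 2))    # -> 99.98
print(round(percent_to_log(88.86), 2))  # -> 0.95 (paper reports a 0.9 log reduction)
```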
ZnO is believed to have an intrinsic photocatalytic efficiency and can therefore absorb UV radiation efficiently [33]. This characteristic enables the ZnO to interact with the bacteria: upon UV illumination, loosely attached oxygen desorbs from the surface and is converted into reactive oxygen species (ROS) such as H₂O₂, OH⁻ and O₂⁻. By penetrating the bacterial cell, these active species can eradicate microorganisms. Compared with the ZnO/HDTMS sample, in which the ZnO nanorods were completely coated with the modifier, only the upper part of the ZnO/PTFE sample is coated in PTFE; the entire underlying coating is bare ZnO, which could make good contact with bacteria, expressing its photocatalytic property.
Wear Resistance
To test the wear resistance of the ZnO/PTFE films, 10 g of sand grains was dropped from a 50 cm height onto the 30° tilted ZnO/PTFE surface, as shown in the online supplementary Video S1. After the impact of the sand grains, the water droplet CA and SA were measured; the CA was still higher than 153° and the SA was <5°, and thus the film retained excellent superhydrophobicity. Figure 6a,b show the surface structure of the superhydrophobic ZnO/PTFE film after the sand impingement. As can be seen from the figure, the nano-sized ZnO/PTFE protrusions on the film surface still maintained good roughness. The same method was used to test the wear resistance of the ZnO/HDTMS films, and the results showed that the contact angle was still greater than 152° and the sliding angle was less than 5°. This observation is apparently due to the strong adhesion of the ZnO clusters to the glass substrate and the fact that most of the ZnO/PTFE protrusions were able to resist sand grain impingement from a certain height. This property is of great importance for the long-term use of this superhydrophobic film.
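For context, the total gravitational energy delivered by the sand in this abrasion test can be estimated as E = mgh (our own back-of-envelope figure, not a value reported in the paper):

```python
G = 9.81  # standard gravity, m/s^2

def impact_energy_j(mass_g: float, height_cm: float) -> float:
    """Gravitational potential energy of a mass dropped from a height."""
    return (mass_g / 1000.0) * G * (height_cm / 100.0)

# 10 g of sand from 50 cm, as in the test
print(round(impact_energy_j(10, 50), 3))  # -> 0.049 J, spread over many grains
```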
Conclusions
ZnO/PTFE films with nanorod structures, combining superhydrophobic and bactericidal properties, were successfully fabricated on glass slides by RF magnetron sputtering using Zn and PTFE targets. The rod-like ZnO structure of the upper layer was wrapped in low-surface-energy PTFE and thus demonstrated excellent superhydrophobic properties, with water droplet contact angles of up to 165°, which can effectively resist the adhesion of bacteria. Moreover, most of the ZnO nanoparticles inside remained bare, showing superhydrophilic properties, and can remove adhering bacteria by photocatalysis, achieving an excellent antibacterial effect. The ZnO/PTFE films also showed excellent UV stability and wear resistance. This method of ZnO/PTFE deposition is cheap and straightforward, promising a route to larger-scale fabrication of antimicrobial superhydrophobic coatings, while the approach of combining superhydrophobic and superhydrophilic properties opens the field to a huge variety of potential material combinations in antimicrobial coating designs.
Supplementary Materials: The following supporting information can be downloaded at: www.mdpi.com/xxx/s1, Video S1: Test for the wear resistance.
Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest:
The authors declare no conflicts of interest.
Vibrio areninigrae as a pathogenic bacterium in a crustacean
The occurrence of infectious diseases poses a significant threat to the aquaculture industry worldwide. Therefore, characterization of potentially harmful pathogens is one of the most important strategies to control disease outbreaks. In the present study, we investigated for the first time the pathogenicity of two Vibrio species, Vibrio metschnikovii, a foodborne pathogen that causes fatalities in humans, and Vibrio areninigrae, a bacterium isolated from black sand in Korea, using a crustacean model, the signal crayfish Pacifastacus leniusculus. Mortality challenges indicated that injection of V. metschnikovii (10^8 CFU/crayfish) caused 22% mortality in crayfish. In contrast, injection of P. leniusculus with 10^8 or 10^7 CFU of V. areninigrae resulted in 100% mortality within one and two days post-injection, respectively. V. areninigrae was successfully re-isolated from the hepatopancreas of infected crayfish and caused 100% mortality when reinjected into new healthy crayfish. As a consequence of this infection, histopathological analysis revealed nodule formation in crayfish hepatopancreas, heart, and gills, as well as sloughed cells inside hepatopancreatic tubules and atrophy. Moreover, extracellular crude products (ECPs) were obtained from V. areninigrae in order to investigate putative virulence factors. In vivo challenges with ECPs caused >90% mortality within the first 24 h. In vitro challenges of hemocytes with ECPs induced cytotoxicity within the first hour of exposure. These findings represent the first report that V. areninigrae is a highly pathogenic bacterium that can cause disease in crustaceans. In contrast, V. metschnikovii does not appear to represent a threat for freshwater crayfish.
Vibrio species are Gram-negative, rod-shaped, motile bacteria that are ubiquitous in marine and estuarine ecosystems. This genus is one of the major bacterial groups found in aquaculture farms (Cornejo-Granados et al., 2017; Holt et al., 2020), and its presence in freshwater ecosystems has been previously reported (Cornejo-Granados et al., 2018; Dong et al., 2016; Mishra et al., 2010). Due to their pathogenic potential, wide distribution, and range of hosts, Vibrio species are considered opportunistic pathogens (Soumya Haldar, 2012).
The occurrence of opportunistic infections in aquaculture, including vibriosis, depends on the intricate interaction of pathogens, host, and environment (Bass et al., 2019). However, one of the most important elements for pathogen emergence is the evolution of novel strains (Bayliss et al., 2017), and wild aquatic animals, together with water and sediment bacterial communities, are considered the main sources of novel pathogens in aquaculture facilities (Bass et al., 2019; Feist et al., 2019).
To date, more than 70 Vibrio species are known (Thompson et al., 2004). However, the pathogenic potential of many of them on aquaculture species remains unclarified. Therefore, in the present study, two Vibrio species that we detected by 16S sequencing of Pacifastacus leniusculus intestines (data not shown) were investigated for the first time for their possible pathogenicity in freshwater crayfish. These species are Vibrio metschnikovii (Lee et al., 1978), which is considered a foodborne pathogen found in seafood worldwide that can cause fatal infections in human patients with comorbidities (Jensen and Jellinge, 2014), and Vibrio areninigrae, which was isolated for the first time from black sand collected from Jeju Island, Korea (Chang et al., 2008). Koch's postulates were confirmed by reproducing the disease, recovering the isolate from diseased crayfish, confirming the re-isolate to be the same as the injected bacterium by sequencing of the 16S rRNA gene, and describing the histologic changes induced by this disease, as well as determining putative virulence factors.
Bacterial strains and inoculum preparation
Vibrio metschnikovii Gamaleia 1888 strain was obtained from the Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures (DSM 19132). This strain was originally isolated from diseased fowl (Lee et al., 1978).
Vibrio areninigrae J74 strain was obtained from the Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures (DSM 22054). This strain was originally isolated from black sand collected from Jeju Island, Korea (Chang et al., 2008).
Mortality challenge trials
Freshwater crayfish P. leniusculus were obtained from Lake Erken in Sweden and maintained in tanks with aerated running tap water at 12 °C. Intermolt, healthy male crayfish were used in the following experiments. Three days before the bacterial challenge, crayfish were distributed into 8-liter aquaria (3-5 crayfish/tank) with aerated water at 22 °C. Water was renewed one day before the bacterial challenge. Crayfish were confirmed to be free of Vibrio sp. before starting the experiments.
The crayfish were injected at the base of the fourth pair of walking legs with 100 μL of the previously prepared bacterial dilutions. Final amounts injected per crayfish were 5.6 × 10^8 or 5.6 × 10^7 CFU for V. metschnikovii, and 6.5 × 10^8, 6.5 × 10^7, 6.1 × 10^6, 6.5 × 10^5, or 6.0 × 10^4 CFU for V. areninigrae. The control group for each pathogen was injected with 100 μL of 0.9% NaCl. The mortality of the crayfish was registered daily. The experiments lasted seven days, and three biological replicates with 3-5 crayfish each were performed.
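The injected dose per animal follows directly from the stock concentration and the injected volume; the relation can be sketched as below (the function name is ours, and only the numbers quoted above are from the study):

```python
def injected_cfu(stock_cfu_per_ml: float, volume_ul: float) -> float:
    """CFU delivered per animal: concentration (CFU/mL) times injected volume (mL)."""
    return stock_cfu_per_ml * volume_ul / 1000.0

# 100 uL of a 6.5e9 CFU/mL suspension delivers the highest
# V. areninigrae dose quoted above, 6.5e8 CFU per crayfish.
dose = injected_cfu(6.5e9, 100)
print(f"{dose:.1e} CFU/crayfish")
```

The same arithmetic reproduces every dose level listed, since each dilution step changes only the stock concentration.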
Re-isolation of Vibrio areninigrae from P. leniusculus hepatopancreas
One gram of hepatopancreas obtained from crayfish at 18 h post-infection (hpi) with the V. areninigrae strain was homogenized in 1 mL of 0.9% NaCl. After centrifugation of the tube for five seconds, the supernatant was recovered and 1:10 serially diluted in 0.9% NaCl. Then, 100 µL of the 10^-3 dilution was spread on marine agar (MA) and incubated for 16 h at 37 °C. Colonies showing characteristics similar to those of pure V. areninigrae colonies were re-plated on MA.
In order to test the pathogenicity of these colonies of V. areninigrae re-isolated from the hepatopancreas of crayfish, one single colony obtained from hepatopancreas was inoculated in MB and grown to OD600 = 1.5, equivalent to 4 × 10^10 CFU/mL. Bacteria were collected by centrifugation at 1500g for 5 min at room temperature and washed twice with 0.9% NaCl. The bacterial stock was diluted 1:10 in 0.9% NaCl to obtain a 4 × 10^9 CFU/mL concentration. The number of CFU/mL was verified on MA plates for this new isolate.
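The dilution arithmetic above can be sketched as follows; the OD600-to-CFU equivalence is the empirical one stated in the text, and the helper name is ours:

```python
def serial_dilute(stock_cfu_per_ml: float, fold: float) -> float:
    """Concentration after a single 1:fold dilution."""
    return stock_cfu_per_ml / fold

stock = 4e10                          # CFU/mL at OD600 = 1.5 (from the text)
working = serial_dilute(stock, 10)    # 1:10 in 0.9% NaCl -> 4e9 CFU/mL
nominal_dose = working * 0.1          # 100 uL injected -> 4e8 CFU nominal
print(working, nominal_dose)
```

The nominal 4 × 10^8 CFU agrees to within plating error with the 4.6 × 10^8 CFU/crayfish verified on MA plates.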
New sets of 3 crayfish per tank were inoculated at the base of the fourth pair of walking legs with 100 μL of the 4 × 10^9 CFU/mL dilution of V. areninigrae isolated from hepatopancreas, for a final dose of 4.6 × 10^8 CFU/crayfish. Controls inoculated with 0.9% NaCl were included. This experiment was repeated 3 times with 3 crayfish each time.
PCR analysis of the uridylate kinase encoding pyrH and 16S rRNA genes
DNA was extracted from V. metschnikovii, the V. areninigrae J74 strain, and V. areninigrae re-isolated from hepatopancreas using the DNeasy Blood & Tissue Kit (QIAGEN) following the manufacturer's protocol. Amplification by PCR of the uridylate kinase encoding gene pyrH was performed to confirm the presence of Vibrio spp., using the primers pyrH-02-R (GTRAABGCNGMYARRTCCA) and pyrH-04-F (ATGASNACBAAYCCWAAACC) (Thompson et al., 2005). Amplification of the 16S rRNA was performed to identify Vibrio species, using the primers 27F (AGAGTTTGATCMTGGCTCAG) and 1492R (TACGGYTACCTTGTTACGACTT) (Fredriksson et al., 2013; Lane, 1991). The PCR reaction mixture was prepared separately for each set of primers with a final volume of 20 μL containing: 4 µL of 5X Phusion HF Buffer (Thermo Scientific), 0.2 μL of 10 mM dNTP Mix (Thermo Scientific), 0.2 μL of Phusion High-Fidelity DNA Polymerase (Thermo Scientific), 0.5 μL (0.5 mM) of forward primer, 0.5 μL (0.5 mM) of reverse primer, 13.6 μL of RNase-Free Water, and 1 μL of DNA sample (100 ng). DNA isolated from Vibrio parahaemolyticus, Aeromonas hydrophila, and Acinetobacter beijerinckii was included as controls. PCR amplification was performed for pyrH as follows: 5 min at 94 °C; 30 cycles of 1 min at 94 °C, 2 min 15 s at 58 °C, and 1 min 12 s at 72 °C; and a final extension of 7 min at 72 °C. PCR conditions for 16S rRNA were: 5 min at 95 °C; 35 cycles of 1 min at 94 °C, 30 s at 55 °C, and 1 min at 72 °C; and a final extension of 7 min at 72 °C.
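For bookkeeping, the 20-μL reaction recipe above scales linearly when preparing a master mix for several samples; a small sketch (the 10% pipetting overage is our assumption, not from the text):

```python
# Per-reaction volumes (uL) for the 20-uL PCR described above.
PCR_MIX_UL = {
    "5X Phusion HF Buffer": 4.0,
    "10 mM dNTP mix": 0.2,
    "Phusion polymerase": 0.2,
    "forward primer (0.5 mM)": 0.5,
    "reverse primer (0.5 mM)": 0.5,
    "RNase-free water": 13.6,
    "template DNA (100 ng)": 1.0,
}

def master_mix(n_reactions: int, overage: float = 0.1) -> dict:
    """Scale per-reaction volumes for n reactions plus a pipetting overage."""
    scale = n_reactions * (1 + overage)
    return {reagent: round(v * scale, 2) for reagent, v in PCR_MIX_UL.items()}

# Sanity check: the per-reaction recipe totals exactly 20 uL.
assert abs(sum(PCR_MIX_UL.values()) - 20.0) < 1e-6
print(master_mix(10))
```

In practice the template DNA would be added per tube rather than to the mix; it is kept in the dictionary only so the volumes total 20 μL.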
PCR products for pyrH and 16S rRNA were resolved on a 1.5% agarose gel stained with GelRed Nucleic Acid Stain™ (Biotium). The bands were excised from the gel, purified with the GeneJet Gel Extraction Kit (Thermo Fisher Scientific), and sequenced by the Sanger method with both forward and reverse primers at the KIGene Service (Center for Molecular Medicine, Karolinska University Hospital, Stockholm). Sequences of both Vibrio areninigrae (J74) and Vibrio areninigrae re-isolated from hepatopancreas were aligned using CLUSTAL 2.1 and blasted against the NCBI data bank (GenBank™) as well as the EZBiocloud repository.
Vibrio areninigrae scanning electron microscopy (SEM)
To confirm the structural and morphological similarity of V. areninigrae after re-isolation from hepatopancreas, a bacterial suspension was prepared from one single colony grown to an OD600 of 1.3 in four mL of MB. One mL of bacteria was collected by centrifugation at 1500g for 5 min at room temperature and washed twice with 0.9% NaCl. The bacterial pellet was fixed in five mL of 2.5% glutaraldehyde in 0.1 M sodium phosphate buffer. A V. areninigrae pure culture from the Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures (DSM 22054) was processed in the same way to confirm the morphology. Both samples were sent to the Microscopy Unit at the Department of Laboratory Medicine, Karolinska Institute, Stockholm, for SEM.
Histopathological study of crayfish infected with V. areninigrae
In order to study the progression of the disease, histological analysis of hepatopancreas, heart and gills from crayfish challenged with V. areninigrae at 12 and 18 hpi was performed.
A new experiment with the same conditions mentioned above was performed (three aquaria with four crayfish/aquarium). Crayfish from two aquaria were individually injected with 4.6 × 10^8 CFU, and the remaining group was injected with 0.9% NaCl. Two of the four crayfish/group were fixed in Davidson's solution at 12 and 18 h after injection and processed for histological study. Fixed samples were processed, embedded in paraffin, and sectioned (7 μm thin sections) following standard methods (Bell and Lightner, 1988). The sections were stained with haematoxylin and eosin (H&E) and then analyzed by light microscopy. Photographs of the complete digestive tracts were taken at the same time points from one infected and one healthy crayfish. The remaining crayfish were used as mortality controls and died at approximately 24 hpi.
Extracellular products (ECPs) assay
In order to evaluate the toxicity of crude extracellular products (ECPs) from V. areninigrae, in vivo and in vitro assays were performed following the protocol of Jiravanichpaisal et al. (2009). Briefly, one single colony of V. areninigrae isolated from hepatopancreas was grown overnight in 10 mL of MB supplemented with 10% crayfish plasma or with no plasma added. The plasma supplementation was performed to evaluate whether crayfish components would stimulate V. areninigrae secretion of ECPs. Controls of MB with 10% plasma or without plasma that were not inoculated with V. areninigrae were included in the experiments.
After 18 h of incubation at 37 °C with agitation (250 rpm), the cultures were centrifuged at 1500g for 5 min at room temperature, and the supernatant was separated, filtered through a 0.22 μm membrane filter, and kept at 4 °C until crayfish injection. Sterility of the ECPs obtained after filtration was confirmed by spreading 100 μL of ECPs on MA.
In vivo challenge
Groups of four crayfish, maintained as mentioned above, were injected with one of the following treatments: 200 μL of ECPs from V. areninigrae, 200 μL of ECPs from V. areninigrae supplemented with 10% plasma, 200 μL of MB, or 200 μL of MB supplemented with 10% plasma. The experiment was repeated 3 times with 4 crayfish each time. Statistical analysis of survival was performed with the log-rank (Mantel-Cox) test.
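The log-rank (Mantel-Cox) comparison used for the survival curves can be illustrated with a self-contained implementation; this is a generic sketch on made-up survival times, not the authors' analysis code:

```python
def logrank_chi2(times_a, events_a, times_b, events_b):
    """Two-sample log-rank (Mantel-Cox) statistic: chi-square with 1 df.
    times: death/censoring day per animal; events: 1 = death observed, 0 = censored."""
    data = ([(t, e, 0) for t, e in zip(times_a, events_a)]
            + [(t, e, 1) for t, e in zip(times_b, events_b)])
    o_a = e_a = var = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):
        n = sum(1 for ti, _, _ in data if ti >= t)             # at risk, both groups
        n_a = sum(1 for ti, _, g in data if ti >= t and g == 0)
        d = sum(1 for ti, e, _ in data if ti == t and e == 1)  # deaths at time t
        d_a = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 0)
        o_a += d_a
        e_a += d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    return (o_a - e_a) ** 2 / var

# hypothetical: group A dies on day 1, group B on day 5 -> clearly separated curves
print(logrank_chi2([1, 1, 1], [1, 1, 1], [5, 5, 5], [1, 1, 1]))  # large chi-square
```

Comparing the statistic against the chi-square distribution with one degree of freedom gives the P value reported for such comparisons.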
In vitro challenge
Two mL of hemolymph were obtained individually from healthy intermolt crayfish and immediately mixed with the same volume of anticoagulant buffer (0.14 M NaCl, 0.1 M glucose, 30 mM trisodium citrate, 26 mM citric acid, 10 mM EDTA, pH 4.6) (Söderhäll and Smith, 1983). The hemocytes were then separated from plasma by centrifugation at 900g for 10 min at 4 °C and washed two times with 0.15 M NaCl. The total hemocyte count was determined using a hemocytometer. Hemocytes were cultured in 96-well plates at a density of 1 × 10^5 cells/well in 0.15 M NaCl at room temperature (22 °C), with one of the following treatments: 50 μL of 0.15 M NaCl, 50 μL of MB, 50 μL of ECPs, or 50 μL of ECPs supplemented with 10% plasma. Morphology and cytotoxicity were evaluated after one hour using 4% trypan blue. Three biological replicates were performed.
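The hemocytometer count and seeding density reduce to simple arithmetic; a sketch using the standard Neubauer chamber formula (the dilution factor of 2 from the 1:1 anticoagulant mix and the example square counts are our assumptions):

```python
def hemocytes_per_ml(square_counts, dilution_factor=2):
    """Neubauer chamber: mean cells per large square x dilution factor x 1e4 = cells/mL."""
    mean = sum(square_counts) / len(square_counts)
    return mean * dilution_factor * 1e4

conc = hemocytes_per_ml([48, 52, 50, 50])   # hypothetical counts -> 1.0e6 cells/mL
well_volume_ml = 1e5 / conc                 # volume carrying 1e5 cells per well
print(conc, well_volume_ml)
```

At this concentration, seeding 1 × 10^5 cells/well would take 100 μL of suspension per well.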
Vibrio metschnikovii is weakly pathogenic to crayfish
Crayfish injected with V. metschnikovii at concentrations of 5.6 × 10^8 or 5.6 × 10^7 CFU did not show any sign of disease. The animals stayed active during the experiment (seven days). Crayfish injected with 5.6 × 10^8 CFU had a survival rate of 78% at the end of the experiment; only two animals died two days post-infection (dpi). Crayfish injected with 5.6 × 10^7 CFU remained active and had a survival rate of 100% at the end of the experiment. Crayfish injected with 0.9% NaCl remained active and did not die during the experiment. Further characterization of infection with this pathogen was not performed, since the mortality results suggested it does not represent an important risk for crayfish.
Vibrio areninigrae pathogenicity and gross signs of infection
Crayfish infected with V. areninigrae at a concentration of 4.6 × 10^8 CFU became lethargic already at 12 hpi, and this continued as the infection developed. The hepatopancreas developed discoloration and an aqueous consistency as the infection progressed. The intestine was empty in the infected crayfish, indicating problems with feeding or digestion (Fig. 1).
We tested the survival rate of crayfish injected with the V. areninigrae J74 strain at different doses, and Fig. 2 shows a dose-dependent mortality rate of crayfish injected with the V. areninigrae J74 strain and with V. areninigrae re-isolated from the hepatopancreas of previously infected animals. Crayfish that received 6.5 × 10^8 CFU became lethargic at 12 hpi and presented a median survival of one day, while the crayfish group injected with 6.5 × 10^7 CFU had a median survival of two days. In contrast, 83% of the crayfish injected with 6.1 × 10^6 CFU survived to the end of the experiment, with two animals dying at 3.5 dpi. Crayfish injected with 6.5 × 10^5 and 6.0 × 10^4 CFU, as well as the control group, all survived to the end of the experiment. To confirm the pathogenicity, we re-isolated bacteria from the hepatopancreas of infected and clearly affected animals. Crayfish injected with 4.6 × 10^8 CFU of the bacteria re-isolated from hepatopancreas had a median survival of one day, in accordance with the mortality rate obtained in the first infection experiment. Control crayfish injected with 0.9% NaCl remained active and showed no mortality during the seven days of the experiment.
Identification of Vibrio areninigrae re-isolated from infected crayfish by molecular methods
Amplification and sequencing of the pyrH and 16S rRNA genes using DNA extracted from Vibrio areninigrae (J74 strain) and Vibrio areninigrae re-isolated from infected crayfish were used to confirm the identity of the species recovered. As shown in Fig. 3, the Vibrio isolated from crayfish hepatopancreas after injection (V. areninigrae-hepatopancreas) was confirmed by PCR with the amplification of the gene pyrH. Alignment with CLUSTAL 2.1 of the sequences of the injected V. areninigrae J74 strain (original) with the sequence obtained from the re-isolated strain showed 100% similarity for both the 16S and pyrH primers (Supplementary figures S1-S2). The sequences obtained with the 16S rRNA gene showed 100% identity with Vibrio areninigrae (J74) when analyzed with the NCBI (GenBank™) and EZBiocloud data banks, confirming V. areninigrae as the causative agent of the mortalities. So far, there are no reference sequences for pyrH of V. areninigrae in any database, and the sequences obtained with degenerate primers for pyrH showed the highest similarity with Photobacterium swingsii (99%) using BLAST at NCBI (GenBank™), while the 16S sequence showed 96% similarity with this species.
Morphological study of Vibrio areninigrae using SEM
SEM micrographs showed the pure culture of V. areninigrae J74 from the microbe collection (Fig. 4A and C), as well as the pure culture re-isolated from the hepatopancreas of moribund crayfish (Fig. 4B and D). The cells are slightly curved and rod-shaped, and their length varies between 1.0 and 3.0 μm. Fig. 4C shows asymmetric division, or 'budding', of V. areninigrae.
Vibrio areninigrae grows exclusively on marine agar. No growth was observed on TSA or on the selective medium VCSA.
Extracellular products (ECPs) toxicity
In order to elucidate whether the ECPs produced by V. areninigrae play a role in its pathogenicity, ECPs were prepared and tested in both in vivo and in vitro assays.
In vivo challenge
Injection of ECPs from V. areninigrae resulted in mortality of crayfish. At the end of the experiment on day 7, survival of the crayfish group injected with ECPs was 8.3% (median survival time 0.89 dpi), and that of the crayfish group injected with ECPs supplemented with plasma was 16.6% (median survival time 1.8 dpi) (Fig. 8). No significant difference was obtained from the comparison of these two groups (P = 0.3738). Control groups of crayfish remained active and showed no mortalities during the experiment (Fig. 8). No bacterial growth was observed on the MA after spreading with ECPs, which confirmed the sterility of these samples.
In vitro challenge
We then tested the effect of ECPs on isolated hemocytes in vitro; Fig. 9 shows the effect of ECPs on total hemocytes maintained in 0.15 M NaCl after one hour of incubation in the different treatments. Hemocytes from the control (0.15 M NaCl) were viable and maintained a normal shape (Fig. 9A), while hemocytes inoculated with MB remained viable and showed slight agglutination (Fig. 9B). Hemocytes inoculated with ECPs (Fig. 9C) and with ECPs supplemented with 10% plasma (Fig. 9D) showed more than 95% cell death, as judged by trypan blue staining one hour after incubation with V. areninigrae ECPs.
Discussion
Two Vibrio species, namely V. metschnikovii and V. areninigrae, were detected when we performed 16S sequencing of P. leniusculus intestines (data not shown), which is why they were tested for their potential as pathogens for a crustacean, the freshwater crayfish P. leniusculus. Crayfish were therefore challenged via injection with these Vibrio species to assess their pathogenic potential for the first time in any crustacean.
Our results showed that the overall pathogenicity of Vibrio metschnikovii can be considered weak, and this species does not represent a threat for P. leniusculus in terms of mortality. Crayfish had a mortality of about 22% at two days post-injection with high doses of V. metschnikovii. Similar results were obtained before in P. leniusculus with other enteric bacteria, including Citrobacter sp., Acinetobacter sp., and Pseudomonas sp. (Jiravanichpaisal et al., 2009), where the use of large inoculums failed to cause death, presumably because of an effective immune response. This is a notable result since, although V. metschnikovii is widely distributed in aquatic species, including scallop, bird clam, oyster, shrimps, lobster, crab, and fish (Antunes et al., 2010; Farmer et al., 1988; Lee et al., 1978), and its presence has been reported as part of microbial communities in shrimp ponds (Sung et al., 2001), information regarding its pathogenicity is not completely robust. For example, Aguirre-Guzmán et al. (2004) considered V. metschnikovii a non-pathogenic bacterium for shrimp, but it is not clear which methodological approach the authors used to reach this conclusion.
Moreover, the results reported herein have implications for public health, since it is shown that P. leniusculus acts as a carrier of V. metschnikovii, which can cause human disease including gastrointestinal tract disease (Dalsgaard et al., 1996), pneumonia (Wallet et al., 2005), and, in comorbidity cases, septicemia, cardiac arrest, and fatalities (Jensen and Jellinge, 2014; Linde et al., 2004). In addition, consumption of cooked crayfish has been associated with vibriosis infection (Bean et al., 1998), and an incidence of ca. 10% of V. metschnikovii has been previously reported in seafood markets (Elhadi et al., 2004).
Regarding V. areninigrae, we successfully recovered it from previously infected crayfish, as the molecular analysis with 16S rRNA confirmed. However, it is important to mention that exact identification by pyrH gene sequencing is not possible for this Vibrio species using degenerate primers. This is due to a lack of information available in databases, and although detection of Vibrio sp. using pyrH is widely used, it is important to consider that the ranges of intra- and interspecific sequence similarity are lower than those of, for example, 16S rRNA (Pascual et al., 2010).
Clinical signs observed in crayfish infected with V. areninigrae included typical vibriosis signs, i.e., lethargy, empty gut, and a pale and aqueous hepatopancreas (Soto-Rodriguez et al., 2015), and injection of 10^8 or 10^7 CFU resulted in 100% mortality of crayfish within one and two days, respectively. Moreover, after injection with V. areninigrae filtrate (ECPs), the mortality of crayfish was high (>80%) and occurred in a very short time (1-2 days). These results suggest that V. areninigrae produces extracellular toxins which are part of the virulence factors of this bacterial species. Extracellular products from different Vibrio species and strains have been extensively studied before, including adhesins, alkaline proteases, chitinases, cysteine proteases, hemolysins, metalloproteases, serine proteases, type III (T3SS) and type VI (T6SS) secretion systems, and ureases (Aguirre-Guzmán et al., 2004; Beshiru and Igbinosa, 2018; Igbinosa, 2016; Labreuche et al., 2017; Le Roux et al., 2015; Li et al., 2019; Sirikharin et al., 2015; Zhang et al., 2020). It has also been demonstrated that the pathogenicity of Vibrio is the result of a complex combination of multiple virulence factors (Li et al., 2019; Sirikharin et al., 2015). Although the objective of this study was not to characterize the toxins but to address the pathogenic potential of this bacterium, our results also showed a cytotoxic effect of V. areninigrae ECPs towards P. leniusculus hemocytes in an in vitro study. Cells became unviable one hour after exposure to ECPs. This confirms that the mortalities observed herein were not the result of bacterial multiplication but were more likely caused by toxins.
Furthermore, the most distinct result from the histopathological analysis was early nodule formation in the hepatopancreas, heart, and gills of V. areninigrae-infected crayfish after 12 h of infection. This suggests that even if crayfish were able to mount cellular immune reactions, especially during the first hours of infection, it is very likely that, as time progressed, toxin production caused animal death. Moreover, the hepatopancreas showed detachment of tubular epithelial cells, or cell sloughing, which is considered a pathognomonic lesion of Vibrio-related diseases like acute hepatopancreatic necrosis disease (AHPND) (Angthong et al., 2017; Dhar et al., 2019; Sirikharin et al., 2015; Soto-Rodriguez et al., 2015; Velázquez-Lizárraga et al., 2019) and V. harveyi infection (Zhang et al., 2020), and which is known to be caused by Vibrio toxins (Sirikharin et al., 2015).
It is worth mentioning that until now, the only information available regarding V. areninigrae relates to taxonomical identification and biochemical analysis (Chang et al., 2008; Rim Kang et al., 2015), and since this bacterium was originally isolated from an active aquaculture zone, Jeju Island (FAO, 2016; Yun et al., 2015), elucidation of the pathogenic potential of V. areninigrae is of utmost importance for the local shrimp industry. Nonetheless, it is worth noting that freshwater crustaceans can present mortality rates to microbial pathogens similar to those of marine crustaceans (Longshaw, 2011).
Vibriosis outbreaks from environmental reservoirs depend upon the specific ecology, disease dynamics, and etiology (Holt et al., 2020). However, characterization of pathogens and their virulence factors has proven to provide valuable information that can be used to develop prevention and mitigation strategies, contributing to strengthening the sustainability of crustacean farming.
Conclusions
Our results show that V. areninigrae is a highly pathogenic bacterium for the crayfish P. leniusculus and that the production of virulence factors is responsible for crayfish death. Koch's postulates were fulfilled during the characterization of the disease. V. metschnikovii, however, is a weakly pathogenic bacterium for this crustacean.
Fig. 2. Percent survival of P. leniusculus after injection with V. areninigrae (strain J74) at different doses, and of P. leniusculus injected with 0.9% NaCl (Control 1). Percent survival of crayfish P. leniusculus after infection with bacteria re-isolated from the hepatopancreas of previously V. areninigrae-infected animals, then confirmed to be V. areninigrae (Re-isolated J74) by sequencing, and of P. leniusculus injected with 0.9% NaCl (Control 2). At least 3 animals were included per treatment, and the experiments were repeated three times.
Fig. 7. Histopathological analysis of gills from crayfish injected with 4.6 × 10^8 CFU of V. areninigrae. The control group showed no pathological changes (A). After 12 h, an early stage of nodule formation (NF) was observed (B). After 18 h, cell aggregation (CA) and the presence of pyknotic cells (PC) were observed (C and D). Scale bars = 50 µm.
Fig. 8. Percent survival of P. leniusculus after injection with V. areninigrae extracellular products (ECPs). Median survival was 0.89 dpi for the ECPs group and 1.8 dpi for ECPs supplemented with 10% plasma. No significant difference between ECPs treatments was obtained (P = 0.3738). Control groups did not show any mortalities. Four animals were included per treatment, and the experiments were repeated three times.
Scattering of charged particles off monopole-antimonopole pairs
The Large Hadron Collider is reaching energies never achieved before, allowing the search for exotic particles in the TeV mass range. In a continuing effort to find monopoles, we discuss the effect of the magnetic dipole field created by a monopole-anti-monopole pair, or monopolium, on the successive bunches of charged particles in the beam at the LHC.
I. INTRODUCTION
The theoretical justification for the existence of classical magnetic poles, hereafter called monopoles, is that they add symmetry to Maxwell's equations and explain charge quantisation. Dirac showed that the mere existence of a monopole in the universe could offer an explanation of the discrete nature of the electric charge. His analysis leads to the Dirac Quantisation Condition (DQC) [1,2]

eg = N/2, N = 1, 2, ..., (1)

where e is the electron charge, g the monopole magnetic charge, and we use natural units ℏ = c = 1 = 4πε_0 = μ_0/4π. In Dirac's formulation, monopoles are assumed to exist as point-like particles, and quantum mechanical consistency conditions establish the magnitude of their magnetic charge. Monopole physics took a dramatic turn when 't Hooft [3] and Polyakov [4] independently discovered that the SO(3) Georgi-Glashow model [5] inevitably contains monopole solutions [6]. These topological monopoles are impossible to create in particle collisions, either because of their huge GUT mass [3,4] or because of their complicated multi-particle structure [7]. Low-mass topological solitons might be created in heavy-ion collisions via the Schwinger process [8]. For the purposes of this investigation we will adhere to the Dirac picture of monopoles, i.e., they are elementary point-like particles with magnetic charge g determined by the Dirac condition, Eq. (1), and with unknown mass m and spin. These monopoles have been a subject of experimental interest since Dirac first proposed them in 1931. Searches for direct monopole production have been performed at most accelerators. The lack of monopole detection has been translated into lower bounds on the monopole mass [9-12]. The present limit is m > 400 GeV [13-18], but experiments at the LHC can probe much higher masses. Monopoles may bind to matter, and we have studied ways to detect them by means of inverse Rutherford scattering with ions [19,20].
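The size of the magnetic coupling implied by Eq. (1) is easy to evaluate numerically; a sketch in the natural units of the text, taking α = e² ≈ 1/137.036 as our numerical input:

```python
ALPHA = 1 / 137.036            # fine-structure constant, alpha = e^2 in these units

def dirac_g(n: int) -> float:
    """Magnetic charge from the DQC: e*g = N/2  ->  g = N / (2e)."""
    e = ALPHA ** 0.5
    return n / (2 * e)

g = dirac_g(1)
# g^2 = 1/(4*alpha) ~ 34.3, i.e. the magnetic coupling is far stronger
# than its electric counterpart alpha ~ 1/137.
print(g, g * g)
```

Equivalently, the minimal Dirac charge is g/e = 1/(2α) ≈ 68.5 electric charge units.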
Since magnetic charge is conserved, monopoles at the LHC will be produced predominantly in monopole-anti-monopole pairs (or monopolium) [21-24]. This magnetically neutral pair, given the collision geometry, will produce a magnetic dipole field. We discuss hereafter the scattering of charged particles on a magnetic dipole and will analyze later how our results affect the particles of the successive bunches at the LHC. This development assumes that the monopoles are more massive than the beam particles, so that the scattering does not affect the dynamics of the formation process.
II. SCATTERING OF CHARGED PARTICLES BY A MAGNETIC DIPOLE.
Suppose that at the LHC a monopole-anti-monopole pair is produced by any of the studied mechanisms [8, 21-23, 25]. If the pair is produced close to threshold, the monopoles will move slowly away from each other in the interaction region. This geometry will produce a magnetic dipole in the beam line which affects the particles coming in the successive bunches. We model this scenario as the scattering of a beam of charged particles by a fixed magnetic dipole created by two magnetic charges separated by a fixed distance. We will discuss the peculiarities of monopolium, as a bound state, in Section IV.
The magnetic field of a monopole located at the origin of the coordinate system is given by

B(r) = g r / r^3, (2)

where g is the magnetic charge, r the radial vector of coordinates (x, y, z), and r the norm of r. Let us construct the magnetic field of a monopole located at position d = (0, 0, d) and an anti-monopole located at position (0, 0, -d), where d is a distance (see Fig. 1):

B(r) = g r_{+d} / r_{+d}^3 - g r_{-d} / r_{-d}^3,

where r_{±d} = r ∓ d and r_{±d} are their norms. Let us perform an expansion in d/r.
Expanding to leading order in d/r gives the field of a point dipole,

B(r) ≈ [3 (M · r̂) r̂ - M] / r^3,

where M = 2g d is the magnetic moment. Note that the magnetic-charge term vanishes in the expansion in d, as expected from duality, and that to leading order in d/r we obtain the conventional field of a fixed magnetic moment.
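The leading dipole term can be checked numerically against the exact two-pole field; a sketch in arbitrary units (g, d, and the test point are made-up values):

```python
import numpy as np

g = 1.0
d = np.array([0.0, 0.0, 1.0])    # half-separation along z (arbitrary units)
M = 2 * g * d                    # magnetic moment of the pair, M = 2 g d

def b_exact(r):
    """Field of a monopole at +d plus an anti-monopole at -d."""
    rp, rm = r - d, r + d
    return g * rp / np.linalg.norm(rp) ** 3 - g * rm / np.linalg.norm(rm) ** 3

def b_dipole(r):
    """Leading term of the d/r expansion: (3 (M.rhat) rhat - M) / r^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3 * np.dot(M, rhat) * rhat - M) / rn ** 3

r = np.array([30.0, 40.0, 50.0])     # test point with |r| >> |d|
err = np.linalg.norm(b_exact(r) - b_dipole(r)) / np.linalg.norm(b_dipole(r))
print(err)   # relative error of order (d/r)^2, well below 1% here
```

The residual shrinks as (d/r)², consistent with the first correction in the expansion being quadratic in d/r.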
Let us study the behaviour of the vector potential, which is important to determine the interaction between the charged particles of the beam and the magnetic moment. Duality hints that, once the singularities are taken care of, the result should resemble the conventional case. The vector potential for a monopole whose magnetic field is Eq. (2) can be written as

A(r) = g [(1 - cos θ) / (r sin θ)] φ̂,

where θ ∈ [0, π] is the spherical polar angle and φ̂ = (- sin φ, cos φ, 0) is the azimuthal unit vector, φ being the azimuthal angle. Note that this field is singular for θ = π; this is the famous Dirac string singularity. The vector field A_d generated by the monopole-anti-monopole pair of Fig. 1 is given by the difference of the corresponding monopole and anti-monopole terms. If we perform a series expansion in d/r, the d^0 term associated with the magnetic charge does not appear. The lowest-order non-vanishing term has the conventional structure of the potential of a magnetic moment. The Dirac string singularities of the monopole and the anti-monopole have vanished in the expansion; all terms beyond the d^0 term are analytic. Minimal coupling applied to the free Schrödinger equation for a spin-less particle of charge Q and mass m_A leads to an interaction with the magnetic dipole containing a paramagnetic and a diamagnetic term. Note that Qg = Z/2, where Ze is the charge of the particles in the beam. For the velocities involved in our physical scenario the diamagnetic term, H_dia, will be small compared to the paramagnetic one. For the first term of the potential expansion, the diamagnetic potential is of order Q^2 A^2 / (2 m_A), while the paramagnetic term is of order Q v_Q · A, where v_Q is the beam velocity. Being conservative, we take for the interesting physical region the following values: v_Q ∼ 1, r ∼ δ, r^2 - z^2 ∼ δ^2, where δ is the inter-particle distance in the bunch, which is ∼10^7 fm for protons and ∼10^8 fm for ions, and we take for d a maximum value d ∼ 10^3 fm, which corresponds to the size of a Rydberg monopole-anti-monopole bound state.
With these values we get for the ratio of the diamagnetic to the paramagnetic potentials where $m_N$ is the nucleon mass, ∼ 1 GeV.
Only for very small velocities and very close to the interaction point is the diamagnetic potential comparable to the paramagnetic one.
In the chosen gauge, ∇ · A = 0, the paramagnetic term can be written as B being ∇ × A, which is equal to where $\vec{B}_A = Q\,(\vec{v}_Q \times \vec{r}/r^3)$ is the magnetic field created by the charge in motion. Thus we have shown that, within the approximations used, the interaction caused by the magnetic field of our magnetic dipole on a charge is the same as the interaction of the magnetic field of the moving charge on the magnetic dipole.
Let us calculate the scattering of charged particles off the magnetic dipole by using the Born approximation, which defines the amplitude for the scattering of a spin-less charged particle by a magnetic dipole as In order to have a non-vanishing result we take the incoming beam in the y direction, i.e. $\vec{k} = (0, k, 0)$, where k is the incoming momentum. The scattering plane we take as the xy-plane, thus $\vec{k}' = k(\sin\theta_s, \cos\theta_s, 0)$, where $\theta_s$ is the scattering angle. After some conventional integrations we obtain for the amplitude in the Born approximation from which the cross section becomes We see that the cross section is independent of momentum at high energies. LHC accelerates particles in bunches which are of macroscopic size, 16 µm × 16 µm × 7.94 cm, and contain many particles. Thus the dipole will affect many particles while moving away from the interaction point and separating to distances d of up to hundreds of fm. Let us therefore calculate the Born approximation for finite d. In principle, looking at the expansion, this calculation seems prohibitive, but having rewritten the potential as in Eq. (6) it becomes feasible to do it exactly. Let us apply the Born approximation directly to the full potential $\vec{A}_d$. We choose as before $\vec{k} = (0, k, 0)$ and $\vec{k}' = (k\sin\theta_s, k\cos\theta_s, 0)$. The calculation requires the following integral, whose result is 4d and is immediate if one recalls the following limit Having performed this z integral exactly, the problem reduces to the simplified calculation performed before and we obtain as a result Eq. (15). Thus we get the same equation for finite d as in the limit d → 0.
We have performed the calculation in a non-relativistic scheme. Let us now generalise the result by implementing relativistic corrections.
Let us start by studying a beam of spin-less particles. The corresponding Klein-Gordon equation reads Taking $A^0 = 0$, considering only the paramagnetic interaction term and choosing the gauge where ∇ · A = 0, we get where $k^2 = E^2 - m^2$. This equation has to be compared with the Schrödinger equation Thus relativity is implemented just by substituting the non-relativistic momentum $k = \sqrt{2mE}$ by the relativistic one $k = \sqrt{E^2 - m^2}$. Therefore the structure of the cross section in the Born approximation does not change. Let us assume now that we have a beam of unpolarised spin-1/2 particles. Using the conventional notation, the Dirac equation for our problem becomes We next multiply by [26,27], and we obtain which leads to the same equation as before for each component, using the relativistic momentum. Thus again the structure does not change. Before closing this section it must be noted that we are performing the calculation in the most favorable situation, in which the dipole is perpendicular to the beam. However, we expect to produce many monopole-anti-monopole pairs, which will be created in all possible orientations, and therefore the final result will behave as an unpolarised cross section and will be smaller. Let us use the above study for LHC physics. Imagine that monopole-anti-monopole pairs are created in the collisions [21][22][23]. Some of those pairs annihilate into photons and some of them escape the interaction region. The annihilation cross section has been studied for some time [22,29,30]. Those monopoles which escape might be detected directly or bind to matter, and methods for detection have been devised [15,20,25,31]. We are here interested in discussing what happens while the pairs are escaping the interaction region, because this effect might help disentangle the monopole from other exotic particles.
A pair of opposite magnetic charges will create a magnetic dipole field, as we have shown in the previous section, from which the particles of the beam will scatter. We study next what happens with proton and ion beams at LHC with the maximum planned beam energy of 7 TeV and maximum luminosity.
III. SCATTERING OF CHARGED PARTICLES ON A PAIR MONOPOLE-ANTI-MONOPOLE
We study proton beams first. Using Eq. (19) we plot in Fig. 2 the shape of the cross section as a function of the scattering angle, in units of $d^2$. It is a typical electromagnetic cross section, large at small angles and decreasing rapidly as the angle increases. Thus the sought signature should occur in the forward direction.
In order to get some realistic estimates for detection we have to fix several scales. The first scale to fix is d. The minimum possible value for d is twice the classical radius of the monopole ($\sim 2g^2/m$), which for a monopole of mass 500 GeV is ∼ 0.03 fm. For the maximum value of d we choose the separation between monopole and anti-monopole in a magnetic Bohr atom ($\sim 2n^2/mg^2$); for large n ∼ 100 this leads to d ∼ 240 fm.
The next parameter we need to determine is the duration of the collision. This parameter, together with the luminosity of LHC, $2 \times 10^{34}\ \mathrm{cm^{-2}s^{-1}}$, will determine the number of protons scattered by each pair. To determine that number we need to know the maximum separation from the interaction point at which the dipole is still active and its velocity of separation from the impact point. We will use for the effective separation distance the width of the bunch, ∼ 16 µm, for which we get $\beta t \sim 0.8 \times 10^{-13}$ s. In our plots we take for the velocity β the value 0.01, corresponding to production almost on shell, noting that β enters the equation as ∼ 1/β, so a rescaling of our results is trivial.
Finally we need to know the number of pairs produced in the collisions. We used the production cross section for spin 0 monopoles [2] calculated using the techniques of refs. [21][22][23].
Let us discuss first monopoles of 500 GeV mass with Dirac coupling g given by the quantization condition Eq. (1). The cross section for the pairs produced is ∼ 1000 pb. With these scales fixed, we calculate the number of protons scattered in one year, assuming that the pair separates with a velocity β = 0.01 from the interaction point. In the process the monopole will separate from the anti-monopole and d will increase from a small value initially to a relatively large value once they leave the proton bunch. The result of the calculation is shown in Fig. 3. The upper curve corresponds to a dipole of d ∼ 240 fm and the lower to a dipole of d ∼ 0.03 fm. The result corresponds to a typical electromagnetic interaction where the forward direction is favoured, but where the non-forward scatterings are an important characteristic. The validity of the Born approximation for the large values of d might be questionable; those results have to be taken as an indication of the order of magnitude. From an experimental point of view it is the non-forward directions which characterise the creation of a monopole-anti-monopole pair. We see that detection in the near-forward direction is possible for large values of d, and the scenario is especially suited for the big detectors ATLAS, CMS and LHCb. These observations are complementary to direct detection and the annihilation of monopole-anti-monopole pairs into photons. Direct detection might not differentiate monopoles from other exotics, and annihilation produces broad bumps which are not very characteristic [22,29,30]. However, together with the observation of non-forward protons at beam energy, these signatures become a clear identification of monopole production.
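The order of magnitude of the production numbers used here can be reproduced with a back-of-the-envelope rate estimate. The cross sections and luminosities are the ones quoted in the text; the $10^7$ s effective running year is our assumption:

```python
# Back-of-the-envelope estimate of monopole-anti-monopole pairs produced
# per year from the numbers quoted in the text. The 1e7 s effective
# running time per year is an assumption, not a figure from the paper.

PB_TO_CM2 = 1e-36          # 1 pb = 1e-36 cm^2

def pairs_per_year(sigma_pb, lumi_cm2_s, seconds=1e7):
    """Production rate L*sigma integrated over an effective running year."""
    rate = sigma_pb * PB_TO_CM2 * lumi_cm2_s   # pairs per second
    return rate * seconds

# Protons: sigma ~ 1000 pb, L = 2e34 cm^-2 s^-1  ->  ~2e8 pairs/year
print(pairs_per_year(1000, 2e34))

# Lead ions: sigma ~ Z^2 * sigma(pp) with Z = 82, but L = 1e27 cm^-2 s^-1
print(pairs_per_year(82**2 * 1000, 1e27))
```

The several-orders-of-magnitude gap between the two rates is the luminosity problem for ions discussed below.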
The β-coupling schemes used in many calculations [21,22] lead to production cross sections which are smaller and therefore to a smaller number of protons scattered, as shown in Fig. 4. The monopole-anti-monopole production cross section decreases rapidly with the monopole mass [21][22][23] and so will the number of scattered protons. We show these results in Fig. 5 for Dirac coupling. Thus for larger monopole masses the dipole effect becomes more and more difficult to detect. Let us discuss next heavy ion scenarios by studying $^{208}\mathrm{Pb}^{82+}$ beams. In order to get estimates we fix the scales again. The effective duration of the interaction is calculated as before, namely as the time it takes the pair to get out of the bunch. Since the bunches have the same size as for the proton we use the same time scale. We take the same escape velocity of the ions, β = 0.01. Since the collision takes place in an extreme relativistic scenario, we approximate the lead nucleus by a flat pancake and assume central collisions; thus the production cross section for monopole-anti-monopole pairs can be approximated by $Z^2\sigma(pp)$, noting that photon fusion is the dominant production mechanism and that the neutrons do not contribute to production. Unluckily the luminosity for ions at LHC is much smaller, $10^{27}\ \mathrm{cm^{-2}s^{-1}}$. This factor proves to be dramatic in not allowing detection. We show in Fig. 6 (left) the average number of particles scattered per year for a $^{208}\mathrm{Pb}^{82+}$ beam. It is clear that with the present LHC luminosity for lead the possibility of measuring the dipole effect with lead ions is out of the question. In order to see a signature with the MoEDAL detector the luminosity has to be increased by at least $10^4$. In Fig. 6 (right) we show the results for this luminosity. Detection is difficult but possible in the slightly off-forward direction.
We can summarise the results of our investigation by concluding that the dipole effect of a monopole-anti-monopole pair may be detectable at LHC if monopole masses do not exceed 1000 GeV, with available proton beams and reachable luminosities. With ion beams and present luminosities, detection is not feasible. The signal for the existence of the pair is clear: one should look for protons at beam energy in off-forward directions.
IV. SCATTERING OFF MONOPOLIUM
Monopolium is a bound state of monopole and anti-monopole. It cannot have a permanent dipole moment. However, in the vicinity of a magnetic field it can acquire an induced magnetic dipole moment through its response to the external field. In quantum mechanics the magnetic polarisability α is connected to the change of the energy levels of the system caused by the external field. The general framework to evaluate these changes is (stationary) perturbation theory applied to the total Hamiltonian, which, in the case of a monopolium immersed in a static (and uniform) magnetic field B, can be written where µ = m/2 is the reduced mass of the monopole, $V_{MM}$ the potential energy associated with the monopole-anti-monopole interaction within a non-relativistic framework, and $\vec{M}$ the magnetic dipole induced in the system. The (negative) lowest order correction to the ground state energy of the system is quadratic in the perturbative field and defines the magnetic (paramagnetic) susceptibility $\alpha_M$, where $|B\rangle$ is the ground state of $H(\vec{B})$, $E_0$ the ground state energy of the unperturbed Hamiltonian $H_0$, and $B = |\vec{B}|$. The magnetic dipole operator $\vec{M}$ is inferred by duality from the analogous electric dipole operator: where $\vec{r}$ is the relative position of the monopole and anti-monopole. The susceptibility $\alpha_M$ can be equivalently defined from the induced magnetic moment as in the classical case, namely Both Eqs. (27) and (29) lead to the well-known perturbative expression (recall that $\langle 0|M|0\rangle = 0$ because of parity invariance) which relates the polarisability $\alpha_M$ to the inverse energy-weighted sum rule $m_{-1}$. Because of the rigorous bounds among sum rules one can estimate $m_{-1}$ through a lower bound with Thus to get an estimate for the polarisability one has to calculate the first few sum rules for the magnetic dipole operator (28): where z is the relative distance of monopole and anti-monopole in the direction of the external field.
Since monopolium is a two-body system the calculation of the sum rules can be performed rather easily not only for the odd moments which depend on commutators, but for the even moments also, although they require the evaluation of anticommutators [32].
i) $m_0$ gives the total integrated response function and is related to the rms radius of monopolium. The simplicity of the previous commutator is basically due to the fact that the commonly used monopole-anti-monopole potentials, $V_{MM}$, commute with the magnetic dipole operator (28).
Lower and upper bounds to the magnetic susceptibility can be found. The so-called Feynman bound is given by [33,34]. We make here the assumption that the previous lower bound can reasonably approximate the magnetic susceptibility for monopolium, as established in other contexts [32][33][34], thus where $\langle r^2\rangle$ is the mean square radius of the monopolium system. Let us describe the physical scenario for detection of monopolium. We assume that monopolium is produced at LHC at 14 TeV, fundamentally by photon fusion in the reaction Let us assume that monopolium is produced near threshold with a mass below 2000 GeV. The time scale of the process is dominated by the lifetime of monopolium, $t \sim 1/\Gamma \sim (1/10)\ \mathrm{GeV}^{-1}$. The protons travel close to the speed of light and therefore the distance scales are ∼ 0.02 fm. The magnetic field created by the moving protons deforms monopolium and gives it a magnetic moment, where we equate the induced magnetic moment to that of an effective dipole as described in the previous sections. d is a measure of the stiffness of monopolium. In this way we will apply the formalism developed in the previous sections to this effective magnetic moment. Our goal is to estimate $\alpha_M$ and B to get d, and then we apply the scattering formalism of the previous sections.
To calculate $\alpha_M$ we need a model for monopolium, i.e. an interaction potential. There are several models in the literature [29,35], but for the purpose of the present investigation the approximation to the potential of Schiff and Goebel [35], $V(r) = -g^2\,\frac{1 - \exp(-2r/r_0)}{r}$, used in ref. [28] will be sufficient. The approximation consists in substituting the true wave functions by Coulomb wave functions of high n. For each $r_0$ a different value of large n will be best suited. We use the equation to parametrise all expectation values in terms of ρ, where $\rho = r_M/r_{\mathrm{classical}}$, $r_M$ being the expectation value of r in the (n, 0) Coulomb state, and α the electromagnetic fine structure constant, ∼ 1/137. We allow ρ to be a continuous parameter, in this way representing potentials of different cutoff ranges. In terms of ρ the binding energy becomes a function covering the interval [0, 2m]. In this approximation all the moments can be determined analytically. The magnetic susceptibility obtained from the Feynman estimate Eq. (37) becomes $\alpha_M = \frac{1}{93312}$ Let us now estimate the magnetic field acting on monopolium. We assume that monopolium is moving slowly compared to the protons ($\beta_M \sim 0.01$), so it is static on the time scale of the problem. The proton which creates the magnetic field is moving very fast, $\beta_p \sim 1$. The time scale of the problem is determined by the lifetime of monopolium, which leads to an effective radius of R ∼ 0.02 fm, thus The effective distance, Eq. (40), becomes In order to perform the calculation we require the monopolium production cross section. To do so we have used the formalism and computational programs of [28] with updated pdfs. In Fig. 8 we plot d as a function of the size parameter for m = 1000 GeV and m = 1500 GeV. We note that the stiffness parameter ranges from 0.001 fm for strongly bound monopolium to 0.1 fm for weakly bound monopolium.
We limit the calculation here to proton beams, since we have seen in the previous sections that the luminosity for ions is too low to produce detectable results. In Fig. 10 we show the dependence of the number of protons scattered per year on the scattering angle, for different values of the size parameter and monopole masses. The figure on the left shows that the strong binding scenario might allow detection for monopolia with a mass below 1000 GeV, while observations in the weak binding scenario are difficult. This has to do with the monopolium production cross section, which decreases very fast with the ρ parameter, compensating for the smaller stiffness. The figure on the right shows that as we increase the monopole mass for fixed ρ the cross section becomes smaller and observability is reduced. The production cross section diminishes greatly as the monopole mass increases.
We have not studied here the energy dependence of the cross section, since in the Born approximation the cross section comes out energy independent and all the energy dependence will come from the production cross section [22,28]. We have presented all results for 7 TeV proton beams and LHC luminosities.
To summarise, we stress that the analysis for monopolium depends strongly on the details of the dynamics. Different monopole-anti-monopole potentials might lead to different results. In particular, the phenomenon depends very strongly on the binding energy and the decay width. Large binding energies and small widths will increase observability. Given the neutral nature of monopolium, the detection of non-forward protons is ideal for its characterisation. However, for weakly bound monopolia with short lifetimes, given the planned luminosities at LHC, the phenomenon would not be observable.
V. CONCLUDING REMARKS
In previous work we studied ways to detect Dirac monopoles bound in matter by means of proton and ion beams [20]. We also studied the possibility of finding monopoles not free but in bound pairs of monopole and anti-monopole, the so-called monopolium [22,28]. Monopolium has lower mass than a pair of monopole-anti-monopole and also annihilates into photons [29,30], but because it is neutral it is difficult to detect directly. In this paper we pursue some investigations to detect monopoles at LHC beyond direct detection and their decay properties. We study the distortion produced in the beam by their permanent or induced magnetic dipole moment to characterise detectability. We have modelled the interaction by a fixed magnetic dipole made of two magnetic charges g and −g separated by a distance d. We have studied how this effective magnetic dipole interacts with a beam of charged particles. The main result is that the beam particles will be deflected and therefore particles with beam energy will appear in off-forward directions. We have shown that monopole-anti-monopole pairs lead to a sizeable effect with the proton beams at LHC and thus the effect is suitable for detection in ATLAS, CMS and LHCb. However, present heavy ion luminosities do not allow detection, which makes the scenario not useful for MoEDAL. In the case of monopolium the strong coupling limit also leads to off-forward protons, a scenario which could characterise the production of this neutral particle. However, our study shows that observability of the phenomenon depends very strongly on the lifetime of monopolium and its binding energy.
To conclude, monopoles can be detected directly or by the decay of monopole-anti-monopole pairs into photons. Monopolium can be detected by its decay into photons. We have shown that detecting beam particles at beam energy in non-forward directions becomes an additional tool for monopole and monopolium detection.
An Eye-gaze Tracking System for Teleoperation of a Mobile Robot
Most telerobotic applications rely on a Human-Robot Interface that requires the operator to continuously monitor the state of the robot through visual feedback while using manual input devices to send commands to control the navigation of the robot. Although this setup is present in many examples of telerobotic applications, it may not be suitable in situations where it is not possible or desirable to have manual input devices, or when the operator has a motor disability that does not allow the use of that type of input device. Since the operator already uses his/her eyes in the monitoring task, an interface based on inputs from their gaze could be used to teleoperate the robot. This paper presents a telerobotic platform with a user interface based on eye-gaze tracking that enables a user to control the navigation of a teleoperated mobile robot using only his/her eyes as inputs to the system. Details of the operation of the eye-gaze tracking system and the results of a task-oriented evaluation of the developed system are also included.
INTRODUCTION
In a telerobotic system, a human operator controls the movements of a robot from a remote location. Some of these systems serve only the purpose of teleoperating the robots and others allow the human operators to have a sense of being at the remote location through telepresence. These robotic systems certainly have very interesting applications with enormous benefits to society (Minsky, 1980). Examples of interesting real-world applications are those in the area of Ambient Assisted Living (AAL), where teleoperated robots are starting to be used to remotely enable the presence of their users and to provide companionship to elderly people (Amedeo et al., 2012). The different platforms combine a robotic mobile base with a remote video conference system for the communication between distributed teams (workers, relatives, or health professionals) and elderly people at home, or at healthcare facilities (Kyung et al., 2011; Tsui et al., 2011). There are also several mobile telepresence robots commercially available for the general public, such as Double Robotics (Double Robotics, n.d.), Giraff (Giraff, n.d.), QB Avatar (Anybots, n.d.), or R-Bot Synergy Swan (R.Bot, n.d.). These robots are relatively cheap and considered an important tool for inclusion.
Most telerobotic applications rely on a Human-Robot Interface (HRI) that requires the operator to continuously monitor the state of the robot through some sort of visual feedback and to use manual input devices to send commands to control the movements of the robot. This engages the eyes of the operator in the monitoring task and the hands in the controlling task throughout the whole duration of the teleoperation. Although this setup is present in many examples of telerobotic applications, it may not be suitable in various situations, namely when it is not possible or desirable to have manual input devices, or when the user has a motor disability that does not allow their use. Also, an effective hands-free teleoperation interface is interesting in its own right.

The experimental platform consists of a mobile robot at a remote location and a teleoperation station. The mobile robot is an adapted version of the Turtlebot II robotic platform (Turtlebot II, n.d.) that comprises a mobile base called Kobuki, a netbook, and a Microsoft Kinect 3D camera sensor. The basic configuration was augmented with a mini-screen and a webcam, both mounted on a tower.
The teleoperation station is a normal laptop equipped with the non-intrusive eye-gaze tracking system MagikEye.
Both the teleoperation station and the mobile robot are Wi-Fi enabled and communicate through wireless Internet.
The Software of the Platform
Figure 2 shows the different software applications that comprise the platform. The "Skype" application is present in both the robot and the control station and allows video and audio transmission between the two. This application works independently of the others. Although this application provides the means to implement a basic telepresence platform, since the robot also has a screen and a webcam with a microphone, that possibility was not considered in this research and the application was used solely to allow remote monitoring from the teleoperation station.
The "Robot" application is responsible for controlling the robot navigation. It receives commands from the control station through User Datagram Protocol (UDP) messages, which it converts into robot-specific commands that are sent to the Kobuki base through a serial connection. The application also receives sensor data from the Kobuki base through the same serial connection. Although the Kobuki base has several different sensors, in this project the application only used the data regarding the status of the bumpers, which it uses to suspend a control command if the bumpers indicate the presence of an obstacle.
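The receive side of this design can be sketched as follows. The port number, the text-based message format, and the `send_to_base`/`read_bumpers` helpers are hypothetical stand-ins; the paper does not specify the wire format or the serial protocol.

```python
# Sketch of the "Robot" application loop: receive UDP commands from the
# control station, translate them to base commands, and suspend forward
# motion while a bumper reports an obstacle. Port, message format, and
# the two helper callables are assumptions for illustration.
import socket

UDP_PORT = 5005  # assumed port

def parse_command(msg: bytes):
    """Assumed wire format: b"FORWARD 0.3" -> ("FORWARD", 0.3)."""
    action, speed = msg.decode().split()
    return action, float(speed)

def control_loop(send_to_base, read_bumpers):
    """send_to_base/read_bumpers stand in for the serial link to Kobuki."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", UDP_PORT))
    while True:
        msg, _addr = sock.recvfrom(64)
        action, speed = parse_command(msg)
        # Suspend the command if any bumper indicates an obstacle.
        if action == "FORWARD" and any(read_bumpers()):
            send_to_base("STOP", 0.0)
        else:
            send_to_base(action, speed)
```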
The "Kobuki Control" refers to the firmware of the robot base responsible for controlling its hardware. The "MagikEye" application at the control station, implements the eye-gaze tracking system used to develop the user interface for the project.
The "Teleoperation" application implements the user interface to control the remote mobile robot. It receives the eye-gaze tracking data from the MagikEye application through Windows messages and, based on the user interaction, generates commands to control the navigation of the robot that are sent to the remote robot through UDP messages. The application also has the option to control the robot using the keyboard and the mouse.
THE USER INTERFACE BASED ON EYE-GAZE TRACKING
This section describes the operation of the user interface and the eye-gaze tracking system.
The User Interface
The user interface of the Teleoperation application described in the last section was implemented with the objective of providing the operator with the capabilities for both controlling and monitoring. Therefore, the user interface must provide access to controlling commands as well as adequate presentation of the remote images captured by the robot. Since the images are presented through the Skype application occupying the entire screen, the user interface was implemented as a transparent layer on top of the entire screen.
The transparency of this layer allows the user to issue commands to control the navigation of the robot whilst monitoring the images of the remote location captured by the robot. Commands are issued when the user looks at certain regions of the transparent layer. Figure 3 shows the layout of the three regions that were defined.
The arrows shown in the figure are just a representation of the commands associated with each region and do not exist in the interface. By looking at one of the three regions, the user can issue a command to make the robot go forward, turn left, or turn right. Each command has an associated speed parameter that the user can also control. The speed value is set proportionally to the position where the user is looking inside a certain region. The area without arrows corresponds to a region not associated with any commands and provides the user with rest for the eyes and with the opportunity to inspect parts of the scene.
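A minimal sketch of this region-to-command mapping in Python. The band geometry, the screen resolution, and the command names are assumptions for illustration; the actual layout is the one shown in Figure 3.

```python
# Illustrative mapping from an on-screen gaze point to a navigation
# command with a proportional speed. Region geometry and resolution
# are assumptions, not the layout of Figure 3.
SCREEN_W, SCREEN_H = 1680, 1050  # assumed screen resolution

def gaze_to_command(x, y):
    """Return (command, speed) for a gaze point, or None in the rest area.

    Speed grows the deeper the gaze is inside a region."""
    if y < SCREEN_H * 0.25:                      # top band: go forward
        speed = 1.0 - y / (SCREEN_H * 0.25)      # higher gaze -> faster
        return ("FORWARD", round(speed, 2))
    if x < SCREEN_W * 0.2:                       # left band: turn left
        return ("LEFT", round(1.0 - x / (SCREEN_W * 0.2), 2))
    if x > SCREEN_W * 0.8:                       # right band: turn right
        return ("RIGHT", round((x - SCREEN_W * 0.8) / (SCREEN_W * 0.2), 2))
    return None                                  # rest area: no command
```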
The Eye-gaze Tracking System
The eye-gaze tracking system used is called MagikEye and is a commercial product from the MagicKey company (MagicKey, n.d.). It is an alternative point-and-click interface system that allows the user to interact with a computer by computing his/her eye-gaze. The system is composed of a non-intrusive hardware component that captures images of the eyes of the user, and a software component that processes the images and calculates the point of the gaze on the computer screen. The user can move their head without interfering with the operation of the application. A calibration process is required prior to obtaining the points of the gaze on the screen. The system uses a very lightweight protocol based on Windows messages that allows integration with other applications and is available for the Windows platform. The following describes the operation of the system in more detail.
The hardware used by the system can be seen in Figure 4 and is comprised of a high definition camera with a maximum spatial resolution of 1280x1024 pixels and a color resolution of 8 bits. This camera uses a USB 2.0 interface, has the ability to acquire and transmit 25 frames per second at full resolution, and can provide 60 frames per second with a spatial resolution of 1280x400. The camera uses a C-type lens with a focal length of 25 mm, which allows the capture of the user's face images in high detail, crucial to the proper functioning of the system. The camera and lens are integrated with two infrared illuminators that emit at a wavelength of 840 nm. The emitted infrared meets the EN62471 standard in terms of safety for the user. The typical positioning of the camera is near the base of the computer screen facing the user's face, as shown in Figure 1.
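As a quick consistency check, the raw data rates implied by the two acquisition modes are similar, and both fit within USB 2.0 throughput. The 8 bits per pixel figure is from the text; the USB overhead estimate is ours.

```python
# Raw data rates for the two acquisition modes quoted in the text,
# at 8 bits (1 byte) per pixel.
def rate_mb_s(width, height, fps, bytes_per_pixel=1):
    return width * height * fps * bytes_per_pixel / 1e6  # MB/s

full_res = rate_mb_s(1280, 1024, 25)   # ~32.8 MB/s
fast_mode = rate_mb_s(1280, 400, 60)   # ~30.7 MB/s

# USB 2.0 raw signalling is 480 Mbit/s = 60 MB/s; with protocol
# overhead the practical throughput is roughly 35-40 MB/s, so both
# modes are feasible.
print(full_res, fast_mode)
```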
The eye-gaze tracking method is based on the detection of the dark pupil obtained by the combination of the angle of the infrared illumination with the lens of the camera, as shown in Figure 5.
The first step of the method is to detect with high precision the center of the pupil. This is accomplished using a modified custom version of the Hough Transform (Duda et al., 1972), optimized for high speed. The algorithm can detect the pupil with high precision even when the pupil has a slightly oval shape and is partially covered by the eyelid.
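The actual detector is a proprietary, speed-optimized variant of the Hough Transform that also copes with oval, partially occluded pupils. The center-voting idea behind it can be illustrated with a minimal sketch (synthetic edge points and a known radius — both simplifications not present in the real system):

```python
# Minimal circular Hough voting for a circle of known radius: each edge
# point votes for every candidate center lying at that radius from it;
# the accumulator peak is the estimated center. Illustration only.
import numpy as np

def hough_center(edge_points, radius, shape):
    """edge_points: iterable of (y, x); returns the (y, x) of the peak."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic pupil edge: points on a circle of radius 10 centered at (30, 40)
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
print(hough_center(pts, 10, (64, 64)))  # close to (30, 40)
```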
The second step of the method consists in detecting the position of two white dots closest to the center of the eye that are the reflections of the two infrared illuminators. These reflections (white blobs) are used as a reference to calculate the direction of the user's eye in relation to the camera. Figure 6 shows a sequence of images that are obtained from the right eye when the user looks at the upper left corner, the upper right corner, the bottom left corner and the lower right corner of a computer screen. As shown in the images, the relative positions of the reflections (white blobs) from the center of the eye are related to the eye-gaze of the user.
The third step of the method maps these relative positions of the reflections onto the total resolution of the computer screen to estimate the point of the gaze. The maximum variation in the horizontal or vertical location of the reflections relative to the center of the eye, when the user looks at opposite boundaries of the computer screen, does not exceed 40 pixels. This value has to be mapped to the full screen resolution. In the case of a screen with a horizontal resolution of 1680 pixels, this means that the granularity is at least 1680/40 = 42 pixels. This error is increased by the error in estimating the center of the eye and the error in estimating the exact position of the infrared reflections. The following techniques have been implemented to minimize these errors, increase the accuracy of the system, and allow the user to place the cursor of the mouse on any pixel of the computer screen: A sub-pixel resolution is used to calculate the center of the eye. The center is calculated in decimal terms. To do that, different weights are used to measure the probability of a particular pixel being effectively the center of the eye. Then the 5 possible centers with highest probabilities are selected and a weighted average of those centers is calculated.
A similar technique is used to determine the centers of the white blobs. The algorithms are optimized to process the largest possible number of images. The system processes 60 frames per second. Since the mouse update is performed at a frequency of 20 Hz, the calculation of the final position results from the average of 3 consecutive frames.
The two eyes are processed independently and the results are averaged to calculate the final point of gaze. A time-domain filter is used to stabilize the final point of gaze during small movements, without affecting large movements, such as when the eyes move rapidly from one side of the screen to the other.
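A minimal sketch of these accuracy-improving steps, assuming the detector supplies per-pixel candidate scores (the exact weighting scheme used by MagikEye is not disclosed):

```python
# Sketch of the accuracy steps described above: a sub-pixel center from
# a weighted average of the 5 most likely candidate pixels, then a final
# gaze point averaged over both eyes and 3 consecutive frames (60 fps
# input, 20 Hz cursor update). Candidate scores are assumed inputs.
def subpixel_center(candidates):
    """candidates: list of ((x, y), score); returns the weighted centroid."""
    top5 = sorted(candidates, key=lambda c: c[1], reverse=True)[:5]
    total = sum(score for _, score in top5)
    x = sum(px * score for (px, _), score in top5) / total
    y = sum(py * score for (_, py), score in top5) / total
    return (x, y)

def gaze_point(left_eye_frames, right_eye_frames):
    """Average the two eyes per frame, then the last 3 frames."""
    per_frame = [((lx + rx) / 2, (ly + ry) / 2)
                 for (lx, ly), (rx, ry) in zip(left_eye_frames[-3:],
                                              right_eye_frames[-3:])]
    n = len(per_frame)
    return (sum(p[0] for p in per_frame) / n,
            sum(p[1] for p in per_frame) / n)
```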
SYSTEM EVALUATION
The performance of the eye-gaze interface was evaluated using a task-oriented evaluation, similar to the one proposed in (Latif et al., 2008). The evaluation had two goals. The first was to compare the performance of the user interface based on eye-gaze tracking with conventional modalities of interaction based on two manual input devices, the keyboard and the mouse. The second was to investigate the user's perception and opinion about the usability of the system.
A navigational task was designed and nine volunteers, aged between 21 and 45 years, performed the same task using all three different modes of interaction. After completing the task, each participant filled out a questionnaire on the system's usability. The participants were familiar with using computers but had no experience in teleoperating mobile robots or in using interfaces based on eye-gaze tracking. All participants were given a brief verbal description of the goals of the evaluation study and of how the interface works. The aim of the task was to drive the robot along the track shown in Figure 7.
The track had its beginning and end within a room and included passing through a door. The total length of the track was approximately 22 meters.
Performance Evaluation
One metric commonly used to evaluate the performance of human-robot interaction applications involving teleoperation and navigation is efficiency. For the purpose of this work, efficiency was defined as the time to complete the navigational task. This time was measured for all three modes of interaction for each participant, starting from the start-point of the track and finishing by returning to the same point.
A brief explanation of the task was given to each participant before starting the task, but the participants did not undergo any training session, even with the eye-gaze interface.
Before each participant started to execute the task, the MagikEye application was calibrated for that user. Then each participant executed the task three times using the three interaction modes. First the task was completed with the mouse interface, next with the eye-gaze interface, and finally with the keyboard interface.
The efficiency of the three modes of interaction is shown in the chart of Figure 8 in the form of the average time of task completion in seconds. The error bars represent the standard error.
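The averages and error bars in Figure 8 follow the standard mean and standard-error-of-the-mean computation over the nine participants; a sketch with hypothetical completion times (the per-participant raw times are not reported in the paper):

```python
import math

def mean_and_sem(times):
    """Mean task-completion time and standard error of the mean, as
    plotted in Figure 8. The sample times below are hypothetical."""
    n = len(times)
    mean = sum(times) / n
    var = sum((t - mean) ** 2 for t in times) / (n - 1)   # sample variance
    return mean, math.sqrt(var / n)                       # SEM = s / sqrt(n)

# Hypothetical completion times (seconds) for nine participants:
m, se = mean_and_sem([60, 70, 65, 75, 80, 55, 70, 85, 90])
```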
The results showed that the keyboard interface came first in terms of performance, the mouse interface second, and the eye-gaze interface third. Nevertheless, the experiment demonstrated the feasibility of the eye-gaze interface as a means of HRI in teleoperation applications: despite its relatively low performance, all participants managed to finish the navigational task using all three modes of interaction.
Usability Evaluation
To evaluate the user's perception and opinion about the usability of the eye-gaze interface, the participants were asked to complete a System Usability Scale (SUS) survey (Brooke, 1996).
The SUS is a simple, widely used 10-statement survey developed as a "quick-and-dirty" subjective measure of system usability. The tool asks users to rate their level of agreement or disagreement with the 10 statements (half worded positively and half negatively) about the system under evaluation. The level of agreement is given on a scale of one to five, where one is strongly disagree and five is strongly agree.
The evaluation of the usability of the system performed by the participants revealed an average SUS score of 70.8 (on a scale of 0 to 100), which is considered to represent good usability (Brooke, 1996; Nielsen, 1994).
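The SUS score reported above is computed with Brooke's standard scoring rule, which maps the ten 1-5 ratings onto a 0-100 scale; a minimal sketch:

```python
def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996): `responses` are the ten
    1-5 agreement ratings in questionnaire order. Odd-numbered
    (positively worded) items contribute (rating - 1); even-numbered
    (negatively worded) items contribute (5 - rating); the sum of
    contributions (0-40) is multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i is 0-based
                for i, r in enumerate(responses))
    return total * 2.5
```

A respondent who strongly agrees with every positive statement and strongly disagrees with every negative one scores 100; uniformly neutral answers score 50.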
The scores of the individual statements of the SUS survey can be grouped to obtain a set of quality components that characterize the system (Nielsen, 1994).
The chart shown in Figure 9 presents the average scores for the different quality components considered (on a scale of 0 to 4).
It can be seen from the chart that all the quality components obtained a positive score. However, no quality component stands out relative to the others, and overall the scores are not high. These results, together with the SUS score, again demonstrate the feasibility of the system but emphasize the need for further development.
CONCLUSIONS
A telerobotic platform that uses a user interface based on eye-gaze tracking was presented, along with details of the operation of the eye-gaze tracking system. The system was evaluated using a task-oriented evaluation, and the results support the conclusion that the proposed interface is a feasible means of HRI in teleoperation applications.
The evaluation results and the observations during the experiments identify some possible improvements that can be considered for future work.
The inclusion of a pan-and-tilt mechanism to control the webcam of the robot could improve the overall performance of the system, since the operator will have more flexibility during monitoring and control tasks and could also provide the ability to implement telepresence.
A feature requested by several participants, in relation to the eye-gaze interface, was the possibility of having some kind of visual feedback identifying which region/command is active. This makes sense because the other, manual modes of interaction have intrinsic feedback from the tactile and visual senses, feedback that is lost when using the eye-gaze interface implemented with the transparent layer.
Some functionality could be added to improve the steering control of the robot. For example, it would be interesting to be able to drive the robot along a curved path.
Operationalizing the Exposome Using Passive Silicone Samplers
The exposome, which is defined as the cumulative effect of environmental exposures and corresponding biological responses, aims to provide a comprehensive measure for evaluating non-genetic causes of disease. Operationalization of the exposome for environmental health and precision medicine has been limited by the lack of a universal approach for characterizing complex exposures, particularly as they vary temporally and geographically. To overcome these challenges, passive sampling devices (PSDs) provide a key measurement strategy for deep exposome phenotyping, which aims to provide comprehensive chemical assessment using untargeted high-resolution mass spectrometry for exposome-wide association studies. To highlight the advantages of silicone PSDs, we review their use in population studies and evaluate the broad range of applications and chemical classes characterized using these samplers. We assess key aspects of incorporating PSDs within observational studies, including the need to preclean samplers prior to use to remove impurities that interfere with compound detection, analytical considerations, and cost. We close with strategies on how to incorporate measures of the external exposome using PSDs, and their advantages for reducing variability in exposure measures and providing a more thorough accounting of the exposome. Continued development and application of silicone PSDs will facilitate greater understanding of how environmental exposures drive disease risk, while providing a feasible strategy for incorporating untargeted, high-resolution characterization of the external exposome in human studies.
disease [6]. However, unlike genetic sequencing, measuring the cumulative effect of exposure over the lifespan comes with significant complications. Measurements of chemical exposure are complicated by the estimated millions of exposures that vary temporally over a lifetime, including environmental pollutants, chemicals in consumer-facing goods, biologics, and pharmaceuticals [7]. The relationship between many of these exposures and health effects is unknown, and there is a need to perform discovery studies that enable systematic characterization of the exposome and how it relates to health outcomes. These exposome-wide association studies require robust sampling strategies that can be incorporated into population studies and enable detection of a wide range of exposures. In this review, we highlight the advantages of considering measurement of the untargeted external exposome for new insight into the relationship between environmental exposures and disease. We discuss the use of silicone passive sampling devices (PSDs), which show considerable promise for untargeted measurement of external exposures within an exposome-wide association study framework. Analytical considerations, including untargeted analysis using high-resolution mass spectrometry and strategies for exposome data science, are discussed. Lastly, we provide a framework for using untargeted external exposure monitoring within a deep exposome phenotyping framework.
Operationalization of the Human Exposome
The exposome is unique in 'omic sciences because it represents an integrated measure across multiple compartments characterizing how non-genetic factors external and internal to the host influence disease risk. As a result, the exposome in its totality requires attention to impacts from internal, external, and psychosocial factors, many of which require separate approaches and study designs to measure. The internal exposome combines measurement of biological activity with internal dose biomarkers that can include levels of the parent chemical, transformation products, and adducts of reactive compounds. The external exposome includes environmental exposures to toxic chemicals, pollution, and radiation, while also incorporating behavioral variables like diet, exercise, and drug use. Non-specific exposures, such as social and psychological stressors, are the third component of the exposome. While these three compartments tend to be measured and considered separately, they are dependent and interrelated. All exogenous biomarkers (internal exposome) originate from exposures that occurred outside the host (external exposome), and many of these exposures can be both potentiated and varied depending on non-specific factors, such as stress and lack of sleep (psychosocial exposome) [8].
Measuring the Internal Exposome
Most efforts to date have focused on the internal exposome due to the availability of biological specimens collected and stored from well-established population studies. These include approaches that aim to identify chemical biomarkers to estimate exposure burden and its relationship to adverse health outcomes, as well as 'omic approaches that define specific phenotypes of exposure and disease. The most promising approaches for comprehensive measures of the internal exposome include untargeted high-resolution metabolomics, which detects low molecular weight compounds within a biological sample [9][10][11]. While initially developed to characterize disease-related changes in endogenous metabolites, methods that use high-resolution mass spectrometry (HRMS) show sensitivity and dynamic range to detect low-level chemical exposures and drugs, in addition to endogenous metabolites from critical pathways [9,12,13]. Continued advancement in HRMS instrumentation and computational approaches for data extraction has resulted in their widespread adoption for exposome research [14][15][16]. New applications show the strengths of HRMS to understand chemical phenotypes of exposure for environmental stressors, exposures during pregnancy and other life stages, occupational exposures, and environment-disease relationships [17][18][19][20][21][22][23][24][25]. When combined with additional 'omic measures, HRMS provides a systems biology approach to link exposure to internal dose, biological response, and disease [6]. Within this framework, internal dose is assessed by screening for the presence of metabolites that arise from exogenous chemicals, while biological response to exposure is determined by identifying alteration in endogenous processes (e.g., gene, protein, and metabolite expression). 
Biological alterations associated with exposure or disease can be considered separately using a "meet-in-the-middle" approach, and overlapping associations reinforce a causal relationship between exposure and disease, providing insight into underlying disease mechanisms [26][27][28]. However, interpretation of these results can be challenging due to exposure timing, varied or unknown biological half-lives of exposure biomarkers, and complex exposure-response effects that occur in distal tissues.
Measuring the External Exposome
External exposome monitoring provides a standalone, but complementary, measure of environmental stressors [29]. Unlike measurements of the internal exposome, which tend to be precise to the individual, precision for the external exposome varies with the measurement strategy. When estimating inhalation and location-driven exposures for populations over large geographical areas, geospatial/remote sensing and regional stationary sampling approaches are often used [30][31][32]. Air pollution is often assessed with satellite-based surface-point differentiation, and remote sensing methods have also been used to assess distance to green and blue space, temperature, and light pollution [33]. Chemical exposures can be estimated using distance from known pollution sources, such as location relative to contaminated sites, or surface and groundwater pollution [34]. Stationary samplers that incorporate sensors can provide highly accurate measurements at a single location over time, while others that use absorbent sampling material to collect pollutants provide an integrated measure over the observation period. These point-measurements from stationary samplers are then often extrapolated to estimate regional concentrations. Depending on the age of the samplers or satellites, these approaches provide temporally dense measures to estimate exposure histories for large populations, a key advantage when studying how past exposures influence current health outcomes [35][36][37][38][39][40][41][42]. However, many of the techniques used to estimate exposures lack the precision to assess microenvironment changes. The use of mobile sampling devices, including automobiles or drones, with adjustment for time and activity patterns using smart phones, has improved accuracy; however, these approaches still cannot account for high variability due to activity and changes in microenvironment exposure levels.
Individual exposome monitoring focuses on characterizing interactions between a person and exposure sources. As a result, multiple strategies are possible, including detection of chemical exposures in food or water and characterization of the indoor and outdoor microenvironments [43]. To measure inhalation exposures, mobile samplers are often worn by study participants or can be placed throughout different microenvironments to improve measurement resolution. These samplers can be active, which combines a pump with samplers to quantitatively measure exposure, as well as PSDs, which collect time-integrated concentrations of chemicals through passive diffusion into the sampler matrix. Though various designs for active samplers have been developed, key limitations include the need for an external battery to power sampling pumps, frequent calibration to verify air flow, and expensive equipment. Thus, active samplers can be difficult to operate and uncomfortable to carry, especially for children [44][45][46][47]. Passive samplers, which rely on less invasive technologies such as adsorbent strips and wearables like silicone wristbands, pouches, and badges, provide an alternative strategy to screen for both known and unknown exposures in large populations. PSDs validated for exposures with known uptake rates, such as benzene or trichloroethylene, have been widely used in occupational monitoring studies for industrial chemicals [23,[48][49][50][51][52][53][54]. However, for most PSDs, ongoing research is focused on better understanding the mechanisms for chemical equilibrium with sampler matrices, which can vary by chemical molecular weight, media pore size, and silicone/air partitioning coefficients [55]. Determining these parameters for different PSD materials and designs is necessary for understanding biases when this approach is used for exposure monitoring.
To better incorporate the exposome into the study of human health, there is a critical need to leverage strategies that enable comprehensive characterization across different exposome compartments. To achieve the power necessary to identify how low-level exposures and associated mixture effects contribute to disease outcomes, it is necessary to use approaches that allow low-cost sampling options and can be deployed in large populations. Current studies show how biospecimens, including blood, urine, and saliva, combined with untargeted assays, provide a solution for internal exposome characterization [7,[56][57][58][59]; however, no similar approaches are routinely available for the external exposome. In the following sections, we review the use of innovative silicone PSDs that show considerable promise as sampling devices to screen the external exposome.
Passive Sampling Devices for External Exposome Profiling
PSDs are non-invasive, easy to distribute, and can overcome many of the limitations that complicate interpretation of exposure biomarkers in biological samples [60]. While the configuration and material can vary, resulting in differences in uptake kinetics and exposure sampling, PSDs generally include some type of sorbent material that allows diffusion into the sorbent matrix following air or surface contact. Ideal sampling materials show linear uptake, high capacity, and reproducible sampling behavior under typical deployment conditions. When displaying these properties, PSDs can collect a time-averaged, personalized measurement of respiratory and/or dermal chemical exposures. Many of these properties depend on the sampler material, the analyte of interest, and the sampler design; as such, PSD validation for specific analytes may be needed if strict quantitation is required. PSD configurations and placement can also be optimized to detect specific routes of exposure. For example, some are designed to measure only airborne exposures by minimizing contact with media other than air, either by encasing the sampling material or by placing the sampler as a brooch over clothing [61][62][63]. Others, such as wristbands, show promise as an integrated measure of multiple exposure pathways [64].
Ideal PSDs for exposomic studies should have high partitioning coefficients for compounds with a wide range of physicochemical properties, be cheap to manufacture, and be provided in a form that is easy for the participant to use. While multiple strategies have been proposed, the use of commercially available silicone wristbands has been shown to provide a versatile, low-cost PSD that enables screening for a broad range of chemical exposures [65][66][67]. As a result, polydimethylsiloxane and other silicone elastomers are among the most commonly used sampling materials for PSDs. In Table 1, we summarize the human exposure studies completed to date that leveraged commercially available silicone materials as PSDs for human exposure monitoring.
Of these studies, wristbands were the most commonly used PSD, while a limited number used multiple placement strategies to isolate exposure pathways, including brooch samplers for airborne respiratory exposures and isolated wristbands to minimize dermal contact [46, 61-65, 68, 69]. Silicone PSDs have been used to measure different classes of environmental pollutants, including polycyclic aromatic hydrocarbons (PAHs), brominated and organophosphate flame retardants (B- or OFRs), pesticides and insecticides, phthalates, passive tobacco smoke exposure, and volatile organic chemicals (VOCs), among others. In most cases, PSDs exhibited high affinity for these chemical classes, highlighting the benefit of using this material for exposome monitoring. Most study participants wore wristbands for 7 days, with some studies extending to 30-day continuous wear periods [65,67,70].
Most PSDs (84%) were characterized using targeted approaches, where specific chemical classes were quantified with in-house analytical standards. An additional 18% of the studies included some form of suspect screening. Most exposures included volatile and semi-volatile compounds measured using gas chromatography (GC), including single- and triple-quadrupole mass spectrometers, with some studies leveraging electron capture detectors to increase specificity towards halogenated compounds [60,67,71,72]. Only a few studies combined GC with HRMS, including time-of-flight (TOF) and Orbitrap mass spectrometers [61][62][63][73][74][75][76][77][78][79]. Of the studies using HRMS technologies, only 50% were untargeted, defined as methods that used data-driven approaches for signal detection, filtering, and annotation. The use of liquid chromatography (LC) methods, which enables detection of many contemporary-use pesticides and emerging chemicals of concern, was also limited. Five studies included LC-MS analysis of wristbands: one measured pesticide exposures, one phenols, one SVOCs, and the remaining two passive tobacco smoke exposure [80][81][82][83][84]. None of these studies used LC-HRMS, a key technology for untargeted screening of many environmental exposures [85,86].
Since the use of silicone PSDs is a new approach for passive exposure monitoring, 38 of the 44 reviewed studies combined PSDs with validated approaches for assessing exposures, including comparison to biomarker levels and established sampling devices. These include quantification of known exposure biomarkers in blood and urine [45,68,81,87,88], hand wipes [64,75,87], active air sampling [45,46,89,90], and low-density polyethylene PSDs [67,91], as well as using questionnaires to estimate past exposures [60,61,77,78,81,82,84,87,88,92,93]. Findings include a significant correlation between the PSD chemical concentrations and accepted biomarker measurements in urine [45,87], demonstrating usability of silicone PSDs to evaluate personal exposures. These samplers similarly showed high specificity to detect unique chemical profiles, including detection of exposure profiles based on dietary and behavioral trends, as well as unique chemical signatures within different rooms of the same residence [84,94].
Most of the reviewed studies include questionnaires to associate PSD detected chemical classes with behavioral, lifestyle, and demographic patterns that may influence exposure patterns and potential health outcomes. Although few studies focus on biological endpoints, recent applications have attempted to link PSD measurements to health outcomes, including DNA damage biomarkers [72], thyroid function [95], social behaviors in children [96], and respiratory-related disorders [62]. Interestingly, one study combined wristbands with effect-directed analysis (EDA) to identify wristband-captured exposures contributing to thyroid dysfunction [76]. In this study, extracted wristbands were tested using gene-reporter assays that evaluate thyroid disrupting bioactivity, providing a biological-based prioritization of compounds potentially contributing to adverse effects.
One of the challenges facing large-scale adoption of silicone-based PSDs for monitoring multiple exposures is uncertainty in partitioning and diffusion rates into the sampler matrix for compounds showing a wide range of physical-chemical properties [67,91]. Contact with surfactants and oils, such as soaps and lotions, may also influence uptake of certain compounds. Since chemical uptake into wristbands varies, estimating environmental concentrations can be difficult [72]. To improve quantitative interpretation, recent efforts have focused on identifying partitioning coefficients by chemical class [44]. Silicone PSDs have been shown to outperform traditional sampler materials like low-density polyethylene (LDPE), showing improved sequestration of polar compounds and heavier polybrominated flame retardants [91]. However, other chemical classes have shown lower affinity for silicone, including PAHs [97]. While this could limit detection of important air- and smoke-related exposures, silicone showed a higher correlation with urinary PAH metabolites and outperformed polyurethane foam combined with active air sampling [45]. Sorbent bars coated in polydimethylsiloxane showed comparable performance for sampling higher molecular weight PAHs, with stable uptake for periods greater than 24 h [62]. Additional chemical classes showing good affinity for PSDs include OFRs, compounds in tobacco smoke, and plasticizers [75,87]. Silicone conditioning refers to the precleaning process that removes impurities and free siloxanes from the silicone matrix prior to deployment; solvent conditioning uses solvent-based extraction methods to prepare wristbands, while heat conditioning uses a combination of heat treatment and vacuum to remove free siloxanes.

PSDs are available commercially in a wide range of colors and sizes and can be modified to include text through embossing/debossing.
While price varies depending on vendor, amount purchased, color/text options, and size, most are available at low cost (< $0.50 USD) and provide an economical solution for PSDs. However, wristbands purchased commercially often contain a high degree of impurities that can interfere with measurement sensitivity. Before deployment, thorough conditioning and cleaning are required to remove unbound siloxanes and other impurities [44,65]. Dyes and inks can further contribute to background impurities, and testing should be performed to assess their impact before use. If available, uncolored or clear silicone materials should also be considered [68]. The importance of conditioning prior to wristband deployment is highlighted in Fig. 1. Uncleaned samplers result in a high degree of co-extracted siloxanes (Fig. 1A) that can impact compound detection, foul GC columns, and introduce a high degree of instrument contamination. Solvent washing or heat treatment removes the majority of impurities present in the silicone (Fig. 1B), enhancing detection of exposures following wristband deployment (Fig. 1C). The most common method for silicone conditioning includes a series of washes using organic solvents, including ethyl acetate, methanol, hexanes, and pentane (Table 1). Wristbands are often equilibrated with each solvent for a period ranging from 30 min to multiple days [65,87,98]. Following all washes, wristbands are allowed to dry under an inert gas or in a clean environment prior to packaging for distribution.
While solvent washing can remove a significant number of impurities, this approach is expensive, time consuming, and difficult to scale to large quantities. Recent approaches to lower the time and cost of conditioning have been developed, including re-use of solvents and accelerated methods that reduce solvent washing [44,84]. Heat conditioning provides a suitable alternative to solvent washing and can result in considerable savings in personnel time and solvent costs [44]. When using heat conditioning, wristbands are heated to 250-300 °C and maintained under vacuum (< 1 Torr) with periodic nitrogen flushing. Heating times can vary: some studies show that 3 h provides sufficient removal of siloxanes, although experience with heat-conditioned wristbands for untargeted analysis suggests periods of 20-24 h may be more suitable. Depending on the size of the oven, it is possible to prepare 50-100 wristbands per day. While heat conditioning improves wristband preparation throughput, effects of high temperatures on the silicone material, removal of less volatile siloxanes, and frequent cleaning of vacuum systems must be considered. All new batches of silicone must be tested for suitability with heat conditioning, as even minor changes in manufacturing can influence how silicone elastomers respond to the heating process. Cost-effective and robust conditioning of large numbers of silicone PSDs is one of the main barriers to use in large population studies. While current conditioning approaches can increase wristband costs 50- to 1000-fold, continued development of heat conditioning and alternative strategies is expected to decrease cost and improve capacity.
Following deployment of silicone PSDs, exposure-related compounds must be removed from the matrix and transferred to a form that is amenable to the chosen analytical method. Since the majority of silicone PSDs have focused on volatile and semi-volatile exposures, the most commonly used preparation methods are GC-friendly and include solvent extraction or thermal desorption (TD) (Table 1). The most common solvents for extraction include ethyl acetate, hexane, and dichloromethane. Extracts can then be processed through additional steps, including solvent evaporation and exchange, cleanup using solid-phase extraction (SPE), or injected as is. Care must be taken when selecting processing steps, as significant analyte loss can occur depending on the physicochemical properties of the analytes. For example, selection of extraction solvents with high octanol-water partitioning coefficients (Kow) may prevent extraction of more polar compounds, while drying steps can result in loss of volatile compounds.
TD methods have been used extensively for environmental sampling of volatile compounds, as well as analysis of silicone PSDs [44,99]. When using TD to analyze PSDs, silicone samplers are heated so that compounds are volatilized and either injected directly onto the GC column or trapped using filters prior to analysis. The advantages of these methods for deployed silicone samplers include improved sample preparation times, reduction of co-extracted matrix and nonvolatiles that can foul GC systems, and the ability to characterize highly volatile organic compounds. Development of the Fresh Air wristband, which uses thermal desorption to analyze polydimethylsiloxane PSDs, shows that it is possible to measure many different volatile and semi-volatile environmental exposures and to combine the approach with untargeted analysis to identify exposures associated with health outcomes [61][62][63]. Although limited, LC analyses of wristbands have all used solvent-extraction sample preparation methods, consistent with better detection of non-volatile and polar compounds [80][81][82][83][84].
High-Resolution Mass Spectrometry for Measuring the External Exposome
Although PSDs have received considerable attention for exposure monitoring, their combination with untargeted analysis is limited and most applications have focused on measuring common classes of known exposures. These studies provide important insight into chemical exposures in human populations but do not allow detection of unsuspected or uncharacterized chemicals that may be driving health effects [6,100]. Untargeted analyses depend upon methods that use HRMS, as it enables sensitive detection of low-level chemicals while providing sufficient mass accuracy and resolution for prediction of chemical formulas [7]. One of the first studies to combine silicone PSDs with untargeted analysis used GCxGC-TOF to identify personal exposure pattern variation for 27 participants across multiple regions within an urban environment [78]. Wristbands were characterized using a combination of targeted and untargeted methods, with targeted analysis including semi-volatile organic compounds (SVOCs). Untargeted results showed variable exposure profiles that included up to 1,000 detected chemical features and identified distinct clusters of compounds that distinguished seasons and regions. Targeted SVOCs showed no difference among these regions, highlighting the importance of expanding beyond known exposures when considering variability in individual exposome profiles.
The Fresh Air wristband, which includes a polydimethylsiloxane coated sampler bar located in an enclosed PTFE chamber, has been used extensively with GC-based HRMS methods that include Orbitrap and QTOF-based analyses for sampling exposures [101]. Using untargeted analysis, Koelmel et al. incorporated stringent filtering and deconvolution strategies to detect and identify exposures, including up to 615 high confidence annotations from the original 6,000 chemical signals detected in samplers worn by participants enrolled in a study designed for identifying biomarkers of air pollution [63]. Due to the flexible sampler design, Koelmel et al. have also shown that PSD placement, season, and residence type influence exposure profiles. Their work underscores the high sensitivity of PSDs when combined with untargeted analysis, and how this can be leveraged to characterize the external exposome and routes of exposure [61,62].
Recently, ultra-high-resolution mass spectrometry methods have been applied to stationary silicone PSDs to evaluate microenvironment exposures, as well as silicone wristbands worn by study participants. In the study by Kalia et al., silicone samplers were placed throughout different rooms within a residential location for a period of 7 days and analyzed using untargeted GC-HRMS [94]. Although none of the detected chemical features were identified, 1,347 signals were measured across all samplers located within one residence after correcting for background using field blanks. Comparisons between rooms in the same residence using principal component analysis (PCA) showed detection of room-specific signatures was possible. Untargeted GC-HRMS has also been used to characterize silicone wristbands worn by study participants; the results from this study show benefits of untargeted methods to characterize the exposome using PSDs, and demonstrate potential influence of sample and data processing on detected chemicals [74].
Many chemicals of emerging concern (CECs) are only detectable using LC-HRMS methods and are known to be volatilized and transported on aerosols; these include per- and polyfluorinated alkyl substances (PFAS). PFAS and many other CECs are ionizable, and partition between the gas and particulate phase depending on interaction with aqueous aerosols and matrices, resulting in unique interactions with PSDs dissimilar from nonpolar SVOCs and persistent organic pollutants [102]. Novel PSD designs have been developed to characterize PFAS and other target CECs in indoor air [103], outdoor air [104], and aqueous matrices [105,106]. Limited research has paired untargeted LC-HRMS analyses with PSD deployment of any kind [107], though active sampler applications with untargeted analysis show important airborne exposures from both biotic and abiotic sources can be detected using LC-HRMS and combined with biological endpoints, thus predicting disease risk [29,108]. To date, untargeted LC-HRMS has not been used to characterize personal exposome profiles using wearable silicone PSDs, despite well-established instrumental protocols for detection of low abundance, polar environmental chemicals. This data gap highlights an opportunity for optimization of wearable PSDs for semi-polar and polar chemicals suitable for LC-HRMS analysis, potentially leveraging sorbent or material modifications and innovative untargeted analytical workflows.
Analytical Considerations for Untargeted Analysis of Silicone PSDs
While targeted methods provide excellent sensitivity and can generate new insight into ongoing exposures, costs increase with the number of chemicals analyzed [109]. Thus, development of targeted exposome-level assays is cost-prohibitive and does not enable detection of unknown and uncharacterized exposures. Since untargeted analytical methods using HRMS maximize the number of chemicals that can be measured in a single sample, these approaches are optimal for combining the exposome with silicone PSDs. The most commonly used HRMS platforms include QTOFs, which estimate accurate mass based on the time an ion takes to traverse a given flight path, and Orbitraps, where injected ions are introduced into a charged and rotating spindle and the oscillation frequency of orbiting ions is used to estimate accurate mass. While both QTOFs and Orbitrap instruments have excellent mass accuracy for high-abundance peaks, Orbitraps that provide ultra-high-resolution capabilities (> 120,000) display the greatest sensitivity and resolution for low abundance environmental chemicals, making them the preferred platform for exposome research. Combined with adaptive algorithms for processing complex mass spectral data, it is now possible to detect over 100,000 chemical signals in samples, including low-level environmental pollutants [12, 110-112].
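Mass accuracy of the kind discussed above is conventionally reported in parts-per-million (ppm). As a side illustration (the compound and measured value below are hypothetical examples, not taken from the studies cited here), the calculation is straightforward:

```python
# Illustrative calculation of mass error in parts-per-million (ppm),
# the usual figure of merit for HRMS mass accuracy.

def mass_error_ppm(observed_mz, theoretical_mz):
    """Return the mass error in ppm between an observed and theoretical m/z."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# Example: protonated caffeine, [M+H]+ theoretical m/z = 195.0877;
# the observed value here is a made-up measurement.
theoretical = 195.0877
observed = 195.0881
error = mass_error_ppm(observed, theoretical)
print(f"mass error: {error:.2f} ppm")  # ~2 ppm, within typical HRMS tolerance
```

Sub-5-ppm errors are commonly required before a chemical formula is considered a plausible match.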
The number and types of chemicals detected in PSDs can be expanded by combining complementary separation and ionization approaches for HRMS. These include using alternate chromatography strategies, with LC and GC as the most comprehensive platforms for exposome-wide association studies. LC-HRMS platforms are best suited for measurement of polar molecules with ionizable functional groups, or large, non-polar molecules that include lipids, fatty acids, and sterols. However, many exposome chemicals are volatile enough to be introduced into the gas phase when heated and are not detected by LC-HRMS. Thus, GC-HRMS provides the best sensitivity and selectivity for these compounds [12,113]. Most detected chemicals from wearable silicone PSDs will exhibit some degree of volatility and most exposures are best detected using GC-HRMS [78]. When analyzing silicone PSD extracts, care must be taken to ensure the extracted samples are suitable for the analytical method of choice. All analyses should use a rigorous QA/QC plan, including at least 10% of samples as field blanks, which are non-deployed PSDs that were subjected to similar storage and transportation conditions. Often, these blanks are needed to separate the background from true signals and can be used to filter silicone-related chemicals from the final results.
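Blank-based filtering of the kind described above is often implemented as a simple fold-change rule. The sketch below assumes a 3x cutoff over the median field-blank intensity; the intensity values and the cutoff are illustrative, not from any specific study:

```python
import numpy as np

# Hypothetical intensity matrices: rows = chemical features, columns = replicates.
samples = np.array([[1500.,  900., 1200.],   # feature 0: real signal
                    [ 110.,   95.,  105.],   # feature 1: close to blank level
                    [3000., 2800., 3100.]])  # feature 2: real signal
blanks  = np.array([[ 100.,  120.],          # non-deployed PSD field blanks
                    [ 100.,   90.],
                    [  50.,   60.]])

fold_change = 3.0  # assumed cutoff; the appropriate value is study-specific
keep = np.median(samples, axis=1) > fold_change * np.median(blanks, axis=1)
print(keep)  # [ True False  True]
```

Feature 1 is discarded because its sample intensities are indistinguishable from the silicone/background level seen in the blanks.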
Following analysis of PSDs, chromatograms can be processed using a number of software tools. Key steps include identification of peaks and integration, deconvolution to identify mass spectra, and alignment of peaks across samples. Both commercial and open-source tools are available; however, algorithms optimized for detection of low abundance peaks are best for exposome research [114][115][116][117]. Deconvolution strategies enhance detection and identification of chemicals, with current approaches based upon peak shape similarity, hierarchical clustering, and correlation across samples [118][119][120][121][122]. By incorporating correlation for deconvolution, these methods are optimized for low abundance peaks that are often characterized by poor peak shape, and will include fragments, isotopes, and adducts from the same compound [123].
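A minimal sketch of the correlation-based grouping idea behind these deconvolution strategies follows; the intensity profiles, threshold, and grouping rule are illustrative and do not reproduce any specific tool's algorithm:

```python
import numpy as np

# Hypothetical intensity profiles across 6 samples for 4 detected ions.
# Ions 0 and 1 co-vary (e.g., a fragment and an isotope of one compound).
profiles = np.array([
    [10., 50., 20., 80., 40., 60.],
    [11., 52., 19., 79., 42., 61.],   # tracks ion 0 closely
    [90., 10., 70.,  5., 60., 15.],   # unrelated compound
    [ 5.,  5.,  6.,  5.,  5.,  6.],   # near-constant background
])

corr = np.corrcoef(profiles)          # pairwise correlation of ion profiles
threshold = 0.9                       # assumed similarity cutoff
grouped_with_0 = [j for j in range(1, 4) if corr[0, j] > threshold]
print(grouped_with_0)                 # only ion 1 groups with ion 0
```

Grouping fragments, isotopes, and adducts this way reduces redundant features and helps recover low-abundance peaks with poor peak shape.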
Fig. 1 Sample total ion chromatograms for A unconditioned silicone wristband analyzed as received from the manufacturer; B silicone wristband conditioned for 18 h at 300 °C maintained at < 0.1 Torr with nitrogen venting at 15, 30, 45, 60, 90, 120, 180, 240, 300, and 360 min; C heat-conditioned silicone wristband after a 7-day deployment period during which the wristband was worn continuously by the study participant.
Identification of mass spectral signals is one of the key challenges in applying untargeted HRMS. Many detected ions do not match compounds listed in metabolomic or environmental chemical databases, and authentic standards are not available. Computational approaches that assign annotation confidence can enhance prediction of chemical identities. These approaches harness multiple lines of evidence to evaluate the quality of annotation and, combined with appropriate databases, improve the number of annotated compounds [124,125]. Multiple databases exist for exposome research, including the Blood-Exposome database [126] and CECScreen, which includes over 70,000 CECs and predicted metabolites [85,126,127]. The US EPA CompTox Dashboard provides a key resource for identifying detected chemicals [128], with information on 765,000 chemicals and in silico predicted electron ionization (GC) and MS/MS (LC) spectra for all entries [129,130]. For annotating unknown peaks that do not match database entries, numerous tools can be used to characterize ion fragmentation patterns and predict possible identities and biotransformation products of parent metabolites [131,132]. Continued efforts focused on developing new chemical databases that house both environmental chemicals and endogenous metabolites are expected to improve annotation capabilities for untargeted mass spectrometry data in exposome research [133].
Molecular networking of GC-HRMS spectra and MS/MS data from LC-HRMS [134,135] provides an additional strategy for classifying and inferring potential chemical identities, thus enabling insight into related substructures and similar compounds based upon similarity networks among spectra.
Exposomic Data Science
For exposome studies designed to evaluate a disease or other adverse outcomes, signals from untargeted profiling must be prioritized to identify which exposures are driving risk [59]. When applying an exposome-wide association study framework to study relationships between exposures and outcomes, uni- or multivariate data analysis approaches are applied to evaluate the relationship of each detected chemical with the outcome. Because identification of all detected signals is often not possible, variable selection enables prioritization of exposures for identification. Due to the large number of signals detected in exposome studies, traditional data analysis methods are challenged by false positives and robust identification of the top signals defining environment-disease relationships. There are several sources of error that lead to this issue, including insufficient sample size relative to the number of compounds analyzed, excessive false discovery rate from multiple hypothesis tests, and analyzing each part of known or hypothesized networks individually [136]. An alternative approach is to apply multivariate methods that analyze the entire HRMS dataset jointly. These methods represent the samples as points and determine projections of these points into lower dimensional space, hyperplanes, components, or latent variables, such that a measure of information about the data points is maximized.
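One standard remedy for the multiple-testing problem raised above is false-discovery-rate control via the Benjamini-Hochberg step-up procedure. A compact numpy sketch follows; the p-values are made up for illustration:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q (BH step-up)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Largest k such that p_(k) <= (k/m) * q; reject the k smallest p-values.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Hypothetical p-values from testing each chemical feature against an outcome
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.05))  # only the two smallest survive
```

With thousands of detected features, such corrections quickly become conservative, which is one motivation for the latent-variable approaches discussed next.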
Multivariate and data reduction analytic strategies solve two major issues with the traditional exposome-wide association studies by (1) increasing power, since corrections for multiple comparisons are performed on the number of latent features (tens) rather than the number of chemicals (hundreds or thousands), and (2) facilitating determination of networks, since the latent variables are constructed based upon statistical or functional similarity and jointly use information across chemicals. Linear versions of these methods, such as PCA, independent component analysis (ICA), canonical correlation analysis (CCA), linear discriminant analysis (LDA), and partial least squares discriminant analysis (PLS-DA), are popular due to their simplicity of interpretation [137][138][139][140][141][142][143]. Nonlinear methods, such as self-organizing maps, support vector machines, and random forests, are less useful for interpretation but can be more powerful than linear methods for regression or classification [144][145][146]. Continued development of multivariate and dimension reduction techniques for application to exposomic studies is an ongoing area of research, with future application of these methods being expected to reduce complexity of the exposome while improving insight into how chemical mixtures influence health. For further information about multivariate methods used in exposome applications, we refer the reader to the following review articles [147][148][149].
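As a concrete illustration of linear dimension reduction on an exposomic feature table, PCA can be computed from the SVD of the centered data matrix. Everything below (sample size, feature count, noise level, the rank-2 latent structure) is simulated, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature table: 20 samples x 500 chemical features, where two
# latent factors drive most of the variance (mimicking exposure patterns).
scores_true = rng.normal(size=(20, 2))
loadings_true = rng.normal(size=(2, 500))
X = scores_true @ loadings_true + 0.1 * rng.normal(size=(20, 500))

# PCA via SVD of the centered matrix: multiple-comparison corrections can
# then target a handful of components instead of 500 individual features.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance explained per component
scores = U * s                    # sample coordinates on the components
print(explained[:2].sum())        # near 1 for this rank-2 signal
```

The same scores matrix is what gets plotted when, as in the residential-sampler study above, PCA is used to separate room- or season-specific exposure signatures.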
Strategies for Operationalizing Deep Exposome Phenotyping
To realize measurement of the exposome, it is critical to consider exposures across multiple compartments. While most studies use HRMS to characterize the internal exposome using biological samples, personal PSDs provide complementary advantages. First, PSDs are much cheaper to produce and distribute than collecting blood or urine samples. Biologics often require special methods and trained personnel for collection, including clinical visits. These materials must be stored at low temperatures to maintain sample integrity, and different handling and storage procedures can increase variability. PSDs can be provided directly to participants and returned by mail, with limited-to-no contact between study participants and coordinators. This capability is especially important when considering the additional restrictions placed on in-person research due to the COVID-19 pandemic. Long-term, secure storage of biologics can also be costly, as storage at −80 °C is common. In contrast, PSDs can be stored in sealed bags at 4 °C or room temperature, since compounds are stable within the silicone matrix.
Although studies to date have only used silicone PSDs with a small number of participants (average 50 participants; max 255; Table 1), this technology has the potential to provide a key exposome measurement within longitudinal cohort studies. Using heat-based conditioning methods, silicone wristband PSDs can be produced for as little as $5-$10, which is significantly lower than the cost for clinical visits to complete blood or other biofluid collection. The low cost and non-invasive nature of the silicone PSDs allows routine distribution to participants in large cohorts at study enrollment, and additional PSDs can be provided to participants during longitudinal follow-up periods. While cost for analysis by untargeted HRMS can be in the $200-$500 range, longitudinal follow-up can prioritize participant selection based upon health outcomes. Finally, silicone PSDs may provide improved detection and reduced variability for exposures with short biological half-lives. Depending on the compound, time for clearance from the human body can vary on the range of days to decades. When measuring compounds with short biological half-lives in blood or urine, the ability to detect a biomarker is dependent on the time of sample collection. Thus, exposome measurements in biological samples often suffer from high variability for rapidly metabolized compounds [150]. Due to compound stability within the silicone matrix, PSDs eliminate biological transformation and enable detection of the parent compounds averaged over longer time scales [83].
Due to their non-invasive nature, price, and ease of distribution, silicone PSDs are a key technology for measuring the exposome. While quantitative exposure measurements using PSDs are challenging if air-silicone partitioning behavior of analytes is not known, PSDs show considerable potential as a sampler to screen for the presence of both known and unexpected exposures that can be prioritized for further follow-up using traditional exposure assessment methods. Thus, rather than replacing collection of blood, urine, or other biological samples, they provide a complementary measure to assess specific compartments of the external exposome in population studies. For example, ingestion (eating and drinking) is one of the primary routes of environmental exposure and must be assessed using other approaches. Biospecimens also allow measurement of alterations across biological levels and long-term maladaptations, an important consideration for evaluating cumulative effects of environmental exposures. Combined with internal chemical and bioeffect monitoring, the use of silicone PSDs provides a strategy for deep exposome phenotyping in human populations (Fig. 2).
The goal of deep exposome phenotyping is to provide a systematic framework that operationalizes exposome-wide association studies of human health by combining the key measures necessary to understand the continuum from exposure to disease. Application in longitudinal cohorts can enable in-depth, comprehensive assessment of exposures, and when combined with untargeted HRMS analysis, provides the chemical coverage necessary for characterizing complex mixtures. Integrating external and internal measures of the exposome with multiple "-omic" layers will allow a functional approach to understanding how environment contributes to disease risk, laying a foundation for the mechanisms underlying environment-related diseases [26,27].
Because silicone PSDs are available at low cost, they can be easily incorporated into ongoing longitudinal studies and employed as a tool to estimate temporal changes in exposure patterns through repeated follow-up with new samplers. While the focus to date has been environmental health studies, silicone PSDs also provide a strategy for incorporating the exposome into precision medicine. Environmental factors are widely recognized for their potential to alter treatment efficacy and disease progression [151]. Silicone wristbands and other PSDs can provide a non-invasive means of chemical surveillance, helping identify patients for primary intervention or participants who would benefit from increased follow-up.
Fig. 2 To better understand the human exposome, there is a need to measure exposures across both the external and internal exposome. Combining silicone PSDs, biological samples, and untargeted HRMS provides a unified strategy for deep exposome phenotyping that enables systematic measurement of environmental exposures and corresponding biological exposures. While most efforts to date have focused on the internal exposome, silicone PSDs are low cost, non-invasive, easy to distribute, and allow measurement of compounds with short biological half-lives. Application of silicone PSDs within longitudinal studies will improve measurement of exposures at different life stages and provides the chemical coverage necessary for characterizing complex mixtures. Integrating external and internal measures of the exposome with other omic layers will allow a functional approach to understanding how environment contributes to disease risk, laying a foundation for the mechanisms underlying environment-related diseases.
Conclusions
While environment is one of the main drivers of disease risk, the ability to measure the complexity of the exposome is limited by its temporal nature, availability of samples, and the technology to detect complex exposure patterns. While considerable advances have been made in analytical strategies for the internal exposome, comparable methods for the external compartment are not well-developed. Silicone wristbands and other PSDs, which can be combined with untargeted HRMS platforms to characterize the exposome, are a natural way to integrate measures of the external exposome into longitudinal studies. These devices are cheap, non-invasive, and can be easily distributed. Previous studies demonstrate their suitability for many environmental chemical exposures, which is critical for success in exposome applications. By using untargeted approaches, it is possible to detect and identify ongoing exposures that may have not been expected or characterized, supporting pollution control and identification of the primary chemical exposures experienced by humans. Thus, continued development and application of silicone PSDs will facilitate greater understanding of how environmental exposures drive disease risk, while providing a feasible strategy for incorporating untargeted, high-resolution characterization of the external exposome in human studies.
Funding This work was supported by funds received from the National Institute of Environmental Health Sciences (award numbers P30 ES023515 and U2C ES030859), National Cancer Institute (award number UG3 CA265846), and award number 874627 from the European Commission. Funding sources did not direct the study.
Compliance with Ethical Standards
Conflict of Interest None to declare.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dynamic Discrete Mixtures for High-Frequency Prices
Abstract The tick structure of the financial markets entails discreteness of stock price changes. Based on this empirical evidence, we develop a multivariate model for discrete price changes featuring a mechanism to account for the large share of zero returns at high frequency. We assume that the observed price changes are independent conditional on the realization of two hidden Markov chains determining the dynamics and the distribution of the multivariate time series at hand. We study the properties of the model, which is a dynamic mixture of zero-inflated Skellam distributions. We develop an expectation-maximization algorithm with closed-form M-step that allows us to estimate the model by maximum likelihood. In the empirical application, we study the joint distribution of the price changes of a number of assets traded on NYSE. Particular focus is dedicated to the assessment of the quality of univariate and multivariate density forecasts, and of the precision of the predictions of moments like volatility and correlations. Finally, we look at the predictability of price staleness and its determinants in relation to the trading activity on the financial markets.
Introduction
The last 20 years have witnessed a boost in the intradaily trading activity on the financial markets and, consequently, an enormous increase in the availability of stock prices observed at high frequency. On the one hand, the availability of stock prices sampled at high frequency has steered the empirical analysis of financial markets toward the use of ex-post measurements of (integrated) variance over fixed horizons (e.g., day), see the discussion in Andersen, Bollerslev, and Diebold (2010). On the other hand, prices sampled at high frequency display a number of microstructural features that challenge the adequacy of the standard specifications for the dynamics of stock prices. This opens the door to alternative model specifications for the high-frequency price moves.
We contribute to this strand of literature with a new statistical framework for the analysis of high-frequency prices. In the classic framework, the prices of financial assets are typically assumed to originate from a continuous distribution with time-varying parameters, for example, with stochastic volatility, see Shephard (2005) among many others. The widely adopted assumption of a continuous underlying price process is made to increase model tractability. However, financial markets regulations make stock price changes intrinsically discrete due to the minimum allowed tick size (also known as decimalization effect). As the sampling frequency increases (e.g., at the frequency of a few seconds) price discreteness becomes the dominating feature, see the recent discussion in Rossi and Santucci de Magistris (2018). The statistical analysis of discrete processes in Z poses substantial difficulties from a methodological viewpoint, greatly complicating the underlying theory and model interpretation, see the recent contributions of Koopman, Lit, and Lucas (2017) for a discrete-time model and Shephard and Yang (2017) for a model built in continuous time. Along with their intrinsic discreteness, high-frequency prices display a number of stylized facts, such as time-varying volatility and correlations, a large and persistent share of zero variations (zeros), a feature known as price staleness (see Bandi et al. 2020a; Bandi, Pirino, and Renò 2017), and occurrence of extreme realizations (fat-tailed distribution).
CONTACT Paolo Santucci de Magistris<EMAIL_ADDRESS>Department of Economics and Finance, LUISS "Guido Carli" University, Viale Romania 32, 00197 Roma, Italy; CREATES, Aarhus University, Fuglesangs Allé 4, 8210 Aarhus V, Denmark.
Supplementary materials for this article are available online. Please go to www.tandfonline.com/UBES.
In this article, we develop a flexible multivariate integer-valued model that incorporates the main empirical features of the price changes observed at high frequency. The model builds upon a simple mechanism for the generation of price changes that result from the difference between two unobserved random variables accounting for positive and negative moves. Since the price changes can only take values on a discrete grid, these two random variables must adhere to this constraint. The Skellam distribution of Irwin (1937) and Skellam (1946), which arises from the difference between two independent Poisson random variables, provides the natural baseline framework for discrete price changes, see also Barndorff-Nielsen, Pollard, and Shephard (2012) and Koopman, Lit, and Lucas (2017). In particular, Koopman, Lit, and Lucas (2017) assumed that the price changes of the individual assets traded on NYSE are conditionally distributed as a Skellam with stochastic volatility. The resulting specification belongs to the class of nonlinear non-Gaussian state space models, for which the likelihood is not analytically available. This leads to complicated inference and nonstandard estimation procedures; Koopman, Lit, and Lucas (2017) use simulated maximum likelihood relying on the numerically accelerated importance sampling (NAIS) method of Koopman, Lucas, and Scharth (2015). An extension to the multivariate context within the framework of Koopman, Lit, and Lucas (2017) is unfeasible, since the multivariate Skellam distribution (see Bulla, Chesneau, and Kachour 2015; Akpoue and Angers 2017) is remarkably difficult to handle. 1 Differently from the previous studies, our modeling framework builds upon the idea that the observed price changes are independent conditional on the realization of unobserved discrete-valued random variables characterizing the dynamic properties of the multivariate series at hand.
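For readers less familiar with the Skellam distribution: since it is the difference of two independent Poisson variables with intensities λ1 and λ2, its mean is λ1 − λ2 and its variance λ1 + λ2. A quick simulation confirms this (the intensities below are chosen arbitrarily, not estimated from data):

```python
import numpy as np

rng = np.random.default_rng(42)

# A Skellam draw is the difference of two independent Poisson draws, here
# standing in for the counts of positive and negative tick moves.
lam_pos, lam_neg = 1.5, 1.2
y = rng.poisson(lam_pos, size=200_000) - rng.poisson(lam_neg, size=200_000)

# Skellam moments: mean = lam_pos - lam_neg, variance = lam_pos + lam_neg
print(y.mean())  # close to 0.3
print(y.var())   # close to 2.7
```

Note that the baseline Skellam already places some mass at zero; the model in this article adds an explicit zero-inflation mechanism on top of it.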
In other words, the model belongs to the class of hidden Markov models (HMM), see among others Vermunt, Langeheine, and Böckenholt (1999), Bartolucci and Farcomeni (2009), Bartolucci, Farcomeni, and Pennoni (2012), and Zucchini, MacDonald, and Langrock (2017). Conditional on the latent Markovian structure, each individual asset is zero-inflated Skellam distributed and independent of other assets. The HMM structure is made up of two independent Markov chains. One Markov chain is responsible for the dynamics of the price changes and their mutual association through a hierarchical structure involving a latent Categorical random variable. The other Markov chain accounts for the time-varying probability of price staleness across assets.
We investigate the probabilistic properties of the model. After marginalization of the latent variables, the distribution of the observables is a multivariate mixture featuring the stylized facts outlined above: time-varying volatilities and correlations, discreteness, fat tails, as well as time-varying probability of zeros in excess of that implied by the baseline Skellam distribution. We show that the model has an alternative representation in terms of a single hidden Markov chain. This allows us to prove the identification of the model as well as to derive an expectation-maximization (EM) algorithm with steps available in closed form. Through the EM algorithm, it is possible to resort to maximum likelihood (ML) estimation with no exceptional effort. We also derive the predictive, filtered, and smoothed distributions of the latent variables, as well as the joint predictive distribution of price changes.
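The filtered and predictive quantities mentioned here rest on the standard HMM forward recursion. As a generic illustration of that machinery (not the authors' closed-form EM implementation), a numerically stable log-space forward filter for the likelihood can be written as:

```python
import numpy as np

def hmm_loglik(log_dens, Gamma, delta):
    """Forward algorithm: log-likelihood of an observed sequence under an HMM.

    log_dens : (T, J) array of log conditional densities log p(y_t | S_t = j)
    Gamma    : (J, J) transition matrix; delta : (J,) initial distribution
    """
    T, J = log_dens.shape
    alpha = np.log(delta) + log_dens[0]          # log p(y_1, S_1 = j)
    for t in range(1, T):
        m = alpha.max()                          # log-sum-exp for stability
        alpha = np.log(np.exp(alpha - m) @ Gamma) + m + log_dens[t]
    m = alpha.max()
    return m + np.log(np.sum(np.exp(alpha - m)))

# Sanity check: if the conditional densities do not depend on the state,
# the HMM likelihood reduces to the product of the marginal densities.
Gamma = np.array([[0.9, 0.1], [0.2, 0.8]])
delta = np.array([0.5, 0.5])
log_dens = np.log(np.full((3, 2), 0.5))
print(hmm_loglik(log_dens, Gamma, delta))  # equals 3 * log(0.5)
```

The same forward variables, suitably normalized, give the filtered state probabilities; a backward pass yields the smoothed probabilities used in the E-step.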
Our empirical results can be summarized as follows: the proposed modeling framework is sufficiently flexible to match the univariate and multivariate empirical distributions of high-frequency price changes and their associated moments, even when the stocks under investigation display heterogeneous tick sizes. This holds true in both low and high volatility periods, the latter being characterized by abnormal price variations especially at the opening and closing of the trading day. The model accounts well for all the empirical features displayed by the high-frequency prices, especially the large and time-varying proportion of zeros, which often occur simultaneously on multiple assets. Finally, the model features the decomposition of the probability of zeros into three determinants that can be linked to the trading activity on financial markets: absence of news, microstructural frictions, and offsetting demand and supply. We show that these components have distinctive roles in explaining different dimensions of illiquidity, and we employ them to predict the absence of trading volume on financial markets. The article is organized as follows: Section 2 presents the model and its properties. Section 3 discusses ML inference via the EM algorithm. Section 4 outlines the empirical application, and Section 5 provides an analysis of the decomposition of staleness based on the model outcomes. Section 6 concludes. In addition, a document with supplementary material reports additional results concerning the empirical application.
1 A notable application of the Skellam in the multivariate context is provided in Koopman et al. (2018), who adopt a copula specification coupled with generalized autoregressive score (GAS) dynamics. To guarantee model tractability, Koopman et al. (2018) imposed equicorrelation (see Engle and Kelly 2012), but this comes at the cost of a very restrictive dependence structure.
The Model
Let Y_{n,t} ∈ Z be the random variable representing the price change of asset n = 1, 2, . . . , N at time t = 1, 2, . . . , T, and let y_{n,t} be its realization. We collect the price changes of N assets in the N × 1 vector Y_t = (Y_{n,t}, n = 1, . . . , N) ∈ Z^N, with analogous notation for y_t = (y_{n,t}, n = 1, . . . , N). We assume that Y_t has the following stochastic representation

Y_t = B_{t;S^κ_t} ⊙ (X^{(1)}_{t;Z_t} − X^{(2)}_{t;Z_t}),   (1)

where ⊙ is the Hadamard product. The properties of Y_t are determined by the interaction of two unobserved random components: X^{(1)}_{t;Z_t} = (X^{(1)}_{n,t;Z_t}, n = 1, . . . , N) ∈ {0, 1, 2, . . .}^N and X^{(2)}_{t;Z_t} = (X^{(2)}_{n,t;Z_t}, n = 1, . . . , N) ∈ {0, 1, 2, . . .}^N are N × 1 vectors of random variables associated with positive and negative discrete price moves, respectively. Both X^{(1)}_{t;Z_t} and X^{(2)}_{t;Z_t} depend on the unobserved integer-valued random variable Z_t. In turn, Z_t depends on a homogeneous first-order Markov chain, S^ω_t, following a hierarchical structure as specified below. We assume that, conditional on Z_t, both X^{(1)}_{n,t;Z_t}|Z_t and X^{(2)}_{n,t;Z_t}|Z_t are iid Poisson distributed random variables for all n = 1, . . . , N and t = 1, . . . , T, with intensities λ^{(1)}_{n;Z_t} and λ^{(2)}_{n;Z_t}, respectively. Intuitively, X^{(1)}_{t;Z_t} and X^{(2)}_{t;Z_t} denote the two sides of the order book for the N assets aggregated across traders.
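The conditional data-generating mechanism, a Boolean indicator multiplying the difference of two Poisson draws, can be simulated in a few lines, holding the latent states fixed for simplicity. All intensities and probabilities below are hypothetical, and a draw B_{n,t} = 0 is read here as producing a stale (zero) price change:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 5  # assets and time steps (illustrative sizes)

# Conditional on fixed latent states, each asset is zero-inflated Skellam:
# hypothetical Poisson intensities for up/down moves, Bernoulli probabilities.
lam_up   = np.array([0.8, 1.5, 0.4])
lam_down = np.array([0.7, 1.4, 0.5])
kappa    = np.array([0.6, 0.9, 0.5])   # P(B_{n,t} = 1)

B  = rng.binomial(1, kappa, size=(T, N))
X1 = rng.poisson(lam_up,   size=(T, N))
X2 = rng.poisson(lam_down, size=(T, N))
Y  = B * (X1 - X2)   # elementwise (Hadamard) product zeroes the stale entries
print(Y)             # integer tick moves, with excess zeros where B = 0
```

In the full model the intensities and probabilities above would vary with the realizations of Z_t and S^κ_t rather than being constants.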
The term B_{t;S^κ_t} = (B_{n,t;S^κ_t}, n = 1, . . . , N) ∈ {0, 1}^N is a N × 1 collection of Boolean random variables, each responsible for setting to zero the corresponding price change at time t. We can interpret B_{t;S^κ_t} as a component capturing temporary market freezing induced by illiquidity frictions (such as transaction costs), thus significantly contributing to explaining the large fraction of zero returns at high frequency. 2 In particular, the B_{n,t;S^κ_t}|S^κ_t are assumed to be i.i.d. Bernoulli random variables independent across n and t. For all n = 1, . . . , N, the success probability κ_{n;S^κ_t} = P(B_{n,t} = 1|S^κ_t) depends on the random variable S^κ_t, which is an unobserved homogeneous first-order Markov chain. In other words, the states of S^κ_t determine the success probabilities of B_{n,t;S^κ_t}.
² We find that B_{t;S^κ_t} explains on average about 30% of the frequency of zeros. In Section 5, we will shed further light on the distinctive roles of the X and B components.

The random variables S^ω_t and S^κ_t follow independent homogeneous first-order Markov chains with finite state spaces given by {1, . . . , J} and {1, . . . , L}, respectively. The transition probabilities are γ^ω_{i,j} = P(S^ω_t = j|S^ω_{t−1} = i) and γ^κ_{h,l} = P(S^κ_t = l|S^κ_{t−1} = h); let Γ^ω and Γ^κ be the J × J and L × L transition probability matrices of S^ω_t and S^κ_t. Under the usual constraints on positiveness and summability of the transition probabilities, we have that γ^c_{i,j} > 0 and Γ^c ι = ι, for c = ω, κ, with ι being a vector of ones of proper dimension. The initial distributions of S^ω_t and S^κ_t are indicated by δ^ω = (δ^ω_j, j = 1, . . . , J)' and δ^κ = (δ^κ_l, l = 1, . . . , L)', respectively, and they coincide with the limiting distributions of S^ω_t and S^κ_t, that is, the two Markov chains are stationary.
The random variable Z_t determines a second hidden layer with state space {1, . . . , K}. In particular, we assume that Z_t|S^ω_t is independent of S^κ_t and follows a Categorical distribution with P(Z_t = k|S^ω_t = j) = ω_{j,k}. The parameters ω_{j,k} are collected in the J × K matrix Ω, and they are such that ω_{j,k} > 0 for all j = 1, . . . , J and k = 1, . . . , K, and Σ_{k=1}^K ω_{j,k} = 1 for j = 1, . . . , J. In other words, S^ω_t determines J different compositions of weights (ω_{j,k}) of the K pairs of Poisson random variables (X^(1)_{n,t;k} and X^(2)_{n,t;k}), whose intensities are determined by the realization of Z_t. This induces the hierarchical structure between S^ω_t and Z_t.³ The dependence structure generated by the model in Equation (1) is outlined in Figure 1. The effect of S^κ_t on Y_t is rather straightforward, since it directly affects the probability of drawing zeros. For instance, changes in the frequency of price staleness, possibly associated with prolonged periods of absence of trading, can be directly linked to the variations of the Markov chain S^κ_t.⁴ The same ease of interpretation does not apply to the interaction between the chain S^ω_t and Z_t, since their roles in determining the serial and contemporaneous dependence of price changes cannot be fully disentangled. However, their hierarchical structure allows for great flexibility with a relatively parsimonious modeling setup. Indeed, conditional on the realization of S^ω_t, Z_t handles time-specific dependencies across price changes and accommodates departures from the marginal distributions assumed for each series. The next sections present the distribution and the moments of Y_t, highlighting the distinctive roles of S^κ_t, S^ω_t and Z_t.
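The latent layer described above can be simulated directly: two independent Markov chains, plus Z_t drawn from the row of the weight matrix selected by S^ω_t. A minimal sketch with illustrative names and parameter values (not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_latent_states(T, gamma_omega, gamma_kappa, Omega, rng):
    """Simulate the latent layer: independent chains S^omega and S^kappa,
    and the hierarchical draw Z_t | S^omega_t ~ Categorical(Omega row)."""
    J, L, K = gamma_omega.shape[0], gamma_kappa.shape[0], Omega.shape[1]
    s_omega, s_kappa, z = np.empty(T, int), np.empty(T, int), np.empty(T, int)
    s_omega[0], s_kappa[0] = 0, 0
    for t in range(T):
        if t > 0:
            s_omega[t] = rng.choice(J, p=gamma_omega[s_omega[t - 1]])
            s_kappa[t] = rng.choice(L, p=gamma_kappa[s_kappa[t - 1]])
        z[t] = rng.choice(K, p=Omega[s_omega[t]])  # hierarchical layer
    return s_omega, s_kappa, z

s_omega, s_kappa, z = simulate_latent_states(
    100,
    np.array([[0.9, 0.1], [0.2, 0.8]]),      # Gamma^omega (J = 2)
    np.array([[0.7, 0.3], [0.3, 0.7]]),      # Gamma^kappa (L = 2)
    np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]),  # Omega (J x K, K = 3)
    rng)
```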
Distribution and Identification
X^(1)_{t;Z_t}, X^(2)_{t;Z_t} and B_{t;S^κ_t} jointly determine the distribution of the multivariate high-frequency price moves through their dependence on S^ω_t, S^κ_t and Z_t. After marginalization of S^ω_t, S^κ_t, Z_t and B_t, the unconditional distribution of Y_t is⁵

P(Y_t = y_t) = Σ_{j=1}^J Σ_{l=1}^L Σ_{k=1}^K δ^ω_j δ^κ_l ω_{j,k} Π_{n=1}^N [κ_{n;l} ψ(y_{n,t}) + (1 − κ_{n;l}) SK(y_{n,t}, λ^(1)_{n;k}, λ^(2)_{n;k})],   (2)

where ψ(y_{n,t}) = 1 if y_{n,t} = 0 (and ψ(y_{n,t}) = 0 elsewhere) is a Dirac mass at 0, SK(·) denotes the probability mass function of the Skellam distribution (see Skellam 1946),

SK(y_{n,t}, λ^(1)_{n;k}, λ^(2)_{n;k}) = e^{−(λ^(1)_{n;k} + λ^(2)_{n;k})} (λ^(1)_{n;k}/λ^(2)_{n;k})^{y_{n,t}/2} I_{|y_{n,t}|}(2 √(λ^(1)_{n;k} λ^(2)_{n;k})),

and I_{|y_{n,t}|}(·) is the modified Bessel function of the first kind. The distribution in (2) is a three-layer mixture of conditionally independent zero-inflated Skellam distributions. For this reason, we label the model the dynamic mixture of Skellam, DyMiSk, henceforth.

³ A similar hierarchical structure was adopted in Bartolucci and Farcomeni (2015), Geweke and Amisano (2011), Maruotti (2011), and Maruotti and Rydén (2009).
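The per-asset zero-inflated Skellam pmf can be coded directly from the Bessel-function form above; the sketch below (our own helper, `zi_skellam_pmf`) checks the Skellam part against `scipy.stats.skellam`:

```python
import numpy as np
from scipy.special import iv
from scipy.stats import skellam

def zi_skellam_pmf(y, lam1, lam2, kappa):
    """Zero-inflated Skellam pmf: kappa * 1{y=0} + (1 - kappa) * SK(y; lam1, lam2),
    with the Skellam pmf written via the modified Bessel function I_|y|."""
    sk = (np.exp(-(lam1 + lam2)) * (lam1 / lam2) ** (y / 2)
          * iv(np.abs(y), 2 * np.sqrt(lam1 * lam2)))
    return kappa * (y == 0) + (1 - kappa) * sk

# the Bessel-function form agrees with scipy's Skellam pmf when kappa = 0
y = np.arange(-5, 6)
manual = zi_skellam_pmf(y, 1.3, 0.7, 0.0)
assert np.allclose(manual, skellam.pmf(y, 1.3, 0.7))
```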
If we wish to condition on the past values of Y_t, the distribution becomes

P(Y_t = y_t | Y_{1:t−s} = y_{1:t−s}) = Σ_{j=1}^J Σ_{l=1}^L Σ_{k=1}^K π^ω_{t|t−s;j} π^κ_{t|t−s;l} ω_{j,k} Π_{n=1}^N [κ_{n;l} ψ(y_{n,t}) + (1 − κ_{n;l}) SK(y_{n,t}, λ^(1)_{n;k}, λ^(2)_{n;k})],   (3)

for s > 0, where π^ω_{t|t−s;j} := P(S^ω_t = j|Y_{1:t−s} = y_{1:t−s}) and π^κ_{t|t−s;l} := P(S^κ_t = l|Y_{1:t−s} = y_{1:t−s}) are the predictive distributions of S^ω_t and S^κ_t in states j and l, respectively, obtained as

π^ω_{t|t−s;j} = Σ_i [(Γ^ω)^s]_{ij} α^ω_{t−s;i} / Σ_i α^ω_{t−s;i},   π^κ_{t|t−s;l} = Σ_i [(Γ^κ)^s]_{il} α^κ_{t−s;i} / Σ_i α^κ_{t−s;i}.

The terms [(Γ^ω)^s]_{ij} and [(Γ^κ)^s]_{il} indicate the ij-th and il-th elements of the s-th power of the matrices Γ^ω and Γ^κ, respectively. Finally, α^ω_{t;i} = P(S^ω_t = i, Y_{1:t} = y_{1:t}) and α^κ_{t;i} = P(S^κ_t = i, Y_{1:t} = y_{1:t}) are the forward probabilities delivered by the forward filtering backward smoothing (FFBS) algorithm; more details are provided in Section 3.

⁵ To avoid an excessively heavy notation, in the following we omit the subscripts.
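The s-step-ahead predictive state probabilities amount to normalizing the forward probabilities into a filtered distribution and propagating it through the s-th power of the transition matrix. A minimal sketch with illustrative inputs:

```python
import numpy as np

def predictive_state_probs(alpha_ts, Gamma, s):
    """s-step-ahead predictive state probabilities from forward probabilities:
    normalize alpha_{t-s} to the filtered distribution, then propagate it
    through the s-th power of the transition matrix (illustrative names)."""
    filtered = alpha_ts / alpha_ts.sum()
    return filtered @ np.linalg.matrix_power(Gamma, s)

Gamma = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = predictive_state_probs(np.array([0.03, 0.01]), Gamma, 3)
```

For s = 0 the function simply returns the filtered distribution itself.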
Identification is proven under the following classic set of assumptions: (A1) S^ω_t and S^κ_t are irreducible; (A2) the rows of Ω are linearly independent; (A3) κ_{n;l₁} ≠ κ_{n;l₂} and (λ^(1)_{n;k₁}, λ^(2)_{n;k₁}) ≠ (λ^(1)_{n;k₂}, λ^(2)_{n;k₂}) for all n, l₁ ≠ l₂, and k₁ ≠ k₂. The following proposition establishes the identification of DyMiSk.
Proposition 2.1 follows from Theorem 1 and Proposition 2 of Gassiat et al. (2016). Specifically, consider an equivalent parameterization in which we let Γ^{ω,κ} = Γ^ω ⊗ Γ^κ be the transition probability matrix of the homogeneous stationary first-order Markov chain S^{ω,κ}_t, whose JL states are indexed by the pairs (j, l). Under (A1), S^{ω,κ}_t is irreducible, implying that the rank of Γ^{ω,κ} is full. Furthermore, under assumptions (A2) and (A3), the state densities reported in (4) are distinct. Hence, Proposition 2 and Theorem 1 of Gassiat et al. (2016) hold.
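The combined chain's transition matrix is the Kronecker product of the two individual ones; a quick numerical check confirms that the Kronecker product of two row-stochastic matrices with positive entries is again row-stochastic with positive entries (example matrices are our own):

```python
import numpy as np

G_omega = np.array([[0.9, 0.1], [0.3, 0.7]])
G_kappa = np.array([[0.8, 0.2], [0.4, 0.6]])

# 4 x 4 transition matrix of the combined chain (S^omega, S^kappa)
G_joint = np.kron(G_omega, G_kappa)
```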
Moments
By the properties of the Skellam distribution, all moments of Y_t exist and can be recovered by marginalization of the latent variables. Let Ξ(a, b; s) = E[Y^a_{t+s} (Y^b_t)'] be the matrix of cross products at lag s ≥ 0, with generic element ξ_{n,m}(a, b; s) = E[Y^a_{n,t+s} Y^b_{m,t}]. If s > 0 and n ≠ m, ξ_{n,m}(a, b; s) is given in Equation (5), where M_SK(p, λ₁, λ₂) is the p-th noncentral moment of a Skellam distributed random variable with intensities λ₁ and λ₂, and M_P(q, λ) is the q-th noncentral moment of a Poisson random variable with intensity λ. Equation (5) highlights the attenuation effect of κ_{m,l₁} and κ_{n,l₂} on all moments of Y_t: an increase in the probability of price staleness due to illiquidity determines a reduction in the magnitude of the moments of price changes.
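The noncentral moments M_SK(p, λ₁, λ₂) and M_P(q, λ) that enter these expressions can be obtained directly from scipy's frozen distributions; for p = 2 they reduce to variance plus squared mean:

```python
import numpy as np
from scipy.stats import skellam, poisson

l1, l2, lam = 1.3, 0.7, 2.0

# M_SK(2, l1, l2) = Var + mean^2 = (l1 + l2) + (l1 - l2)^2
m_sk2 = skellam(l1, l2).moment(2)

# M_P(2, lam) = lam + lam^2
m_p2 = poisson(lam).moment(2)
```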
In the limiting case with κ_{m,l₁} = κ_{n,l₂} = 1, Y_t is a degenerate multivariate random variable with all probability mass at zero. Furthermore, the two chains S^κ_t and S^ω_t directly affect the moments in Equation (5) through the powers of the transition matrices, (Γ^κ)^s and (Γ^ω)^s, respectively. For example, the s-th autoregressive moment of the squared price variations, which measures the persistence in the volatility of the price changes, is given in Equation (7); for n = m the expressions further simplify, with ξ_{n,n}(a, b; s) depending only on the intensities λ^(1)_{n;k} and λ^(2)_{n;k}. In this case, the moments do not depend on Γ^κ and Γ^ω, and they are functions of the stationary distributions of S^κ_t and S^ω_t. The predictive moments of Y_t are computed by replacing δ^ω_j and δ^κ_l in Equation (7) with the predictive probabilities reported in Equation (3); in particular, this delivers the predictive covariance matrix Σ_{t+1|t} reported in Equation (8).
Estimation Via the EM Algorithm
We now present the EM algorithm for the computation of the ML estimator of the DyMiSk model parameters. The EM algorithm is an extension of the one proposed in Catania and Di Mari (2020) for the estimation of multivariate hierarchical Markov-switching models. Let us consider a sample of T observations for N price changes collected in the N × T matrix y_{1:T} = (y_1, . . . , y_T), and the series of latent random variables S^ω_{1:T}, S^κ_{1:T}, Z_{1:T} and B_{1:T}, whose (unobserved) realizations are collected in the T × 1 vectors s^ω_{1:T}, s^κ_{1:T}, z_{1:T}, and in the N × T matrix b_{1:T}. To exploit the stochastic representation of the Skellam as the difference of two Poisson random variables,⁶ consider also the N × T matrices X^(1) and X^(2). The number of free parameters is J(J − 1) for Γ^ω, L(L − 1) for Γ^κ, J(K − 1) for Ω, LN for κ, and 2KN for λ^(1) and λ^(2). This means that model complexity, in terms of the number of free parameters, is quadratic in J and L, while being linear in the number of variables N and mixture components K. The complete data log-likelihood (CDLL) function of DyMiSk, that is, the log-likelihood for the observed and unobserved random variables, follows accordingly. To derive the properties of the model, we assumed that the initial distributions of S^ω_t and S^κ_t coincide with the limiting distributions δ^ω and δ^κ, respectively.⁷ Unfortunately, the CDLL cannot be directly maximized due to the presence of latent quantities. The EM algorithm treats these unobserved terms as missing values and proceeds with the maximization of the expected value of the CDLL. To this end, we introduce a number of augmenting variables. The variables u^ω_{t;j}, u^κ_{t;l}, v^ω_{t;i,j} and v^κ_{t;h,l} follow from the standard implementation of the algorithm for Markov-switching models, see McLachlan and Peel (2000), whereas the variable z_{t;j,k} (for j = 1, . . . , J, and k = 1, . . . , K) is specific to the DyMiSk model and is related to the additional latent variable Z_t. The new variables allow us to rewrite the CDLL. The EM algorithm iterates between the expectation step (E-step) and the maximization step (M-step).
Given a value of the model parameters at iteration m, θ^(m), the E-step consists of the evaluation of the so-called Q function, defined as Q(θ, θ^(m)) = E_{θ^(m)}[CDLL(θ)], where the expectation is taken with respect to the joint distribution of the missing variables conditional on the observed variables, using parameter values at iteration m, denoted by E_{θ^(m)}[·]. Exploiting the formulation of the CDLL, the Q function can be conveniently factorized.

Table 1. Execution time (in seconds) of the EM algorithm for different combinations of T and N.

T \ N |   1    2    3    4    5    6    7    8    9   10
1000  |  65  284  120  154  223  227  233  264  271  277
2000  | 109  492  423  377  426  439  452  512  534  554
3000  | 153  700  726  600  629  650  671  761  798  830
4000  | 197  907 1029  824  832  862  889  ...

NOTE: Computations have been performed on an Intel Xeon E5-2680 v2 CPU at 2.8 GHz. In all cases, the algorithm stops when the relative increment of the log-likelihood function is lower than 10⁻⁷.
The E-step involves the computation of conditional expectations of the latent quantities, such as E[X^(1)_{n,t}|Z_t = k, Y_{1:T} = y_{1:T}], but the tasks of filtering and smoothing turn out to be rather involved due to the presence of the two unobserved Markov chains and the additional latent variables Z_t and B_{n,t}. Therefore, we rely on an equivalent model representation of DyMiSk that makes filtering and smoothing of the latent chains straightforward via the forward filtering backward smoothing (FFBS) algorithm. This representation is obtained by combining S^ω_t, Z_t, and S^κ_t into a single first-order Markov chain, S^{ω,Z,κ}_t, with state space {1, . . . , JKL} and restrictions implicitly induced by the structure of DyMiSk.⁸ In the M-step of the algorithm, the function Q is maximized with respect to the model parameters θ. Solving the Lagrangian associated with this (constrained) optimization leads to closed-form expressions for δ^ω, δ^κ, Γ^ω, Γ^κ, Ω, κ_{n;l}, λ^(1)_{n;k} and λ^(2)_{n;k}.
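The forward pass of the FFBS algorithm for a generic HMM can be sketched as follows. This is an illustrative, scaled implementation (names are our own), not the article's exact code:

```python
import numpy as np

def forward_probabilities(log_dens, Gamma, delta):
    """Forward pass for a generic HMM: log_dens[t, j] is the log state-j
    density of observation t; Gamma the transition matrix; delta the initial
    distribution. Returns row-normalized (scaled) forward probabilities."""
    T, J = log_dens.shape
    alpha = np.empty((T, J))
    a = delta * np.exp(log_dens[0])
    alpha[0] = a / a.sum()
    for t in range(1, T):
        a = (alpha[t - 1] @ Gamma) * np.exp(log_dens[t])
        alpha[t] = a / a.sum()          # scaling avoids numerical underflow
    return alpha

rng = np.random.default_rng(5)
log_dens = rng.normal(size=(20, 3))     # placeholder state log-densities
alpha = forward_probabilities(log_dens, np.full((3, 3), 1 / 3), np.full(3, 1 / 3))
```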
Given an initial guess θ^(0), the algorithm iterates between the E- and M-steps until the relative increment of the log-likelihood function is below a given threshold (e.g., 10⁻⁷). Dempster, Laird, and Rubin (1977) prove that the EM algorithm provides a nondecreasing sequence of log-likelihood values. Thus, the EM algorithm converges to a maximum of the log-likelihood function. We denote the vector of ML coefficients as θ̂.⁹ Through a simulation analysis, we assess the execution time of the EM algorithm for different combinations of T and N. In particular, we consider N ∈ {1, 2, . . . , 10}, T ∈ {1000, 2000, . . . , 10,000}, and we set J = 4, K = 4, and L = 3 (48 hidden states). Table 1 reports the execution time of the EM algorithm. The computation time is below 1 hour even for the largest sample size (T = 10,000 and N = 10).
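The monotonicity of EM is easy to verify numerically on a toy model. The sketch below runs EM on a two-component Poisson mixture, a deliberately simplified stand-in for the DyMiSk E/M-step logic (all names are our own), and records the log-likelihood path:

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(y, lam, w, n_iter=50):
    """Toy EM for a two-component Poisson mixture. Returns the final
    intensities, weights, and the log-likelihood path, which EM
    guarantees to be nondecreasing."""
    ll_path = []
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component
        dens = w * poisson.pmf(y[:, None], lam)              # (T, 2)
        ll_path.append(np.log(dens.sum(axis=1)).sum())
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for weights and intensities
        w = resp.mean(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
    return lam, w, np.array(ll_path)

rng = np.random.default_rng(2)
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 300)])
lam, w, ll = em_poisson_mixture(y, np.array([0.5, 4.0]), np.array([0.5, 0.5]))
```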
Data Description and Summary Statistics
We consider the high-frequency stock price moves of four companies listed on the Dow Jones index (DJIA) in different time periods. The stocks under investigation are the same as in Koopman, Lit, and Lucas (2017): Caterpillar (CAT), Coca Cola (KO), JP Morgan (JPM), and Walmart (WMT). We consider two sampling periods: a low volatility one, from November 6, 2013, to November 19, 2013, and a turbulent one (labeled "Lehman"), from September 11, 2008, to September 25, 2008, which includes the bankruptcy of Lehman Brothers. Prices are collected from the Trades and Quotes (TAQ) database, and a preliminary cleaning procedure is performed according to Brownlees and Gallo (2006) and Barndorff-Nielsen et al. (2009). Although the DyMiSk can be employed with price changes observed at any sampling frequency, we have decided to focus on stock prices sampled at 15 seconds by means of the previous-tick method. Both sample periods consist of 62,400 observations (1560 intradaily observations for 10 days and 4 assets). This sample size is approximately three times larger than the one adopted in Koopman, Lit, and Lucas (2017).¹⁰ Table 2 reports the main summary statistics of the price changes for the four stocks considered.

⁹ As for standard HMMs, the log-likelihood function of DyMiSk can present several local optima, and there is no guarantee that convergence to the global optimum is achieved. Running the algorithm several times with different starting values is a standard procedure to better explore the log-likelihood surface.

¹⁰ Koopman, Lit, and Lucas (2017) considered a sample of one year of stock prices sampled at 1 s frequency. However, their parameter estimates are obtained on a day-by-day basis, that is, with T = 23,400 observations. A comparison of the results obtained with other sampling frequencies would be extremely time consuming in our setting and would add great length to the article; it is thus left for future research.
Notably, the median and the mode of Y_{n,t} are zero in both periods. This provides first evidence of the large share of zeros characterizing stock prices sampled at high frequencies. For instance, during the low volatility period, the percentage of zeros is between 34% for CAT and 50% for KO (the least liquid asset). The percentage of zeros drastically reduces during the Lehman period, as a consequence of the large amount of news arriving to the market and the increased uncertainty about fundamentals among investors. The sample average of price changes is also very close to zero and, especially for the low volatility period, the level of skewness is almost null. This signals a rather symmetric distribution of price variations. On the contrary, all series are negatively skewed during the Lehman period: this is due to the arrival of several pieces of bad news about the overall stability of the financial sector, which generated large negative price moves, resulting in a skewed distribution. Furthermore, both variance and kurtosis are very large, and the magnitude of the price variations is rather extreme, as testified by maximum and minimum variations in the order of hundreds of cents. Notably, the largest price variations in both periods take place at the opening of the trading day. Indeed, a well-known stylized fact of high-frequency prices is that the variability of their changes exhibits a pervasive intradaily seasonal pattern, see among others Andersen and Bollerslev (1997) and the recent contribution of Andersen, Thyrsgaard, and Todorov (2019). For instance, at the opening of the market, volatility is generally at its peak as a consequence of the rebalancing activity of market participants processing the overnight information. On the contrary, volatility is typically very low during lunch time.
Figure 2 shows that the probability of zeros is also subject to nonnegligible variability at the intradaily level with a reverse U-shape relation reflecting the different amounts of trading activity within the day. This evidence is consistent across the four assets under investigation, with KO being the least active stock with more than half of the trades associated with zeros during the central business hours in the low volatility period.
To account for intradaily seasonality, the Poisson intensities λ^(1)_{n;k} and λ^(2)_{n;k} are slightly modified to depend on deterministic seasonal terms. We also let the Bernoulli probabilities, κ_{n;l}, depend on a set of deterministic seasonal components, g_{t,d}, where g_{t,d} = 1 if time t coincides with season d, for d = 1, . . . , D₂. Specifically, we modify the Bernoulli probabilities as κ_{n,t;l} = Σ_{d=1}^{D₂} g_{t,d} κ_{n,d;l}, where the κ_{n,d;l} are seasonal-dependent Bernoulli probabilities that need to be estimated alongside the other parameters. The E- and M-steps for all other parameters remain unchanged, while κ_{n,t;l} replaces κ_{n;l}; the M-step for the Bernoulli probabilities is adjusted accordingly. To capture the intense trading activity at the opening of the market, the first period coincides with the first 5 min of the trading day, from 9:30 to 9:35, the second runs from 9:35 to 10:00, and the remaining periods run 30 min each until market closing time.
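The seasonal dummies g_{t,d} described above can be built by binning intraday timestamps. The sketch below translates the stated schedule (9:30-9:35, 9:35-10:00, then 30-minute bins) into code; the bin edges and function names are our own reading of that description:

```python
import numpy as np

def seasonal_bins(seconds_since_open, close_s=6.5 * 3600):
    """Map intraday times (seconds since 9:30) to seasonal bins:
    [0, 300) -> opening 5 min, [300, 1800) -> until 10:00, then 30-min bins."""
    edges = np.concatenate(([0, 300, 1800], np.arange(3600, close_s + 1, 1800)))
    return np.digitize(seconds_since_open, edges[1:-1])  # bin index per time

def seasonal_dummies(seconds_since_open):
    """One-hot matrix g[t, d]; exactly one dummy is active at each t."""
    bins = seasonal_bins(np.asarray(seconds_since_open))
    return np.eye(bins.max() + 1)[bins]

g = seasonal_dummies(np.array([0, 299, 300, 1799, 1800, 3599, 23399]))
```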
Model Selection and Goodness of Fit
The DyMiSk is estimated on both the low volatility and the Lehman periods for all combinations of L ∈ {1, . . . , 6}, K ∈ {1, . . . , 15}, and J ∈ {1, . . . , 6}, with K ≥ J for identification. Table 3 reports the computational time (in seconds) for the EM algorithm to converge for selected combinations of K, J, and L in the low volatility period. The computation takes a few minutes for specifications with K, J < 4 and L < 3, while it is in the order of several hours for richer specifications.¹¹ The selection of the best model is performed via the Bayesian information criterion (BIC). The BIC selects J = 5, K = 5 and L = 1 for the low volatility period, and J = 5, K = 12, and L = 2 for the Lehman period. Interestingly, the variability and erratic nature of the price moves during the Lehman episode require not only many mixture components (K = 12), but also two states for S^κ_t. On the contrary, a more parsimonious model is selected for the low volatility period. The estimated parameters are reported in Section 3 of the supplementary document. In the Lehman period, the states of S^ω_t and S^κ_t are very persistent, since the estimated transition matrices Γ̂^ω and Γ̂^κ are close to identity matrices. As for the matrices λ^(1) and λ^(2), they display heterogeneous patterns over the K = 12 rows, with values in the range between 1.4 and 115. This makes the DyMiSk able to adapt to low, medium and high volatility states. Coupled with the different mixing probabilities in Ω̂, this heterogeneity determines a very flexible correlation structure across assets. We also notice a high degree of heterogeneity in the elements of the vectors κ̂₁ and κ̂₂, which in some cases take values close to one (thus accounting for episodes of prolonged staleness).

¹¹ Note that, since λ^(m+1)_k is computed using the previous-iteration estimate of β_{n,t}, the resulting algorithm is effectively an Expectation Conditional Maximization (ECM), see Meng and Rubin (1993).
As expected, the magnitude of λ^(1) and λ^(2) is much smaller in the low volatility period than during the financial crisis. Analogously, the magnitude of the Bernoulli probabilities in the vector κ̂₁ is reduced compared with the Lehman period. Indeed, the Skellam distribution with small values of λ^(1) and λ^(2) generates zeros with a larger probability than in the high volatility scenario, so that the role of illiquidity is reduced. Overall, the parameter estimates signal the ability of the DyMiSk to adapt well to changing market conditions. In the following paragraphs, we check the goodness of fit at both the univariate and multivariate level.
Univariate Analysis
The goodness of fit of the univariate marginal distributions can be visually assessed by looking at Figure 3. The fit to the empirical frequencies achieved by the DyMiSk is remarkable, and it signals the ability of the dynamic mixture to adapt to different market conditions and intensities of the trading process.¹² Indeed, the fit proves remarkable in all the intradaily business periods defined according to the seasonal dummies. As expected, the empirical distribution is more dispersed at the opening, that is, from 9:30 until 9:35, thus justifying the use of a specific seasonal term, β_{1,d}, for this period. Furthermore, during the Lehman episode, the probability mass is more dispersed than in the low volatility period, even during the central hours of the day. We also investigate the quality of the fit of the univariate distributions by means of the test of Berkowitz (2001), which is a tool for assessing the quality of density forecasts in financial risk management applications. The test is based on the probability integral transforms (PITs) of the data with respect to their conditional distribution, which for the DyMiSk is easily computed from the predictive distribution by marginalization. The Berkowitz test relies upon earlier results by Fisher (1932) and Pearson (1938) stating that, under correct model specification and when the support of the observables is continuous, the PITs should be iid uniformly distributed over the (0, 1) interval, and their transformation through the Gaussian quantile function should be iid Gaussian distributed. For discrete random variables the PITs cannot be uniformly distributed, and modifications must be made to the testing procedure.
To tackle this issue, we compute the randomized, yet uniform, PITs for integer-valued variables derived by continuization of the discrete conditional pmf, see Smith (1985), Brockwell (2007), and Liesenfeld, Nolte, and Pohlmeier (2008). Figure 4 displays the histogram of the PITs, divided in 10 bins, for all series. We report results for both the in-sample and the out-of-sample periods around the Lehman episode.¹³ The out-of-sample period covers ten trading days after the in-sample period. Through the out-of-sample analysis, we assess the ability of DyMiSk to adapt to changing market conditions and to capture the relevant features of the high-frequency price changes outside the estimation period. The plots highlight the ability of DyMiSk to provide an overall good fit. Indeed, the PITs are approximately uniformly distributed in all cases, since the relative frequencies (blue columns) fall within (or are very close to) the 95% confidence bands (red lines), which are very narrow due to the extremely large sample size. Table 4 reports the results of the Berkowitz testing procedure. The results are mixed and can be summarized as follows: (i) the conditional distribution is generally correctly specified for WMT and CAT in both in-sample periods, and only for WMT and JPM in the low volatility out-of-sample period; (ii) during the Lehman out-of-sample period, we always reject the null hypothesis; and (iii) the null hypothesis of independence and correct coverage of the transformed PITs is always rejected. The rejection of the null hypothesis is somewhat expected due to the very large sample size and the parameter instability following the Lehman episode.

[Figure 4 caption: Randomized PITs as in Brockwell (2007), computed according to the one-step-ahead univariate conditional distribution of each asset for the Lehman period. PITs are divided into 10 bins, such that under the null hypothesis of correct model specification the area of each bin should be 10%. Confidence intervals based on the methodology of Diebold, Gunther, and Tay (1998) are computed at the 5% level.]

[Table 4 notes (Lehman period / low volatility period): The tests of Berkowitz (2001) are computed using the randomized PITs as in Brockwell (2007). We consider the coverage of the left tail below the τ% quantile level. Columns labeled "All" correspond to unconditional coverage of the whole distribution (τ = 100%). Columns labeled "Joint" report the statistics associated with the joint test for the null of correct unconditional coverage and independence of the PITs. Gray cells indicate a p-value above 5%, based on the asymptotic distribution of the test.]

We conclude that,
although the tests reported in Table 4 often reject the null hypothesis, the histograms displayed in Figure 4 are encouraging and suggest that the fit of the univariate distributions achieved by DyMiSk is reasonable in both the in-sample and the out-of-sample periods.
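The randomized PIT for a discrete distribution draws a uniform point between F(y − 1) and F(y); under correct specification the result is U(0, 1). A generic sketch using a Skellam conditional cdf as a stand-in for the model's predictive distribution:

```python
import numpy as np
from scipy.stats import skellam

def randomized_pit(y, cdf, rng):
    """Randomized PIT for integer-valued data (Smith 1985; Brockwell 2007):
    u = F(y - 1) + v * (F(y) - F(y - 1)), with v ~ U(0, 1)."""
    lo, hi = cdf(y - 1), cdf(y)
    return lo + rng.random(len(y)) * (hi - lo)

rng = np.random.default_rng(3)
y = skellam.rvs(1.2, 0.8, size=5000, random_state=42)
# when the cdf matches the data-generating process, u should look uniform
u = randomized_pit(y, lambda k: skellam.cdf(k, 1.2, 0.8), rng)
```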
As a final assessment of the quality of the univariate fit, we focus on the relevance of incorporating the frictions component, B_t, into the DyMiSk. We therefore consider a restricted DyMiSk specification in which the component B_t is removed. Figure 5 illustrates the dramatic worsening in the quality of the fit in this case. Indeed, the PITs arising from the restricted model are far from being uniformly distributed, even if the orders J and K of the chain (Z_t, S^ω_t) are optimally selected by BIC. This finding highlights the crucial role of B_t in improving the overall fit of the DyMiSk by contributing to explain the observed staleness in prices.
Predicting Staleness
As shown in Table 2, the unconditional probability of observing zero variations in the dataset of prices observed at the 15 seconds frequency is very high and generally well above 30%. This phenomenon is well known in the high-frequency literature; see the recent contributions of Bandi, Pirino, and Renò (2017), among others. We therefore look at the ability of the DyMiSk to predict the occurrence of zeros. Table 5 reports the parameter estimates of a predictive logistic regression over the out-of-sample period. The logistic regression in Equation (10) models the probability that Y_{n,t} = 0 as a logistic function of x_t = (1, W_{n,t}', P_{t|t−1}(Y_{n,t} = 0))', where W_{n,t} is a vector of control variables, and P_{t|t−1}(Y_{n,t} = 0) denotes the model-based predictive probability of zeros at time t conditional on the information set at time t − 1. All figures in Table 5 signal the positive and highly significant dependence between the ex-ante (model-based) probability of zeros and the ex-post realization of price staleness, even when adding control variables to the model. These are intradaily seasonal dummies, lags of the dependent variable, and a proxy of the liquidity of the market as measured by the bid-ask spread (BA). The point estimates of the parameter on P_{t|t−1}(Y_{n,t} = 0) are such that the associated average partial effect is between 0.9% and 1.3%.

[Table 5 notes: We consider regression (10) with no control variables (a); with seasonal dummies (b); with seasonal dummies and 5 lags of the dependent variable (c); and with seasonal dummies, lags of the dependent variable and the bid-ask spread (d). The probability P_{t|t−1} is computed from the model's predictive distribution. The superscripts ***, **, and * indicate statistical significance at the 1%, 5%, and 10% significance levels, respectively. The standard errors are computed according to the Newey-West formula (HAC). The pseudo R² is the goodness-of-fit index of McFadden (1974).]
This means that a one percentage point increase in the conditional probability of staleness increases the odds of observing a zero, relative to a nonzero variation, by roughly 1%. The pseudo R² of McFadden (1974) indicates a relatively low predictive ability of the logistic regression model, as it lies between 2% and 4% even when covariates/control variables are included. On the one hand, this signals the low predictability of staleness as a consequence of the erratic nature of high-frequency price changes. On the other hand, this finding suggests that the (model-based) probability of zeros incorporates all the relevant information available at time t − 1 needed to predict price staleness at time t.
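A logistic regression of this kind, together with McFadden's pseudo R², can be sketched in a few lines. This is a generic Newton-Raphson fit on simulated data, not the article's exact specification; all names and values are illustrative:

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Newton-Raphson fit of a logistic regression; returns coefficients
    and the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                        # IRLS weights
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, ll

def mcfadden_r2(ll_full, y):
    """McFadden's pseudo R^2 = 1 - ll_full / ll_null (intercept-only null)."""
    pbar = y.mean()
    ll_null = len(y) * (pbar * np.log(pbar) + (1 - pbar) * np.log(1 - pbar))
    return 1.0 - ll_full / ll_null

rng = np.random.default_rng(4)
x = rng.normal(size=(2000, 1))
X = np.hstack([np.ones((2000, 1)), x])
y = (rng.random(2000) < 1 / (1 + np.exp(-(0.2 + 0.8 * x[:, 0])))).astype(float)
beta, ll = fit_logit(X, y)
r2 = mcfadden_r2(ll, y)
```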
Bivariate Analysis
The goodness of fit of the bivariate distributions of CAT-WMT and CAT-JPM for different intradaily periods (opening, lunch, closing) is reported in Figure 6. The fit to the empirical frequencies (red area) by the DyMiSk (blue line) is again remarkable for both the low volatility and the Lehman periods. Regarding the low volatility period, Panel (a) highlights that the bivariate distribution of the price variations is rather sparse at the opening, while around lunch and at closing most of the probability mass is in the range between −1 and +1 cents, with a relatively high percentage of joint zero variations. The picture changes drastically in the Lehman period: the bivariate empirical probability is dispersed in all intradaily periods (including the lunch and closing hours). The double hidden Markov structure generated by S^κ_t and (S^ω_t, Z_t) allows for a very flexible characterization of the multivariate distribution of the high-frequency price moves. For instance, the probability mass on zero is high at the opening for CAT-JPM (while not for CAT-WMT). This evidence is associated with an episode of trading halt affecting several stocks traded on the NYSE. Indeed, at the opening of Monday, September 15, 2008, the trading of CAT, KO, and JPM stopped, resulting in a frozen market and a prolonged period of no price variations. This might be considered a systematic event of staleness (or co-staleness, see Bandi, Pirino, and Renò 2020b). In particular, Panel (j) of Figure 6 displays the effect of the market freezing on the joint probability of zeros for CAT and JPM. The fit of DyMiSk to the bivariate empirical distribution is remarkable even in these extreme situations.
Filtered Estimates and Variance Prediction
The FFBS algorithm can be exploited to extrapolate the conditional intradaily volatilities of each individual stock under consideration. Figure 7 displays the absolute value of the price changes together with the extrapolated volatilities, denoted as σ_{n,t|t−1}. The latter are computed as the square root of the diagonal elements of the predicted covariance matrix, Σ_{t|t−1}, as reported in Equation (8). The intradaily patterns in the magnitude of the price variations are clearly reflected in the extrapolated volatilities, which are, by construction, smoother than the ex-post realizations.¹⁴ While Figure 7 provides a visual illustration of the dynamics of the filtered volatilities, we also statistically assess the performance of the DyMiSk in providing precise forecasts of the variance of the price changes. As documented by Patton (2011), comparing the square root of the variance forecast with absolute returns generally leads to biased conclusions; specifically, the comparison tends to favor models with downward-biased variance forecasts. Therefore, we follow Patton (2011) and use the mean squared error (MSE) and quasi-likelihood (QLIKE) losses to compare different variance predictions, as both provide a correct ranking of competing volatility forecasts. In the analysis, we use the same strategy as Creal, Koopman, and Lucas (2011) and construct six portfolios, p_{j,t} = g_j' y_t, for given N × 1 vectors g_j and for j = 1, . . . , 6. Stocks are ordered as WMT, KO, JPM, and CAT, and the portfolio weights are set to: g_1 = (0.25, 0.25, 0.25, 0.25)', g_2 = (0.4, 0.2, 0.3, 0.1)', g_3 = (0.5, 0.5, 0.5, −0.5)', g_4 = (0.5, −0.5, 0.5, 0.5)', g_5 = (0.5, 0.5, −0.5, −0.5)', and g_6 = (0.5, −0.5, 0.5, −0.5)', as in Creal, Koopman, and Lucas (2011). Since the variance is not observed, we proxy it by the squared price variation of the jth portfolio, that is, σ²_{t+1;j} = p²_{j,t+1}.
The QLIKE and MSE for the jth portfolio are defined as QLIKE_{t+1|t;j} = log(σ̂²_{t+1|t;j}) + σ²_{t+1;j}/σ̂²_{t+1|t;j} and MSE_{t+1|t;j} = (σ̂²_{t+1|t;j} − σ²_{t+1;j})², where σ̂²_{t+1|t;j} = g_j' Σ_{t+1|t} g_j is the prediction of the portfolio variance obtained with DyMiSk.
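The two losses, and a bare-bones equal-predictive-ability statistic, can be coded as follows. The Diebold-Mariano sketch omits the HAC correction used in practice; inputs are simulated placeholders:

```python
import numpy as np

def qlike(var_forecast, var_proxy):
    """QLIKE loss of Patton (2011): log(sig2_hat) + sig2 / sig2_hat."""
    return np.log(var_forecast) + var_proxy / var_forecast

def mse(var_forecast, var_proxy):
    """Squared-error loss on variances."""
    return (var_forecast - var_proxy) ** 2

def dm_statistic(loss_a, loss_b):
    """Plain Diebold-Mariano statistic for equal predictive ability
    (no HAC correction in this minimal sketch)."""
    d = loss_a - loss_b
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(6)
proxy = rng.random(100) + 0.5          # placeholder variance proxies
f_a, f_b = proxy * 1.1, proxy + 0.2    # two competing variance forecasts
dm = dm_statistic(qlike(f_a, proxy), qlike(f_b, proxy))
```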
As a benchmark, we consider the DCC model with periodic GARCH components (DCC-PGARCH). Periodicity is imposed on the univariate GARCH terms via a seasonal intercept, as in Rossi and Fantazzini (2015). Specifically, the benchmark model assumes that Y_t|Y_{1:t−1} has conditional covariance matrix H_t, where the conditional correlation matrix, R_t, follows a DCC specification as in Engle (2002). The DCC-PGARCH is estimated on the intradaily price variations via two-step quasi-maximum likelihood (QML), see Engle (2002). Given the parameter estimates, the one-step-ahead prediction H_{t+1|t} is available in closed form, and the predicted portfolio variance is given by σ̂²_{t+1|t;j} = g_j' H_{t+1|t} g_j. Thanks to its flexible structure, the DCC-PGARCH represents a challenging benchmark for the assessment of the quality of volatility predictions. The comparison involves the one-step-ahead volatility predictions for both the low volatility and the Lehman periods. Table 6 presents a summary of the forecasting accuracy of the DyMiSk relative to that of the DCC-PGARCH; the comparison is performed through the Diebold and Mariano (1995) test. The forecasting window includes the 10 days after the in-sample interval for both the low volatility and the Lehman periods. In many cases (51 out of 96), the DyMiSk provides out-of-sample variance predictions that are statistically superior to those of the DCC-PGARCH at the 5% significance level, while in 26 out of 96 cases the DCC-PGARCH proves superior to DyMiSk.

[Table 6 notes: Values smaller than one indicate outperformance of DyMiSk with respect to DCC-PGARCH, and vice versa. Green (red) cells indicate rejection of the two-sided null hypothesis of equal predictive ability at the 5% level and outperformance (underperformance) of DyMiSk with respect to DCC-PGARCH.]
At the opening of the trading day, MSE and QLIKE disagree in the low volatility period, while both signal underperformance of DyMiSk during the Lehman period. Overall, gains with respect to DCC-PGARCH are more sizable during the low volatility period and during lunch and closing times, while during the Lehman period the performance of the DyMiSk slightly deteriorates (especially at the opening), perhaps as a consequence of the occurrence of extreme realizations. In sum, Table 6 testifies to the ability of the DyMiSk to provide a very flexible prediction of volatilities and correlations that adapts well to changing market conditions.
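The evaluation pipeline above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the forecasts are simulated, the variance proxy is a noisy chi-square draw, and the Diebold-Mariano statistic is computed without a HAC correction (adequate for one-step-ahead, serially uncorrelated loss differentials).

```python
import numpy as np

def qlike(sigma2_pred, sigma2_real):
    """QLIKE loss: log(pred) + realized/pred; robust to noise in the variance proxy."""
    return np.log(sigma2_pred) + sigma2_real / sigma2_pred

def mse(sigma2_pred, sigma2_real):
    """Squared-error loss on the variance level."""
    return (sigma2_pred - sigma2_real) ** 2

def diebold_mariano(loss_a, loss_b):
    """Diebold-Mariano statistic on the loss differential; negative values favor
    model A, and |DM| > 1.96 rejects equal predictive ability at the 5% level."""
    d = loss_a - loss_b
    return d.mean() / np.sqrt(d.var(ddof=1) / d.size)

# Simulated one-step-ahead forecasts: model A tracks the true variance closely,
# model B is biased and noisier; the "realized" proxy is a chi-square draw.
rng = np.random.default_rng(0)
true_var = 1.0 + 0.5 * np.sin(np.linspace(0.0, 6.0, 500))
realized = true_var * rng.chisquare(1, size=500)
pred_a = true_var * np.exp(rng.normal(0.0, 0.05, 500))
pred_b = true_var * np.exp(rng.normal(0.5, 0.5, 500))

dm_qlike = diebold_mariano(qlike(pred_a, realized), qlike(pred_b, realized))
dm_mse = diebold_mariano(mse(pred_a, realized), mse(pred_b, realized))
```

With forecasts of this quality gap, the QLIKE-based statistic comes out negative (favoring model A); for multi-step horizons one would replace the plain variance in the denominator with a Newey-West long-run variance.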
Heterogeneous Tick Size
In this section, we further explore the features of the DyMiSk in an empirical context characterized by a relatively large number of assets (N = 10) with different tick sizes relative to the trading price. In particular, we consider 10 assets belonging to the DJI30 index: General Electric (GE), American Express (AXP), Pfizer (PFE), Boeing (BA), The Travelers Companies (TRV), United Technologies Corporation (UTX), The Coca-Cola Company (KO), Goldman Sachs (GS), IBM, and Apple (AAPL). We select them by sorting all DJI30 assets according to their average price on April 3, 2009. The first three assets (GE, AXP, and PFE) are those with the lowest trading price (≈ $10), that is, with the highest tick size relative to the price. The next four stocks (BA, TRV, UTX, and KO) are those with intermediate tick sizes, that is, with trading prices around $30. Finally, the last three stocks (GS, IBM, and AAPL) are those with the largest average prices, around $90; they display the lowest tick size relative to the price. We consider a sample period ranging from March 2, 2009, to March 11, 2009, for a total of 12472 observations. Figure 8 provides an illustration of the heterogeneous behavior of the price changes of high-tick (PFE) and low-tick (IBM) stocks on a given trading day (March 9, 2009). In high-tick stocks like PFE, most of the price moves are in the range of −2 to +2 cents, which is roughly associated with a 0.2% variation in the stock price. On the contrary, for IBM the price moves cover a wider range of values, between −10 and +10 cents.
We estimate the DyMiSk model for all triplets (J, K, L) with J ∈ {1, …, 12}, K ∈ {1, …, 15}, and L ∈ {1, …, 6}, with the same seasonal specification described in Section 4.2. The BIC selects the DyMiSk model with J = 4, K = 10, and L = 1. As an illustration of the goodness of fit, Figure 9 reports the PITs of AXP and IBM. The fit of the univariate empirical distributions is remarkable irrespective of the relative tick size of the stocks under investigation.
The outstanding quality of the fit carries over to the bivariate empirical distribution of low-tick (IBM) and high-tick (PFE) stocks, as reported in Figure 10. We also formally assess the goodness-of-fit of the univariate distributions through the LR test statistics of Berkowitz (2001), reported in Table 7. The tests are computed using the randomized PITs as in Brockwell (2007). We consider the coverage of the left tail below the τ% quantile level. Columns labeled "All" correspond to unconditional coverage of the whole distribution (τ = 100%). Columns labeled "Joint" report the statistics associated with the joint test for the null of correct unconditional coverage and independence of the PITs. Gray cells indicate a p-value above 5%, based on the asymptotic distribution of the test. In almost all cases, the LR test statistics are low and we cannot reject the null hypothesis of correct coverage of the empirical probabilities. To conclude, the empirical evidence reported in this section further highlights the flexibility of the DyMiSk and its ability to adapt to the distributional features of the stock prices at hand, irrespective of their relative tick size.
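The randomized-PIT construction and the Berkowitz LR test can be sketched as follows. This is a hedged illustration, not the paper's code: the forecast distribution is a plain Skellam with made-up intensities, and the data are simulated from that same distribution so the test operates under the null.

```python
import numpy as np
from scipy import stats

def berkowitz_lr(pit):
    """Berkowitz (2001) LR test: under correct specification, z = Phi^{-1}(PIT)
    is iid N(0,1). Alternative: Gaussian AR(1), z_t = c + rho*z_{t-1} + eps.
    Returns the LR statistic (chi-square, 3 df) and its asymptotic p-value."""
    z = stats.norm.ppf(np.clip(pit, 1e-10, 1.0 - 1e-10))
    z0, z1 = z[:-1], z[1:]
    # MLE of the AR(1) parameters via OLS of z_t on (1, z_{t-1})
    X = np.column_stack([np.ones_like(z0), z0])
    beta, *_ = np.linalg.lstsq(X, z1, rcond=None)
    resid = z1 - X @ beta
    s2 = resid.var()                                   # MLE variance (ddof=0)
    ll_alt = stats.norm.logpdf(resid, scale=np.sqrt(s2)).sum()
    ll_null = stats.norm.logpdf(z1).sum()              # iid N(0,1)
    lr = -2.0 * (ll_null - ll_alt)
    return lr, stats.chi2.sf(lr, df=3)

# Randomized PITs for a discrete forecast distribution F: draw u uniformly
# between F(y-1) and F(y), as in Brockwell (2007). Intensities are illustrative.
rng = np.random.default_rng(1)
mu1, mu2 = 2.0, 2.0
y = rng.poisson(mu1, 2000) - rng.poisson(mu2, 2000)    # Skellam sample
F = lambda k: stats.skellam.cdf(k, mu1, mu2)
u = rng.uniform(F(y - 1), F(y))                        # randomized PIT

lr, pval = berkowitz_lr(u)
```

Because the alternative nests the null (c = 0, rho = 0, s2 = 1), the LR statistic is nonnegative by construction; under a correctly specified forecast the randomized PITs are exactly iid uniform, so rejections should occur only at the nominal rate.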
Disentangling Staleness
Price staleness and irregular trading have been studied in several articles; see the early works of Atchison, Butler, and Simonds (1987) and Lo and MacKinlay (1990), and the more recent contributions of Bandi, Pirino, and Renò (2017, 2020b). A common trait of most studies on high-frequency market imperfections is the assumption of a continuous underlying price process, with microstructural features modeled as an additional source of randomness (like a censoring mechanism or a barrier) preventing the efficient price from being observed; see also the recent studies of Kolokolov, Livieri, and Pirino (2020) and Bandi et al. (2020a).
In this section, we look at the ability of the DyMiSk to shed some light on the different sources of price staleness. Intuitively, the large fraction of zeros in the high-frequency prices could be due to several factors. First, stale prices might be the consequence of frictions in the form of bid-ask spread, which are partly responsible for the observed sluggishness of the high-frequency prices. Second, the absence of price variations might be the consequence of the absence of news. In the absence of news, traders do not revise their reservation prices and do not generate any trade or price movement. Third, even in the presence of news, if the aggregated traders' reactions to the news are of opposite sign but of the same magnitude, then the observed transaction price remains constant. In this case, we say that the market is in a dyadic state. Hence, by conditioning on different realizations of the latent variables of the DyMiSk, we can separately identify the three sources of zero variation in the observed high-frequency transaction price. In particular, the DyMiSk allows us to disentangle the probability of zeros as
• No news: $P(Y_{n,t} = 0 \mid B_{n,t} = 0, X^{(1)}_{n,t} = 0, X^{(2)}_{n,t} = 0; Y_{1:t-1})$;
• Dyadic market: $P(Y_{n,t} = 0 \mid B_{n,t} = 0, X^{(1)}_{n,t} = X^{(2)}_{n,t} > 0; Y_{1:t-1})$.
The probability of price staleness generated by frictions is completely determined by the success probability of the Bernoulli random variable, $B_{n,t}$, that is, $P(B_{n,t} = 1 \mid Y_{1:t-1})$. This provides a structural interpretation of the excess probability of zeros, as a component not related to the processing of new information on the market, but rather to the presence of transaction costs and frictions. The last columns of Table 8 report the proportion of zeros attributed to the three sources, obtained by averaging at the daily horizon. The share of zeros attributed to frictions is larger during the Lehman period than in the low volatility period, where the proportion of zeros associated with no news is larger on average across stocks.
A nonnegligible share of zeros (around 30% on average) is attributed by the model to the dyadic state. Peculiar heterogeneous patterns at the intradaily level emerge from the other columns of Table 8. For instance, frictions account for a large proportion of zeros at the opening and closing of the trading day during Lehman, that is, when the bulk of information to be processed is large and the distribution of price moves is more dispersed. Instead, the dyadic state is the relevant source of zeros at the opening during the low volatility period (above 66%). Finally, the no-news component is relevant at lunch and at closing (above 55%) in the low volatility period.
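This three-way decomposition can be made concrete with a minimal numerical sketch, assuming a zero-inflated Skellam specification. All parameter values below are illustrative, not estimates from the paper: p plays the role of the friction (zero-inflation) probability, and l1, l2 are the Poisson intensities of upward and downward price-move counts in Y = X1 − X2.

```python
import numpy as np
from scipy import stats
from scipy.special import iv

# Illustrative parameters (not estimates from the paper).
p, l1, l2 = 0.3, 0.8, 0.5

p_frictions = p                                         # B = 1: price frozen
p_no_news = (1 - p) * np.exp(-(l1 + l2))                # X1 = X2 = 0
# Dyadic market: X1 = X2 = k > 0, using P(X1 = X2) = exp(-(l1+l2)) * I0(2*sqrt(l1*l2)).
p_dyadic = (1 - p) * np.exp(-(l1 + l2)) * (iv(0, 2 * np.sqrt(l1 * l2)) - 1)

p_zero = p_frictions + p_no_news + p_dyadic
# Cross-check against the zero-inflated Skellam pmf at zero.
p_zero_check = p + (1 - p) * stats.skellam.pmf(0, l1, l2)

# Shares of zeros attributed to each source, in the spirit of Table 8.
shares = np.array([p_no_news, p_dyadic, p_frictions]) / p_zero
```

The three components sum exactly to the total zero probability, which is the bookkeeping behind the row-wise 100% normalization used in Table 8.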
Staleness and Trading Activity
In the following, we look at the relation between trading activity and different sources of price staleness, and we build upon the mixture of distribution hypothesis (MDH) of Clark (1973) and Tauchen and Pitts (1983) to provide an ideal and simple setup to interpret the empirical findings.
Table 8 reports the share of zeros attributed by DyMiSk to no news, dyadic market, and frictions for all series during the Lehman and low volatility periods. Results are reported in percentage (relative to the fraction of zeros in each period, that is, the sum by row is 100%) and are computed at the opening, lunch, and closing times. The decomposition at the daily horizon is also reported in the "Daily" panel.
Table 9 reports the results for each asset over the low volatility and the Lehman periods. We consider regression (11) with no control variables (a), with seasonal dummies (b), with seasonal dummies and 5 lags of the dependent variable (c), and with seasonal dummies, lags of the dependent variable, and the bid-ask spread (d). The superscripts ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels, respectively. The standard errors are computed according to the Newey-West (HAC) formula. The pseudo R2 is the goodness-of-fit index of McFadden (1974).
We relate the absence of price movements to the volume of trades by assuming that the market consists of a finite number of active traders, who take long or short positions on a given asset. The evolution of the equilibrium price is motivated by the arrival of new information to the market. As new information arrives, the traders adjust their reservation prices, resulting in a change in the market price given by the average of the increments of the reservation prices. The reservation price of each trader might reflect individual preferences, asymmetries in information sets, and/or different expectations about the fundamental values. In the absence of news, individual traders do not update their reservation prices and no trading volume is generated. Moreover, due to the presence of microstructural frictions, such as transaction costs in the form of bid-ask spread, a trader does not trade if the difference between her/his reservation price and the equilibrium price is too small in absolute value. Hence, we do not record any price variation or exchange of stocks in this case. As noted by Bandi et al. (2020a), the presence of transaction costs might reduce the amount of traded securities when the execution costs are excessively large. Finally, if the aggregated reservation-price revisions are of the same magnitude but of opposite signs, then trades take place (i.e., stocks are exchanged), but the equilibrium price does not move. In this last case (namely, in a dyadic market), we observe price staleness with nonzero trading volume. Summarizing, we expect the absence of trading volume to be associated with price staleness when the latter is generated by the absence of news and frictions. On the other hand, trading volume can be generated without price moves when the market is in a dyadic state.
In Table 9, we test the empirical prediction outlined above by means of a logistic regression. We specify the logistic function as $P(V_{n,t} = 0 \mid x_t) = \exp(x_t'\beta)/\left(1 + \exp(x_t'\beta)\right)$, where $V_{n,t}$ is the trading volume at time $t$ on asset $n$, $x_t = \left(1, W_{n,t}, P_{t|t-1}(Y_{n,t} = 0)\right)'$, $W_{n,t}$ is a vector of control variables such as intradaily seasonal dummies and autoregressive terms, and $P_{t|t-1}(Y_{n,t} = 0)$ is the (predicted) probability of staleness due to frictions and absence of news, that is, $P_{t|t-1}(Y_{n,t} = 0) = \underbrace{P(B_{n,t} = 1 \mid Y_{1:t-1})}_{\text{Frictions}} + \underbrace{P(Y_{n,t} = 0 \mid B_{n,t} = 0, X^{(1)}_{n,t} = 0, X^{(2)}_{n,t} = 0; Y_{1:t-1})}_{\text{No news}}$. For both the low volatility and the Lehman periods, the predicted probability of the absence of news and frictions is associated with a significant increase in the probability of observing zero trading volume. Indeed, the parameter loading on $P_{t|t-1}(Y_{n,t} = 0)$ is significant and positively impacts the probability of the absence of trading activity. This finding holds even when controlling for autocorrelation, intradaily seasonality, and the bid-ask spread. Indeed, repeated trades on the ask or on the bid side would result in a sequence of zeros associated with nonzero transaction volume. Hence, it is crucial to control for liquidity proxies, as they might negatively impact the number of traded securities. Concluding, the estimates reported in Table 9 support the claim that the DyMiSk can be used to disentangle the price staleness of financial prices observed at high frequencies and to predict periods with reduced trading activity, as measured by the absence of trading volume.
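A logistic regression of this type can be sketched on simulated data. Everything below is an illustrative stand-in, not the paper's estimation on NYSE data: the staleness probabilities are uniform draws, the coefficient values are invented, and the fitter is a plain Newton-Raphson.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression by Newton-Raphson: maximizes the Bernoulli
    log-likelihood of y given logits X @ beta."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        W = prob * (1.0 - prob)                      # IRLS weights
        grad = X.T @ (y - prob)
        H = (X * W[:, None]).T @ X                   # observed information
        beta += np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(2)
n = 5000
stale_prob = rng.uniform(0.0, 1.0, n)                # stand-in for P_{t|t-1}(Y=0)
X = np.column_stack([np.ones(n), stale_prob])
true_beta = np.array([-2.0, 1.5])                    # positive loading on staleness
p_zero_vol = 1.0 / (1.0 + np.exp(-X @ true_beta))
zero_volume = (rng.uniform(size=n) < p_zero_vol).astype(float)   # 1{V_t = 0}

beta_hat = fit_logit(X, zero_volume)
```

With a positive true loading, the fitted coefficient on the staleness probability comes out positive, mirroring the sign pattern reported in Table 9; the paper additionally includes seasonal dummies, lags, and the bid-ask spread among the controls.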
Conclusions
Building upon the framework of hidden/latent Markov chains, we provide a multivariate hierarchical HMM for discrete data based on the Skellam distribution. We apply it to the prices of stocks traded on the NYSE and observed at high frequency (every 15 seconds). Our model captures most of the features of the price variations observed at high frequencies, both in-sample and out-of-sample. Furthermore, it allows us to disclose new characteristics of the market microstructure. For instance, the model is able to account for the large proportion of zeros, which often occur contemporaneously on several assets (co-staleness, see Bandi, Pirino, and Renò 2020b). These events might be associated with frozen market conditions and illiquidity episodes preventing the efficient transmission of news to the financial prices. Furthermore, we study the relationship between the model-implied probability of zeros and the absence of trading volume, and we find it to be in line with the findings of Bandi et al. (2020a).
To conclude, we believe that the DyMiSk can be beneficial for several financial applications beyond the one presented in this article, for example, when the goal is to investigate illiquidity spillover effects on a large scale. Furthermore, the DyMiSk might represent a suitable modeling framework in nonfinancial applications involving signal extraction in the presence of rounding errors, for instance, when measuring air pollutants to assess their effect on air quality or when predicting the risk of a given disease based on censored scores.
Seeing the Supracolloidal Assemblies in 3D: Unraveling High-Resolution Structures Using Electron Tomography
Transmission electron microscopy (TEM) imaging has revolutionized modern materials science, nanotechnology, and structural biology. Its ability to provide information about materials’ structure, composition, and properties at atomic-level resolution has enabled groundbreaking discoveries and the development of innovative materials with precision and accuracy. Electron tomography, single particle reconstruction, and microcrystal electron diffraction techniques have paved the way for the three-dimensional (3D) reconstruction of biological samples, synthetic materials, and hybrid nanostructures at near atomic-level resolution. TEM tomography using a series of two-dimensional (2D) projections has been used extensively in biological science, but in recent years it has become an important method in synthetic nanomaterials and soft matter research. TEM tomography offers unprecedented morphological details of 3D objects, internal structures, packing patterns, growth mechanisms, and self-assembly pathways of self-assembled colloidal systems. It complements other analytical tools, including small-angle X-ray scattering, and provides valuable data for computational simulations for predictive design and reverse engineering of nanomaterials with the desired structure and properties. In this perspective, I will discuss the importance of TEM tomography in the structural understanding and engineering of self-assembled nanostructures with specific emphasis on colloidal capsids, composite cages, biohybrid superlattices with complex geometries, polymer assemblies, and self-assembled protein-based superstructures.
INTRODUCTION
Transmission electron microscopy (TEM) is an indispensable tool for studying the structure and properties of materials at the atomic level. Since its invention in 1931,1,2 TEM has developed rapidly, with advances in instrumentation,3 electron sources,4,5 detectors,6 specimen preparation,7 imaging, and image processing methods.6−11 TEM has continuously evolved from a tool to study the morphology, size, and shape of ultrasmall objects into one enabling complete structure determination of biological and synthetic materials at near-atomic resolution.12−22 Early attempts to develop biological specimen preparation methods used heavy metal atoms for metal shadowing and negative staining.23 Furthermore, methods including chemical fixation,24 critical point drying,25 and sugar (glucose and trehalose) embedding26,27 have provided minimal drying artifacts, resulting in high-resolution imaging of two-dimensional (2D) crystals of biomolecules, including the most challenging membrane proteins.28−37 Today, cryo-TEM allows the preparation of a broad range of biological specimens and synthetic soft materials. More recently, the application of cryo-TEM has expanded to the study of battery materials, making TEM one of the most valuable imaging and analytical tools.38 Early three-dimensional reconstruction studies used tobacco mosaic virus (TMV) particles and showed that an unstained sample could provide high-resolution details compared to single projections.
TEM tomography relies on a series of 2D projections (i.e., a tilt series) collected across different viewing angles by tilting the specimen holder with a known increment angle (Figure 1).63−69 The 2D projections are computationally aligned using cross-correlation methods or preloaded fiducial markers. Several methods have been used to collect tilt series, including random conical tilt, increment angle, increment slope, and dual-axis tilt. Extensive discussion of the application of various TEM and STEM tomography methods is beyond the scope of this article, and excellent overviews of the fundamental concepts of electron tomography, theoretical insights, and examples can be found in several recent reviews.8,9,67,75 In this Perspective, I will discuss the application of TEM tomography in unraveling the high-resolution 3D details of self-assembled soft colloidal superstructures (Figure 1). I will focus on four key areas of application: (i) understanding the morphology and internal structures of supracolloidal assemblies; (ii) gaining insight into the self-assembly mechanisms of NP frameworks to understand structure−property relationships (e.g., enhanced optical and catalytic properties); (iii) elucidating the self-assembly mechanisms, growth patterns, and unit cell parameters of biohybrid superlattices and composite frameworks; and (iv) experimental methods to study soft biomolecular assemblies. I will show selected examples of supracolloidal spherical and rod-like capsids, NP frameworks, nanocluster (NC) frameworks, NP−NC composites, toroidal structures, biohybrid superlattices, and soft biomolecular assemblies (Figure 1).
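The tilt-series idea can be illustrated with a didactic sketch: project a 2D phantom at a set of tilt angles, then reconstruct it by unfiltered backprojection. Real electron tomography workflows add fiducial or cross-correlation alignment, weighting/filtering, and iterative schemes such as SIRT; the phantom, angles, and image size below are arbitrary choices, not parameters from any study discussed here.

```python
import numpy as np
from scipy.ndimage import rotate

# Off-center disk phantom standing in for a 2D object slice.
N = 64
yy, xx = np.mgrid[0:N, 0:N]
phantom = (((yy - 40) ** 2 + (xx - 24) ** 2) < 8 ** 2).astype(float)

# Acquire the tilt series: rotate the object and record 1D projections
# (column sums), mimicking images taken at known tilt increments.
angles = np.arange(0, 180, 5)
tilt_series = [rotate(phantom, a, reshape=False, order=1).sum(axis=0)
               for a in angles]

# Unfiltered backprojection: smear each projection back across the image
# plane at its acquisition angle and accumulate.
recon = np.zeros_like(phantom)
for a, proj in zip(angles, tilt_series):
    smear = np.tile(proj, (N, 1))
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)

# The reconstruction is a blurred version of the phantom whose brightest
# region coincides with the disk.
corr = np.corrcoef(recon.ravel(), phantom.ravel())[0, 1]
```

The blur is the hallmark of unweighted backprojection; weighted (filtered) backprojection or iterative solvers sharpen it, and the "missing wedge" of unreachable tilt angles in real TEM holders introduces the anisotropic artifacts discussed in the tomography literature.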
SUPRACOLLOIDAL CAPSIDS
In nature, virus capsids represent a fascinating example of genetic economy, efficiency, and error-free structure formation.82 They display subunit-based self-assembly and are inspirations for synthetic self-assembled systems. Furthermore, capsids undergo facile reversible assembly-disassembly upon tuning of the chemical or other environmental conditions.83 Their reversible nature also offers routes for selective and size-dependent encapsulation of various materials, including nanoparticles.84 However, biological particles are delicate and operate only under narrow experimental conditions. Mimicking capsid-like assemblies using metal nanoparticles offers structure formation under a broad range of experimental conditions for materials with unique chemical, optical, and magnetic properties. Nonappa et al. reported in situ, template-free, and reversible self-assembly of superparamagnetic cobalt nanoparticles (CoNPs) into spherical capsids (Figure 2).85 The capsids were prepared using the heating-up synthesis of a mixture of dicobalt octacarbonyl, Co2(CO)8, and p-aminobenzoic acid (pABA) in 1,2-dichlorobenzene (1,2-DCB) solvent (Figure 2a). TEM imaging of the specimen prepared from the reaction mixture showed capsids with an average diameter of 200 nm. TEM images suggested two morphologies (Figure 2b): (i) capsids with a contrast difference between the core and the shell (Figure 2c) and (ii) capsids with a core and shell of similar contrast (Figure 2d). Furthermore, cryo-TEM imaging and dynamic light scattering (DLS) analysis confirmed the presence of stable capsids in solution. The capsids were readily disassembled into individual CoNPs (d ≈ 4−10 nm) when treated with methanol. Importantly, reassembled spherical capsids were obtained when the methanol-treated individual CoNPs were redispersed in 1,2-DCB. Reversible assembly-disassembly was also observed when the sample was subjected to a heating−cooling cycle. Interestingly, upon exchanging the solvent from 1,2-DCB to acetone, all particles turned into capsids with flexible shells and a clear difference in contrast between the core and shell (Figure 2h,i).
Solvent exchange studies, spectroscopic analysis, and computational simulations suggested that the self-assembly is driven by the hydrogen-bonding dimerization of the carboxylic acid groups of the pABA ligands. Surprisingly, the magnetic measurement of the capsids revealed superparamagnetic properties with a magnetic diameter of ∼3.2 nm, which corresponds to the magnetic core of the individual NPs (neglecting the nonmagnetic oxide layer). The results suggest that the capsids are self-assembled superstructures, not random aggregates. Furthermore, the intrinsic superparamagnetic property of the individual NPs is retained in the capsids. More importantly, even a low magnetic field of 0.65 T (neodymium magnet) induced one-dimensional chains or necklace-like assemblies of capsids (Figure 2j). The capsid chains remained stable once the magnetic field was removed and were resistant to mechanical perturbation. This is attributed to magnetic dipole-induced attraction and intercapsid hydrogen bonding. Importantly, within the capsids, the individual CoNPs are magnetically noninteracting and interact purely via the hydrogen bonding of their surface ligands.
Electron tomography of as-synthesized capsids in 1,2-DCB, acetone-treated capsids, and magnetic field-treated capsid chains revealed some key insights into morphology, internal structures, and packing patterns. Importantly, these observations provide complementary evidence of self-assembly and the rationale behind the morphological differences. The 3D reconstruction revealed that the capsids have ∼20 nm multilayered shells. However, the capsids with low-contrast cores revealed empty interiors (Figure 2e). On the other hand, the capsids with uniform core−shell contrast displayed an interior filled with amorphous material (Figure 2f). The solvent exchange and nuclear magnetic resonance (NMR) spectroscopy analyses suggest that the core was filled with excess, unreacted pABA ligands. Notably, the multilayered shell had no regular packing patterns. A subtomography analysis of the shell suggests that the excess unreacted ligands are distributed or trapped between the nanoparticle cavities (Figure 2g). This was also supported by density functional theory (DFT) calculations of a model Co-pABA cluster. The 3D reconstruction of acetone-treated particles showed a similar shell thickness but with a hollow core. Furthermore, the shell was porous and deformed. This suggests that the excess ligands trapped in the capsid core and shell were removed when treated with acetone (Figure 2h,i).
In another study, capsids were prepared using Co2(CO)8 and 6-amino-2-naphthoic acid (pANA) in 1,2-DCB. Unlike pABA-mediated CoNP capsid formation, pANA-capped CoNPs resulted in unique rod-like capsids with an average length of 200 nm and a lateral diameter of 100 nm (Figure 3a).86 The rod-like capsids are composed of ∼20 nm shells. Furthermore, the core of the capsids contains a rod-like nanoparticle assembly (d ≈ 50 nm), that is, a rod-in-rod morphology (Figure 3b). The 3D reconstruction revealed that the capsids comprise shells consisting of a few layers of nanoparticles, with a shell thickness of 20 nm (Figure 3c). The sizes of the individual building blocks were similar to those of the Co-pABA NPs, and the capsids were superparamagnetic. Furthermore, the rod-like assembly within the interior of the capsid is composed of individual CoNPs, similar to the shell (Figure 3d). The interspatial distance between the shell and the nanorod was ∼20−25 nm (Figure 3e,f). This suggests that by careful ligand engineering, the structure of the capsids can be tuned toward novel assemblies. While the formation of rod-like structures is interesting, what drives the growth of such structures is unclear. The formation of rod-shaped structures requires symmetry breaking, which can arise from the specific properties of the ligands, their interactions, and stacking. A more detailed study is needed to understand this anisotropic growth. Time-resolved TEM, in situ liquid cell TEM, and tomography data-assisted computational simulations may shed more insight into the structure formation mechanism and the predictive design of novel colloidal capsids.
The above results showed well-defined capsid formation using nonuniform NP building blocks via hydrogen bonding interactions. The self-assembly of CoNPs was rapid, as the synthesis was performed at a high temperature (165 °C) in a nonpolar solvent. In nonpolar solvents, the carboxylic acid remains a monomer at high temperatures and, as the temperature is lowered, undergoes rapid dimerization. This might lead to less-ordered multilayer assemblies with excess ligands trapped inside and between the voids of the shells. Identifying uniform building blocks, controllable self-assembly conditions, and tunable inter-NP interactions is crucial to understanding capsid formation. In this context, atomically precise monolayer thiol-protected noble metal nanoclusters (NCs) have emerged as attractive building blocks for self-assembly.86,87 Because of their exactly defined number of metal atoms and ligands, they offer controlled self-assembly. Nonappa et al. reported the template-free self-assembly of p-mercaptobenzoic acid (pMBA) capped atomically precise gold nanoclusters (AuNCs), Au102-pMBA44, into 2D colloidal crystals and supracolloidal capsids (Figure 4).88 The Au102-pMBA44 NC contains 102 gold atoms and 44 pMBA ligands (Figure 4a−c). When the carboxylic acid groups of all ligands are protonated, the NCs are dispersible in methanol and insoluble in water. However, partial deprotonation of the carboxylic acid groups (∼22) imparts water solubility with excellent colloidal stability. Therefore, selecting proper self-assembly conditions can enable a delicate balance between attractive hydrogen bonding (carboxylic acid dimerization) and electrostatic repulsion (negatively charged carboxylates). Furthermore, a spherical coordinate analysis indicated that in Au102-pMBA44 the ligands are anisotropically distributed, with a preferential orientation toward the equatorial plane of the NC. Notably, the deprotonation of Au102-pMBA44 leads to patchy negative charges, imparting amphiphilic properties to the NCs. The patchy and anisotropic distribution allows symmetry breaking, resulting in lower-dimensional structures such as 2D colloidal crystals. By creating defects, curvature can be induced to obtain spherical structures. When the aqueous dispersion of the partially deprotonated Au102-pMBA44 NC was sequentially dialyzed against methanol, 2D colloidal crystals were obtained. However, spherical capsids were formed when an aqueous dispersion of the NC was rapidly added to methanol (Figure 4e). Tomographic reconstruction suggested that the shell was a monolayer, that is, one nanoparticle (∼2.69 nm) thick (Figure 4f,g). While there is little direct evidence of what stabilizes the interior of such structures, it is likely the solvent or excess organic residue. The NC also forms ellipsoidal capsids with monolayer shells. The next question is whether capsids with monolayer shells can be obtained using nonuniform NP building blocks.
Pigliacelli et al. utilized iodinated amyloidogenic peptides for in situ AuNP synthesis and the templated assembly of chiroptically active capsid-like structures.89 Modified human calcitonin-derived DFNKF peptide fragments were used as ligands and templates. The para position of either one or both phenylalanine (F) residues of the DFNKF peptides was substituted with iodine (Figure 4h). Iodination promotes the self-assembly of the peptides and simultaneously acts as a template for the deposition of Au(III) ions. This approach allows Au-mediated C−I activation to promote spontaneous nanoparticle formation on the surface of the templated superstructure. Spherical particles were produced when Au(III) salts were mixed with the iodinated DFNKF peptides in aqueous media. The core was composed of peptide, and the surface was covered with Au ions, as supported by STEM EDS spectra. Upon heating of the aqueous mixture for 60−180 min, surface plasmon resonance peaks around 562 nm were observed, suggesting in situ nanoparticle formation. A TEM image of the resulting structure displayed spherical superstructures (50−200 nm) composed of 6−10 nm AuNPs (Figure 4i). Electron tomography of the resulting superstructure revealed a spherical capsid-like structure (Figure 4j). The spherical particles displayed a monolayer shell of nanoparticles placed with a uniform internanoparticle distance (Figure 4k). The core contained an amorphous, less dense interior. These results suggest that capsids with monolayer shells can be achieved via a templated approach using nonuniform building blocks.
By comparing the 3D reconstructions of the above four examples of self-assembled NP-based capsids, one can conclude the following. First, nonuniform building blocks with directional hydrogen bonding ligands allow spherical template-free capsids. However, the capsids are multilayered, without any regular packing patterns of NPs. Second, atomically precise NCs containing hydrogen bonding ligands result in capsids with a well-defined monolayer shell, which shows highly ordered packing patterns of the individual NCs. Finally, using nonuniform NP building blocks, capsids with monolayer shells can be achieved with a templated approach. Therefore, the size uniformity of the NPs and the self-assembly conditions affect the resulting superstructures. TEM tomography provides high-resolution details on morphology, internal structures, and packing patterns in nonuniform and noncrystalline colloidal structures.
NANOPARTICLE FRAMEWORKS
The self-assembled capsids provide inspiration to investigate whether adding additional interactions or components can achieve even more ordered and compact arrangements of NPs instead of core−shell structures.Such self-assembled NP-based superstructures allow inter-NP compartmentalization.−56 Pigliacelli et al. reported the self-assembled fluorous supraparticles (FSPs) to efficiently encapsulate poorly water-soluble fluorinated drugs (Figure 5a). 90he FSPs were fabricated using AuNPs capped with 1H,1H,2H,2H-perfluorodecanethiol (PFDT) ligands in the presence of the film-forming protein hydrophobin-II (HFB-II).Two types of NPs, viz., spherical AuNCs with an average diameter of 1.6 ± 0.6 nm and plasmonic AuNPs with an average diameter of 3.8 ± 0.8 nm, were used to study the effect of size on compartmentalization. Cryo-TEM imaging suggested the spherical nature of the SPs with diameters in the range 30−80 nm (Figure 5b).The SPs were observed for both NCs and NPs.The SPs comprised a NP core and multilayered protein (HFB-II) shell with an average 5−10 nm thickness.The SAXS spectrum of a water dispersion of SPs obtained from AuNCs showed two structure peaks at 2.1 and 3.9 nm −1 , with an interparticle distance of about 3 nm.On the other hand, the SAXS spectrum of SPs obtained from AuNPs showed a less ordered structure with a SAXS pattern characterized by only one Bragg peak, corresponding to an average interparticle distance of the confined NPs of about 5.2 nm.Tomographic reconstruction of both SPs revealed a spherical morphology with a densely packed array of individual building blocks (Figure 5c).For SPs obtained from AuNPs, a densely packed array of NPs with intricate voids was formed.AuNC-containing SPs also showed a similar organization.Even though identical ligands were used in AuNCs and AuNPs, their core size differences resulted in different void spaces.AuNCs led to a more efficient and ordered packing with smaller voids, which agrees with the SAXS analysis 
results. The results also agree with the trend observed for capsids, where NCs resulted in well-ordered shells compared to nonuniform NPs.
Beyond surface-ligand-mediated internanoparticle interactions, functional groups such as carboxylic acids can be exploited for metal-coordination-directed self-assembly of NPs.91 Chandra et al. reported metal-coordination-induced self-assembly of glutathione (GSH)-capped AuNCs.92 By controlling the concentration of metal ions (Cs⁺, Mn²⁺, Pb²⁺, Cd²⁺, Sn²⁺, Zn²⁺, Fe³⁺, Al³⁺, and Sn⁴⁺), the size of the spherical superstructure was tuned from 30 to 200 nm (Figure 5d,e). Among all tested metal ions, Sn²⁺ produced the most stable self-assembled structures. The resulting spherical particles significantly increased the photoluminescence quantum yield (PLQY), photocatalytic efficiency, and biological properties. For example, the PLQY increased from 3.5% in individual NCs to 25% in the self-assembled structures (Figure 5g,h). Furthermore, a model dye degradation experiment under UV irradiation at 350 nm showed that methylene blue degrades within 5.5 min in the presence of the self-assembled structures, compared to 112 min for AuNCs and 140 min with no catalyst. The superstructures also displayed better bioavailability than individual AuNCs. Better insight into the structure was needed to understand the amplified PLQY, catalysis, and bioavailability. Cryo-TEM imaging and electron tomographic reconstruction revealed the spherical nature of the superstructures. The cross-sectional view of the 3D reconstruction revealed a densely packed network of AuNCs, resulting in a framework-like structure with a regular order (Figure 5f). The metal−ligand (Sn²⁺−GSH) interactions induce a well-defined network prohibiting several nonradiative relaxation modes in the frameworks. The strong luminescence primarily arises from the highly luminescent T1 state to the Au(0) HOMO with an enhanced ligand-to-metal-to-metal charge transfer (LMMCT) relaxation mechanism.
The reversible self-assembly of NCs using noncovalent interactions is well-documented in the literature.93 However, dynamic covalent bonding had not been explored for reversible NP self-assemblies. Lakshmi et al. reported dynamic covalent chemistry driven by [2 + 2] photocycloaddition-mediated reversible self-assembly of Au25 NCs.94 When AuNCs capped with thiolated umbelliferone (7-hydroxycoumarin) ligands were irradiated with UV light at 365 nm, the coumarin ligands of neighboring nanoclusters underwent a [2 + 2] cycloaddition reaction, facilitating inter-NC bonding via covalently linked cyclobutane adducts (Figure 5i). TEM, STEM, and 3D reconstruction suggested toroid formation (Figure 5j−m). Tomographic reconstruction of the early stages of assembly showed that initially the AuNCs form spherical framework assemblies. Continued irradiation led to the fusion and elongation of the spherical structures, resulting in toroids. The toroidal outer diameter varied from 500 nm to 3.0 μm, with a rim thickness of up to 140 nm. TEM tomography of the toroids shows that the rim is composed of densely packed NCs. More importantly, further irradiation led to the fusion of toroids into honeycomb-like supertoroidal macroscopic frameworks. Due to the dynamic nature of the [2 + 2] cycloaddition reaction, irradiation of the toroids at 254 nm resulted in disassembly into individual NCs. The reversible nature of the cycloaddition reaction was exploited for the conjugation of 5-fluorouracil and its photocontrolled release.
Despite the atomic-level precision of NCs, their self-assembly often leads to nonuniform superstructures or heterogeneous self-assembled end products. As with individual building blocks, control over the size and shape of the superstructures influences their optical, biological, and catalytic properties. Therefore, improved methods to prepare highly uniform NC superstructures are needed. In this context, Bera et al. developed nanoshell-like assemblies called "superclusters" (SCs) using in situ depletion-guided engineering of GSH-capped AuNCs (Figure 6a).95 The Au(I)-thiolate complexes were mixed with a high percentage of polyethylene glycol (PEG-600) as a depletant, which resulted in spherical assemblies with an average diameter of 110 ± 10 nm. While maintaining constant depletion, the formation of a metallic Au core was triggered by sacrificing the GSH ligands from the Au(I)-thiolate complexes via thermal activation of the superstructures. The NC density was tuned by controlling the thermal treatment time at 12, 24, 48, or 72 h, and the optical properties of the AuSCs were tuned accordingly. For example, treatment for 12, 24, and 48 h resulted in AuSCs-1, AuSCs-2, and AuSCs-3, respectively, displaying the typical absorbance spectral features of AuNCs (Figure 6b).
However, AuSCs-4, treated for 72 h, showed an interesting surface plasmon resonance peak (Figure 6c). Furthermore, AuSCs-1 and AuSCs-2 displayed PL intensities higher than those of AuSCs-3 and AuSCs-4. The question arises whether the surface plasmon peak is due to AuNP formation or to strong NC−NC interaction in a confined environment. The AuSCs were tested for their peroxidase-like catalytic activity using a colorimetric assay based on 3,3′,5,5′-tetramethylbenzidine (TMB) oxidation. All AuSCs displayed peroxidase-like activity, initiating the reaction within 5−10 min. However, AuSCs-4 displayed the highest catalytic activity, 33.5-fold higher than that of AuSCs-1. To gain insight into the origin of the surface plasmon resonance and the difference in catalytic activity, a 3D reconstruction of all AuSCs was performed. All AuSCs displayed spherical morphologies (Figure 6d−g). Interestingly, the cross-sectional views made evident that in AuSCs-1 only the surface Au(I)-thiolates converted into AuNCs, giving an amorphous interior and a thin AuNC shell (Figure 6h−k). The shell thickness and the density of NCs increased from AuSCs-1 to AuSCs-4. Therefore, it was concluded that the surface plasmon resonance peak arises from the strong interaction of AuNCs in a confined shell and not from plasmonic NP formation.
NANOPARTICLE−NANOCLUSTER COMPOSITES
Hybrid and composite nanomaterials offer possibilities to engineer materials with tunable and controllable functions, properties, and applications. Anisotropic AuNPs such as nanorods (AuNRs) and nanotriangles (AuNTs) display unique surface plasmon resonance peaks and sensitivity to their surrounding chemical environment, and they act as optical antennas for conjugated dye molecules.96,97 Importantly, fluorescent-dye-conjugated NPs have been shown to alter the luminescence properties of the dye molecules. Luminescent hybrid materials offer multimodal imaging, sensing, drug delivery, and photodynamic therapeutic applications.91 For example, AuNRs with selective tip functionalization with fluorescent dye molecules have been shown to display a 10-fold increase in luminescence compared to the fluorescent dye itself.98,99 However, organic dyes undergo degradation and photobleaching. Semiconductor quantum dots, on the other hand, emerged as luminescent nanomaterials with excellent PLQY,100 but they are toxic and not suitable for bioimaging. Silicon quantum dots are nontoxic but prone to oxidation.101 Because of their low toxicity and high photothermal stability, noble metal NCs have emerged as interesting luminescent nanomaterials.91,92 Furthermore, combining plasmonic NPs with atomically precise NCs offers unique plasmon−exciton coupling. Therefore, developing methods to fabricate NP−NC composites and hybrids will pave the way for a new type of nanomaterial with enhanced optoelectronic properties.
Som et al. reported the formation of a composite bilayered structure when tellurium nanowires interacted with the atomically precise AgNC Na4Ag44-pMBA30. Importantly, Na4Ag44-pMBA30 shows patchy hydrogen-bonding bundles.102 This property has been utilized to develop macroscopic, mechanically robust, strong, and elastic monolayer membranes.103 In its solid-state structure, Na4Ag44-pMBA30 displays bundles of two (L2) and three (L3) ligands. The L2 bundles allow intralayer hydrogen bonding, and the L3 bundles form interlayer hydrogen bonding. Chakraborty et al. investigated the hydrogen-bonding-directed self-assembly of AuNR−Na4Ag44-pMBA30 into composite cages (Figure 7a−d).104 The CTAB-protected AuNRs (d ≈ 10 nm, l ≈ 30 nm) were ligand-exchanged with pMBA (AuNR@pMBA). The self-assembly was achieved by mixing AuNR@pMBA and Na4Ag44-pMBA30 in N,N-dimethylformamide (DMF). The pMBA ligands on the surfaces of the AuNR and AgNC allow NR−NC interaction via hydrogen bonding. The resulting AuNR−AgNC composite displays the peaks arising from both the AuNRs and the AgNCs in its UV−vis spectrum (Figure 7e−g), suggesting that the intrinsic properties of both components were retained in the composite structures. However, significant broadening was observed in the NIR region of the spectrum, presumably due to electronic interactions between the AuNRs and NCs in the composites. Conventional TEM and STEM images show AuNRs in the core and NCs in the shell of the self-assembled structure. TEM and STEM tomographic reconstruction revealed that the AuNR−AgNC coassembly resulted in an octahedral cage (Figure 7h−p). Notably, each cage encapsulated a single AuNR, offering a rapid and robust approach to composite supracolloidal cages.
Pure Na4Ag44-pMBA30 crystallizes in a triclinic lattice. The crystal structure data were used for computational simulations to understand the octahedral nature of the composite cage. The simulation results suggest that the lattice structure of the octahedral assemblies is face-centered cubic. Tomographic reconstruction of different stages of growth suggests that in the early stages of assembly Na4Ag44-pMBA30 formed a uniform layer around the entire AuNR surface (Figure 7h). As the reaction proceeds, the AgNCs preferentially interact with the AuNR body rather than the tips (Figure 7i,j). This is attributed to a higher density of hydrogen-bonding sites at the center than at the AuNR tips. Furthermore, the preferential attachment of AgNCs to the Au⟨110⟩ rather than the Au⟨100⟩ facets of AuNR@pMBA induces anisotropic growth, resulting in octahedral nanocages encapsulating a single AuNR. The NP−NC interactions can be controlled by modifying the functional groups of the ligands. For example, partially deprotonated pMBA-capped AuNCs, Au102-pMBA44 and Au250-pMBAn, interacted with AuNR@pMBA in aqueous media. Unlike Na4Ag44-pMBA30, the AuNCs produced a monolayer shell around the AuNRs. This is due to the negatively charged carboxylates on the NC surface, which provide sufficient electrostatic repulsion to stabilize the composite structures. Therefore, by controlling the ligand functional groups and reaction media, the shell thickness and morphological features of the composites can be tuned.
Self-assembly of metal nanoparticles mediated by noncovalent interactions between surface ligands allows detailed investigation of the structural, morphological, and compositional effects on hybrid and composite structure formation. Chakraborty et al. investigated the interaction of AgNCs bearing different ligand functionalities, such as dimethylbenzenethiol (DMBT) and 1,2-bis(diphenylphosphino)ethane (DPPE), with hexadecyltrimethylammonium chloride (CTAC)-capped AuNTs.105 For example, when Ag25DMBT18 interacted with CTAC-capped AuNTs, the formation of Ag-doped AuNTs was observed, with etching of Au atoms from the tips of the triangles. Interestingly, dendritic shells of Ag formed around the AuNTs when they were mixed with Ag25H22DPPE8. The etching of Au atoms was found to be affected by the type of ligand on the AgNC surface; for example, faster etching was observed when the AuNTs interacted with Na4Ag44-pMBA30. In contrast, directional hydrogen bonding prevents atom exchange or etching, resulting in a stable composite core−shell structure. This was supported by composite formation from AuNT@pMBA and Na4Ag44-pMBA30 NCs, which resulted in a core−shell structure without any etching or doping.
While the doping, etching, and composites discussed above are innovative approaches to multifunctional nanomaterials, they do not have luminescent properties. However, they provide clues to control and tune the interaction between plasmonic NPs and NCs. In this context, Chakraborty et al. reported a three-component system consisting of AuNRs and lipoic acid (LA)-capped AgNCs (Ag29LA12) to develop luminescent composites (Figure 8).106 To anchor the NCs on the AuNR surface while avoiding direct interaction, ligand exchange, or doping, the AuNRs were coated with mesoporous silica. The silica-coated AuNRs (AuNR@SiO2) were surface-functionalized using (3-aminopropyl)triethoxysilane (APTES), which provides a positive surface charge for electrostatic assembly with the negatively charged AgNCs. The coating also improves the photothermal stability of the AuNRs and prevents photoluminescence quenching. The interaction between the AuNRs and Ag29LA12 NCs was tuned by controlling the thickness of the silica layers (Figure 8a−h). The resulting composites displayed a nearly 2-fold increase in photoluminescence compared to Ag29LA12 alone (Figure 8e). To understand the effect of the coating and the location of the NCs, 3D reconstruction was performed for as-synthesized AuNR, AuNR@SiO2, and AuNR@SiO2@Ag29. The 3D reconstructed structures revealed that the silica shell was not uniformly distributed on the AuNR surface. Instead, the AuNR surface facets were protected alternately in all AuNR@SiO2 samples, irrespective of the silica layer thickness. This is attributed to the difference in surface energy between different sets of AuNR planes. This finding suggests that choosing other faceted particle morphologies, such as AuNTs, may offer composites with distinct photophysical properties. The 3D reconstruction of AuNR@SiO2@Ag29 showed that the NCs are anchored on the silica surface, and no diffusion was observed.
BIOHYBRID SUPERLATTICES
Biological colloidal particles, such as virus capsids, protein cages, and synthetic DNA origami, are excellent building blocks for hybrid structures.51 Their atomically precise structure, well-defined surface functional groups, and patchy interacting sites offer precise control over structure, morphology, and functionality. They are excellent templates for long-range-ordered structures and hierarchically complex assemblies across length scales. Liljeström et al. reported a virus particle−AuNP superlattice using controlled electrostatic assembly in aqueous media (Figure 9).107 In this study, (11-mercaptoundecyl)-N,N,N-trimethylammonium bromide (MUTAB)-capped spherical cationic AuNPs of 12.4 ± 9 nm and tobacco mosaic virus (TMV) particles were used (Figure 9a−c). Electrostatic assembly generally leads to uncontrolled aggregation; therefore, the cationic NPs were first treated with electrolytes such as NaCl, which resulted in the aggregation of the cationic AuNPs. The aggregated AuNPs were treated with the intrinsically negatively charged TMVs and dialyzed against water. The AuNP aggregates disassemble into individual AuNPs upon dilution, facilitating a controlled electrostatic assembly and a stable complex with the virus particles. SAXS studies showed clear diffraction peaks across all ratios of nAuNP/nTMV from 0.5 to 500 (Figure 9d). However, the best-resolved peaks were obtained for nAuNP/nTMV between 10 and 25. Cryo-TEM imaging suggests that similar structures were formed irrespective of the nAuNP/nTMV ratio; however, at low AuNP content, free TMVs were observed along with the complexes. Variable-concentration cryo-TEM imaging suggests that the superlattices nucleate when TMVs are cross-linked by AuNPs (Figure 9e−g).
The cross-linked complexes attract AuNPs to a higher degree than free TMVs. This allows the alignment of TMVs and the creation of interstitial channels that are energetically favorable for AuNPs. According to the SAXS data, the hybrid structure formed a 2D array in the superlattice. Rather surprisingly, the structure factor S(q) equaled that of a 2D square lattice with a lattice constant of 23.15 nm, suggesting close packing of the building blocks (Figure 9h,i). This was in contrast to the hexagonal packing typically observed for rod-like particles.
[Figure 8 caption fragment: (c) 3D reconstructed structure of AuNR@SiO2. (d) Absorbance spectra of Ag29LA12, AuNR@SiO2@Ag29, and AuNR@SiO2. (e) Photographs of solutions of AuNR@SiO2 (i) and AuNR@SiO2@Ag29 (ii) under ambient light (top) and UV irradiation (bottom). (f) Schematic illustration of silica-coated AuNR@SiO2@Ag29. (g) TEM image of AuNR@SiO2@Ag29. (h) 3D reconstructed structure of AuNR@SiO2@Ag29. Reproduced with permission from ref 106. Copyright 2022 American Chemical Society.]
Therefore, the exact arrangement and its origin cannot be understood from the SAXS data alone. Furthermore, as the ionic strength decreases, the internanoparticle distance also decreases.
Interestingly, CD spectra of the superlattices at visible wavelengths showed a helical plasmonic nature. No CD signal was observed at high ionic strength, i.e., when the components were not assembled into a superlattice. Further evidence that the CD signal originates from the superlattice was provided by mechanically shaking the mixture, after which no CD signal was observed. However, understanding the self-assembly mechanism, the origin of the CD spectra, and the 2D square lattice formation required extensive structural investigation. Cryo-TEM imaging and 3D reconstruction of a single microwire revealed a right-handed helical twist with a well-defined pitch length and a twist ω (360°/helical pitch) of ∼0.13°/nm (Figure 9j,k). Furthermore, careful analysis of the 3D reconstructed structure confirmed a 2D square lattice. From the cryo-TEM images and cryo-ET, the lattice constants were determined to be 25 nm (≈ lattice constant a) for the (10) and 17 nm (≈ a/√2) for the (11) lattice planes. These values agreed with the SAXS-based lattice constants of 23.2 and 16.4 nm. The inter-NP distance remained constant for a given nAuNP/nTMV ratio and was found to be between 15 and 30 nm. The electrostatic repulsion between AuNPs and the attraction between AuNPs and TMV control the superlattice formation and the inter-NP distance. The weak electrostatic interactions limit the formation of 3D lattices. Furthermore, helical twisting in a 3D superlattice is forbidden, as it breaks the translational symmetry along the rotational axis.
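The reported twist can be converted into a helical pitch using the definition given in the text, ω = 360°/pitch. A quick check, using the reported ω ≈ 0.13°/nm for the TMV−AuNP microwires:

```python
# Definition from the text: omega (deg/nm) = 360 deg / helical pitch (nm)
omega_deg_per_nm = 0.13              # reported twist from the cryo-ET reconstruction
pitch_nm = 360.0 / omega_deg_per_nm  # invert the definition to recover the pitch
print(f"helical pitch = {pitch_nm:.0f} nm")  # ~2769 nm, i.e. a pitch of roughly 2.8 um
```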
To further understand the mechanism of helical growth, the 3D coordinates from the cryo-ET reconstruction were collected and used for coupled-dipole approximation simulations. For the computational simulation, 400 AuNPs were arranged in a helical superlattice structure maintaining a lattice constant of 23.15 nm and an interparticle distance of 16 ± 1.6 nm. The simulation reproduced the main features of the experimental CD spectrum (Figure 9o−r). However, there was some mismatch in the width and position of the peak−dip feature between the experimental and simulated CD spectra, attributed to the variation in the width and ω of the actual superlattice samples, as supported by TEM imaging. Furthermore, the simulations revealed that the CD spectrum depends on the orientation of the structure: depending on the viewing direction, both right-handed (axis direction) and left-handed (transverse direction) twists can be observed for this type of superlattice.
Finally, the effects of nanoparticle size and shape were studied. Larger nanoparticles bind to four TMV molecules, whereas smaller NPs each bind to three TMVs, leading to a hexagonal lattice. Chakraborty et al. extended this approach to demonstrate near-infrared chiral plasmonic microwires using TMV−AuNR superlattice formation.108 However, the structural variety of the assemblies achieved using protein cages and capsids is limited by the selected protein's predetermined shape, charge, and size. Therefore, it is desirable to explore whether programmable and modular DNA nanostructures can likewise be used to organize AuNPs into well-ordered structures.
Julin et al. investigated the effect of AuNP size and DNA origami shape on superlattice formation (Figure 10).109 Three types of DNA origami structures, viz., 6-helix bundles (6HB), 24-helix bundles (24HB), and 60-helix bundles (60HB), with lateral diameters of 6.0, 16.0, and 28.3 nm, respectively, were used (Figure 10a). Cationic AuNPs of three sizes were used: small, large, and extra-large, with diameters of 8.5, 14.7, and 15.8 nm, respectively (Figure 10b). The controlled electrostatic assembly between the cationic AuNPs and the negatively charged origamis resulted in well-ordered 3D tetragonal superlattices (Figure 10c−i). Small-angle X-ray scattering (SAXS) measurements of aqueous samples containing different combinations of DNA origami and AuNPs, at varying stoichiometric ratios nAuNP/norigami, revealed well-ordered superlattice structures for 6HB with the small AuNPs (dcore = 2.5 nm), whereas all other studied combinations produced less ordered aggregates with only short-range order. The cryo-TEM images and 3D reconstruction revealed that 6HB and the small AuNPs (dcore = 2.5 nm) form large, micrometer-sized 3D tetragonal superlattices (Figure 10g−i). The average lattice constants determined from the cryo-TEM images and the cryo-ET reconstruction are a = 8.6 ± 0.9 nm (sd), c = 11.8 ± 1.0 nm (sd) and a = 9.1 nm, c = 11.9 nm, respectively (Figure 10f). These values matched the lattice constants obtained from the SAXS analysis. Surprisingly, superlattices were not formed when these same AuNPs were complexed with either 24HB or 60HB. Larger AuNPs (dcore = 10.9 nm) could immobilize all three types of DNA origami at nAuNP/norigami ≈ 30−40, indicating efficient binding between the large AuNPs and all studied DNA origami structures. The 6HBs are anisotropic, rod-like, flexible particles similar to TMVs. In contrast, the 60HBs lack a sufficient degree of anisotropy due to their box-like shape, limiting superlattice
formation. Therefore, the results suggest that size, shape, and charge complementarity between the building blocks are crucial parameters for superlattice formation.
In a recent study, Julin et al. demonstrated multilamellar structures via the electrostatic assembly of DNA origami with the cationic lipid 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP) (Figure 11).110 Three types of DNA origami, viz., 6HB, 60HB, and plate-like particles, were used (Figure 11a). Cryo-TEM image analysis showed an average interlamellar spacing of 5.1 ± 0.7 nm, irrespective of the type of DNA origami used. However, the tilt series for tomographic reconstruction of the vitrified specimens were prone to electron-beam radiation damage, which limited the analysis. Therefore, 3D reconstruction was performed using negatively stained specimens (Figure 11c−e).
The 3D electron density map from the TEM tomographic reconstruction of the assemblies and their cross-sectional views suggest that the resulting complexes comprise a densely interconnected network (Figure 11f). Interestingly, the 3D reconstruction revealed that concentric lamellar structures were formed when 6HB was used, whereas in the case of 60HB and the plate, stacked lamellar arrangements were observed. DOTAP forms flat lamellar structures due to its zero spontaneous curvature (lipid packing parameter, P ≈ 1). The observed difference in the lamellar arrangement of the hybrid DNA origami−DOTAP structures is attributed to DNA-origami-templated lipid packing behavior. For example, the 6HB is a rod-like, flexible particle and wraps into "ball of yarn"-like assemblies with high curvature when combined with DOTAP. On the other hand, the 60HB and the plate display low curvature due to their rigid hexahedral structures. The rigid nature of these origami structures promotes the stacked lamellar arrangement of the DOTAP molecules.
BIOMOLECULAR ASSEMBLIES
Unlike metal-nanoparticle-based superstructures, polymer and biomolecular assemblies face several challenges, including specimen preparation artifacts and electron-beam-induced damage. Furthermore, the achievable vitrified ice thickness (70−130 nm) limits cryo-TEM imaging of soft polymeric and biological structures above 100 nm in thickness.111 Such structures readily deform, seriously limiting any realistic structural insight. Therefore, alternative specimen preparation methods that preserve the original structures are needed. Bertula et al. studied the self-assembly of star-like amphiphilic derivatives of bile acids conjugated with hydrogen-bonding 2-ureido-4[1H]-pyrimidinone (UPy) moieties (Figure 12a,b).112 The UPy molecules display strong quadruple hydrogen bonding.
The star-like amphiphiles self-assemble into nanometric micellar structures in polar solvents such as dimethyl sulfoxide (DMSO). UPy moieties do not dimerize via hydrogen bonding in DMSO, and the self-assembly is due to the intrinsic aggregation behavior of the bile acids.113 Sequential solvent exchange from DMSO to water via controlled dialysis triggered hydrogen bonding between the UPy units of the micelles, resulting in micrometer-sized spherical particles. However, drying artifacts were observed when specimen preparation was performed by conventional methods, and the specimens were not beam tolerant. An alternative approach was therefore utilized for TEM specimen preparation using sequential solvent exchange: after placing the sample on a TEM grid, it was sequentially washed with varying ratios of water/methanol and methanol/tert-butanol and finally with tert-butanol, followed by vacuum drying. This approach retained the structure and provided a specimen stable under electron-beam irradiation (Figure 12b,i−v). The tomographic reconstruction showed the spherical nature of the superstructure. Systematic investigation and cross-sectional views suggest a highly interconnected network, i.e., supermicellar structures. A cross-sectional SEM image of the spherical particles further supported the dense network, supporting the proposed self-assembly mechanism (Figure 12b,vi).
Fang et al. studied the coacervation of resilin-like peptide fusion proteins containing cellulose-binding terminal domains (Figure 12c).114 Due to their liquid-like nature and relatively large size, the coacervates tend to deform during cryo-vitrification, resulting in flattened structures in cryo-TEM imaging. The cryo-ET reconstruction showed deformed disc-like structures with limited structural detail. The solvent-exchange approach was therefore utilized to understand the morphology of the superstructures. The particles retained their original structure and shape in this process, indicating a spherical nature. Most importantly, the 3D reconstruction of the coacervates revealed layered, onion-like structures, with each layer having a lateral width of 20 nm and composed of protein subunits. This study provided the first 3D structural details and possible self-assembly mechanism of coacervates.
In nature, lignin is another abundant molecule, removed as unwanted waste during pulping and biofuel production. Lignin is a polyphenolic biomolecule with a complex and varying chemical structure and molecular weight, making complete structural understanding at the molecular level challenging. In recent years, there has been considerable effort to prepare spherical lignin nanoparticles (LNPs) as a sustainable alternative to synthetic polymeric nanoparticles. However, understanding spherical nanoparticle formation and determining their 3D structure remain major challenges. Furthermore, it has been shown that the particle morphology depends on the purity and combination of solvents, but the exact 3D structures and packing were not known. Zou et al. reported an extended study on LNPs prepared from aqueous acetone and aqueous THF (Figure 12e,f).115 Conventional and cryo-TEM images suggested average LNP sizes of 47 ± 13 and 66 ± 22 nm in acetone (LNPacetone) and THF (LNPTHF), respectively. Electron tomographic reconstruction of LNPacetone and LNPTHF confirmed their spherical nature. Cross-sectional views of the tomograms revealed that the LNPs are composed of homogeneously distributed smaller building blocks (Figure 12e,f). This study provided the first high-resolution 3D structural insight into LNPs. More importantly, it also verified that the LNPs are more compact than the commonly proposed core−shell or hollow structures. Further support for the structure and porosity was provided using SAXS and the nitrogen gas (N2) adsorption−desorption method.
LIMITATIONS AND FUTURE PERSPECTIVES
TEM tomography is a powerful technique for imaging individual nanoparticles, self-assembled structures, and hybrid materials. This Perspective provides insights into representative examples, based on the author's contributions, of understanding the structure, packing patterns, self-assembly mechanisms, and crystal structures of self-assembled colloidal superstructures. Despite tremendous progress in instrumentation, imaging, and image processing, TEM tomography has several challenges and limitations. Tomography utilizes a series of 2D projections of an object collected by tilting the specimen in angular increments. Since multiple images of the object must be collected, the sample is exposed to the electron beam for a relatively long time. This causes radiation damage in soft colloidal particles and biological samples. Low-dose imaging combined with the fast tomography methods developed by Bals and co-workers can overcome radiation damage.76 In a recent study, Marchetti et al. reported the templated self-assembly of branched Au nanoparticles.116 3D reconstruction of the superstructures using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) imaging and fast tomography revealed the high-resolution internal structure and core−shell nature of the superstructure. Cryo-vitrification (plunge and high-pressure freezing) and automated image collection for aqueous samples also help to minimize radiation damage; however, cryo-TEM specimen preparation can be laborious.−119 Such methods may also suffer from a poor signal-to-noise ratio in the images and the final reconstruction. Direct electron detection (DED) cameras offer imaging with high sensitivity, signal-to-noise ratio, and resolution.120
However, the accumulated data are relatively large, in the terabyte range; therefore, data handling, storage, and processing may face challenges. Liquid-cell TEM is another emerging technique to study nanoparticle dynamics and self-assembly in their native environment.121 However, such experiments require highly specialized liquid-cell holders and microelectromechanical-systems-based (MEMS) chips. They also suffer from background noise, low signal-to-noise ratios, a limited tilt range due to narrow visualization windows, and long image acquisition times. Wang et al. recently showed that combining liquid-cell TEM imaging with fast tomography allows high-resolution 3D reconstruction of CTAB-capped AuNRs.122 By comparing the 3D reconstructions of AuNRs, it was shown that the internanoparticle distance differs significantly between the dry state and the liquid state. Introducing this approach to study self-assembly offers more realistic details about the dynamics of such assemblies in real time in their native environment.
The fundamental limitations of TEM, such as specimen thickness and field of view, may limit the acquisition of high-resolution data from large self-assembled particles. Furthermore, the tilt range, increment angles, and total number of projections directly impact the resolution of the final ET reconstruction. Therefore, the sample should be thin enough to obtain meaningful structural details. For example, the effective thickness of an object increases by a factor of √2, 2, and ~3 when the specimen is tilted to 45°, 60°, and 70°, respectively. The increased thickness poses challenges in determining the correct focus for imaging the object and contributes to artifacts in the final reconstruction. Currently, the method is well-suited for samples up to 30−50 nm thick, with 100 nm as the upper limit. Larger samples require other approaches, such as embedding and sectioning with a microtome; for biological samples, high-pressure freezing and cryo-microtomy have been used. Some examples discussed in this paper, such as the supermicelles and coacervates, can be reconstructed despite their larger size. This is because polymer-based materials are not as dense as metal nanoparticles and are relatively transparent to the electron beam due to their intrinsically porous structure. However, they are not devoid of the artifacts that naturally arise from the missing wedge.
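The thickness factors quoted above follow from simple slab geometry: tilting a slab of thickness t to an angle θ increases the beam path length through it to t/cos θ. A minimal sketch of this relation:

```python
import math

def effective_thickness_factor(tilt_deg: float) -> float:
    """Beam path length through a flat slab grows as 1/cos(theta) with tilt angle."""
    return 1.0 / math.cos(math.radians(tilt_deg))

for angle in (45, 60, 70):
    print(f"{angle} deg -> x{effective_thickness_factor(angle):.2f}")
# 45 deg -> x1.41 (sqrt(2)), 60 deg -> x2.00, 70 deg -> x2.92 (~3)
```

This is why a specimen that is comfortably thin at zero tilt can become too thick to focus and image reliably at the high-tilt end of the series.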
A limited tilting angle results in a missing wedge and severely degrades the spatial resolution of ET along the direction of specimen thickness. To overcome the limited angle, dual-axis tilt approaches have been developed. However, dual-axis tilt can improve the resolution only by a factor of √2.123 For larger samples, it is useful to utilize multiple other techniques, including SEM tomography, sectioning, and X-ray micro-tomography. Despite these limitations and challenges, TEM tomography is one of the most valuable methods for determining the 3D structure of materials at the nano- to subnanoscale. Over the past few years, there has been tremendous progress in overcoming the above challenges. Continuous fast image collection, direct electron detectors, and new algorithms for image reconstruction, together with improved computational power, have resulted in high-resolution reconstructions. While TEM tomography is a revolutionary technique, it is even more powerful when combined with, or as a complementary tool to, other analytical techniques such as small-angle X-ray scattering. Nanoparticle self-assemblies are excellent model systems for in situ liquid cell-based imaging. Research in this direction has already taken significant steps. Integrating real-time imaging with tomography reconstruction will offer profound insight into the nucleation and growth mechanisms of colloidal self-assemblies under native reaction conditions. Because of their diverse sizes, shapes, and properties, nanoparticles display different dynamics and phase behavior at the interface. In this context, computational simulation methods offer valuable information to understand their properties. Various simulation methods, including Monte Carlo, molecular dynamics, mesoscale simulations, self-consistent mean field theory, and ab initio molecular dynamics methods, have been utilized to study the interfacial properties of nanoparticles.
124 The results from computational simulations can be effectively utilized to validate the experimental results using dry-state, liquid-state, and cryo-TEM-tomography-based 3D structures of nanoparticle assemblies. A combination of multiple image acquisition methods, advanced image processing, and computational methods has the potential to offer high-resolution structural details of self-assembled nanoparticle superstructures. Such methods are also useful for studying soft biopolymer-based assemblies such as coacervates.
Notes
The author declares no competing financial interest.
■ ACKNOWLEDGMENTS
The author acknowledges the Academy of Finland for Project Funding (No. 352900), the Photonics Research and Innovation (PREIN) flagship, and Tampere Microscopy Centre (TMC), Tampere University, Finland. The author thanks Dr. Peter Engelhardt for introducing the field of electron tomography and all collaborators and coauthors (whose names appear in the cited references) for exploring various self-assembled structures. Their contributions were crucial in advancing the topics discussed in this article.
Figure 1. Electron tomography of self-assembled superstructures discussed in this Perspective.
Figure 2. Supracolloidal capsids. (a) Chemical structures, synthesis, and in situ self-assembly of CoNPs. (b) TEM image of the as-synthesized CoNP capsids in 1,2-DCB. (c,d) Higher magnification images of individual capsids showing core−shell structures. (e) 3D reconstructed image of a capsid with an empty interior and ∼20 nm multilayered shell. (f) 3D reconstructed image of a capsid containing amorphous materials in the core and ∼20 nm multilayered shell. (g) 3D reconstructed cross-sectional view of the shell showing individual CoNPs and the voids filled with amorphous materials. (h) TEM images of capsids after solvent exchange to acetone showing core−shell structure. (i) 3D reconstructed image of acetone-treated capsids showing an empty core and a deformed shell. (j) 3D reconstructed capsid chains formed under the magnetic field suggest the capsids' structure remains intact (inset shows the 3D reconstruction of a single capsid from the chain at higher magnification). Reproduced with permission from ref 85. Copyright 2017 John Wiley & Sons.
Figure 4. Self-assembled capsids with monolayer shells. (a) Synthesis of the Au102-pMBA44 NC. (b) X-ray single-crystal structure of the Au102-pMBA44 NC (yellow: Au, blue: S, red: O, gray: C, and white: H). (c) Ligands represented as arrows to determine their location and orientation in a 3D coordinate system. (d) TEM image of Au102-pMBA44 NCs. (e) TEM image of the self-assembled colloidal capsid. (f) 3D reconstructed structure of the capsid. (g) Part of the shell showing monolayer thickness. Panels (a)−(g) reproduced with permission from ref 88. Copyright 2016 John Wiley & Sons. (h) Chemical structures of DFNKF peptides. (i) TEM image of an in situ generated gold-peptide capsid. (j) 3D reconstructed structure of the capsid. (k) Cross-sectional view showing monolayer shell with amorphous peptides in the interior. (l) CD spectrum of DF(I)NKF peptides when treated with different ratios of Au. Panels (h)−(l) reproduced with permission under a Creative Commons license (CC-BY 4.0) from ref 89. Copyright 2019 American Chemical Society.
Figure 5. Nanoparticle frameworks. (a) Schematic representation of HFB-II mediated assembly of fluorinated AuNPs into supraparticles. (b) TEM image of a supraparticle. (c) Cross-sectional view of a 3D reconstructed FSP and a magnified view showing inter-NP voids for selective encapsulation. Panels (a)−(c) reproduced with permission from ref 90. Copyright 2017 John Wiley & Sons. (d) Structure of GSH-capped AuNC. (e) TEM image of an NC framework. (f) 3D reconstructed structure showing densely packed NCs. (g) Change in PL intensity as a function of metal ion concentration. (h) Change in PL intensity when different divalent metal ions were added. Panels (d)−(h) reproduced with permission from ref 92. Copyright 2019 John Wiley & Sons. (i) Chemical structures and schematic representation of dynamic covalent chemistry induced toroid formation of coumarin thiol capped AuNCs. (j) TEM image of a toroid. (k) DF-STEM image of a toroid. (l,m) 3D reconstructed images of a toroid viewed at different orientations. Panels (i)−(m) reproduced with permission from ref 94. Copyright 2023 John Wiley & Sons.
Figure 6. In situ depletion guided assembly of gold superclusters. (a) Schematic representation of the in situ depletion guided nanoshell-like AuSC formation. (b) Absorbance spectra of AuSCs. (c) PL intensity of AuSCs. (d−g) TEM images of AuSCs at different tilt angles indicating spherical morphologies. (h−k) 3D reconstructed structures of AuSCs (left) and their cross-sectional views (right) showing differences in core−shell structures of AuSCs. Reproduced with permission from ref 95. Copyright 2023 American Chemical Society.
Figure 7. Composite cages. (a) Schematics showing the structure of AuNR@CTAB, AuNR@pMBA, and the AuNR-Ag composite cage. (b) 3D reconstructed structure of AuNR@CTAB. (c) 3D reconstructed structure of AuNR@pMBA. (d) TEM image of composite cages (inset showing DF-STEM image of an individual cage). (e−g) Absorbance spectra of AuNR, Ag44, and composite structures, respectively. (h−j) 3D reconstructed structures of composites showing different intermediate stages of the growth. (k,l) DF-STEM 3D reconstructed structures of the composite cage showing octahedral morphology. (m) Computational simulation showing the interaction of Na4Ag44pMBA30. (n−p) 3D reconstructed structure viewed at different orientations, showing the location of the AuNR in the composite cage. Reproduced with permission from ref 104. Copyright 2018 John Wiley & Sons.
Figure 9. Virus-AuNP superlattices. (a) TEM image, size distribution, and schematic structure of cationic AuNPs. (b) TEM image of negatively stained TMVs. (c) Schematics showing the structure and repeat units of TMV. (d) SAXS patterns recorded at varying AuNP/TMV ratios. (e) Schematics and concentration-dependent cryo-TEM images of the TMV-AuNP cooperative assembly. (f) Cryo-TEM image of a superlattice at nAuNP/nTMV = 1. (g) Cryo-TEM image at nAuNP/nTMV = 15. (h,i) SAXS pattern at nAuNP/nTMV = 15 indicating a square lattice. (j) Cryo-TEM image showing a right-handed helical twist. (k) Cryo-TEM image used for 3D reconstruction. (l) 3D reconstructed structure of the superlattice showing a right-handed helical twist. (m) Isosurface view of individual nanoparticle chains and the space occupied by TMVs (indicated in blue). (n) Right-handed twist. (o) Absorbance spectra of AuNPs and AuNP-TMV complexes. (p) CD spectra of AuNPs and AuNP-TMV complexes (inset shows the computationally simulated CD spectra). (q,r) Computational simulations in the transverse direction (left) and the axis direction (right). Reprinted with permission under a Creative Commons license (CC-BY 4.0) from ref 107. Copyright 2017 Nature Publishing Group.
Figure 10. DNA origami-based superlattices and lamellar assemblies. (a) Schematic illustrations of DNA origami structures. (b) Schematic illustrations of cationic AuNPs. (c,d) Cryo-TEM images of 6HB-cationic AuNP superlattices. (e) High-resolution cryo-TEM image of a superlattice (inset shows the FFT). (f) Interparticle distances based on the cryo-TEM image in (e). (g) Inverse fast Fourier transform (IFFT) from the cryo-TEM image along different projection axes and a schematic of the unit cell. (h) 3D reconstructed structure of the superlattice (left), density map showing the arrangement of AuNPs along a single DNA origami (middle and right), and packing patterns of AuNPs along a DNA origami (right) denoted by yellow spheres. (i) Schematic illustration of the 3 × 3 tetragonal unit cell based on the 6HB-small AuNP superlattice. Reprinted with permission under a Creative Commons license (CC-BY-NC 3.0) from ref 109. Copyright 2019 Royal Society of Chemistry.
Figure 11. DNA origami-cationic lipid multilamellar structures. (a) Schematic illustration of DNA origami structures and the chemical structure of the DOTAP molecule. (b) Schematic illustration of the multilamellar structure and the chemical interaction between negatively charged phosphate groups and cationic lipid molecules. (c) TEM images showing various morphologies of negatively stained 6HB-DOTAP. (d) High-resolution image (left) and interlamellar distance (right). (e) Cryo-TEM image of 6HB-DOTAP complexes. (f) 3D reconstructed structure (top) and cross-sectional views (bottom) of the multilamellar structure at different depths. Reproduced with permission from ref 110. Copyright 2021 John Wiley & Sons.
Figure 12. Biomolecular assemblies. (a) Schematic illustration of bile acid-derived star-like amphiphiles. (b) TEM images (i−iii) of a supermicelle at different tilt angles and (iv−vi) 3D reconstructed structure, its cross-sectional view, and schematics. Panels (a) and (b) reproduced with permission from ref 112. Copyright 2017 Elsevier. (c) Schematic illustration of CBM appended resilin-like peptides. (d) TEM images (i−iii) of a coacervate at different tilt angles showing spherical morphology and (iv−vi) 3D reconstructed structure, cross-sectional view, and SEM image of a coacervate. Panels (c) and (d) reproduced with permission from ref 114. Copyright 2018 Elsevier. (e) 3D reconstructed structures of lignin particles in acetone, cross-sectional views, and TEM images at different tilt angles. (f) 3D reconstructed structures of lignin particles in THF, cross-sectional views, and TEM images at different tilt angles. Panels (e) and (f) reprinted with permission under a Creative Commons license (CC-BY 4.0) from ref 115. Copyright 2021 American Chemical Society.
Micro-mapping of terrestrial gamma radiation dose rate in typical urban homes in Miri City (Sarawak, Malaysia)
Micro-mapping of terrestrial gamma radiation dose (TGRD) at one-meter grid spacing in and around four urban homes in Miri City shows rates ranging from 70 to 150 nGy/h. Tiled surfaces (floors and walls) vary between properties and have a clear and significant influence on TGRD, which is highest in kitchens, washrooms and toilets. Application of a single indoor value for annual effective dose (AED) may lead to underestimations of up to 30%. The AED is unlikely to exceed 0.8 mSv in homes of this type in Miri, which is within recommended guidelines.
Introduction
Background radiation from natural materials in our everyday environment, like air, soil, rocks and water, is one component of the exposure humans are subjected to, in addition to cosmic radiation [1]. Background radiation depends on a number of factors that are different from place to place, for example, the nature of the underlying geological material [2][3][4]. In addition, humans living in urban environments may also be subjected to external radiation emanating from various materials found in the built environment. Furthermore, radon gas concentrations may contribute to natural radioactivity and the total effective dose that populations are exposed to [5,6] since inhalation of radon gas is a potential source of internal exposure. In general, absorbed dose rates in urban and built-up areas are expected to exceed those in rural areas of similar geology due to the effect of building materials.
Buildings may be constructed using a number of geological or geologically-derived materials, the source of which may be local, regional or international, and these source materials may contain variable amounts of radionuclides and their progenies. Among the common geological materials used in construction are clay (in bricks), limestones, clay and quartz sand (in cement), feldspars, granites, marbles and zircon used in glazing for tiles and ceramics as well as various rocks used for aggregate. Industry by-products such as coal fly ash, alum shale or phosphogypsum may also be incorporated into building materials with radiological implications [6,7].
Other studies have shown that "choice in building materials has a noticeable contribution towards the indoor doses inhabitants are exposed to" [8], and in particular the zircon used in the tile glazing process [9]. In a study by Dodge-Wan and Mohan Viswanathan on the Curtin University campus located in the north of Miri, tiles were found to contribute to gamma dose, with an average indoor-to-outdoor TGRD ratio of 1.4 [10].
A number of models have been proposed for estimating the gamma dose indoors based on characteristics of the building materials and parameters related to the construction. Typical parameters used in these simulation models are room dimensions, wall thicknesses and floor density, the surface of tiled areas, wall and ceiling materials, and the activity concentrations of 226Ra, 232Th and 40K of the substances used, i.e. their composition [11]. The RESRAD-BUILD computer code is an example of such a model [8,[12][13][14]. This has led to numerous studies that have focused on measuring those activity concentrations in various building materials [15]. In Malaysia for example, Yasir and Yahaya [16] studied 13 types of building materials available locally, while more recently Abdullahi et al. [17] studied 102 types and Abdullahi et al. [13] studied 80 types.
Since the time spent indoors can account for 80% of a person's life, it is important to accurately assess the indoor component of an annual effective dose. A growing number of studies worldwide have measured gamma dose indoors in situ, as opposed to calculating it based on other data [2,5,[18][19][20][21][22][23][24][25].
Mollah et al. [18] used dosemeters and survey meters to measure environmental gamma radiation in 20 homes constructed out of natural materials, in villages near Cox's Bazar, Bangladesh, an area of high natural background radiation. Miah [19] measured indoor gamma dose rates for a period of a year in 15 brick and concrete buildings in Dhaka, Bangladesh. Al-Ghorabie [20] used dosemeters over a year to compare indoor gamma radiation in 250 houses in the city of At-Taif, Saudi Arabia, including apartments, mud houses, halls and villas, with readings in one room per house. Malathi et al. [21] measured indoor gamma radiation in Coimbatore City, India, but limited this to bedrooms, and it is not reported how many readings were taken. Al-Saleh [22] used dosemeters over a 9-month study period to assess indoor gamma radiation in various living rooms, bedrooms, kitchens and bathrooms in 5 homes in Riyadh city, Saudi Arabia. Svoukis and Tsertos [5] measured gamma radiation in situ in 70 locations outdoors and 20 indoors in urban areas of Cyprus and found an indoor-to-outdoor ratio of 1.4 ± 0.5. Papachristodoulou et al. [23] measured gamma radiation levels indoors and outdoors in 42 workplaces on a university campus in Greece. Hashemi et al. [24] measured gamma radiation in 43 randomly selected homes in the city of Tehran, Iran, but without mention of the type of building or rooms.
In general, these studies tend to be limited to a few isolated or single measurements in a large number of homes. An example is the study by Sakellariou et al. [26] in which 651 homes across 33 cities in Greece were monitored for indoor radiation. As a result, there is a research gap for detailed mapping of TGRD based on numerous in situ measurements within the different rooms and spaces inside typical homes, i.e. micro-mapping of TGRD. Hence, this research aims to map the distribution of TGRD within typical urban homes in Miri City (Sarawak, Malaysia) and to assess how TGRD varies spatially and according to the specific building materials present. This study is based on a large number of in situ measurements from within four typical urban homes in Miri. The data can then be used to calculate the external exposure, i.e. the radiological impact of living in rooms with a range of construction materials within typical urban homes, as well as the worst-case scenario of spending a large amount of time in those rooms with the highest radiological impact. This study does not cover internal exposure, which can be caused by inhaling 222Rn, a decay product of 238U, which is considered the most significant radionuclide that can accumulate in poorly ventilated dwellings and basements [22]. Radon in dwellings is generally tested using lithium fluoride thermoluminescence dosemeters, which are passive monitors left in place over periods of 3 months or more [18,21].
Study area
The research involves four properties in the urban area of Miri, a city in northern Sarawak which had a population of over 350,000 in 2020 [27]. Micro-mapping was carried out inside and outside four properties in Miri, located as shown in Fig. 1. The properties are named here after the neighbourhood in which they are located or the adjacent street name: Pujut, Senadin, Maigold and Acorus (in order of decreasing number of measurements). The properties are spread out over a distance of approximately 20 km in a north-south direction, covering most of Miri city. The age of the properties ranges from approximately 60 years (Pujut) to 14 years (Maigold and Acorus) and approximately 11 years (Senadin). The Pujut house is a double-storey detached property, whereas the other three properties are single-storey semi-detached. It should be noted that most urban homes in Malaysia, including these four properties, have tiled floor surfaces throughout. None of the properties have basements, and kitchens and washrooms are fitted with extractor fans and/or louvered windows to improve ventilation.
The city of Miri is built on a basement of sedimentary rocks of Middle Miocene age belonging to the Miri Formation with overlying Quaternary alluvium [28]. Miri Formation rocks consist of sandstones, mudstones and shales. Previous TGRD mapping has been carried out at Curtin University campus in the northern part of Miri city which is considered a greenfield site of equivalent underlying geology to that present at the four properties covered in this study [10]. The natural background TGRD average, away from campus buildings, was found to be 72 nGy/h. This is lower than the average across the whole of Malaysia (92 nGy/h) and lower than the average in the urban area of Kuala Lumpur [10].
Methodology
Portable Polimaster PM1405 survey meters were used for both gamma and beta measurements. These instruments measure gamma and beta radiation using a Geiger-Muller counter in which radiation is transformed into electropulses [29]. The instruments are calibrated by the supplier's Quality Control Department and considered valid for operation prior to use, to reduce instrumental error.
For terrestrial gamma radiation dose (TGRD) measurements, the instrument was positioned on a tripod one meter above ground level and allowed to stabilize until the statistical error percentage dropped below 10%. The readings of external environmental gamma radiation dose are expressed in sievert (Sv), the SI unit, and the instrument range is from 0.01 µSv/h to 130 mSv/h. A conversion factor of 1000 was used to convert the readings in µSv/h to TGRD in nGy/h [10]. TGRD is the in situ measurement that is equivalent to the air absorbed dose rate, which can alternatively be determined from calculated activity concentrations of 226Ra, 232Th and 40K [17,30]. For gamma, the instrument has a measurement range of 0.1 µSv/h to 100 mSv/h [29]. A limitation of this method is that the instrument is not significantly affected by the potential presence of radon gas, which must instead be assessed by other methods (such as passive devices left in place over several months) [18,21].
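The factor-of-1000 reading conversion can be sketched in one line; the function name is ours, and the implicit numerical equivalence between µSv/h (dose equivalent) and nGy/h (absorbed dose in air) is the assumption made by the cited method:

```python
def usv_per_h_to_ngy_per_h(reading_usv_h: float) -> float:
    """Convert a survey-meter reading in uSv/h to TGRD in nGy/h.

    The study applies a factor of 1000 (1 uSv/h -> 1000 nGy/h), which
    treats sievert and gray as numerically equal for this instrument.
    """
    return reading_usv_h * 1000.0

# e.g. a typical indoor reading of 0.10 uSv/h corresponds to 100 nGy/h
print(usv_per_h_to_ngy_per_h(0.10))
```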
For beta measurements, the instruments were placed directly on the surfaces and two readings were taken. The first reading is the joint beta plus gamma value (β + γ, also called beta flux), taken with the instrument screen filter in the open position after stabilization to less than 10% statistical error, which may take several hours. The results are expressed in counts per second (CPS). The second reading is performed after saving the β + γ value, closing the instrument screen filter and again allowing stabilization, to measure the beta value alone in CPM/cm² after subtraction of the background gamma signal [29]. For beta flux measurement the instrument range is 6.0 to 10³ CPM/cm² [29].
Errors of observation were minimized by using a standard method for all readings and all operators and by allowing adequate stabilization time. The dimensions of rooms in the residences were measured using a Leica Disto D810 device and/or tape measure. The aim was to establish a 1 m by 1 m grid and acquire TGRD measurement data for every square meter within each residence, with additional measurements closer to walls in some areas. Additional data was also collected outside each residence to measure the background TGRD where no building materials are close or only partly present, such as on an open tiled patio or concrete-surfaced parking space.
The annual effective dose equivalent (AED) was calculated following the method detailed in the literature as cited by Dodge-Wan and Mohan Viswanathan [10]. It is an estimate of the annual dose, resulting from both natural TGRD background outdoors for 20% of the time and TGRD indoors for 80% of the time.
Excess lifetime cancer risk has also been calculated, following the procedure stated in this paper.
Results
A total of 577 TGRD rate measurements were made in the four homes, and the results of this micro-mapping at Pujut and Senadin, where the highest numbers of readings were recorded, are shown in Fig. 2. At each property, the observed TGRD rate values have been grouped into the following four general categories according to the type of ground and wall covering materials:
• ON: Outdoors on natural surfaces (grass, soil) away from building walls or other man-made structures
• OM: Outdoors on mixed surfaces (for example concrete drive, patio, drain)
• I: Indoors in a room with floor tiles (for example living room, dining room, bedroom, which typically have tiled floors)
• IWT: Indoors in a room with floor tiles and wall tiles (for example washroom, kitchen and similar)
The TGRD values show a consistent minimum of 70 nGy/h within the four urban homes, with the lowest values recorded outdoors on natural surfaces, such as grass or soil. The maximum TGRD is 200 nGy/h at the Maigold home, in a 2 m by 2 m washroom with grey floor tiles and yellow wall tiles. A slightly lower maximum TGRD value of 180 nGy/h was recorded at Senadin in a washroom of similar size, with textured black floor tiles and white wall tiles. The maximum TGRD at both the Pujut and Acorus homes was 150 nGy/h, and was also recorded in washrooms.
At all four homes, the TGRD is lowest in the ON and OM categories and highest in the I and IWT categories, as shown in Fig. 3 and Table 1. The TGRD values were on average 5 to 19% higher outside on mixed surfaces, such as a paved patio, than outside on natural surfaces. The TGRD values were on average 17, 28, 36 and 62% higher inside the homes in rooms with tiled floors than outside the homes on natural surfaces, at the Pujut, Acorus, Senadin and Maigold properties respectively.
The TGRD values were on average 51 to 103% higher in rooms where both floor and walls are tiled than outside the homes on natural surfaces. Comparing rooms with only tiled floors to rooms with both tiled floors and tiled walls, across all four homes the increase ranges from 7 to 29%.
In order to better understand the variation of TGRD rates within the homes, the measurements were grouped according to the type of room or living space as follows: outside on grass; outside on patio with tiled or concrete surface; indoors in living room (includes halls and dining areas); indoors in bedroom; indoors in kitchen; and indoors in washroom and toilet (Fig. 2). All of the indoor rooms in all of the properties have tiled floors. The kitchens also have tiled walls (Pujut, Maigold and Acorus) or partially tiled walls (Senadin), whereas the washrooms and toilets have fully tiled walls. Table 2 provides the average TGRD values according to these room types. At all four properties a clear step-wise increase in TGRD is noted from outside on grass, to outside on patio, to inside rooms with tiled floors, and further increasing in rooms with floor and wall tiles (kitchens and washrooms), as illustrated in the box and whisker plots of Fig. 4. The maximum and highest room-average TGRD values at each property were consistently found in the washrooms and toilets, where the average TGRD values were 63 to 118% higher than outside on grass.
There are considerable differences in TGRD values indoors in living rooms, bedrooms, kitchens and washrooms between the homes at Pujut, Senadin and Maigold, the three homes for which a large amount of TGRD data was measured. Table 2 and Fig. 4 show that across all the rooms, the Pujut property has the lowest TGRD values except in the kitchen. On the other hand, the Maigold property has the highest TGRD values. The relatively low value recorded in the kitchen at Senadin might be due to the fact that the Senadin kitchen has walls that are only partially tiled, whereas at Pujut and Maigold the kitchens are fully wall-tiled. The differences between TGRD values in specific rooms across the properties range from 23% (bedrooms, kitchens, washrooms) to 30% (living rooms).
A variety of construction materials are present in the homes: concrete surfaces, walls, glass windows, floor tiles, wall tiles and ceramic bathroom fixtures. In each home, a number of beta radiation values (in CPS/cm²) were measured on each type of surface, including measurements on each different type of tile present in each of the properties. In all, over 300 beta values were measured. The results are summarized in Table 3 and shown in Figs. 5, 6 and 7.
The results for natural surfaces such as soil and grass were found to be low and consistent between properties, as shown in Fig. 6, with an overall average beta value of 0.55 CPS/cm² (23 measurements). Concrete surfaces, including patio surfaces, drain edges, septic tank covers, and interior and exterior walls, were also found to be consistent, with a slightly higher average beta of 0.98 CPS/cm² (72 measurements).
A total of 13 types of tiles were present in the properties, and 183 measurements indicate an overall average for tiles of 6.19 CPS/cm². In all the properties, the beta values for tiles were found to be systematically higher than for the natural or concrete surfaces (Fig. 5) and variable. The minimum reading recorded on tiles was 2.52 CPS/cm² (textured black tiles at Senadin) and the maximum was 8.81 CPS/cm² (glossy white floor tiles at Pujut). Significant differences were found between the average beta for different types of tiles, as shown in Fig. 7. The lowest average beta was 3.51 CPS/cm² on black tiles at the Senadin home. Eight of the 13 tile types had average beta values between 4 and 6 CPS/cm². Four of the types had average beta values in the 6 to 9 CPS/cm² range, with the highest average being 8.15 CPS/cm² (pink tiles at Maigold).
Calculated annual effective dose
To compare the amount of radiation a person receives from their surroundings in a year with established limits and standards, it is common practice to calculate the annual effective dose (AED). The formula provides AED in mSv, based on the assumption that an individual spends 80% of their time indoors and 20% of their time outdoors in a year (8760 h) [10]. For application of the formula, TGRD values in nGy/h are required for both indoor and outdoor environments. A conversion coefficient of 0.7 Sv/Gy, adopted by UNSCEAR, is used to convert absorbed dose rate in air to effective dose in adult humans, giving the following formula [10]:

AED (mSv) = [D_out (nGy/h) × 0.2 + D_in (nGy/h) × 0.8] × 8760 h × 0.7 Sv/Gy × 10⁻⁶   (1)

which can be expressed as:

AED (mSv) = [0.2 × D_out + 0.8 × D_in] × 6.132 × 10⁻³   (2)

It is common practice to apply formula (2) using a single D_out value for outdoor TGRD and another single D_in value for indoor TGRD, irrespective of how these values were obtained [1,[31][32][33]. In this study, which generated a large amount of actual measured in situ data on both indoor and outdoor gamma dose rates in specific homes and in the specific inhabitable spaces within those homes, we propose a more detailed and novel method to assess AED. The proposed method is based on formula (1) but with specific outdoor and indoor TGRD values (D_out and D_in) based on the findings of micro-mapping for each inhabited space, i.e. type of room, in each home. Table 4 outlines the four scenarios considered here in the calculation of AED using the formula. In scenario 1, a single D_out value for outdoors on natural ground was applied for 20% of the time and a single D_in value, the average in a room with only floor tiles (such as a typical living room), was applied for the remaining 80% of the time. The scenario 1 method can be considered the standard calculation as applied in most studies where data is limited (single values for D_out and D_in) [1,[31][32][33].
Scenario 2 is more detailed in that the specific TGRD values obtained by micro-mapping each type of inhabited space in each home are applied. Time spent outdoors was subdivided into 10% on grass and 10% on the tiled or concreted patio. Time spent indoors was subdivided into 35% in the living room (for example 8.4 h for a person working from home), 35% in the bedroom (for example 8.4 h typically sleeping or in the bedroom), 5% in the kitchen (1.4 h) and 5% in the washroom or toilet (1.4 h). The latter two spaces typically have tiled walls in urban homes. Scenario 2 represents the closest estimate to the actual realistic situation for calculation of AED where extensive data is available. Scenario 3 is based on a fictitious home in which the single highest average TGRD for each type of inhabited space was used, based on the results of this micro-mapping study in four urban homes. Scenario 3 assumes that the highest average values observed anywhere in this study were all present together in a single home and applied in each space of that fictitious home. For scenario 4, it was additionally considered that the inhabitant spends a larger proportion of their time in the specific spaces that have the higher TGRD values (such as 3.6 h spent in washrooms and toilets). Scenario 4 represents a fictitious "worst-case scenario" and is unlikely to be exceeded in homes of this type in Miri.
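The occupancy-weighted AED calculation can be sketched as follows. The occupancy split matches the scenario-2 description above, but the room dose rates are illustrative placeholders (not the measured values in the paper's tables), and the function name is ours:

```python
# Occupancy-weighted annual effective dose: sum of room-specific TGRD values
# weighted by the fraction of the year spent in each space.

HOURS_PER_YEAR = 8760
DOSE_COEFF = 0.7        # Sv/Gy, UNSCEAR conversion to effective dose in adults
NGY_TO_MSV = 1e-6       # nGy -> mSv

def aed_msv(tgrd_by_space: dict, occupancy: dict) -> float:
    """AED in mSv from space-specific TGRD (nGy/h) and occupancy fractions."""
    assert abs(sum(occupancy.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    weighted = sum(tgrd_by_space[s] * f for s, f in occupancy.items())
    return weighted * HOURS_PER_YEAR * DOSE_COEFF * NGY_TO_MSV

# Scenario-2 style occupancy split from the paper
occupancy = {"grass": 0.10, "patio": 0.10, "living": 0.35,
             "bedroom": 0.35, "kitchen": 0.05, "washroom": 0.05}
# Illustrative room dose rates in nGy/h (placeholders, not measured data)
tgrd = {"grass": 75, "patio": 85, "living": 100,
        "bedroom": 100, "kitchen": 120, "washroom": 130}

print(f"AED = {aed_msv(tgrd, occupancy):.3f} mSv")
```

With these placeholder values the occupancy-weighted dose rate is 98.5 nGy/h, giving an AED of about 0.604 mSv, i.e. in the same range as the scenario results reported in Table 5.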
The results of the AED calculations using the four scenarios are given in Table 5. The results for scenario 1 indicate that the AED ranges from 0.573 mSv at Pujut to 0.709 mSv at Maigold, a 24% difference between properties. The results for the more realistic scenario 2, using a large number of in situ TGRD measurements reflecting the specific characteristics of each home and their respective building materials, confirm that there is a considerable difference in the annual exposure dose for inhabitants of the different homes. The Pujut home, which is the oldest property, had the lowest AED of 0.611 mSv, and Maigold, one of the more recent properties, had an AED of 0.742 mSv, which is 21% higher. The other properties had AEDs 5% (Acorus) and 9% (Senadin) higher than Pujut.
Assuming a property with the highest observed TGRD for each space, i.e. a property which combines all the high averages for the respective building materials in one home, could lead to an AED of 0.755 mSv (scenario 3), which is 22% higher than Pujut and similar to the Maigold home, where the highest TGRD values were actually observed. The calculation results for scenario 4, in which a person spends a lot of time in the rooms with the highest TGRD, show it would be possible to reach an AED of 0.801 mSv in this worst-case scenario. This is 31% higher than the realistic scenario 2 at Pujut, the difference lying in the specific TGRD of the rooms and the amount of time spent in them.
Excess lifetime cancer risk
Excess lifetime cancer risk (ELCR) is a calculated indication of the additional risk that a person would develop cancer due to exposure to cancer-causing substances, over and above the "normal" risk without exposure to those substances. It is "the difference between the proportion of people who develop or die from the disease in an exposed population and the corresponding proportion in a similar population without the exposure" [34].
Excess lifetime cancer risk (ELCR) is calculated as ELCR = AED × DL × RF [35], where DL is the average duration of life and RF is the fatal cancer risk factor per sievert (commonly taken as 70 y and 0.05 Sv^-1 respectively). For this study, the ELCR values calculated under the scenarios presented for the calculation of AED are given in Table 6, for comparison with data from the Curtin University campus in Miri [10] and the Malaysian and world averages [1]. The results obtained by this study suggest that using a single value for D_out and D_in may lead to an underestimation of the ELCR (scenario 1) compared to the more realistic calculation that considers the specific TGRD in each room and the typical time spent in them (scenario 2), with differences of the order of 0.2 × 10^-3 in ELCR. The results of micro-mapping indicate that in the worst-case scenario (scenario 4) in properties of this type, the ELCR might be up to 0.8 × 10^-3 above the underestimated value obtained with the standard calculation (formula (2), scenario 1).
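A sketch of this calculation, assuming the commonly used constants DL = 70 y and RF = 0.05 Sv^-1 (the exact constants of reference [35] are not reproduced here):

```python
def elcr(aed_msv, duration_life_y=70.0, risk_factor_per_sv=0.05):
    """ELCR = AED * DL * RF, with AED converted from mSv to Sv.
    DL = 70 y and RF = 0.05/Sv are commonly used values (assumed here)."""
    return aed_msv * 1e-3 * duration_life_y * risk_factor_per_sv

# Scenario-2 AED at Pujut from Table 5; result is of the order of 2e-3.
risk = elcr(0.611)
```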
Discussion
The minimum TGRD and averages for outdoors (natural surfaces) are slightly higher in the urban areas (i.e. the gardens of the four homes) than those reported on the Curtin University campus, built on a greenfield site near Miri [10]. This suggests that the TGRD might still be influenced by building materials up to some distance, estimated at a few meters, away from those materials, for example in gardens close to properties where there may be walls, covered patios and other materials. The maximum TGRD values are consistent across all the properties and the Curtin University site [10]; they are also consistently highest in small rooms with tiled floors and walls, i.e. in typical washrooms.
The outdoor average TGRD values obtained in this study, given in Table 1, are all below the reported Malaysian average [1]. As mentioned, the geology of the area, essentially quartz-rich sedimentary rocks, is not expected to produce high background radiation.
This study indicates indoor-to-outdoor ratios for TGRD that range from 1.17 to 1.62, with the lowest ratios in the oldest property (Pujut) and higher ratios in the newer properties. The ratio at the Curtin University campus in Miri was reported to be 1.43, which is within this range [10]. UNSCEAR [1] reports a Malaysian average of 92 nGy/h outdoors and 96 nGy/h indoors, i.e. a ratio of 1.04. More TGRD data have been obtained by a number of authors since 2000 and have been summarized by Dodge-Wan and Mohan Viswanathan [10], indicating significant variability outdoors in several areas of Malaysia, with some outdoor averages exceeding 200 or even 300 nGy/h in high radiation hot spots. UNSCEAR [1] indicates a world average indoor-to-outdoor ratio of 1.4.
The Malaysian average indoor TGRD is reported to be 96 nGy/h [1]. This study has obtained a very high number of readings, rarely obtained in other studies of indoor radiation. Previously, Sulaiman and Omar [36] reported an average indoor TGRD value of 42 nGy/h for 20 towns in Sarawak state. This is significantly lower than the averages obtained in this study, which range from 97 to 159 nGy/h depending on the rooms. It is thought that the difference may be due to the fact that Sulaiman and Omar studied a range of houses made of concrete, brick and wood, including wooden houses in water villages [36]. This study targeted concrete constructions with tiled floors, i.e. the urban homes in areas built up over the last 60 years.
Although there is a research gap on micro-mapping of TGRD inside buildings, a number of studies have measured gamma radiation in various dwellings around the world. Given the large differences in local geology that can be expected to affect the results, in addition to differences in building styles, materials and other factors, it is not appropriate to compare them directly with the results of this study in four urban homes in Miri. There are, however, a number of relevant findings.
Miah [19] noted an inverse linear relationship, with a correlation coefficient of -0.96, between building age and annual average dose rate in 15 houses around the Atomic Energy Research Establishment at Savar, Bangladesh, and suggested that this relationship might be due to various factors including the materials used in the constructions. In Miri, it is noted that the oldest property at Pujut showed the lowest TGRD and AED values, with higher values in the newer properties at Maigold and Senadin. However, it is not known if this is due to a change in the type of building materials used or to other factors. Al-Ghorabie [20] found indoor gamma dose rates were highest in apartments and villas, compared to large halls and mud houses, with average values of 192, 154, 167 and 92 nGy/h respectively, with some of the difference attributed to the building materials and some to the degree of ventilation, as well as the season. Numerous studies have shown that commercial tiles frequently contain zircon, which is used for glazing, and the presence of zircon can lead to higher concentrations of naturally occurring radionuclides [9,17]. While the results of this study, obtained using hand-held sensors, cannot be directly compared to studies based on measured activity concentrations of the radionuclides, they do clearly show that the presence of tiles in typical urban homes increases the gamma radiation. This leads to higher TGRD rates in kitchens, washrooms and toilets, which typically have tiled walls in addition to the tiled floors that are found throughout all rooms in most urban homes in Malaysia. Higher gamma radiation leads to higher AED, and this study showed a 28% difference between typical properties. In the worst-case scenario of a person spending a lot of time in rooms with the highest TGRD, this could lead to a 33% increase in AED. 
This study also shows that tiles and ceramics have beta radiation 6 to 8 times higher than that recorded on concrete, with variability between different tile types. It would be advantageous, in future studies, to measure the values in situ, as done in this study, and also to measure the activity concentrations of radionuclides in the specific building materials found in these homes.
Micro-mapping has shown that in typical urban homes in Miri there is considerable variation in TGRD values within each home, according to the presence of different building materials. The use of single D_out and D_in values in the calculations (as in scenario 1) may lead to an underestimate of AED and ELCR. The results shown in Table 5 suggest that the underestimate (between scenario 1 and the more realistic scenario 2) is of the order of 1% to 7%. Figure 8 shows the indoor and outdoor components of AED in the four properties and in the worst-case scenario, in comparison to results from the Curtin University campus in Miri [10] and those reported for Malaysia and worldwide [1]. There is very little variation in the outdoor component of AED but approximately 33% variation in the indoor component, whether measured or calculated for the worst case in Miri. The worst-case scenario estimate, based on a person spending a long period of time in rooms with the highest likely TGRD for this type of home, amounted to an AED of 0.801 mSv. The more realistic scenario calculations of annual effective dose based on the in situ measurements of this study range from 0.611 to 0.742 mSv. All values of AED for Miri fall below the ICRP [34] recommended effective dose limit of 1 mSv/y from all radiation sources for public exposure, although they are above the world average of 0.48 mSv.
Conclusions
This micro-mapping study, conducted during the Covid-19 pandemic lockdown, obtained a very high number of indoor TGRD readings, rarely achieved in other studies of indoor radiation. The focus was four homes in typical urban areas of Miri, Sarawak. A total of 577 gamma and 300 beta readings were obtained in the various rooms of these homes, a significantly large data set with which to assess the external exposure component of the radiation impact of building materials in these specific homes.
The results indicate a clear step-wise increase in TGRD from a background of approximately 80 nGy/h outside on grass, increasing slightly outside on the patio, increasing more inside in rooms with tiled floors, and with the highest TGRD being recorded in rooms with both floor and wall tiles, such as kitchens and bathrooms. Hence, the study revealed significant differences in the TGRD values between the homes and between the rooms in each home. On the whole, the oldest property has the lowest TGRD values, with higher values in the more recent constructions, possibly reflecting differences in the use or source of various building materials. In addition, lower ventilation rates may play a role, allowing for accumulation of radon in some rooms, although all rooms are relatively well ventilated, with extractor fans and/or louvered windows common in kitchens and washrooms.
Outdoor TGRD values were slightly higher adjacent to the urban homes than previously reported at a greenfield site in Miri [10] but are below the Malaysian average [1]. It should be noted that the Malaysian average is based on data collected over 20 years ago [1]. In this study, indoor-to-outdoor ratios of 1.17 to 1.62 were recorded. Beta readings show significant differences between natural surfaces and tiles, with tiles having 8 to 15 times higher beta radiation than grass. Beta radiation was measured on 13 different types of tiles in use in the four homes. The values range from 2.52 CPS/cm² to 8.81 CPS/cm², with significant differences between the types. Annual effective dose was calculated for a range of scenarios. In the studied homes, AED ranges from 0.611 mSv to 0.742 mSv. The numerous data obtained from micro-mapping have made it possible to calculate that, in a worst-case scenario, a person living in a property of this sort might receive up to 0.801 mSv annual effective dose, but it is unlikely that this dose would be exceeded in properties of this type in this region. The value is below the 1 mSv dose limit for the public recommended by the International Commission on Radiological Protection [34]. This study is significant in that it shows that having only limited data for indoors (for example only a single indoor value for each property) can lead to a potential underestimation of AED of the order of 30%. To minimize annual effective dose, it is recommended to use available building materials with the lowest radiological impact; this is most critical for tiles, especially those typically used for flooring throughout Malaysian homes and for wall surfaces in washrooms and kitchens.
Multilevel B-Spline Repulsive Energy in Nanomodeling of Graphenes
Quantum energies which are used in applications are usually composed of repulsive and attractive terms. The objective of this study is to use an accurate and efficient fitting of the repulsive energy instead of standard parametrizations. The investigation is based on Density Functional Theory and Tight Binding simulations. Our objective is not only to capture the values of the repulsive terms but also to efficiently reproduce the elastic properties and the forces. The elasticity values determine the rigidity of a material when some traction or load is applied on it. The pair potential is based on an exponential term corrected by B-spline terms. In order to accelerate the computations, one uses a hierarchical optimization for the B-splines on different levels. Carbon graphenes constitute the configurations used in the simulations. We report on some results to show the efficiency of the B-splines on different levels.
Introduction
Nanotechnology is a very important field which has emerged in the last decades and developed very quickly in several directions. It has important applications in various disciplines including aircraft, automobile, electronic and medical engineering. Nanomaterials admit several important properties which can be exploited in applications. For instance, the electric conductivity of nanomaterials is applied in electronic components so that the materials conduct electricity more efficiently than diamonds. The thermal resistivity of nanomaterials can be used to reduce or accelerate heat conduction. They also have good thermal properties, so that materials can be designed to resist heat of very high intensity. Graphene has received significant attention from scientists in the last decades for several reasons. Its material properties can be controlled such that it can become a stronger material than steel. The objective in this paper is to use an accurate and efficient fitting of the repulsive energy instead of a standard parametrization. Many approaches have been proposed to represent empirical estimations of repulsive terms. Before presenting our method, let us briefly survey some traditional repulsive methods. Molecular dynamics employing the Lennard-Jones potential has been well understood so far. It is based upon the well-known potential V(r) = 4ε[(σ/r)^12 − (σ/r)^6], which is decomposed into attractive and repulsive components. Since it is only expressed in terms of the inter-atomic distances, it is easy to handle. Due to the simple expression of the potential, it can be differentiated easily and it is not computationally expensive to evaluate. Another important parameterization is the Harrison parametrization. The screened Harrison parametrization is an improvement of the former one, in which a parameter r_c controls the range of the interaction and μ is a parameter obtained by experiments or a fitting process. The Sawada parameterization uses a further modified expression. The most currently used parametrization is the GSP parametrization (Goodwin-Skinner-Pettifor), which is expressed as φ(r) = φ(r_0) (r_0/r)^n exp{ n [ (r_0/r_c)^{n_c} − (r/r_c)^{n_c} ] }, where n, n_c and r_c are fitting parameters. Several other methods have also been suggested to achieve certain desired properties. Some approaches use combinations of known ones.
Our motivation is to generate a system which is both accurate and fairly inexpensive to evaluate. We are interested in graphenes and their properties including energy, force and elastic stress. Geometrically, graphenes admit a honeycomb pattern in the form of repeated organized hexagons, as illustrated in Figure 1(a). For the generation of the unit cell, one needs a translational vector T perpendicular to the chiral vector C. Let d designate the greatest common divisor of n and m, and define d_R = 3d if n − m is a multiple of 3d and d_R = d otherwise. In the following sections, scaling a graphene amounts to enlarging the unit cells by scaling its primitive vectors. The coordinates of the centers of the carbon atoms in the unit cell, provided as fractional coordinates within [0, 1], remain unchanged. We are interested in the properties of the graphenes as they are confined or stretched, as illustrated in Figure 1(c), where we consider a graphene of chirality (2, 1). For significantly confined graphenes, the repulsive energy is very large. For extremely stretched ones, the repulsive energy vanishes, so that the energy is the sum of the energies of all constituting atoms.
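The textbook chirality conventions referred to above can be sketched as follows; the bond length and the formulas for |C|, d_R and |T| follow the standard (n, m) convention for graphene sheets and nanotubes, and are assumptions here rather than expressions quoted from the paper:

```python
from math import gcd, sqrt

A_CC = 1.42  # carbon-carbon bond length in Angstrom (assumed typical value)
A = sqrt(3) * A_CC  # lattice constant |a| = |b| = sqrt(3) * a_CC

def chiral_geometry(n, m):
    """Chiral-vector length |C|, gcd d, d_R and translational-vector
    length |T| for chirality (n, m), using the standard conventions."""
    d = gcd(n, m)
    d_r = 3 * d if (n - m) % (3 * d) == 0 else d
    c_len = A * sqrt(n * n + n * m + m * m)
    t_len = sqrt(3) * c_len / d_r
    return c_len, d, d_r, t_len

c_len, d, d_r, t_len = chiral_geometry(2, 1)  # the (2, 1) example above
```

For the (2, 1) example, d = d_R = 1, while an armchair configuration (n = m) yields d_R = 3d.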
Quantum Repulsive Representation
We consider the Born-Oppenheimer or adiabatic approximation, stating that the mass and the volume of the atoms are very large in comparison to those of the electrons. The atoms move comparatively more slowly than the electrons. Thus, we treat the time-independent Hamiltonian operator with respect to a set of nuclei which are supposed to be stationary: H = −(1/2) Σ_{i=1}^{N_e} Δ_{x_i} − Σ_{i=1}^{N_e} Σ_A Z_A/|x_i − R_A| + Σ_{i<j} 1/|x_i − x_j| (6), where x_i denotes the coordinates of the i-th electron and R_A, Z_A the positions and charges of the nuclei. The above formula describes the kinetic energy along with the nuclear-electron interaction and the inter-electron interaction. Several simplifications of the stationary Hamilton operator have already been proposed. Our proposed potential energy uses two such simplifications, which we survey below.
For DFT (Density Functional Theory), one solves one equation for each electron. The Kohn-Sham formalism [1] consists in replacing the complicated single problem by several simpler ones. For each i = 1, …, N_e, one solves (−(1/2)Δ + V_eff[ρ](x)) ψ_i(x) = E_i ψ_i(x) (7), where V_eff is the effective potential energy which depends implicitly on the total electron density ρ(x) = Σ_{i=1}^{N_e} |ψ_i(x)|². The problem is then reduced from dimension 3N_e to N_e sets of smaller problems of dimension 3D. The influence of one electron with respect to the others is measured by the total electron density. These approaches enable the treatment of the Hamiltonian problem even for an electronic structure having a large number of particles on a single desktop. The eigenvalue problem in (7) is nonlinear because its operator depends on ρ, which in turn depends on the ψ_i [2] [3]. It is solved by using a sequence of linear eigenvalue problems (SCF, Self Consistent Field). The effective potential is constituted of the Hartree potential V_H, the exchange-correlation potential V_XC and the external electrostatic field, in which the Hartree potential is the inverse of the Poisson operator, −ΔV_H = 4πρ. For its evaluation, either a Poisson problem is solved or one convolves with the Green fundamental solution, V_H(x) = ∫ ρ(y)/|x − y| dy. The main feature of DFT is that one has to approximate the potential by using some correction terms known as the exchange-correlation potential [4] [5]. That is usually done by LDA (Local Density Approximation) or GGA (Generalized Gradient Approximation). Analytic expressions of the correlation energy are only known in a few special cases, which mainly consist of the high and low density limits. The external electrostatic field potential V_ext is provided by the nuclear attraction kernel V_ext(x) = −Σ_A Z_A/|x − R_A|. For the local density approximation (LDA), the exchange energy density is expressed as ε_x[ρ](x) = −(3/4)(3/π)^{1/3} ρ(x)^{1/3}. Analytic values of the correlation energy density are only known for some extreme cases. For the high density limit, the exchange-correlation energy density is 
approximated by a known expansion when the Wigner-Seitz radius r_s is very small; for the low density limit, where r_s is very large, another asymptotic expression holds. For other values of r_s, some interpolation of those extreme values is considered, for example the VWN approximation (Vosko, Wilk, Nusair) as in [6]. Once the solutions E_i to (7) become known for all i = 1, …, N_e, the Kohn-Sham approach uses them to approximate the energy E of (6). The main improvement from LDA to GGA is that the exchange-correlation energy does not depend only on the total electron density but also on its gradient, ε_XC[ρ, ∇ρ](x). As a second simplification, we survey the semi-empirical (SE) method using the Hückel method. Consider spherical coordinates (r, θ, φ). The atomic orbitals sharp (s), principal (p), diffuse (d) and fundamental (f) correspond to linear combinations of spherical harmonics. The basis functions centered at the origin are defined in [7] as products of radial functions and spherical harmonics. The overlap matrix entries can then be expanded in terms of the inter-atomic distance d_ij = |a_i − a_j|. Computing the integrals by quadrature is too expensive.
One stores the expansion coefficients C_{α,β}(a). The values are stored in a Slater-Koster table; they do not depend on the coordinates of a_i and a_j but only on the inter-atomic distance (see [7] for a similar discussion). For the Hamiltonian of the SE Hückel method, the on-site entry with respect to the center a_i is approximately the eigenenergy E_α for index α, and the off-site terms are tabulated in the same manner. A partial differential equation needs to be solved for every evaluation of the Hartree term. In the Atomistix ToolKit package [7], that is solved by a fast multigrid solver, where the coefficient ε(x) is a dielectric coefficient [8]. As a matter of fact, the SE empirical method is much more efficient than the DFT method in terms of computational speed, but the DFT computation produces much more accurate results. As a consequence, one seeks a certain correction term for the SE method in such a way that the resulting method keeps the efficiency of the SE method while approaching the quality of the DFT approach. The ultimate objective is thus to find a repulsive term to add to the SE energy as described below. We want to generate a repulsive term which conserves most of the properties of the DFT computation. For a configuration {a_1, …, a_M}, we intend to conserve the energy. 
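As an illustration of the self-consistency idea behind (7) — an effective Hamiltonian that depends on the density produced by its own eigenvectors — here is a deliberately tiny two-level model together with the standard LDA exchange energy density; all numerical values are illustrative assumptions, not data from the paper:

```python
from math import pi, sqrt

def eps_x_lda(rho):
    """LDA exchange energy density (Hartree units):
    eps_x(rho) = -(3/4) * (3/pi)**(1/3) * rho**(1/3)."""
    return -0.75 * (3.0 / pi) ** (1.0 / 3.0) * rho ** (1.0 / 3.0)

def scf_density(e0=0.0, e1=1.0, t=0.3, u=0.5, n0=0.5, iters=100):
    """Toy SCF loop: the on-site level e0 is shifted by u*n, where the
    occupation n comes from the lowest eigenvector of the 2x2 Hamiltonian
    [[e0 + u*n, t], [t, e1]]; iterate until a fixed point is reached."""
    n = n0
    for _ in range(iters):
        h11 = e0 + u * n  # "effective potential" depends on the density
        e_low = 0.5 * (h11 + e1) - 0.5 * sqrt((h11 - e1) ** 2 + 4.0 * t * t)
        v1, v2 = t, e_low - h11  # eigenvector of the lower eigenvalue
        n_new = v1 * v1 / (v1 * v1 + v2 * v2)  # occupation of site 1
        if abs(n_new - n) < 1e-12:
            break
        n = n_new
    return n
```

A real Kohn-Sham SCF replaces the 2x2 matrix by the discretized operator in (7) and the occupation by the density ρ(x) = Σ_i |ψ_i(x)|².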
In addition, we are also interested in approximating the forces. For each atom a_i, the corresponding force is F_i = −∂E/∂a_i. In addition, we focus also on the elastic properties of the graphenes [9]. In general, this property determines the rigidity of a graphene when a traction is applied on it. The strain tensor is represented in longitudinal, transversal and normal components, and the stress σ is represented in a similar tensor way. The strain is related to the displacement u, having components u_i, by ε_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i). The correlation between the strain ε, the stress σ and the displacements u is governed by some elasticity equation [9]. Practically, the stress contains implicitly some property of the second derivatives of the energy, for the reason that it is the derivative of the energy with respect to strains, which are functions of the gradients of the displacements. For a set of graphene configurations, the ideal objective functional (10) for the nonlinear optimization is a weighted sum, over the configurations c and the scaling factors λ ∈ Λ(c), of the discrepancies between the DFT and SE energies, forces and stresses. To construct Λ(c), one first refines the interval of interest uniformly. Afterwards, one refines gradually in the vicinity of the optimal scaling factor λ_opt(c) of the configuration c. The principal objective of that construction is to accumulate many points in the neighborhood of the optimum λ_opt(c). The determination of the stress is computationally more intensive than the computation of the energies. That situation holds even for the semi-empirical Hückel method. The computation of the stress for the DFT case is even more intensive, but it needs only be done once and stored during the whole optimization. As a consequence, one needs only to handle elastic properties at a few positions in the course of the optimization computation; otherwise, the whole optimization execution would be too slow, since the evaluation of the objective functional would be very intensive. For example, the stress is only applied in the neighborhood of the minimal energy in our computation. Generally, not every λ ∈ Λ(c) is of the same importance. The vicinity of the optimal scaling factor λ_opt is more valuable because the equilibrium takes place there. As a consequence, one introduces some positive weights for the scaling factors. For our implementation, we used Gaussian functions centered at the optimal value, augmented by some minimal shift δ_shift > 0, such as ω_λ = exp(−(λ − λ_opt)²/(2σ_c²)) + δ_shift. The purpose of δ_shift is to prevent the value of ω_λ from being practically zero when λ is far from λ_opt.
In our case, we have taken the parameter values σ_c = 0.005 and δ_shift = 1. Since the objective function (10) is very intensive to evaluate, we use in practice a simplification in which the forces are provided by finite differences of the energy. Now we would like to describe the parameters with respect to which the nonlinear optimization is performed. The semi-empirical energy with zero repulsive term, E_SE^0, behaves as a pure attractive energy. That is, in order to obtain an energy comparable to the DFT energy, one appends a repulsive pair-potential term.
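A small sketch of the weighting and the finite-difference simplification just described; the exact functional form of the weight is partly garbled in the source, so the Gaussian below (with σ_c = 0.005 and δ_shift = 1 as defaults) is one plausible reading:

```python
from math import exp

def scaling_weight(lmbda, lmbda_opt, sigma_c=0.005, delta_shift=1.0):
    """Gaussian weight centered at the optimal scaling factor, plus the
    minimal shift delta_shift so the weight never becomes practically
    zero far from lambda_opt (assumed form, see lead-in)."""
    return exp(-((lmbda - lmbda_opt) ** 2) / (2.0 * sigma_c ** 2)) + delta_shift

def fd_force(energy, x, h=1e-5):
    """Central finite difference -dE/dx, used in place of analytic
    forces in the simplified objective functional."""
    return -(energy(x + h) - energy(x - h)) / (2.0 * h)
```

Near the optimum the weight is close to 1 + δ_shift, while far away it tends to δ_shift alone.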
That is, the function acts pairwise on the carbon atoms with nuclei coordinates a_i and a_j, contributing φ(|a_i − a_j|). In other words, the whole process amounts to replacing the repulsive term of the SE energy by an optimal pair potential energy Σ_{i<j} φ(|a_i − a_j|). We search for the optimal pair potential function in the form φ(t) = a e^{−bt} + Σ_i c_i N_i(t) (12), in which the N_i designate B-spline basis functions, such that we obtain an energy that behaves very similarly to the DFT in terms of energy, force and stress.
In the expression (12), the function a e^{−bt} captures the general behavior of the pair potential function. The role of the B-spline terms is to correct the small imperfections produced by the principal function a e^{−bt}. Some cut-off radius r_C is used so that the pair potential function vanishes beyond that value; in our situation, a cut-off radius of about r_C = 4.0 Å suffices completely. In order to obtain zero value and derivative at r_C, we insert a short transition function next to the cut-off radius: one extends the function next to r_C by a polynomial so that one obtains a smooth transition toward zero. Since the unknown pair potential function is partly expressed in the B-spline basis as in (12), we recall how that basis is defined. One defines the B-spline basis functions N_{i,k}, for i = 0, …, n − 1, on a knot sequence ζ_0 ≤ ζ_1 ≤ ⋯ ≤ ζ_{n+k−1}, as divided differences of the truncated power functions (t − ζ)_+^{k−1}. We only focus on B-splines which are internally uniform: except for the boundary multiple knots, all knot entries ζ_i are uniformly spaced. The integer k controls the smoothness of the B-spline, for which the resulting function admits an overall smoothness of C^{k−2}, so that the case k = 1 corresponds to discontinuous piecewise constant functions. The integer n controls the number of B-spline functions. In Figure 3(a), we see an illustration of B-spline bases on an internally uniform knot sequence. Figure 3(b) displays an instance of a B-spline curve defined on [0, 1]. In Figure 3(c), the knot sequence has been refined uniformly by increasing n to 2n while keeping k = 2. That is achieved by introducing a new knot entry between every two knots of the B-spline in the former Figure 3(b). For our application, we insert several knots at once so that the new knot sequence is again internally uniform: a new knot entry is inserted between two consecutive old ones. The evaluation of B-spline functions is not calculated by using the above definition but rather by means of the de Boor algorithm. We will describe next the procedure of inserting new knots into existing ones. That is important when one needs to increase the degrees of freedom in the pair potential function in (12). The principal objective is to efficiently express a function defined on the coarse knot sequence in terms of B-splines on a fine one. Consider two knot sequences, a coarse one and its refinement; the discrete B-spline coefficients relating the two bases are evaluated by using a recurrence. In our simulation, we took k = 3, which corresponds to continuously differentiable pair potentials. In Figure 4, we observe an illustration of such knot insertions: not only do the two B-spline functions admit the same image, but their parametrizations from their interval of definition [0, 1] are completely identical. The whole process of the determination of the repulsive energy is performed in increasing levels as follows. First, one determines the optimal value for a e^{−bt} without the B-spline part in (12) by using a global optimizer.
Then, one fixes the resulting optimal values of (a, b) during the subsequent computation. Second, one searches for the optimal B-spline correction with n = 4 by starting a local optimization from an initial guess for the coefficients c_i. Now, one repeats the following steps iteratively: inject the optimal coefficients of the current level into the refined knot sequence, and restart the local optimization on the finer level.
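A minimal sketch of the ingredients of (12): Cox-de Boor evaluation of B-spline basis functions on an internally uniform (open) knot sequence, and the exponential-plus-spline pair potential with a hard cut-off. The smooth transition polynomial near r_C described above is omitted, and all coefficient values are placeholders:

```python
import math

def open_uniform_knots(n, k, a=0.0, b=1.0):
    """Internally uniform knot sequence with k-fold boundary knots,
    yielding n basis functions of order k on [a, b]."""
    spans = n - k + 1
    step = (b - a) / spans
    return [a] * (k - 1) + [a + j * step for j in range(spans + 1)] + [b] * (k - 1)

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recurrence for the i-th B-spline of order k at t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    denom = knots[i + k - 1] - knots[i]
    if denom > 0.0:
        value += (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    denom = knots[i + k] - knots[i + 1]
    if denom > 0.0:
        value += (knots[i + k] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return value

def pair_potential(r, a, b, coeffs, k=3, r_cut=4.0):
    """phi(r) = a*exp(-b*r) + sum_i c_i * N_i(r/r_cut), zero beyond r_cut."""
    if r >= r_cut:
        return 0.0
    knots = open_uniform_knots(len(coeffs), k)
    t = r / r_cut
    spline = sum(c * bspline_basis(i, k, t, knots) for i, c in enumerate(coeffs))
    return a * math.exp(-b * r) + spline
```

Doubling the number of basis functions (n → 2n at fixed k) reproduces the level-wise refinement used in the hierarchical optimization.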
Computer Implementation
In this section, we report on some practical results of the formerly proposed method. The implementation of the method was realized by combining ATK, NLOPT and Python. ATK (Atomistix ToolKit) has a GUI extension well known as VNL (Virtual NanoLab). We use NLOPT for the diverse nonlinear optimization operations [10], in which both global and local optimizers are involved. A global optimizer searches for the best parameters among all possibilities, while a local one searches only inside a local neighborhood of a certain provided starting initial guess. For the global optimizer, we use the NLOPT option GN-CRS2-LM, standing for Controlled Random Search with Local Mutation. The local optimizations are performed by using the BOBYQA algorithm, which is an efficient gradient-free method available in NLOPT. In order to facilitate the combination of options, we implemented several Python classes. The class for the reference configurations organizes the graphene structures to be used, together with their respective optimization weights. There is also a class for the optimization parameters, specifying properties such as the orders and levels of the B-splines as well as the termination criterion. It controls the contributions of the energy, force and stress (μ_E, μ_F, μ_S) in the optimization functional (10). The construction of the sets Λ(c), as well as the interval for the range of interest, has been equally supported by some Python classes. In order to save computations, one needs to precompute and store the data for the DFT as well as for the semi-empirical method with zero pair potential.
As a first test, we consider multiple computations for different configurations of graphenes. The configuration n is based upon the first index n of the chirality parameters (n, m), where m is allowed to vary. That is, each configuration n is composed of all graphenes admitting chirality (n, m) such that 0 ≤ m ≤ n. In Figure 4(a), we observe some comparisons for graphenes in the configuration with n = 1. Most values align on the diagonal, which implies agreement between the outcomes provided by the DFT and SE methods. Similar tests for graphenes with n = 2 and n = 3 are depicted in Figure 4(b) and Figure 4(c) respectively. The resulting SE energies do not exactly reproduce the DFT results, but the current SE energies should be more reliable in comparison to the empirical potentials in (1)-(5), which contain very few parameters. In addition, the speed of computation is much faster for the presented SE method than for DFT. In fact, the SE executes faster than the DFT by a factor of 10 or more. Due to that acceleration gain, the method is in many respects well suited to attaining efficiency. If the accuracy is not satisfactory, then one has to use the direct DFT at the cost of much more computing time.
As a further test, we investigate the decrease of the objective function with regard to the B-spline levels. We consider again the three configurations n above, for n = 1, 2, 3. The results of these tests are displayed in Table 1, where the initial line describes the SE with zero pair potential (PP). The next one is the SE with the exponential pair potential φ(t) = a e^{−bt} without B-splines. The following ones are the pair potentials with more and more B-splines as in (12). The error barely drops after level 4 for all graphene configurations n. In fact, the minimal value of the functional in (10) is not always zero; as a consequence, one cannot expect an arbitrarily accurate approximation. As a next test, we consider the complex band structures computed using the DFT and SE methods, whose results are respectively displayed in Figure 5(a) and Figure 5(b) for the graphene with chirality (1, 0). The plots depict band lines which are not shown as continuous curves but as sets of sampling points. The points which are purely real and explicitly complex are depicted in red and green respectively. In order to provide more validation of the efficiency of the proposed method, a comparison of the elastic properties was performed when computed by means of the DFT and SE methods. In Figure 6, we observe the elastic properties corresponding to the two methods. In general, the stress tensor σ is presented in three directions similar to (9). Nevertheless, we omit the normal components of the stress tensor σ in this particular case.
Conclusion
A method was presented to determine the optimal pair potential for the repulsive quantum energy. We concentrated on configurations constituted of carbon atoms. The method was based upon hierarchical B-splines layered on different levels. The principal objective function consists of terms involving not only energies but also forces and elastic stresses. Several computational results validate the reliability of the newly proposed method as compared to outcomes from Density Functional Theory.
(a). They are controlled by the chirality, which is a couple of integers (n, m) such that 0 ≤ m ≤ n. In the case m = n one has an armchair graphene, while m = 0 corresponds to the case of a zigzag graphene as in Figure 1(b). Suppose √3 a designates the carbon bond length of the graphene. Define a and b to be the directive vectors of the honeycomb describing a 2D lattice so that
Figure 1. (a) Nanosheet: chiral vector C and translational vector T; (b) armchair and zigzag graphene; (c) ground-state energy for graphenes when confining and enlarging the volume of the graphenes.
The above exchange-correlation potential is related to the exchange-correlation energy by V_XC = δE_XC/δρ, where E_XC splits into the exchange and the correlation parts. In terms of the exchange-correlation energy density ε_XC one has E_XC[ρ] = ∫ ε_XC(ρ(r)) ρ(r) dr.
c are sets of scaling factors with respect to the reference configuration c for the energy, force and stress, respectively. Now, we show the construction of the interval that prescribes the range of interest. That interval contains the optimal factor for geometry optimization. The construction is performed in several steps, as depicted in Figure 2. As a first step, one refines the interval
), we recall briefly some important properties of a B-spline setting. It is in fact a very flexible way of representing piecewise polynomials on any interval of definition [a, b]. Consider two integers n, k such that n ≥ k ≥ 1. Suppose the interval [a, b] is subdivided by the knot sequence. For both knot sequences, the smoothness index k is conserved intact. The following discrete B-splines enable the expression of a coarse basis function N_i^ζ as a linear combination of the fine basis
Figure 3. (a) B-spline bases; (b) original B-spline; (c) a finer B-spline which has the same parametrization as the original B-spline.
above knot insertion technique, where n is increased to 2n. One then applies a local optimization with respect to
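The knot-insertion step, in which the spline space is refined while the represented curve must stay exactly the same, can be checked with scipy's spline routines. This is a generic illustration of the technique, not the paper's code; the test curve and knot location are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import splrep, splev, insert

# An interpolating cubic spline through samples of a smooth test curve.
x = np.linspace(0.0, 1.0, 50)
tck = splrep(x, np.sin(2 * np.pi * x), k=3, s=0)

# Insert a new knot at 0.37: the basis is refined (one more coefficient),
# but the represented piecewise polynomial is mathematically unchanged.
tck_fine = insert(0.37, tck)

xs = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(splev(xs, tck) - splev(xs, tck_fine)))
print(f"max deviation after knot insertion: {err:.2e}")
```

The deviation is at the level of floating-point round-off, confirming that refinement by knot insertion preserves the curve exactly.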
Figure 6(b) shows the alignment of the DFT and SE elasticities, including the two results for (1, 0). For the nondiagonal values, or off-site cases centered at two different atoms a_i and a_j, the entries are computed by a Slater-Koster table lookup, where the values are functions of the interatomic coordinates r_ij.
Table 1. Errors at each B-spline level.
case of the graphene configurations, which are planar. In addition, the stress components
A Counterexample to Monotonicity of Relative Mass in Random Walks
For a finite undirected graph $G = (V,E)$, let $p_{u,v}(t)$ denote the probability that a continuous-time random walk starting at vertex $u$ is in $v$ at time $t$. In this note we give an example of a Cayley graph $G$ and two vertices $u,v \in G$ for which the function \[ r_{u,v}(t) = \frac{p_{u,v}(t)}{p_{u,u}(t)} \qquad t \geq 0 \] is not monotonically non-decreasing. This answers a question asked by Peres in 2013.
Introduction
Let G = (V, E) be a finite undirected regular graph. Let p u,v (t) denote the probability that a continuous-time random walk starting at vertex u is in v at time t. In this note we are interested in the function r u,v (t) = p u,v (t) / p u,u (t), t ≥ 0.
Clearly, in regular connected graphs for any u ≠ v, we have r u,v (0) = 0 and lim t→∞ r u,v (t) = 1. One might wonder if the function is monotonically non-decreasing. It is not difficult to see that there are regular graphs for which this is not the case. In fact, there are regular graphs such that r u,v (t) > 1 for some vertices u, v and time t; in particular, r u,v (t) is not monotonically non-decreasing. We give an example of such a graph in Appendix A. We thank Jeff Cheeger [Che15] for pointing this out to us. For vertex-transitive graphs, however, it holds that r u,v (t) ≤ 1 for all vertices u, v and all t ≥ 0. Indeed, using Cauchy-Schwarz and the reversibility of the walk, p u,v (t) = ∑ w∈V p u,w (t/2) p w,v (t/2) ≤ (∑ w p u,w (t/2)²)^(1/2) · (∑ w p w,v (t/2)²)^(1/2) = p u,u (t)^(1/2) · p v,v (t)^(1/2) = p u,u (t), where the last equality uses vertex-transitivity. This motivates the following question, asked in 2013 by Peres [Per13]: Is the function r u,v monotonically non-decreasing in t for all vertex-transitive graphs and all vertices u, v?
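The bound r_{u,v}(t) ≤ 1 for vertex-transitive graphs is easy to check numerically. The sketch below (an illustrative computation, not from the note) builds the heat kernel of the 6-cycle, a vertex-transitive graph with edge weights 1/2, and evaluates r for u = 0, v = 3 at several times.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time walk on the 6-cycle; edge weights 1/2 give
# total jump rate 1 at every vertex.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 0.5
L = np.eye(n) - A   # Laplacian: L_uu = sum of weights, L_uv = -w(u,v)

for t in [0.1, 1.0, 5.0, 50.0]:
    H = expm(-t * L)           # heat kernel: p_{u,v}(t) = H[u, v]
    r = H[0, 3] / H[0, 0]      # r_{u,v}(t) with u = 0, v = 3
    print(f"t = {t:5.1f}: r = {r:.6f}")
    assert r <= 1 + 1e-12      # the Cauchy-Schwarz bound from the text
```

As expected, r starts near 0 and approaches 1 from below as t grows.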
More recently, a special case of that question was asked independently by Price [Pri14]. Namely, Price asked whether for Brownian motion on flat tori (i.e., on R^n modulo a lattice), it holds that for any point x, the density at x divided by the density at the starting point x 0 is monotonically non-decreasing in time. This would follow from a positive answer to Peres's question through a limit argument. Price gave a positive answer to his question for the case of a cycle (n = 1) and recently, a positive answer for arbitrary flat tori was found [RSD15]. This can be seen as further evidence for a positive answer to Peres's question.
In this note we give a negative answer to Peres's question. In fact, we do so through a Cayley graph.
Theorem 1.1. There exists a Cayley graph G = (V, E) and two vertices u, v ∈ V such that the function r u,v is not monotonically non-decreasing.
One remaining open question is whether r u,v is monotonically non-decreasing for Abelian Cayley graphs. The positive result of [RSD15] is a special case of that.
Some basic facts about continuous-time random walks
Given a weighted finite graph G = (V, E) with weight function w : E → R+, a continuous-time random walk X = (X t ) t≥0 on G is defined by its heat kernel H t , which at time t > 0 is equal to H t = e^(−tL), where L is the Laplacian matrix of G given by L u,v = −w(u, v) for u ≠ v, and L u,u = ∑ v w(u, v). As a result, for a random walk X starting at a vertex u the probability that X is in v at time t is equal to p u,v (t) := H t (u, v). When G is a d-regular unweighted simple graph, we think of the edges as all having weight 1/d, in which case the Laplacian of G is given by L = I − A/d, where A is the adjacency matrix of G. In this note we consider only vertex-transitive graphs, for which the sum ∑ v w(u, v) is the same for all vertices u of the graph. Note that we do not insist that this sum is equal to 1, though this can be achieved by normalizing L, which corresponds to changing the speed of the random walk. For basic facts about continuous-time random walks see, e.g., [LPW09].
If G is a weighted Cayley graph with a generating set S and a weight function w : S → R + , then a continuous-time random walk X = (X t ) t≥0 on G is described by mutually independent Poisson processes of rate w(g) for each group generator g ∈ S, where each process indicates the times when X jumps along the corresponding edge.
Non-monotonicity of time spent at the origin in the hypercube graph
For an integer d ≥ 1 denote by Q d the d-dimensional hypercube graph. The vertices of Q d are {0, 1}^d and there is an edge between two vertices u and v if and only if they differ in exactly one coordinate. Let X = (X t ) t>0 be a continuous-time random walk on Q d starting at the origin, denoted by 0 = (0, . . . , 0) ∈ {0, 1}^d. Denote by C d (t) the expected time spent at the origin until time t, conditioned on the event that X t = 0. That is, C d (t) = E[ ∫_0^t 1{X s = 0} ds | X t = 0 ]. In this section we show that for d sufficiently large C d (t) is not monotonically non-decreasing.
Lemma 2.1. Let d ∈ N be sufficiently large. Then, there are some t 1 < t 2 such that C d (t 1 ) > C d (t 2 ), and in particular, the function C d is not monotonically non-decreasing in t.
Remark. Numerically, one can see that the function C d is not monotone for d ≥ 5. See Figure 1. Since C d has a closed form expression (as can be seen from the calculations below), one can probably show nonmonotonicity directly for C 5 by analyzing the function, though doing so would likely be messy and not too illuminating. Before proving Lemma 2.1 we prove the following claim.
Claim 2.2. Let d ≥ 1, and let Q d be the d-dimensional hypercube graph. Let X = (X t ) t>0 be a continuous-time random walk on Q d starting at 0. Then, P(X t = 0) = ((1 + e^(−2t/d))/2)^d. Proof. Since X moves in each coordinate with rate 1/d, it follows that for each i ∈ [d] the number of steps in direction i up to time t is distributed like Pois(t/d). Therefore, the probability that the i-th coordinate equals 0 at time t is (1 + e^(−2t/d))/2, where we used that the probability that Pois(λ) is even is (1 + e^(−2λ))/2. Since the coordinates of X move independently the result follows.
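The closed form of Claim 2.2 can be verified against a direct heat-kernel computation. The following sketch (illustrative code, not part of the note) builds the Laplacian of Q_3 with edge weights 1/d and compares expm with the formula.

```python
import itertools
import numpy as np
from scipy.linalg import expm

d = 3
V = list(itertools.product([0, 1], repeat=d))
idx = {v: i for i, v in enumerate(V)}

# Laplacian of Q_d with edge weights 1/d (unit total jump rate per vertex).
L = np.eye(2 ** d)
for v in V:
    for j in range(d):
        w = list(v); w[j] ^= 1
        L[idx[v], idx[tuple(w)]] = -1.0 / d

for t in [0.5, 2.0, 10.0]:
    p00 = expm(-t * L)[0, 0]                        # p_{0,0}(t) via heat kernel
    closed = ((1 + np.exp(-2 * t / d)) / 2) ** d    # Claim 2.2
    print(f"t = {t:4.1f}: expm {p00:.8f} vs closed form {closed:.8f}")
    assert abs(p00 - closed) < 1e-10
```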
We now prove Lemma 2.1.
Proof of Lemma 2.1. We show below two items that hold for all d ≥ 1; together they clearly prove the lemma for d sufficiently large.
To prove Item 1, we show that if a walk starting from the origin is at the origin at time √d, then with constant probability it stayed at the origin throughout that time interval. Intuitively, this is because the probability of a coordinate flipping twice during that time is of order only 1/d and so with constant probability none of the d coordinates flips. In more detail, by Claim 2.2, P(X √d = 0) = ((1 + e^(−2/√d))/2)^d ≤ (1 − 1/√d + 1/d)^d ≤ e · e^(−√d), where we used the inequality e^(−x) ≤ 1 − x + x²/2 valid for all x ≥ 0. On the other hand, by definition of a continuous-time random walk the probability that X stays in 0 during the entire time interval [0, √d] is e^(−√d), and hence the expected time spent at the origin conditioned on X √d = 0 is at least √d/e, as claimed in Item 1. We next prove Item 2. Intuitively, here there is enough time for coordinates to flip twice, and only a very small part of the time will be spent at the origin. By definition of C d and Claim 2.2 we have C d (t) = ∫_0^t h d (t, s) ds, where h d (t, s) denotes the conditional probability P(X s = 0 | X t = 0). Since h d (t, s) is convex as a function of s, for all 0 ≤ s ≤ t/2 we have h d (t, s) ≤ ℓ(s), where ℓ is the unique linear function satisfying ℓ(0) = h d (t, 0) and ℓ(t/2) = h d (t, t/2). Therefore, taking t = d, we get the bound claimed in Item 2. This completes the proof of Lemma 2.1.
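The quantity C_d(t) can also be computed numerically from Claim 2.2: by the Markov property at time s, P(X_s = 0, X_t = 0) = P(X_s = 0) P(X_{t−s} = 0), so C_d(t) = (1/P(X_t = 0)) ∫_0^t P(X_s = 0) P(X_{t−s} = 0) ds. The illustrative sketch below evaluates this for d = 5, which lets the reader reproduce the non-monotone curve shown in Figure 1.

```python
import numpy as np
from scipy.integrate import quad

def f(t, d):
    # Return probability P(X_t = 0) on Q_d (Claim 2.2).
    return ((1 + np.exp(-2 * t / d)) / 2) ** d

def C(t, d):
    # E[time at origin during [0, t] | X_t = 0]
    #   = (1 / P(X_t = 0)) * int_0^t P(X_s = 0) P(X_{t-s} = 0) ds,
    # using the Markov property at time s.
    num, _ = quad(lambda s: f(s, d) * f(t - s, d), 0, t)
    return num / f(t, d)

d = 5
ts = np.linspace(0.1, 20.0, 40)
vals = [C(t, d) for t in ts]
for t, v in zip(ts[::8], vals[::8]):
    print(f"C_{d}({t:5.2f}) = {v:.4f}")
```

Since the occupation time of the origin can never exceed t, the computed values always satisfy C_d(t) ≤ t, which serves as a sanity check on the quadrature.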
Proof of Theorem 1.1
In this section we prove Theorem 1.1. We first give a proof for a weighted graph, and then remark on how to convert it into an unweighted graph. For d ∈ N sufficiently large we define the weighted graph G to be the lamplighter graph on Q d , whose edges corresponding to steps on Q d are of weight 1/d, and whose edges corresponding to toggling a lamp are of weight ε, for some sufficiently small ε > 0 depending on d and on t 1 , t 2 from Lemma 2.1. In more detail, the weighted lamplighter graph G is described by placing a lamp at each vertex of Q d and a lamplighter walking on Q d . A vertex of G is described by the location x ∈ {0, 1}^d of the lamplighter, and a configuration f : {0, 1}^d → {0, 1} indicating which lamps are currently on. In each step the lamplighter either makes a step in the graph Q d or toggles the state of the lamp at the current vertex. More formally, we have an edge between (x, f) and (y, g) if and only if either 1. (x, y) ∈ E d and f = g (this corresponds to a step in Q d ), or 2. x = y and f and g differ on the input x and are equal on all other inputs (this corresponds to toggling a lamp at x).
The weights of the edges of the first type are 1/d, and the edges of the second type are of weight ε. Thus, in a random walk on G, the steps of the lamplighter are distributed as in a random walk on Q d , and the number of times the lamps are toggled in a time interval of length T is distributed like Pois(εT) independently of the lamplighter's walk. It is well known that the lamplighter graph is a Cayley graph (see, e.g., [PR04]). Let u be the vertex in G corresponding to the lamplighter being at the origin with all lights off. Let v be the vertex in G corresponding to the lamplighter being at the origin with the light at the origin being on, and all other lights off. We show below that r u,v is not monotonically non-decreasing. More specifically, we show that r u,v (t 1 ) > r u,v (t 2 ), where t 1 < t 2 are from Lemma 2.1.
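The weighted lamplighter construction can be written down directly for a toy instance. The sketch below builds the 64-state lamplighter graph over Q_2 with toggle weight ε = 0.01 (the theorem needs d large, so this is only an illustration of the construction, not a counterexample) and checks that r_{u,v}(t) ≤ 1, as must hold for any Cayley graph.

```python
import itertools
import numpy as np
from scipy.linalg import expm

d, eps = 2, 0.01
cube = list(itertools.product([0, 1], repeat=d))        # vertices of Q_d
lamps = list(itertools.product([0, 1], repeat=2 ** d))  # lamp configurations
states = [(x, f) for x in cube for f in lamps]
idx = {s: i for i, s in enumerate(states)}
n = len(states)

# Weighted Laplacian: diagonal = total incident weight, off-diag = -w(s,t).
L = np.zeros((n, n))
for (x, f) in states:
    i = idx[(x, f)]
    for j in range(d):                      # lamplighter steps, weight 1/d
        y = list(x); y[j] ^= 1
        L[i, idx[(tuple(y), f)]] = -1.0 / d
    g = list(f); g[cube.index(x)] ^= 1      # toggle lamp at x, weight eps
    L[i, idx[(x, tuple(g))]] = -eps
    L[i, i] = 1.0 + eps                     # d * (1/d) + eps

u = idx[(cube[0], lamps[0])]                # origin, all lamps off
f_on = [0] * (2 ** d); f_on[0] = 1
v = idx[(cube[0], tuple(f_on))]             # origin, only origin lamp on

H = expm(-5.0 * L)                          # heat kernel at t = 5
r = H[u, v] / H[u, u]
print(f"r_u,v(5) = {r:.6f}")
```

With ε small, r is close to ε·C_d(t), matching the intuition given after Claim 3.1.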
Let X = (X t ) t≥0 be a continuous-time random walk on G starting at X 0 = u. Denote by Y t the number of times a toggle occurred during the time interval [0, t]. Denote by Z = (Z t ) t≥0 the trajectory of the lamplighter, i.e., the projection of X to the first coordinate. Note that by definition Z is a continuous-time random walk on Q d , and that Z is independent of Y t . Claim 3.1. Let u, v ∈ V be as above. Then, for all t > 0 it holds that p u,u (t) = e^(−εt) · P(Z t = 0) + O(ε²) (1) and p u,v (t) = ε · C d (t) · e^(−εt) · P(Z t = 0) + O(ε²). (2) Using the claim, r u,v (t) = ε · C d (t) + O(ε²), where O(·) hides a constant that depends on d and t. In particular, for t 1 < t 2 from Lemma 2.1, and ε > 0 sufficiently small we get that r u,v (t 1 ) > r u,v (t 2 ), which proves Theorem 1.1. Intuitively, (1) holds because the probability of toggling a lamp twice is very small, and hence p u,u (t) is approximately equal to the probability that no lamp has changed its state multiplied by the probability that a random walk on Q d will be at the origin at time t. The intuition for (2) is that in order to get from u to v, in addition to getting back to the origin, the lamplighter must toggle the switch while being at the origin, and the probability of that is roughly ε · C d (t).
Proof of Claim 3.1. For p u,u we have p u,u (t) = P(Y t = 0, Z t = 0) + P(X t = u, Y t ≥ 2). Since Y t is distributed like Pois(εt), the second term satisfies P(Y t ≥ 2) ≤ ε²t², and for the first term, by independence between Y t and Z t we have P(Y t = 0, Z t = 0) = e^(−εt) · P(Z t = 0). For p u,v we similarly have p u,v (t) = P(E t , Z t = 0) + P(X t = v, Y t ≥ 2). As above, the second term is at most ε²t². For the first term, let E t be the event that Y t = 1, and the unique lamp that is on at time t is the lamp at the origin. Denote by T 0 the time spent by Z at the origin in the time interval [0, t]. Then, conditioning on Z, the event E t holds if and only if a unique switch happened during T 0 time, and zero switches in the remaining time. Therefore, by independence of a Poisson process in disjoint intervals, P(E t | Z) = εT 0 e^(−εT 0 ) · e^(−ε(t−T 0 )) = εT 0 e^(−εt). This implies that P(E t , Z t = 0) = ε e^(−εt) · E[T 0 · 1{Z t = 0}]. Therefore, since C d (t) = E[T 0 | Z t = 0] we get (2), and the claim follows.
Converting G into an unweighted graph. Below we show how to convert a weighted Cayley graph G into an unweighted one, while preserving the property in Theorem 1.1. Let (G, S G ) be a weighted Cayley graph with the generating set S G = {g 1 , . . . , g k }, and suppose that the weights w : S G → R + are integers for all g ∈ S G . For N ∈ N sufficiently large define the graph H by replacing each vertex v ∈ G with an N-clique {(v, i) : i ∈ Z N }, and replacing each edge (u, ug) in G of weight w(g) with w(g) perfect matchings {(u, i), (ug, i + j) : Formally, the vertices of the graph H are G × Z N = {(v, i) : v ∈ G, i ∈ Z N }, and the set of generators S H for the Cayley graph on H is given by Note that the projection of a continuous-time random walk on H to the first coordinate is a random walk on G, slowed down by deg(H). Moreover, assuming N is larger than, say, ∑ g∈S G w(g), after constant time the two coordinates become close to independent, with the second coordinate being uniform. Therefore, if u, v are vertices in G, and x = (u, 0), y = (v, 0) are the corresponding vertices in H, then for any time t > 0 and t′ = deg(H) · t it holds that p x,y (t′) = (1/N)(p u,v (t) ± o N (1)) and hence r x,y (t′) = r u,v (t) ± o N (1).
For the graph G given in the proof of Theorem 1.1 above, we may assume that 1/ε is an integer, and so, by multiplying all weights by d/ε we get a Cayley graph with integer weights. Hence, by applying the foregoing transformation we get a simple unweighted Cayley graph H for which r u,v is not monotonically non-decreasing for some u, v ∈ H.
A Appendix: A counterexample in a regular non-transitive graph
Below we give a simple example of a regular non-transitive graph such that r u,v (t) > 1 for some vertices u, v and some time t; in particular, r u,v (t) is not monotonically non-decreasing, since r u,v (t) → 1 as t → ∞. We thank Jeff Cheeger [Che15] for pointing this out to us.
Proposition A.1. Let L be the Laplacian of a regular graph on vertex set V. Denote its eigenvalues by 0 = λ 1 ≤ λ 2 ≤ · · · ≤ λ |V| and by f i ∈ R^V the corresponding normalized eigenvectors. Suppose that 0 < λ 2 < λ 3 , and that f 2 is such that f 2 (v) > f 2 (u) > 0 for some vertices u, v. Then, there is some t > 0 such that r u,v (t) > 1.
Proof. Let π u ∈ R^V be the vector with π u (u) = 1 and π u (u′) = 0 for all u′ ≠ u. Writing π u = ∑ α i f i for α i = ⟨π u , f i ⟩ = f i (u), for all w ∈ V we have (e^(−tL) π u )(w) = ∑_{i=1}^{|V|} e^(−tλ i ) α i · f i (w) = c + e^(−λ 2 t) f 2 (u) f 2 (w) + O(e^(−λ 3 t)), where O(·) hides constants that may depend on the graph, but not on t, and c = α 1 · f 1 (w) is independent of w since f 1 is a constant function. Using the facts that f 2 (v) > f 2 (u) > 0 and λ 3 > λ 2 , it follows that for sufficiently large t, r u,v (t) = (e^(−tL) π u )(v) / (e^(−tL) π u )(u) > 1, as desired.
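The spectral expansion used in the proof can be checked numerically against the matrix exponential. The sketch below (illustrative code) does so on the 3-cube; any symmetric Laplacian would work, since the identity only uses the orthonormal eigendecomposition.

```python
import itertools
import numpy as np
from scipy.linalg import expm, eigh

# Laplacian of Q_3 with edge weights 1/d, as elsewhere in the note.
d = 3
V = list(itertools.product([0, 1], repeat=d))
idx = {v: i for i, v in enumerate(V)}
L = np.eye(2 ** d)
for v in V:
    for j in range(d):
        w = list(v); w[j] ^= 1
        L[idx[v], idx[tuple(w)]] = -1.0 / d

lam, F = eigh(L)   # eigenvalues (ascending) and orthonormal eigenvectors f_i
t, u = 2.0, 0
# Spectral expansion: (e^{-tL} pi_u)(w) = sum_i e^{-t lam_i} f_i(u) f_i(w).
spectral = sum(np.exp(-t * l) * F[u, i] * F[:, i] for i, l in enumerate(lam))
direct = expm(-t * L)[u, :]
print("max difference:", np.max(np.abs(spectral - direct)))
```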
Defective Toll-Like Receptors Driven B Cell Response in Hyper IgE Syndrome Patients With STAT3 Mutations
Autosomal dominant hyper-IgE syndrome (AD-HIES) is a rare inherited primary immunodeficiency disease (PID) caused by STAT3 gene mutations. Previous studies indicated a defective Toll-like receptor (TLR) 9-induced B cell response in AD-HIES patients, including proliferation and IgG production. However, the B cell responses mediated by other TLRs in AD-HIES patients were not fully elucidated. In this study, we systematically studied the B cell response to TLR signaling pathways in AD-HIES patients, including proliferation, activation, apoptosis, and cytokine and immunoglobulin production. Our results showed that TLR-induced B cell proliferation and activation were significantly impaired in AD-HIES patients. In addition, AD-HIES patients had defects in TLR-induced B cell class switching, as well as in IgG/IgM secretion and IL-10 production in B cells. Taken together, we report for the first time a systematic characterization of the deficiency of TLR-driven B cell responses in AD-HIES patients, which helps provide a better understanding of the pathology of AD-HIES.
INTRODUCTION
Hyper-IgE syndrome (HIES) is a rare inherited primary immunodeficiency disease (PID) characterized by elevated IgE levels, eczema, recurrent infections, and pneumonia. Both autosomal dominant (AD) and autosomal recessive (AR) modes of inheritance have been reported in patients with HIES, of which AD-HIES is the most common form. Loss-of-function (LOF) mutations of the gene encoding signal transducer and activator of transcription 3 (STAT3) were identified as the cause of AD-HIES. In addition to the typical clinical manifestations of HIES mentioned above, AD-HIES patients were also reported to suffer from non-immunological manifestations such as scoliosis, pathologic fractures, pneumatoceles, retained childhood dentition, coronary-artery aneurysms, brain lesions, and craniofacial abnormalities (1,2).
Signal transducer and activator of transcription 3, which belongs to the STAT family of signal-responsive transcription factors, is involved in multiple biological functions, including cell proliferation, inflammation, differentiation, and survival. Therefore, it is not surprising that AD-HIES patients with STAT3 mutations display a wide array of clinical features involving multiple organs. STAT3 has been reported to regulate T cells, B cells, neutrophils, and macrophages (3) in the immune system. For example, AD-HIES patients were reported to have reduced neutrophil chemotaxis and function, defective development and maintenance of T cell memory, reduced Th17 cells, reduced memory B cells, and defective IL-10 and IL-21 signaling (3). The innate immune system is the first line of defense in humans, detecting and eliminating invading pathogens. Pattern recognition receptors, which are widely expressed in cells, recognize pathogen-associated molecular patterns (PAMPs) such as liposomes, lipoproteins, proteins, and nucleic acids. Toll-like receptors (TLRs), a special class of pattern recognition receptors, are involved in the recognition of molecular structures specific to microbial pathogens. Toll-like receptors are expressed in antigen-presenting cells such as dendritic cells and macrophages, and play an important part in innate and adaptive immune responses (4). There are 10 types of TLRs reported in human beings, which can be grouped into two main categories: cell surface receptors that recognize microbial membrane lipids, including TLR1, 2, 4, 5, 6, and 10, and receptors localized in the endosome, including TLR3, 7, 8, and 9, which recognize microbial nucleic acids (4,5).
Among all the TLRs, TLR7 and TLR9 have been demonstrated to be the most widely expressed TLRs in human B cells and are regarded as crucial to B cell functions, including proliferation, apoptosis, activation marker expression, and cytokine and immunoglobulin secretion (6)(7)(8). Studies showed that the TLR9 agonists CpG oligodeoxynucleotides (CpG ODNs) could activate the B cell response, promoting cell proliferation, plasma cell generation, and cytokine secretion, and protecting B cells from apoptosis (6,7,9,10). Moreover, B cell activation mediated by TLR7 and TLR9 agonists can stimulate the production of IgG and IgM, shifting antibody production toward IgG2a and blocking the production of IgG1 and IgE (11)(12)(13)(14)(15). Of note, recent studies showed that STAT3 might play an important role in the B cell response mediated by TLRs, including cell proliferation, differentiation, and immunoglobulin production (16,17). However, up to now, the B cell responses mediated by the other TLRs in AD-HIES patients have not been fully elucidated. Therefore, further exploration is still needed.
Herein, we aimed to study the B cell responses to TLR agonists in AD-HIES patients and to systematically evaluate the TLR-induced B cell response in patients with STAT3 mutations, including proliferation, apoptosis, surface marker expression, memory B cell subsets, and cytokine and immunoglobulin secretion. Illustrating the role of STAT3 in TLR-induced B cell responses in AD-HIES patients combines basic and clinical medical studies and has great clinical value. It will not only help to give us a better understanding of the pathogenesis of AD-HIES, but also provide new ideas for the development of targeted treatments.
Patients and Control Samples
Six STAT3-mutant HIES patients (2 females and 4 males, age range 0.5-15 years; median 8.5 years) treated at Shanghai Jiaotong University-affiliated hospitals from June 2003 to August 2017 were enrolled in the study. Patients met all of the following criteria: (1) typical clinical symptoms of AD-HIES, including bacterial infections of the skin and lungs (especially pneumatocele and bronchiectasis), craniofacial abnormalities, mild traumatic fractures, scoliosis, retained childhood dentition, etc.; (2) serum IgE levels > 2,000 IU/ml, with positivity for common allergen-specific IgE; (3) a heterozygous dominant-negative mutation in the STAT3 gene determined by gene mutation analysis (Table 1). The detailed clinical features of these patients, including history, immunophenotype characterization, and treatment, have already been published by our group (1). At the same time, 12 healthy age-matched controls were recruited, who had no obvious infection in the 4 weeks before blood donation and had received no blood products or immune modulators. All AD-HIES patients and healthy age-matched controls, or their guardians, signed written informed consent and volunteered to be enrolled in the study. The study was approved by the local ethics institution (Shanghai Children's Medical Center, Shanghai Jiaotong University).
Statistical Methods
GraphPad Prism 6 software was used to generate graphs and process the experimental data. The data were compared by Student's t-test, one-way ANOVA, or the Mann-Whitney test. P < 0.05 indicated that the difference was statistically significant.
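As a sketch of this two-group comparison workflow in scipy (synthetic numbers only, not the study's data; the group sizes mirror the 12 controls and 6 patients, and the means and spread are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical proliferation rates (%) for illustration only:
# 12 healthy controls vs. 6 patients.
hc = rng.normal(60, 8, size=12)
pt = rng.normal(40, 8, size=6)

t_stat, p_t = stats.ttest_ind(hc, pt)       # Student's t-test
u_stat, p_u = stats.mannwhitneyu(hc, pt)    # Mann-Whitney test
print(f"t-test P = {p_t:.4f}, Mann-Whitney P = {p_u:.4f}")
```

The parametric t-test assumes approximately normal data; the Mann-Whitney test is the rank-based alternative used when that assumption is doubtful, which is why both appear in the paper's methods.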
Defective TLRs-Induced B Cells Proliferation in AD-HIES Patients With STAT3 Mutations
The proliferation of B cells is essential to adaptive immunity and is a key event in evaluating B cell function. To investigate TLR-induced B cell proliferation in AD-HIES patients, CFSE-labeled PBMCs from AD-HIES patients and age-matched healthy controls were stimulated with R848 or CpG (TLR7/8 or TLR9 agonists, respectively) alone, or in combination with F(ab')2 fragments (anti-IgM) plus soluble CD40L (sCD40L). After 5 days of culture, the CD19+ B cell proliferation rate was determined. As shown in Figure 1, significant proliferation of CD19+ B cells upon TLR stimulation alone, as well as in combination with anti-IgM and sCD40L, was observed both in AD-HIES patients (P = 0.037 and P < 0.001 for the R848 and CpG groups vs. the control group, respectively; P = 0.003 and P < 0.001 for R848 and CpG together with anti-IgM plus sCD40L vs. anti-IgM plus sCD40L alone, respectively) and in healthy controls (all P-values < 0.001). However, compared with healthy controls, AD-HIES patients had a significantly decreased B cell proliferation rate upon TLR stimulation alone or in combination with anti-IgM and sCD40L. These data indicated that TLR-induced B cell proliferation is defective in AD-HIES patients. (Figure legend: results are expressed as mean ± SD; healthy controls (Hc, n = 12 each) and AD-HIES patients (Pt, n = 6 each); *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.)
Defective TLRs-Induced B Cell Activation in AD-HIES Patients
CD40, CD80, CD86, and MHC II are markers closely related to B cell activation, and evaluating their expression helps to assess B cell function in the immune system. It was reported that B cells can be activated upon TLR stimulation, with upregulation of CD40 as well as costimulatory molecules including CD80, CD86, and HLA-DR on the B cell surface (18,19). As reported before, the expression of CD40, CD80, CD86, and HLA-DR on the B cell surface was significantly upregulated upon TLR stimulation in healthy controls. However, in AD-HIES patients only the expression of CD86 and CD40 on the B cell surface was upregulated upon TLR stimulation, while the expression of CD80 and HLA-DR was not affected (Figure 2). Moreover, compared with healthy controls, the TLR-induced upregulation of CD40, CD80, and CD86, but not MHC II, was significantly decreased on the B cell surface in AD-HIES patients, suggesting defective TLR-induced B cell activation in AD-HIES patients.
Defective TLR-Induced Intracellular IgG, IgM, and IL-10 Secretion in B Cells From AD-HIES Patients
Antibody secretion is the most important output of the adaptive immune system and a sign of B cell differentiation and activation. Recently, defective TLR9-induced IgG secretion by PBMCs was reported in AD-HIES patients (16). In the present study, we further examined TLR-induced IgG and IgM secretion in B cells from AD-HIES patients. Our results showed that, compared with healthy controls, AD-HIES patients had significantly decreased TLR-induced IgM and IgG secretion (Figure 3). B cells are capable of producing cytokines, depending on their differentiation state and activation conditions. TLR-induced IL-6 and IL-10 secretion in B cells from AD-HIES patients was determined in this study. As shown in Figure 4, R848 and CpG could significantly increase intracellular IL-10 secretion in B cells from healthy controls (Figures 4A,C), but they had little effect on IL-6 secretion (Figures 4B,D). Of note, compared with age-matched healthy controls, defective TLR-induced IL-10 secretion in B cells was observed in AD-HIES patients. These results indicated that STAT3 might be involved in TLR-induced IgG and IgM secretion, as well as IL-10 secretion, in B cells.
Defective TLR-Induced B Cell Class Switch in AD-HIES Patients
Immunoglobulin isotype switching occurs after activation of B cells and is a crucial step toward functional antibody secretion following B cell proliferation. Autosomal dominant hyper-IgE syndrome patients were previously reported to have significantly decreased CD27+ memory B cells, including both CD27+IgD− class-switched memory B cells and CD27+IgD+ non-class-switched memory B cells (20). The B cell subset distribution observed in the present study was consistent with the previous reports (Figure 5). The results showed that CpG stimulation significantly increased the percentage of CD27+IgD− class-switched memory B cells in healthy controls. However, no significant increase in CD27+IgD− class-switched memory B cells was observed in AD-HIES patients after CpG stimulation, which indicated that STAT3 might be involved in B cell antibody isotype conversion (Figure 5). R848 stimulation, which had only a small effect on the B cell isotype switch, could be seen as a negative control (Figure 5).
TLRs-Induced B Cell Apoptosis Was Not Affected in AD-HIES Patients
Induction of B cell apoptosis and its regulation are likely to play important roles in humoral immunity. Apoptosis and necrosis of B cells after TLR stimulation were analyzed in AD-HIES patients and healthy controls in this study. As shown in Figure 6, R848 and CpG decreased the apoptosis of B cells in both AD-HIES patients and age-matched healthy controls. However, there was no significant difference in B cell apoptosis and necrosis between the two groups, which indicated that B cell apoptosis is not affected in AD-HIES patients (Figure 6).
DISCUSSION
Autosomal dominant hyper-IgE syndrome with STAT3 deficiency is an extremely rare primary immunodeficiency disease, with a prevalence of nearly 0.64-1/1,000,000 (1). One of the most striking clinical features of patients with LOF STAT3 mutations is recurrent infection. Therefore, it is easy to deduce that the STAT3 signaling pathway plays a vital role in the human immune system. Remarkably, the decreased memory B cells reported in AD-HIES patients shed light on the role of STAT3 in B cell development and function. Previous research has shown that STAT3 acts downstream of DOCK8 in TLR9-mediated B cell proliferation and IgG secretion. However, the TLR-induced B cell response in AD-HIES patients had not been fully explored. In the present study, we demonstrated that STAT3 participates in TLR7/9-induced B cell responses, including proliferation, activation, IgM/IgG secretion, IL-10 secretion, and B cell class switching.
(Figure 5 legend: the results are expressed as the mean percentage of CD27+IgD− cells among CD19+ B cells ± SD; healthy controls (Hc, n = 12 each) and AD-HIES patients (Pt, n = 6 each); *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.)
Anti-human IgM antibodies can bind to IgM receptors expressed on the surface of the B cell membrane and activate cross-linking of surface receptors, signal transduction, and antigen presentation. It has been reported that sCD40L acts similarly to CD40L and can help activate B cells (21). The CD40-CD40L signaling pathway provides the second signal for B cell activation, which is crucial for the growth, differentiation, and proliferation of B cells (21). A wide variety of studies have shown that sCD40L and anti-human IgM promote B cell activation induced by R848 and CpG (22)(23)(24). It has been reported that patients with DOCK8 mutations have defects in CpG-induced B cell proliferation, in which STAT3 acts downstream of DOCK8 (16). In this study, we further demonstrated that, in addition to CpG, R848 could also induce B cell proliferation both in healthy controls and in AD-HIES patients. However, compared with healthy controls, the TLR-induced B cell proliferation was significantly defective
in AD-HIES patients. These results indicated that STAT3 participates in TLR-induced B cell proliferation. Apart from the STAT3 signaling pathway, other pathways have also been reported to be involved in TLR-induced B cell proliferation. For example, Fruman et al. reported that the p85α subunit of phosphoinositide 3-kinase (PI3K) is an important enzyme for B cell differentiation and proliferation (25). Defects in B cell proliferation through the TLR-MyD88-STAT3 signal in AD-HIES patients might be related to p85α in PI3K, which is worth further exploration.
It is known from previous studies that CD80 and CD40 expression on the surface of naive B cells from healthy controls can be significantly up-regulated after stimulation with CpG (6). In this study, we showed that CD80, CD86, and CD40 expression on B cells from AD-HIES patients induced by R848 and CpG was significantly lower than that in healthy controls, which implies a connection between STAT3 and TLR-induced B cell activation. Moreover, given the well-demonstrated role of CD80 and CD86 in T-B cell cooperation, we can speculate that the TLR-STAT3 signaling pathway might also be involved in the interaction between B cells and T cells. In AD-HIES patients, disordered Th17 differentiation results in defective IL-17 secretion and abnormal neutrophil proliferation and chemotaxis, which makes patients vulnerable to Candida infection (26,27). The impaired CD80 and CD86 expression on B cells in AD-HIES patients might be related to their impaired CD4+ T cell differentiation. CD40 binds to CD40L, which is expressed on T cells, providing a second signal for B cell activation. CD40 expression on the surface of B cells reflects the degree of B cell activation, and STAT3 mutation may affect B cells' second signal transduction by down-regulating CD40 expression. HLA-DR expression was significantly up-regulated after stimulation in healthy controls, but not in patients. Previous studies reported a similar result, namely that HLA-DR expression on naive B cells in healthy controls was significantly increased by CpG (6). Nevertheless, the TLR-STAT3 signaling pathway showed no close correlation with B cell antigen presentation.
IgM and IgG secretion by healthy human B cells increased after stimulation with R848 and CpG, whereas AD-HIES patients showed no significant changes in IgM and IgG secretion under the same stimulation. Similar results were obtained in the studies of Wei Jiang and Mark Glaum, which showed that R848 and CpG can stimulate IgM and IgG secretion by naive B cells in healthy controls (6,8). These results indicate that the functional deficiency of antibody secretion in patients is not attributable to their large population of naive B cells. Therefore, the TLR-STAT3 signaling pathway might be related to the secretion of IgM and IgG. Moreover, Giardino reported that patients with NF-κB deficiency cannot secrete IgG antibodies after CpG stimulation, which shows that the TLR9-NF-κB signaling pathway is also involved in B cell antibody secretion (28). Previous studies showed that total serum IgM and IgG concentrations in AD-HIES patients were similar to those of normal people (1); however, the detection method in this study differs from most studies. The lower secretion of IgM and IgG by patients' cells in this study might be due to abnormal transmission through the TLR9-STAT3 signaling pathway, which weakens the immune response, slows antibody secretion, and shifts the peak of antibody secretion, leading to patients' repeated Candida infections.
In our study, IL-6 secretion increased slightly in healthy controls after CpG stimulation, which is associated with NF-κB and STAT3 phosphorylation. STAT3 protein expression and STAT3 phosphorylation were significantly reduced in IL-6 knockdown mice (29). At the same time, other studies have found that IL-6 secretion was reduced in samples from AD-HIES patients and that STAT3 deficiency led to an impaired IL-6 signaling pathway (30). The STAT3 mutation had no significant effect on IL-6 secretion by B cells but inhibited total serum IL-6 secretion. The reason might be that IL-6 in the patients' serum is mainly produced by T cells; hence, patients showed no significant change in B cell IL-6 secretion. The amount of IL-10 secreted by healthy controls' B cells was very low. IL-10 secretion increased significantly upon induction with R848 and CpG in the control group, but not in patients. Therefore, the TLR-MyD88-STAT3 signaling pathway might be involved in IL-10 secretion by B cells. IL-10 is mainly secreted by Th2 cells and other immune cells, including B cells, dendritic cells, and NK cells (31). STAT3 deficiency causes up-regulation of Th1 cytokines and down-regulation of the anti-inflammatory factor IL-10 (32). IL-10 secretion by dendritic cells and Th17 cells can also be regulated by the TLR9-MyD88-ERK-STAT3 signaling pathway, which is consistent with the results of our study.
CD27 is expressed on the surface of memory B cells. Two types of BCRs, mIgM and mIgD, are expressed on the surface of mature primary B cells; the mIgD of activated or memory B cells gradually disappears. After stimulation with CpG, CD27+ IgD− B cells were significantly increased in healthy controls. R848 stimulation did not induce any class switching in healthy controls, which made that part of the experiment less informative. However, we observed that CpG stimulation did induce class switching in healthy controls, suggesting that the TLR9 pathway might contribute to the B cell isotype switch in healthy controls. In contrast, stimulation with the TLR9 agonist did not trigger B cell class switching in HIES patients with STAT3 mutations, indicating that inhibition of STAT3 might be involved in the TLR9-induced B cell isotype switch. Therefore, R848 stimulation, which had little effect on the B cell isotype switch, could be seen as a negative control. Studies have shown that CpG can lead to B cell antibody isotype switching through innate immune pathways (13,33), or through up-regulation of MyD88 expression (13). Further studies on TLR9-induced B cell isotype switching remain to be done. Early studies showed that the number of memory B cells in AD-HIES patients was significantly lower than in normal people (1,20), which was also confirmed by the significantly lower number of switched memory B cells in patients in our study.
After CpG stimulation, B cell apoptosis in healthy humans was significantly reduced, whereas R848 stimulation had no such effect. In AD-HIES patients, both R848 and CpG significantly reduced B cell apoptosis. A study by Wei Jiang also reported that R848 and IL-4 significantly reduced B cell apoptosis (6). However, the STAT3 mutation did not affect B cell apoptosis, suggesting that other TLR signaling pathways might participate in rescuing B cells from apoptosis in patients with STAT3 mutations.
Despite having mutations in different domains of STAT3, all patients showed similar defects in ex vivo B cell function. It has been reported that these mutations are all LOF STAT3 mutations and lead to impaired STAT3 signaling (34,35), which might explain their similar effects on patients' B cell function.
In conclusion, we found that B cell proliferation; CD80, CD86, and CD40 expression; IgG and IL-10 secretion; and switched memory B cell subsets were defective in AD-HIES patients stimulated with TLR7 and/or TLR9 agonists. Using AD-HIES patients' cells, we established the connection between the TLR-STAT3 signaling pathway and B cell function from a clinical perspective. Therefore, an abnormal TLR-MyD88-STAT3 signaling pathway might participate in the pathogenesis of B cell dysfunction and a series of immune phenotypes in AD-HIES patients, although the detailed mechanism by which STAT3 mutation disrupts the TLR-MyD88-STAT3 signaling pathway needs further study. This signaling pathway might also be related to the pathogenesis of other primary immunodeficiency diseases, such as hyper-IgM syndrome and chronic granulomatous disease. Further research on the TLR-MyD88-STAT3 signaling pathway will help reveal each link in AD-HIES immune deficiency and support the development of targeted treatments to reduce the suffering of children with this disease.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Local Ethical Institute (Shanghai Children's Medical Center, Shanghai Jiaotong University). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
RG, JW, YJ, and TC contributed to the design and implementation of the research, patients' organization, analysis of the results, and the writing of the manuscript. All authors contributed to the article and approved the submitted version.
Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
Generative Adversarial Nets (GANs) are one of the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are designed to minimize the Kullback-Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of the generator. The alpha divergence can be regarded as a generalization of the Kullback-Leibler divergence, Pearson χ2 divergence, Hellinger divergence, etc. Our Alpha-GAN employs a power function as the form of the adversarial loss for the discriminator, with two order indices as hyper-parameters. These hyper-parameters make our model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability and the quality of generated images. Extensive experiments with Alpha-GAN are performed on the SVHN and CelebA datasets, and the evaluation results show the stability of Alpha-GAN. The generated samples are also competitive compared with state-of-the-art approaches.
Introduction
In recent years, deep learning has achieved remarkable performance in theoretical research and in many application scenarios, such as image classification [1,2], natural language processing [3], and speech recognition [4]. For high-dimensional data generation, deep neural network-based generative models, particularly Generative Adversarial Networks (GANs) [5], have quickly become the most powerful models in unsupervised learning of image generation. Compared with previous strategies, GANs have the power to generate high-resolution and vivid images. In a nutshell, GANs provide a framework to learn the implicit distribution of a given target dataset X. There are typically two networks in a GAN architecture: a generator network G(·) that produces vivid images, and a discriminator network D(·) that outputs scores on input images. The generator G(·) takes as input a latent noise z sampled from a given prior distribution. Among GAN variants, LSGAN [9] minimizes the Pearson χ2 divergence and adopts the least-square loss for the critic output. In [10], the authors proposed a new mechanism, f-GAN, to give an elegant generalization of GAN and extended the value function to arbitrary f-divergences. Compared with the original GANs, another contribution of f-GAN is that only single-step back-propagation is needed; thus, there is no inner loop in the algorithm. However, the model collapsing problem remains unsolved by f-GAN [11].
To overcome the existing problems of GANs, one effective mechanism was proposed by Arjovsky et al.: the Wasserstein-GAN (WGAN) [12]. There are two main improvements in WGAN: a new objective based on the Wasserstein distance (or Earth Mover distance) and the weight clipping method. The Wasserstein distance has been proved to have better convergence properties than the Kullback-Leibler and Jensen-Shannon divergences [12]. WGAN applies an approximation of the Wasserstein distance on the discriminator to estimate the distance between real and fake samples, using the Kantorovich-Rubinstein duality to formulate the optimization. The original WGAN also requires the discriminator to be a 1-Lipschitz continuous function, which is achieved by clamping the model weights within a compact space (W = [−0.01, 0.01]^l), the so-called weight clipping method. With these two methods improving model stability, WGAN can be trained until optimality, making the model less prone to collapse. However, this approach may lead to undesired behavior in practice [13]. To alleviate this effect, Gulrajani et al. proposed WGAN with gradient penalty (WGAN-GP) [13]. WGAN-GP introduces a soft penalty for violations of the 1-Lipschitz constraint, which guarantees high-quality image generation at the cost of increased computational complexity. Recently, many researchers have also paid attention to optimizing the network architecture to improve training stability. For example, SN-GANs [14] introduce a novel weight normalization technique called Spectral Normalization to stabilize the training process. Although these approaches effectively improve training stability, they provide less flexibility to strike a balance between training stability and the desired quality of generated images.
In this paper, we propose Alpha Generative Adversarial Networks (Alpha-GANs) to train the generative model, leveraging the alpha divergence from information geometry [15]. We note that there is another so-called Alpha-GAN in [16]; however, there is no direct connection between the two models. The Alpha-GAN in [16] is an application of GANs to natural image matting, while our proposed Alpha-GAN provides a better objective function for the GAN training scheme. Previous work has addressed the advantages of the alpha divergence and generalized it to many domains [17,18]. The alpha divergence can be seen as a generalization of multiple divergence functions, including the Kullback-Leibler divergence [19], the reverse KL divergence, the Pearson χ2 divergence, and the Hellinger distance; each corresponds to a unique value of alpha in the alpha-divergence family. Suppose a real-world data distribution is denoted by p_real; the goal of the generator network G is to recover p_real through its generated distribution p_fake, such that p_fake is as close to p_real as possible. However, keeping the balance between G and D is a tricky problem in existing approaches. The key contribution of our method is a new value function that turns the alpha divergence into a tractable optimization problem for the generative model. Our new formulation involves two order hyper-parameters for D(x_real) and D(x_fake), respectively, which control the trade-off between p_real and p_fake in the training process. Moreover, we provide a theoretical analysis that suggests effective guidance for selecting these hyper-parameters to strike a balance between training stability and the desired quality of generated images.
The main contributions of our work can be summarized as follows: (1) We derive a new objective function for GANs inspired by the alpha divergence. Our new formulation preserves the order parameters of the alpha divergence, which can be further manipulated during training. We note that f-GAN gives another generalized form of the alpha divergence; compared with the derivation in f-GAN, our Alpha-GAN has a more tractable formulation for optimization. (2) The proposed adversarial loss function of Alpha-GAN has a formulation similar to Wasserstein-GAN. It introduces two hyper-parameters to strike a balance between G and D, and it can converge stably without any 1-Lipschitz constraints. Thus, Alpha-GAN can be regarded as an upgrade of WGAN, and the experimental results show the advanced performance of our model. (3) Through our new value function, we identify a trade-off between training stability and the quality of generated images in GANs; these two properties can be directly controlled by adjusting the hyper-parameters in the objective.
The rest of this paper is organized as follows. Section 2 briefly reviews the background of alpha divergence and some state-of-the-art architectures of GANs. More details about our proposed Alpha-GAN are formally stated in Section 3. In Section 4, experimental results are shown. Finally, we conclude our work in Section 5.
Background and Related Work
In this section, we introduce the necessary background on entropy and the alpha-divergence family and explain their relationship to modern generative models. Then, we review some state-of-the-art GANs in the literature.
Entropy and Alpha Divergence
Before introducing the alpha divergence, we first review the concept of information entropy. Information entropy was proposed by Shannon and is an important definition in information theory. Given a random variable X with probability density p(x), the entropy is defined as H(X) = −∫ p(x) log p(x) dx. It can be seen that the closer p(x) is to the uniform distribution, the greater the corresponding entropy.
Although information entropy has few direct applications in machine learning, the cross-entropy, which is derived from the basic entropy, is widely used: H(p, q) = −∫ p(x) log q(x) dx. Cross-entropy is used to evaluate the difference between two distributions.
Kullback-Leibler (KL) divergence, also called relative entropy, is another method to measure the disparity between distributions, and it is generalized as the value function of the original GANs. Given two probability densities p and q of a random variable X, the KL divergence is defined as KL[p‖q] = ∫ p(x) log (p(x)/q(x)) dx. There is thus a simple relationship between entropy, cross-entropy, and KL divergence: KL[p‖q] = H(p, q) − H(p). The divergence function is a critical part of the overall GAN framework, since it measures the difference between the two data distributions p_real and p_fake. Regular GANs use the Kullback-Leibler divergence as the critic measurement, which previous studies have shown is not the optimal choice. In this work, we use the alpha divergence and derive a new objective for GANs in Equation (13). We first give a brief review of the alpha divergence upon which our Alpha-GAN model is based. Here, we mainly introduce two kinds of alpha divergence: the Amari-alpha divergence [15] and the Rényi-alpha divergence [20]. Considering two probability densities p and q of a random variable θ, these two forms of alpha divergence are defined for {α : α ∈ R \ {0, 1}} as follows:
• Amari-alpha divergence: D_A[p‖q] = (1/(α(1 − α))) (1 − ∫ p(θ)^α q(θ)^{1−α} dθ);
• Rényi-alpha divergence: D_R[p‖q] = (1/(α − 1)) log ∫ p(θ)^α q(θ)^{1−α} dθ.
These divergences are related to the Chernoff α-coefficient c_α(p : q) = ∫ p(θ)^α q(θ)^{1−α} dθ [21]. Please note that under this parameterization the Kullback-Leibler divergence KL[p‖q] is recovered in the limit α → 1, while α → 0 of the Amari-alpha divergence leads to the reverse Kullback-Leibler divergence [22,23]. We present some other special cases of the Amari-alpha-divergence family in Table 1. With some simple manipulations, the Amari-alpha divergence can be regarded as the final criterion of our Alpha-GAN. We also include some useful properties of the Amari-alpha divergence [23] in the following:
Theorem 1 (Convexity). The alpha divergence D_A[p‖q] is a convex function with respect to both p and q, so for any pairs of distributions (p1, q1), (p2, q2) and any 0 ≤ λ ≤ 1, we have D_A[λp1 + (1 − λ)p2 ‖ λq1 + (1 − λ)q2] ≤ λ D_A[p1‖q1] + (1 − λ) D_A[p2‖q2].
Theorem 2 (Strict Positivity).
The alpha divergence is a strictly positive function, D_A[p‖q] ≥ 0, and it has a unique minimum D_A[p‖q] = 0 if and only if p = q.
We will show how to adopt Amari-alpha divergence in our proposed GAN objective in Section 3. An effective guidance to select appropriate hyper-parameters for the rich alpha-divergence family is also provided.
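As a quick numerical sanity check of the Amari-alpha divergence and the properties above (strict positivity, and the p ↔ q duality under α ↔ 1 − α that follows directly from the definition), the formula can be evaluated for discrete distributions. A minimal NumPy sketch; the helper name `amari_alpha` is ours, not from the paper:

```python
import numpy as np

def amari_alpha(p, q, alpha):
    """Amari-alpha divergence for discrete distributions:
    D_A[p || q] = (1 - sum_i p_i^alpha * q_i^(1-alpha)) / (alpha * (1 - alpha)),
    defined for alpha not in {0, 1}."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    c = np.sum(p ** alpha * q ** (1.0 - alpha))  # Chernoff alpha-coefficient
    return (1.0 - c) / (alpha * (1.0 - alpha))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

d_half = amari_alpha(p, q, 0.5)   # Hellinger-type case, symmetric in p and q
d_self = amari_alpha(p, p, 0.5)   # zero when the distributions coincide
```

Since the factor α(1 − α) and the coefficient c_α are both invariant under swapping (p, α) with (q, 1 − α), the duality D_A^(α)[p‖q] = D_A^(1−α)[q‖p] holds exactly, which the grid of small examples above confirms.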
Generative Adversarial Networks
In recent years, generative adversarial networks (GANs) have been among the most attractive architectures in machine-learning systems. Since GANs were proposed by Goodfellow et al. in 2014 [5], numerous variants have been produced by researchers. Most of them preserve the initial framework of the vanilla GAN, which consists of two neural networks: a generator G and a discriminator D, which learn adversarially from each other during the training phase. Figure 1 illustrates the schematic diagram of vanilla GANs, where G generates a fake image from a random latent code z ∼ p_z and D learns to distinguish between real and fake samples. The key idea of GANs is usually formalized as a game between the two players with a min-max objective; the aim is to obtain the optimal generator, which can generate high-resolution, vivid images similar to natural images, by properly tuning the hyper-parameters. Next, we briefly review some popular objectives used to train a generative model.
Vanilla GAN
The original GAN proposed in [5] can be defined as a contest between the two networks G and D. The min-max objective is formally defined as

min_G max_D V(D, G) = E_{x∼p_real}[log D(x)] + E_{z∼p_z}[log(1 − D(G(z)))],

where x stands for the input image, and p_real and p_fake represent the distributions of the real-world and the generated data, respectively. This objective follows the formulation of the binary cross-entropy loss. The outputs of the discriminator D(·) are confined within [0, 1] by a sigmoid activation unit. At the optimal discriminator, the final critic value function of the vanilla GAN reduces to the Jensen-Shannon divergence between p_real and p_fake (up to an additive constant). The above min-max optimization is a popular mechanism in deep generative models; however, this model suffers from the problem of unbalanced training between the two neural networks.
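The claim that the inner maximization of the vanilla GAN objective yields the Jensen-Shannon divergence can be verified numerically for discrete distributions by plugging in the optimal discriminator D*(x) = p_real(x)/(p_real(x) + p_fake(x)). A NumPy sketch (helper names are ours):

```python
import numpy as np

def js_divergence(p, q):
    # Jensen-Shannon divergence (natural logarithm)
    m = 0.5 * (p + q)
    kl = lambda r, s: np.sum(r * np.log(r / s))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p_real = np.array([0.2, 0.3, 0.5])
p_fake = np.array([0.4, 0.4, 0.2])

# Optimal discriminator for a fixed generator
d_star = p_real / (p_real + p_fake)

# Value of the objective at D = D*:
# E_real[log D*] + E_fake[log(1 - D*)] = 2 * JS(p_real || p_fake) - log 4
value = np.sum(p_real * np.log(d_star)) + np.sum(p_fake * np.log(1.0 - d_star))
```

The identity follows from expanding KL(p_real ‖ m) + KL(p_fake ‖ m) with m = (p_real + p_fake)/2, which contributes exactly the constant 2 log 2.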
LSGAN
One of the GAN variants is LSGAN [9]. Compared with vanilla GANs, LSGANs substitute the binary cross-entropy loss with a least-square loss, which has better properties for optimization and is less likely to saturate. The LSGAN objectives are

min_D V(D) = (1/2) E_{x∼p_real}[(D(x) − 1)²] + (1/2) E_{z∼p_z}[(D(G(z)))²],
min_G V(G) = (1/2) E_{z∼p_z}[(D(G(z)) − 1)²],    (9)

where z ∼ p_z is the input latent noise of the generator. Minimizing this objective corresponds to minimizing the Pearson χ2 divergence, which LSGAN adopts as its decision criterion. Nowozin et al. proposed a new mechanism, f-GAN, to give an elegant generalization of GAN and extended the value function to arbitrary f-divergences, including the χ2 divergence [10]. However, model collapsing still occurs for LSGAN and f-GAN.
Wasserstein-GAN
To further enhance the stability of GANs, Arjovsky et al. applied the Earth Mover (also called Wasserstein-1) distance, which measures the optimal transport cost between two distributions [12]. The Wasserstein distance is defined as

W(P_r, P_g) = inf_{γ∈Π(P_r, P_g)} E_{(x,y)∼γ}[‖x − y‖],

where Π(P_r, P_g) represents the set of all joint distributions whose marginals are P_r and P_g. Wasserstein GANs employ the Kantorovich-Rubinstein duality of the Wasserstein-1 distance to construct the value function

min_G max_D E_{x∼P_r}[D(x)] − E_{z∼p_z}[D(G(z))],

where D should be a 1-Lipschitz function, enforced by clipping the weight parameters within the numerical interval [−c, c]. Gulrajani et al. proposed WGAN with gradient penalty (WGAN-GP) [13], which introduces a soft penalty for violations of the 1-Lipschitz constraint and guarantees high-quality image generation at the cost of increased computational complexity. SN-GANs [14] introduce a novel weight normalization technique called Spectral Normalization to stabilize the training process. Although these approaches effectively improve training stability, they provide less flexibility to strike a balance between training stability and the desired quality of generated images.
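In one dimension, the Wasserstein-1 distance has a closed form as the L1 distance between the two CDFs, which makes both the primal definition and the Kantorovich-Rubinstein dual easy to check numerically: any 1-Lipschitz function f gives a lower bound E_p[f] − E_q[f] ≤ W1(p, q). A small NumPy sketch on a discrete grid (helper names are ours):

```python
import numpy as np

def wasserstein1_1d(p, q):
    """W1 between distributions supported on the grid 0, 1, ..., n-1,
    via the 1-D closed form: sum of |CDF_p - CDF_q| over the grid."""
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
grid = np.arange(len(p), dtype=float)

w1 = wasserstein1_1d(p, q)

# Kantorovich-Rubinstein duality: f(x) = x is 1-Lipschitz, so the difference
# of means is a valid lower bound on W1 (and happens to be tight here,
# because CDF_p - CDF_q does not change sign in this example).
dual_bound = abs(np.dot(grid, p) - np.dot(grid, q))
```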
Proposed Method
We introduce our Alpha-GAN, a novel architecture of generative model based upon the minimization of alpha divergence [15]. The exact formulation of Alpha-GAN is defined in Equation (13) and we will show the relationship between Alpha-GAN and alpha divergence in Section 3.2.
Alpha-GAN Formulation
Inspired by the alpha divergence, we propose our new framework, the Alpha-GAN. In contrast to the original GANs, Alpha-GAN removes the sigmoid output layer of the discriminator network and substitutes the binary cross-entropy loss with our power-function formulation. The proposed method introduces two more hyper-parameters compared to WGAN. Specifically, the Alpha-GAN model solves the following optimization problem:

min_G max_D V(D, G) = E_{x∼p_real}[|D(x)|^a] − E_{z∼p_z}[|D(G(z))|^b].    (13)

Please note that a and b are the two order indices for D(x) and D(G(z)), respectively; they are hyper-parameters introduced to balance the emphasis on D(x) and D(G(z)) during the training process. To enhance convergence stability, our proposed method only considers a, b > 0, in order to avoid a term like 1/|D|^|a| appearing in the loss function when a ≤ 0 or b ≤ 0: whenever the discriminator's output is smaller than 1, such a term would make the loss value extremely large, and accordingly the model would become less stable and harder to converge during the training phase. Another update is that we take the absolute value of the discriminator output; otherwise, the objective would admit a trivial solution when a < 1 or b < 1. Although the objective function of Alpha-GAN does not seem immediately related to the formulation of the alpha divergence in Equation (5), we give a detailed theoretical derivation of Alpha-GAN from the alpha divergence in Section 3.2. The training scheme of our Alpha-GAN is shown in Algorithm 1.
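For concreteness, the power-function adversarial losses can be sketched as follows. The exact form of the objective is our reading of the paper (a WGAN-style difference of |·|^a and |·|^b terms on the raw critic output), so treat this as an assumption rather than a definitive implementation; `d_real` and `d_fake` stand for batches of critic outputs on real and generated samples:

```python
import numpy as np

def alpha_gan_d_loss(d_real, d_fake, a=0.5, b=1.0):
    # The discriminator ascends E|D(x)|^a - E|D(G(z))|^b; we return the
    # negation so a standard minimiser can be used. Requires a, b > 0.
    assert a > 0 and b > 0, "a, b <= 0 would put 1/|D|^|a| terms in the loss"
    return -(np.mean(np.abs(d_real) ** a) - np.mean(np.abs(d_fake) ** b))

def alpha_gan_g_loss(d_fake, b=1.0):
    # The generator pushes the critic score on fakes up, as in WGAN.
    return -np.mean(np.abs(d_fake) ** b)
```

With a = b the discriminator loss reduces to a WGAN-like difference of (absolute) critic means, which matches the observation in Section 3.3 that the setting a = b behaves similarly to WGAN.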
In [10], f-GAN also provides a value function related to the alpha divergence. The authors generalize f-divergences to GAN objectives via a variational lower bound. The f-GAN objective with respect to the alpha divergence is expressed through an output activation g_f applied to V(x), where V(x) denotes the output of the last layer of the discriminator network; for different values of α in the alpha divergence, the activation g_f takes different formulations. The above objective has complex formulations and constraints, which makes it inconvenient for optimization in deep generative models. In addition, the severe model collapsing problem remains unsolved. In our proposed method, a simplified objective function is given with an induction process similar to the vanilla GANs, which yields a more elegant form to balance stability and quality. A detailed analysis of the derivation is given in the next section.
Theoretical Analysis
The original GAN model from Ian Goodfellow et al. proposed to minimize the Jensen-Shannon divergence, which can be written as a sum of KL divergences: JS(p‖q) = (1/2) KL(p ‖ (p + q)/2) + (1/2) KL(q ‖ (p + q)/2). Therefore, the final criterion of the original GAN is a KL-type distance between the distributions of the ground-truth images and the generated ones. However, many research results [12] show that the KL divergence is not a good objective for optimization. The alpha divergence employed in our approach can be seen as a generalization of the KL divergence, and we have already presented some of its basic properties in Section 2.1. Next, we show how Alpha-GAN is related to the alpha divergence of Equation (5). We first prove the form of the optimal discriminator D* for an arbitrary generator G.
Theorem 4.
For any fixed generator G and a < b, the optimal discriminator D* is

D*(x) = (a · p_real(x) / (b · p_fake(x)))^{1/(b−a)}.    (17)

Proof. To obtain the optimal D* in Equation (17), note that the objective for the discriminator D is to maximize

∫ (p_real(x) |D(x)|^a − p_fake(x) |D(x)|^b) dx.    (18)

As stated in Section 3.1, we only consider a, b > 0, and we keep this setting in the proof. For a < b, the integrand is concave in D on [0, ∞); taking its derivative with respect to D and setting it to zero yields the optimal D* in Equation (17). Since the optimal solution lies within [0, ∞), we take the absolute value of the critic output.
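The first-order condition in the proof can be checked numerically: for fixed densities p_real(x) and p_fake(x) at a point x, maximizing f(D) = p_real·D^a − p_fake·D^b over D ≥ 0 on a fine grid should land on the closed-form maximiser above (our reading of Equation (17); treat the exact constants as an assumption). A NumPy sketch:

```python
import numpy as np

# Pointwise discriminator objective f(D) = p_r * D**a - p_f * D**b with a < b
p_r, p_f, a, b = 0.7, 0.3, 0.5, 1.0

def f(d):
    return p_r * d ** a - p_f * d ** b

# Closed-form maximiser D* = (a p_r / (b p_f))^(1 / (b - a))
d_star = (a * p_r / (b * p_f)) ** (1.0 / (b - a))

# Independent check: brute-force grid search over [0, 10]
grid = np.linspace(1e-6, 10.0, 200001)
d_grid = grid[np.argmax(f(grid))]
```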
After that, we substitute the optimal D*(x) into the initial objective function defined in Equation (13) and reformulate it. If we denote α = b/(b − a), the resulting training criterion of the generator G can be seen as a linear transformation of the alpha divergence of order α between p_real and p_fake. Hence, our Alpha-GAN aims to reduce the distance measured by the alpha divergence, and we can manipulate the order α of the divergence function by adjusting the values of the hyper-parameters a and b.
Selection of Hyper-Parameters
Our Alpha-GAN uses two hyper-parameters, a and b, to control the update rates of D(x) and D(G(z)). The derivation in Equations (19) and (20) already shows that changing a and b is equivalent to adjusting the order of the alpha divergence. The relationship between a and b represents the model's preference for real or fake images. In practice, users can flexibly balance training stability and the desired quality of generated images according to their specific requirements. One key problem is how to select proper hyper-parameters to obtain the optimal model. Here, we give some useful suggestions on parameter selection:
• b/2 ≤ a ≤ b: To prove the optimal discriminator D* in Theorem 4, the hyper-parameters were set to a < b to satisfy the optimality condition. In our experiments with Alpha-GAN, we find that the range can be narrowed to b/2 ≤ a ≤ b, which helps determine the ratio between the two parameters in applications. Note that a = b can also lead to good generation results even though it does not satisfy the optimality condition in Theorem 4. We interpret this phenomenon by observing that Alpha-GAN with a = b has a formulation similar to WGAN, with the expectations of |D(x)|^a and |D(G(z))|^a playing the roles of the two WGAN critic terms; we therefore believe that the setting a = b shares some of WGAN's convergence properties.
• a, b ≥ 0.4: For the training stability of the Alpha-GAN model, we only consider a, b > 0 to avoid terms of the form 1/|D|^|a|, as stated in Section 3.1; otherwise, the loss becomes extremely unstable when D is close to 0. In our evaluation experiments, when we set a, b < 0.4, the model could not converge successfully and the generated images were very blurred. Small parameter values mean that the feedback gradients are multiplied by a small coefficient during back-propagation, making it hard for the generator and discriminator to learn useful information from the image data. Thus, we recommend setting a, b ≥ 0.4.
• a, b ≤ 1: This suggestion is also summarized from the experimental results and may not always be valid. In the image generation experiments of Alpha-GAN, we find that the loss curves fluctuate strongly when a, b > 1, although the quality of the generated images is not too bad. We believe the model will have difficulty converging when faced with more complex problems, such as larger image datasets.
One way to select proper hyper-parameters is to refer to the special cases of the alpha divergence shown in Table 1. For example, we observe that good convergence is obtained when the parameters satisfy 2a = b, which corresponds to the Pearson χ2 divergence in the alpha-divergence family: with α = b/(b − a) as denoted in Equation (20), setting 2a = b gives α = b/(b − b/2) = 2. Following the earlier suggestion a, b ≤ 1, we can further fix the parameters to a = 1/2, b = 1, for which the adversarial loss of Alpha-GAN becomes E_{x∼p_real}[|D(x)|^{1/2}] − E_{z∼p_z}[|D(G(z))|] (Equation (23)). Note that we do not claim that a = 1/2, b = 1, or the Pearson χ2 divergence, is optimal for Alpha-GAN; it is simply our observation that such parameter settings bring stable convergence in applications, so we offer this as reasonable advice for initializing a and b. In [9], the authors employ the Pearson χ2 divergence to derive LSGAN. Compared with the adversarial loss of LSGAN in Equation (9), our model in Equation (23) has a totally different formulation: Alpha-GAN is derived from a special case of the alpha divergence, not directly from the χ2 divergence. We also evaluate different hyper-parameter settings, and the effectiveness of our mechanism is shown in Section 4.
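The mapping from (a, b) to the recovered divergence order is a one-liner; under the relation α = b/(b − a) stated above (requiring a < b), any setting with 2a = b lands on α = 2, the Pearson χ2 case:

```python
def alpha_order(a, b):
    # Order of the recovered alpha divergence for hyper-parameters 0 < a < b
    assert 0 < a < b, "the derivation assumes 0 < a < b"
    return b / (b - a)
```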
Experiments
In this section, we conduct extensive experiments to evaluate the proposed method. We compare Alpha-GAN with some baseline models to show the competitive results of our approach. The algorithms in this section are all implemented with PyTorch [24]. The source code can be found at https://github.com/cailk/AlphaGAN.
Datasets
There are three datasets involved in our paper, including the handwritten digital dataset MNIST [25], and two real-world image datasets SVHN [26] and CelebA [27].
• MNIST: MNIST is a widely used database of handwritten digits, containing a training set of 60,000 images and a test set of 10,000 images. There are 10 labels, from '0' to '9', and all digits are normalized to 28 × 28. We use MNIST to evaluate the trade-off effect between the two hyper-parameters in the value function. • SVHN: SVHN is a real-world color image dataset obtained from house numbers in Google Street View images. Its training set contains 73,257 digital images of size 32 × 32. SVHN is similar to MNIST but presents a harder problem, since all digits are sampled from natural scenes. • CelebA: The last dataset used in this paper is CelebA, a large-scale face-attribute dataset with more than 200,000 images. All samples are 64 × 64 color celebrity images. CelebA is an important dataset for image generation, since it only contains face-attribute information and is easy for GANs to learn.
Model Architectures and Implementation Details
The architecture of our generator and discriminator is designed based on InfoGAN [28]. The generator network is fed a latent variable z ∼ N_128(0, I). It contains a fully connected layer that upscales the input to a 512 × 2 × 2 tensor, four transposed convolution layers (kernel size = 4 × 4, stride = 2, padding = 1), and a tanh activation layer. The discriminator network consists of 4 convolution layers that extract features from 32 × 32 inputs. The ReLU activation function is used after each layer in the generator network and Leaky-ReLU in the discriminator network. Batch normalization is employed in each layer of both networks.
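As a quick sanity check (ours, not from the paper's code release), the transposed-convolution hyper-parameters quoted above double the spatial resolution at each layer, taking the 512 × 2 × 2 feature map up to the 32 × 32 resolution the discriminator consumes:

```python
# Output-size arithmetic for the described generator: four transposed
# convolutions with kernel 4, stride 2, padding 1, starting from 2x2.

def tconv_out(size, kernel=4, stride=2, padding=1):
    """Spatial output size of a 2-D transposed convolution (no output_padding)."""
    return (size - 1) * stride - 2 * padding + kernel

size = 2  # after the fully connected layer: 512 x 2 x 2
sizes = [size]
for _ in range(4):
    size = tconv_out(size)
    sizes.append(size)

print(sizes)  # each layer doubles the resolution: [2, 4, 8, 16, 32]
```

With these settings the output size is exactly 2s for an input of size s, which is why four layers suffice to reach 32 × 32.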
For our Alpha-GAN in Equation (13), we remove the last activation layer of the discriminator, as in WGAN [12], and apply an abs function to the critic output. We employ the Adam optimizer [29] with a learning rate of 0.0002 and decay rates β1 = 0.5, β2 = 0.999 to train the generator network. The discriminator network is likewise trained with Adam at a learning rate of 0.0002. The total number of epochs is 50 for MNIST and SVHN, and 30 for CelebA. All experiments are conducted on a machine with one NVIDIA GTX 1080 GPU.
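For reference, a single Adam update with the stated hyper-parameters can be traced by hand; the following is a plain-Python sketch of the update rule, not the paper's training code (which uses torch.optim.Adam over network parameters):

```python
# One Adam step with lr = 0.0002, beta1 = 0.5, beta2 = 0.999 on a scalar
# parameter; grad and theta values are illustrative.

def adam_step(theta, grad, m, v, t, lr=2e-4, b1=0.5, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction (step t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=0.5, m=m, v=v, t=1)
print(round(theta, 6))  # -> 0.9998: first step moves by ~lr regardless of grad scale
```

Note that on the first step the bias-corrected update has magnitude close to the learning rate, which is what makes lr = 0.0002 a meaningful scale for both networks.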
Evaluation Metrics
Measuring the quality of generated images is usually trickier and more challenging than simply generating vivid images; it is almost impossible to directly define an objective on the space of natural and generated images. To measure the quality of generated samples, we employ the Fréchet Inception Distance (FID) proposed in [30], a commonly used metric for GANs that is considered more reliable than the Inception Score (IS) [31], another metric for evaluating deep generative models. Suppose two multivariate Gaussians X_real ∼ N(µ1, Σ1) and X_fake ∼ N(µ2, Σ2) model the 2048-dimensional activation outputs of the Inception-v3 [32] pool_3 layer for real and generated samples, respectively. The FID is then defined as FID = ‖µ1 − µ2‖² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}). FID compares the statistics of fake images to real ones instead of evaluating only the generated samples, and thus gives a more reliable measure of a GAN's quality. Lower FID is better: it means the real and generated samples are more similar, as measured by the distance between their activation distributions.
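To make the metric concrete, here is a simplified sketch of the Fréchet distance between two Gaussians under the assumption of diagonal covariances, in which case the trace term Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}) reduces to a per-dimension sum of (√σ1 − √σ2)²; the real FID uses full 2048-dimensional covariances of Inception-v3 features:

```python
import math

# FID between two Gaussians with diagonal covariances (a simplification;
# mus are mean vectors, vars are the diagonal variance vectors).

def fid_diagonal(mu1, var1, mu2, var2):
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(var1, var2))
    return mean_term + cov_term

# Identical Gaussians give FID 0; shifting one mean coordinate by 1 gives 1.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # -> 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [1.0, 1.0]))  # -> 1.0
```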
The Influence of Hyper-Parameters
In our Alpha-GAN, we introduce two hyper-parameters a, b into the objective function, and we interpret them as controlling how strongly the model favors learning from the real and fake data distributions. To verify the influence of the values of a and b in Equation (13), we conducted extensive experiments on the MNIST dataset to demonstrate the trade-off between p_real and p_fake.
First, we test various parameter settings of Alpha-GAN to evaluate basic convergence for different values of a and b. The results are shown in Table 2, where a and b are the two parameters in the adversarial loss of Alpha-GAN. The symbol '√' denotes that the model with the corresponding setting converges normally and generates high-quality digit images; '-' means the quality of the samples generated by the corresponding model is slightly poor; '×' means the model does not converge and the generated samples are blurred. We also find that the model fails to converge when a < 0.4 or b < 0.4, so we recommend keeping both parameters at or above 0.4. In the theoretical analysis of Alpha-GAN, we assumed a < b to ensure the concavity of the objective function; indeed, almost all settings with a > b lead to poor model performance, except (0.6, 0.4), (0.8, 0.6) and (1.0, 0.8). Figure 2 shows the training-loss curves of the non-converging settings, indicating that these models do not converge during training; this also suggests that generation quality may degrade when handling larger datasets. It is worth noting that all models with satisfactory results have a parameter pair within the ratio b/2 ≤ a ≤ b, as suggested in Section 3.3. Our other piece of advice from the parameter-selection analysis is a, b ≤ 1; some models without this constraint can still converge, but their loss curves are not as good, as shown in Figure 3. Table 2 reports the convergence ability for the different parameter selections.
To show the effect of the hyper-parameters on Alpha-GAN more clearly, some generated results are illustrated in Figure 4. The parameter a is set to 0.3, 0.4, 0.5 and 0.6 in turn, while b is fixed at 1. An intuitive observation is that the quality of generated samples improves as a increases: the samples are very fuzzy and difficult to distinguish at a = 0.3, better at a = 0.4, and at a = 0.5 or 0.6 the model generates high-quality handwritten digits. As interpreted before, a and b represent the restraint levels of D(x_real) and D(x_fake), respectively, in Alpha-GAN. Decreasing a reduces the gradient feedback of the critic output on real data; in that case, the discriminator learns less from the ground-truth data, and the generated results lack diversity and become unreal. A natural follow-up question is: can a be made arbitrarily large to generate decent, recognizable samples? In our experiments we observe another property of Alpha-GAN: the loss curve becomes less stable as a or b increases. Especially when one of the parameters exceeds 1, the output loss becomes extremely large and the model is less likely to converge; for example, the final discriminator loss is beyond 1e10 in Figure 3c. It is therefore essential to strike a balance between training stability and the desired quality of generated images.
Generation Results
We further show generated results on the real-world datasets SVHN and CelebA, and compare our Alpha-GAN model with baseline approaches including WGAN and WGAN-GP. All models run with the same network architecture, with fine-tuning.
Comparison with Baseline Models
In this section, we conduct extensive experiments to compare our Alpha-GAN with several baseline generative models. The Fréchet Inception Distance is calculated for each generator trained on the CelebA dataset; as noted above, a lower FID score means a GAN generates samples closer to real data. We randomly sample 10,000 images with each GAN model and compute the corresponding FID scores against the ground-truth dataset of over 200,000 images. Figure 5 shows generated CelebA results for our model and its competitors. Figure 5a illustrates samples generated by WGAN without the weight-clipping method, and the results are poor: according to the theoretical analysis of WGAN, weight clipping ensures the 1-Lipschitz continuity of the discriminator and convergence stability, which explains the low quality of these images. Samples from the original WGAN are shown in Figure 5b; the results improve but are still not good enough. In Figure 5c, the results of WGAN-GP are of higher quality and clearly recognizable. Similarly, our Alpha-GAN generates competitive samples without applying any gradient penalty. Table 3 reports the Fréchet Inception Distance of our Alpha-GAN and several prominent GAN models on the CelebA and SVHN datasets; our proposed Alpha-GAN clearly outperforms WGAN and WGAN-GP.
Generated Results
We also evaluate our model on SVHN and CelebA with several different hyper-parameter settings; the generated samples are shown in Figure 6. Figure 6a-c illustrate sample images with a = 0.4, a = 0.5 and a = 0.8, respectively; as the value of a increases, the digit images become clearer and more recognizable. Figure 6d-f show generated results on CelebA with a = 0.4, a = 0.5 and a = 0.6, respectively. When the dataset becomes more complex and the network architecture goes deeper, increasing the value of a brings more instability to the results, as stated before.
Conclusions
In this paper, we propose a novel value function for the GAN framework using the alpha divergence, which can be regarded as a generalization of the Kullback-Leibler divergence. To improve Wasserstein GAN, our objective introduces two additional hyper-parameters that keep a balance during the training procedure. Moreover, we conduct a theoretical analysis for selecting appropriate hyper-parameters so as to control the information from p_data and p_g and maintain training stability. We also identify a trade-off between training convergence and generation quality: experimental results demonstrate that pushing for extremely high-quality images may bring instability to GANs. A novel mechanism for explicitly controlling these two properties is explored and outperforms previous works. In future work, we hope to extend Alpha-GAN to larger-scale datasets such as CIFAR-10 and ImageNet.
Author Contributions: L.C. proposed the main idea. L.C. and Y.C. performed the experiments to validate the results and wrote the manuscript. N.C. and W.C. reviewed and edited the manuscript. H.W. gave advice. L.C. and Y.C. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
A Secured Proxy-Based Data Sharing Module in IoT Environments Using Blockchain
Access and utilization of data are central to the cloud computing paradigm. With the advent of the Internet of Things (IoT), data sharing on the cloud has seen enormous growth, and with data sharing come numerous security and privacy issues. To ensure data confidentiality and fine-grained access control to data in the cloud, several studies have proposed Attribute-Based Encryption (ABE) schemes, with Key Policy-ABE (KP-ABE) being the prominent one. Recent works have, however, suggested that the confidentiality of data is violated through collusion attacks between a revoked user and the cloud server. We present a secured and efficient Proxy Re-Encryption (PRE) scheme that incorporates an Inner-Product Encryption (IPE) scheme, in which decryption of data is possible if the inner product of the private key, associated with a set of attributes specified by the data owner, and the associated ciphertext is equal to zero. We utilize a blockchain network whose processing node acts as the proxy server and performs re-encryption on the data. To ensure data confidentiality and prevent collusion attacks, the data are divided into two parts, with one part stored on the blockchain network and the other stored on the cloud. Our approach also achieves fine-grained access control.
Introduction
It has been estimated that there will be an enormous growth in the number of devices connected to the internet by 2030 [1], which will diminish the boundary between the physical and digital worlds [2]. The human populace is not the main driver of this growth; rather, it results from advances in wireless communication, embedded computing technologies, actuation and sensing that allow devices in a cyber-physical world to become connected entities. The Internet of Things (IoT) is expected to fundamentally transform human daily activities, thereby shaping human-to-machine (H2M), machine-to-machine (M2M) and human-to-human (H2H) interactions in the connected world. Services provided by the IoT that ensure safety can be thought of as real drivers towards a better world of connectivity, as expressed by the authors of [3]. The development of IoT systems and IoT services is a complex task and, in particular, a crucial activity. It is true that edge computing brings better service satisfaction to IoT devices. However, in our setting the data are stored on the cloud, and not much processing is done by the cloud itself: all processes are executed by the blockchain processing nodes, which have more computing power than the resource-constrained IoT devices. Moreover, the protocol works whether deployed on the cloud or at the edge, since the main focus of this paper is the security scheme. Owing to the resource constraints of IoT devices, the implementation of the security model, which accounts for most of the computation and processing, is carried out by the blockchain network, as it has sufficient processing power. Therefore, there are no specific hardware/software requirements for the resource-constrained IoT devices.
To summarize, our proxy re-encryption satisfies fine-grained access control in that users have access rights to different sets of data, which is made possible by the ABE scheme. Our scheme is also collusion resistant, as the cloud server and/or the proxy and a (revoked) user cannot collude to access data. This is made possible because the blockchain network is a decentralized system in which all processes (transactions) are monitored by every participant on the network, and are also recorded and stored in blocks. Furthermore, there is an appreciable level of trust between the data owner and the users due to the use of blockchain, as it ensures a trustworthy environment among the participants involved. The proxy is uni-directional: it transforms a ciphertext C into a ciphertext C′ in only one direction, but not in reverse.
The remainder of this paper is organized as follows. In Section 2, related works on the cryptographic primitives, IoT and blockchain are reviewed. In Section 3, we introduce the notations to be used in this paper, while the system model is formulated in Section 4. Our proposed scheme and its security model are presented in Sections 5 and 6, respectively. Implementation and performance analysis are presented in Section 7, while Section 8 provides a set of discussions. Section 9 concludes the paper.
Related Works
The secured sharing of data among several users via a cloud service provider is extensively researched in [13][14][15]. Mambo and Okamoto's [16] novel PRE scheme has been adopted as the technique to achieve this, and it was further extended by Blaze et al. [17] by basing their findings on the El-Gamal cryptosystem [18]. In their work, a proxy can transform a message encrypted under Alice's key into an encryption of the same message under Bob's key because it utilizes a re-encryption key. While effective data sharing can be achieved by these schemes by meeting some security requirements and properties, there is no enforcement of fine-grained access control on the shared data.
Attribute-based proxy encryption techniques [19][20][21][22] have therefore been adopted to enforce this. Both the ciphertext and the private key of the user are associated with an attribute set in the ABE scheme, and decryption is possible when there is a match between the sets of attributes of the private key and the ciphertext [8,23,24]. These approaches, nevertheless, only prevent the adversary from obtaining information about the encrypted message. Katz et al. [25] therefore presented an attribute-hiding scheme for a class of predicates. This was known as Inner-Product Encryption (IPE), and it preserves the confidentiality of the attributes associated with the ciphertext. Following that, a hierarchical IPE scheme using an n-dimensional vector space in bilinear maps of prime order was proposed by Okamoto et al. [26], and full security under the standard model was achieved. Park [12] then presented an IPE scheme that supports the attribute-hiding property and is secure under the Decisional Bilinear Diffie-Hellman (D-BDH) and decisional linear assumptions.
Du et al. [27] presented an efficient and scalable key management scheme for heterogeneous sensor networks. Their scheme utilizes the fact that there is a lower communication and computational cost when a sensor only communicates with a small portion of its neighbors. An Elliptic Curve Cryptographic (ECC) scheme is used to further improve key management, as it also reduces sensor storage requirement and energy consumption while achieving better security. Xiao et al. [28] surveyed the various techniques utilized in the key management for Wireless Sensor Networks (WSNs). Their survey paper looks at both the advantages and disadvantages of the various techniques.
It is realized that no key distribution technique is ideal to all the scenarios where the sensor networks are deployed, and therefore the technique being employed should meet the requirements of both the application in question and the resources of the individual sensor networks. The authors of [29] presented an effective key management scheme for heterogeneous sensor networks, which is quite similar to the work in [27]. Their work portrays how efficient the performance of their scheme is, and that it significantly achieves a better security than existing sensor network key management schemes. Du et al. in [30] presented the security issues in WSNs. Quite similar to the aforementioned sensor-related papers, they investigated schemes that achieve better security and also lower computational cost for the sensor networks.
Blockchain technology offers a suitable platform that can be used for numerous applications in medical care. Improving the security in medical data sharing and automating the delivery of health-related notifications are the massive potentials of this technology, and they are compatible with the Health Insurance Portability and Accountability Act (HIPAA) [31]. Several authors have provided blockchain health-related applications [32][33][34][35]. The authors of [32] determined the current challenges of Electronic Medical Record (EMR) systems and the potential they have in providing solutions to security challenges and interoperability, with the use of blockchain technology. Focus has been on the application of blockchain to Electronic Health Records (EHRs) to facilitate interoperability. Medrec, a prototype released by MIT, expresses a practical way of sharing healthcare data between EHRs and blockchain [33]. A secure and scalable access control system for confidential information sharing on blockchain was also presented by the authors of [34]. Their results portray the effectiveness of their system in instances where traditional methods of access control failed. Yue et al. designed a concept for an application that presents patients with the opportunity to grant access to information about their health records to designated individuals [35]. The authors of [36] proposed a novel protocol that achieves patient privacy preservation by applying the concept of blockchain in an eHealth platform.
An efficient data sharing platform among interested parties and the preservation of privacy are just a few of the opportunities blockchain technology offers. For blockchain to reach its maximum potential, it is essential to tackle one of the most important problems facing this technology: data access control. This work therefore places more emphasis on providing secured data access control in a data sharing environment. A blockchain processing node acts as a proxy and performs re-encryption on data given to a secondary user. Our system preserves data confidentiality and integrity, and avoids collusion attacks. Fine-grained access control is also achieved.
Preliminaries
In this section, we introduce some of the notation that will be used throughout this paper.
Bilinear Maps
Our protocol is based on bilinear maps [37]. Let G and G_T be two multiplicative cyclic groups of prime order p, and let g be a generator of G. A bilinear map e : G × G → G_T has the following properties:
1. Bilinearity: for all a, b ∈ Z_p and g, h ∈ G, e(g^a, h^b) = e(g, h)^ab, which can be computed efficiently.
2. Non-degeneracy: if g generates G and h also generates G, then e(g, h) generates G_T; in particular, e(g, h) ≠ 1, i.e., the map does not send all pairs in G × G to the identity in G_T.
3. Computability: there exists an efficient algorithm to compute the map e(g, h) for any g, h ∈ G.
Note that e(·, ·) is symmetric, since e(g^a, h^b) = e(g, h)^ab = e(g^b, h^a).
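The two defining properties above can be illustrated with a toy model that tracks only exponents: an element g^a of G is represented by a mod p, so the pairing simply multiplies exponents. This is purely illustrative and has none of the security of a real pairing group:

```python
# Toy symmetric pairing at the exponent level: g^a is represented by
# a mod p, and e(g^a, g^b) = e(g, g)^(ab) is represented by a*b mod p.
# NOT secure; for illustrating bilinearity and symmetry only.

p = 101  # toy prime group order

def pair(a_exp, b_exp):
    return (a_exp * b_exp) % p

a, b = 13, 42
# bilinearity: e(g^a, h^b) = e(g, h)^(ab)
assert pair(a, b) == (a * b) % p
# symmetry: e(g^a, h^b) = e(g^b, h^a)
assert pair(a, b) == pair(b, a)
print("bilinear and symmetric, value:", pair(a, b))
```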
Inner-Product Encryption (IPE)
The Inner-Product Encryption (IPE) scheme, as proposed in [12], is an attribute-based encryption technique in which both ciphertexts and private (secret) keys are associated with vectors. Access to, and decryption of, encrypted data is possible if and only if the inner product of the private key's vector v⃗ and the ciphertext's vector x⃗ is 0; that is, (v⃗ · x⃗) = Σ_{i=1}^n x_i · v_i mod p = 0. Let Σ be the set of attributes attached to particular encrypted data, involving an n-dimensional vector v⃗, and let F denote the predicate class defined by inner products over such vectors. Two n-dimensional vectors, x⃗ = (x_1, ..., x_n) and v⃗ = (v_1, ..., v_n), both belonging to the attribute set Σ, are used in the encryption and key-decryption phases, respectively.
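The decryption condition can be sketched directly; the vectors below are illustrative, not taken from the paper:

```python
# The IPE access predicate: decryption is allowed iff
# <v, x> = sum_i x_i * v_i mod p == 0.

p = 7919  # toy prime modulus

def ipe_predicate(v, x):
    return sum(vi * xi for vi, xi in zip(v, x)) % p == 0

# v encodes the private key's attribute vector, x the ciphertext's.
v = [1, 2, 3]
x = [3, 3, -3]                 # 1*3 + 2*3 + 3*(-3) = 0 -> decryption allowed
assert ipe_predicate(v, x)
assert not ipe_predicate(v, [1, 1, 1])   # inner product 6 != 0 -> denied
```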
We incorporate the rationale behind a proxy's re-encryption key (RE key) into this work by using the IPE scheme to transform a ciphertext associated with one vector into a new ciphertext associated with another vector that encrypts the same message (m ∈ M), while ensuring that no information about the encrypted data is revealed.
Attribute Based Encryption (ABE)
There are two main classifications of ABE schemes, namely Ciphertext Policy-Attribute Based Encryption (CP-ABE) [23] and Key Policy-Attribute Based Encryption (KP-ABE) [38]. In this paper, we make use of KP-ABE, as the data are encrypted by a set of attributes and the private keys of the users are associated with the access structure of KP-ABE. Thus, if the attribute of the encrypted data satisfies the access structure of the user's private key, decryption of the ciphertext can occur.
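As a minimal illustration of the KP-ABE matching step (the policy format here is hypothetical, not the construction used in this paper), a key's access structure can be checked against a ciphertext's attribute set as follows:

```python
# Toy KP-ABE access check: the user's key carries a policy tree of
# ("AND"/"OR", children) nodes whose leaves are attribute strings; the
# ciphertext carries a plain attribute set. Decryption is allowed iff
# the attributes satisfy the tree.

def satisfies(policy, attrs):
    op, children = policy
    hits = sum(
        (child in attrs) if isinstance(child, str) else satisfies(child, attrs)
        for child in children
    )
    return hits == len(children) if op == "AND" else hits >= 1  # OR: any child

policy = ("AND", ["physician", ("OR", ["cardiology", "oncology"])])
assert satisfies(policy, {"physician", "cardiology"})
assert not satisfies(policy, {"pharmacist", "cardiology"})
```

Real KP-ABE enforces this check cryptographically through secret sharing over the tree rather than a plain boolean test, but the satisfiability condition is the same.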
Proxy Re-Encryption (PRE)
The notion of "atomic proxy cryptography" is the basis for proxy re-encryption, which was first introduced by Mambo and Okamoto [16]. This scheme basically makes use of a semi-trusted proxy that transforms the ciphertext for Alice into a ciphertext for Bob, without actually knowing or gaining access to the plaintext. Popular, well-known proxy re-encryption schemes are the Blaze, Bleumer and Strauss (BBS) [17] and the Ateniese, Fu, Green and Hohenberger (AFGH) [39] schemes, which are based on El Gamal and Bilinear maps cryptographic algorithms, respectively. In this work, the blockchain processing node (a trusted entity) serves as the proxy, and performs re-encryption on the data.
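The uni-directional BBS-style transformation mentioned above can be sketched at the level of modular arithmetic; the parameters below are tiny, insecure, and purely illustrative:

```python
# Toy BBS-style proxy re-encryption over Z_p*: the proxy holds b/a mod (p-1)
# and turns a ciphertext under Alice's key a into one under Bob's key b
# without ever seeing the plaintext m.

p = 467                       # small prime; exponents live mod p - 1
g = 2

def inv(x, m):                # modular inverse (Python 3.8+)
    return pow(x, -1, m)

a, b = 5, 7                   # Alice's and Bob's secrets (coprime to p - 1)
k = 9                         # encryption randomness
m = 123                       # message in Z_p*

# Encrypt to Alice: (m * g^k, (g^a)^k)
c1, c2 = (m * pow(g, k, p)) % p, pow(g, a * k, p)

# Re-encryption key b/a mod (p-1); transforms (g^a)^k into (g^b)^k.
rk = (b * inv(a, p - 1)) % (p - 1)
c2_bob = pow(c2, rk, p)
assert c2_bob == pow(g, b * k, p)

# Bob strips g^k with his secret b and recovers m.
gk = pow(c2_bob, inv(b, p - 1), p)
recovered = (c1 * inv(gk, p)) % p
assert recovered == m
```

Note that rk = b/a only maps Alice-to-Bob; transforming in the reverse direction would require a/b, which the proxy does not hold, matching the uni-directional property claimed in this paper.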
Blockchain Network
Blockchain technology, originally proposed by Satoshi Nakamoto [40], acts as a shared, decentralized ledger to record transactions. Public, private and consortium blockchains are the three main types of blockchain. Public blockchains are predominantly used for decentralized networks that offer transparency, while private and consortium blockchains are preferred when more control and privacy are of the essence. Consensus and decentralization, key features of blockchain, are the reasons for using blockchain technology in our system. Moreover, our blockchain's processing node serves as the trusted proxy that performs the re-encryption on the data before they are given to the secondary user. Proof-of-work (PoW) and Practical Byzantine Fault Tolerance (PBFT) provide the security offered by this technology: they require the agreement of nodes before a block is added to the chain, which acts as a ledger for all transactions.
Blockchain has helped in the effectiveness and advancement of many industries. It is also capable of implementing smart contracts, which are programmable scripts that automatically execute actions based on pre-defined triggers. The smart contracts are called upon when a data user requests access to data. Prior to the data being sent to the cloud, the owner specifies how its data are to be used and gives the details to the blockchain network. A processing node then embeds the contract into the data being given to the requestor. Our blockchain keeps logs of the transactions to achieve effective auditing.
Due to privacy concerns, our system utilizes the distributed ledger property of the blockchain, namely immutability, for authenticity and verifiability, and also the use of the consortium blockchain. Only authorized users can gain access to data. This enhances transparency for data owners, and allows them to effectively manage their data.
A block consists of a single event, spanning from the time a request is made to when the block is broadcast onto the blockchain. Consensus nodes are responsible for mining and reporting all activities. A block begins with a format field that uniquely describes the block, followed by a block size and then a block header, which is hashed with sha256(sha256()) as implemented in Bitcoin headers [40]. The block size field records the size of the block, and the header ensures immutability: changing a block header in order to falsify a piece of information requires changing all headers starting from the genesis (parent) block.
A block header also contains the version number which indicates the validation rules to follow. The previous block's hash is also contained in the header. A timestamp is also included in the header and it indicates when the block was created. A target difficulty, which is a value that indicates how processing is achieved by the consensus nodes, is also found in the header. This makes processing difficult for malicious nodes but solvable by verified consensus nodes. There is also an arbitrary number generated by the consensus nodes, which modifies the header hash in order to produce a hash below the target difficulty. This is called a nonce. A transaction counter is found in the block, whose function is to record the total number of transactions in the entire block. The transaction is made up of the consensus transaction and the user transaction. Each type comprises a timestamp and the data. A block locktime defines the structure for the entire block. This is a timestamp that records the last entry of a transaction as well as the closure of a block. When all conditions are met, the block is then broadcast onto the blockchain network.
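The header-hashing and nonce-search process described above can be sketched as follows; the field layout and the toy difficulty prefix are illustrative, not Bitcoin's exact wire format:

```python
import hashlib
import json

# Double-SHA256 over a serialized header, plus a nonce search under a
# toy target difficulty (hash must start with "00").

def header_hash(header):
    raw = json.dumps(header, sort_keys=True).encode()
    return hashlib.sha256(hashlib.sha256(raw).digest()).hexdigest()

def mine(header, difficulty_prefix="00"):
    nonce = 0
    while True:
        header["nonce"] = nonce
        digest = header_hash(header)
        if digest.startswith(difficulty_prefix):  # below toy target
            return nonce, digest
        nonce += 1

header = {
    "version": 1,
    "prev_hash": "00" * 32,        # hash of the previous block
    "timestamp": 1700000000,       # fixed for reproducibility
    "tx_root": hashlib.sha256(b"transactions").hexdigest(),
    "nonce": 0,
}
nonce, digest = mine(header)
assert digest.startswith("00")
```

Because the previous block's hash is part of the header, altering any earlier block invalidates every later header, which is the immutability argument made above.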
For scalability concerns, our blockchain stores hashes of transactions. Transactions on this blockchain typically include data requests, data processing (encryption and/or re-encryption), and data access.
Problem Statement
We demonstrate a simple IoT file/data sharing scenario in a healthcare environment for the sake of clarity, where we consider a patient whose data can only be accessed by his/her physician, pharmacist or relatives. Patients' data are normally collected and collated by health sensors that are usually bound to them, and uploaded onto a cloud server after recording. Before a patient's medical data are outsourced to a cloud server, the patient encrypts their own data under a set of attributes, which indicates the access privilege on the data. The patient then gives the details of all authorized users to the blockchain's processing node. Thus, access to a patient's data can be possible only if the user satisfies the attribute set and also uses the private key related to that attribute set.
However, there may be an instance where a physician might share the patient's data, depending on the kind of ailment they are treating, with other healthcare professionals who are not in the same hospital and therefore have a different access policy on the data. It now lies of the proxy (blockchain processing node) to re-encrypt the patient's data under the patient's attribute set to the new attribute set in a way that does not reveal any information about the data and its corresponding attributes. This must also be done in an efficient and secured way. The model of our system is presented in Figure 1.
1. Data Owner: This is the entity (the patient in this case) whose data are to be accessed. Access is possible if and only if the private key of the data user corresponds to the attribute set specified by the data owner.
2. Data User: This is the entity who wants to make use of the data from the owner. Both the data owner and user(s) should be registered on the blockchain.
3. Cloud Server: This is the repository for the data from the owner. All encrypted files are sent to the cloud server (honest, but curious) through a secured communication channel.
4. Blockchain Network: This primarily consists of the following entities:
• Issuer: This entity registers the participants (data owner and users) on the blockchain network. It gives out membership keys to them, which serve as their identities (IDs).
• Verifier: The verifier, which also serves as an authentication unit, checks whether a user who makes an access request, or a data owner who uploads data onto the cloud, is actually a member of the blockchain network.
• Processing node: This is the heartbeat of the blockchain network. All processes (transactions) that occur on the network are performed by this entity. In this work, it also serves as the (trusted) proxy that oversees the re-encryption process.
• Smart contract center: This unit prepares the contract that binds how data are to be used.
The various processes that happen in the system model are described below:
1. The proxy generates a secret key SK and a public key P_pub, and hands the public key and access policy to the data owner; that is, the data owner is given {P_pub, H_access}.
2. The patient encrypts the data with the attribute set and sends the encrypted data to the cloud through a secured channel. The encrypted data are CT = {Enc M, x⃗}.
3. The data user makes a request for the data.
4. The proxy accesses the permission rights of the data users from the cloud server. The blockchain network, which also serves as a trusted authority, then gives the private key to the user according to the user's attributes.
5. Users can now access data from the cloud server.
6. The primary user is given PK_v⃗ while the secondary user is given PK_v⃗′. The proxy generates a re-encryption key REKey and transforms the policy set H → H′ for the secondary user, who wants the data shared by the primary user but holds a different access policy H′.
The Scheme
As in several security algorithms, our proposed scheme consists of the following algorithms: Setup, KGen, Encrypt, RKGen, ReEncrypt, and Decrypt. The IPE scheme, as presented in [12], is adopted in this work, and therefore most of the algorithms are the same; Setup, KGen, Encrypt and Decrypt have been previously presented in [12].
The assumption is made here that Σ = (Z_p)^n is the set of attributes bound to the data, where n is the dimension of the vectors x⃗ and v⃗, and p is the prime order of the group. For any vector v⃗ = (v_1, ..., v_n) ∈ Σ, each element v_i belongs to the set Z_p. The algorithms are as follows.
(P pub , SK) ← Setup (λ, n): With any security parameter λ ∈ Z + , the setup algorithm runs σ(λ) after which a tuple (p, G, G 2 , e) is obtained. A random generator g ∈ G, along with random exponents , found in Z p are all selected. A random element, g 2 ∈ G, is also selected. Furthermore, it selects a random number, Ψ ∈ Z p and obtains the set The setup algorithm then computes Now, the following notations are also given: The public P pub and secret SK keys are then, respectively, computed as: .., v n ), the algorithm selects random exponents λ 1 , . The composition of the various elements in the PK− → v is defined as follows: To encrypt a message M ∈ G T and a vector − → x = (x 1 , ..., x n ) ∈ (Z p ) under the public key P pub , the algorithm selects random elements {s i } 4 i=1 ∈ Z p and uses them to compute the ciphertext CT as follows: KGen algorithm is first called and a random element, l ∈ Z p , is selected. It then computes α, α δ 2 , α −δ 1 , α θ 2 , and α −θ 1 , where α = g l 2 . The Encrypt algorithm is then called to encrypt α under the vector − → x by utilizing Encrypt(P pub , − → x , α). The output is a ciphertext CT A . The RKGen algorithm then selects random exponents λ i 2 i=1 , r i , φ i n i=1 ∈ Z p and uses them to compute REKey− → v as follows: On input of the ciphertext CT and the re-encryption key REKey− → v , this algorithm first checks whether the attributes list of the user in REKey− → v satisfies the attribute set of the CT. If that is not the case, it returns ⊥; else, ∀i = {1, ..., n}, the algorithm first computes the following: After completing this computation, the algorithm then computes CT B as: recalling that A = g s 2 , B = g Ψs 1 , with s 2 = Ψs 1 .
The re-encrypted ciphertext CT′ therefore becomes the tuple (CT_B, CT_A). On input of a ciphertext and a private key PK_v, the decryption algorithm proceeds according to two cases. Case I: For a well-formed (original) ciphertext CT, the algorithm decrypts to output the message M. Correctness: Assume the actual vector x = (x_1, ..., x_n) was used in forming the ciphertext CT. Let β = D · e(A, K_A) · e(B, K_B). Expanding the product of pairings over i = 1, ..., n, all intermediate factors cancel, leaving only the term e(g, g)^{Ψ[λ_1 s_3 + λ_2 s_4](x · v)}. The message M can then be recovered whenever this term equals the identity, i.e., whenever x · v = 0. If x · v ≠ 0, the probability of the term being the identity is 1/p, since the exponents λ_1, λ_2, s_3, and s_4 are all chosen at random from Z_p. Case II: For the re-encrypted ciphertext (CT_B, CT_A), the deduction and correctness are as follows. We first decrypt CT_A; the output of this step is α iff x · v = 0. After recovering α, we compute the message M as M ← D · CT_B · e(g^{Ψ s_1}, α), as shown below.
= e(g, g_2)^{−s_2} · M · e(g^{s_2}, g_2) · e(g, g)^{Ψ[λ_1 s_3 + λ_2 s_4](x · v)} · e(g^{−Ψ}, α^{s_1}) · e(g^{Ψ s_1}, α).
Recalling that α = g_2^l, we have
= e(g, g_2)^{−s_2} · M · e(g, g_2)^{s_2} · e(g, g)^{Ψ[λ_1 s_3 + λ_2 s_4](x · v)} · e(g, g_2)^{−Ψ l s_1} · e(g, g_2)^{Ψ l s_1}
= M · e(g, g)^{Ψ[λ_1 s_3 + λ_2 s_4](x · v)},
which equals M iff x · v = 0. When x · v ≠ 0, the probability of the last factor being the identity is 1/p, since the exponents are all chosen at random from Z_p.
Security Model
Following the approach in [25], we prove that our scheme exhibits the attribute-hiding property. The adversary, A, and the challenger, C, engage in a series of games in our security model. Both A and C are, by assumption, given the attribute set Σ and the predicate class F beforehand. The security game is played over the vectors of the re-encryption process.
Initialize:
The adversary, A, outputs two vectors x, y ∈ Σ.
Setup: The challenger, C, runs Setup to obtain the public key P_pub and the secret key SK, after which A is given P_pub.
Query Phase 1: A adaptively issues private key queries for vectors v_i, with the restriction that no queried vector can decrypt the challenge ciphertext.
Challenge: A submits two messages M_0 and M_1; C flips a random coin and generates the challenge ciphertext under one of them.
Query Phase 2: Additional private key queries are made by A for additional vectors, subject to the same restrictions as stated above.
Guess: A outputs a guess of the coin. The restriction on key queries is enforced throughout all the query phases; were it not, for some vector v_i the adversary could obtain a private key PK_{v_i} and decrypt the challenge ciphertext using the private key corresponding to that vector. The restriction is, however, not required in the case where M_0 = M_1.
Security Proof
In proving the security of our scheme, we introduce a series of security games between the adversary and the challenger, as stated above. We also consider the case where there is a distinction between the two messages. As stated in the security model, the adversary is not permitted to make private key queries for vectors that would decrypt the challenge ciphertext.
Game 1: The challenge ciphertext is generated under (x, x) and M_0.
Game 2: The challenge ciphertext is generated under (x, x) and a random message R_x ∈ G_T.
Game 3: The challenge ciphertext is generated under (x, 0) and a random message R_x ∈ G_T.
Game 4: The challenge ciphertext is generated under (x, y) and a random message R_x ∈ G_T.
Game 5: The challenge ciphertext is generated under (0, y) and a random message R_x ∈ G_T.
Game 6: The challenge ciphertext is generated under (y, y) and a random message R_x ∈ G_T.
Game 7: The challenge ciphertext is generated under (y, y) and M_1.
We prove that Game 1 and Game 7 are indistinguishable to a polynomial-time adversary. This is achieved by proving the computational indistinguishability of the transitions between consecutive games. In particular, the indistinguishability between Game 1 and Game 2 also implies that Game 6 and Game 7 are indistinguishable, by the symmetry of the hybrid games [25].
Under the (t, ε) Decision Bilinear Diffie-Hellman (DBDH) assumption, Game 1 and Game 2 cannot be distinguished by an adversary running in polynomial time t with an advantage greater than ε. Assume there is an adversary A with a non-negligible advantage that can attack the scheme. We describe the game between the challenger and the adversary as follows. On input (g, g^a, g^b, g^c, Z) ∈ G^4 × G_T, the goal of the challenger is to output 1 if Z = e(g, g)^{abc}, and 0 otherwise. The challenger and the adversary engage in the following interaction. Public parameters: The challenger chooses random exponents under the stated constraints; if Ψ = 0, the challenger selects a new set of random exponents. It then sets the corresponding parameters for i = 1, ..., n and g_2 = g^{−Ψab} g^ω.
The challenger then initializes the corresponding notations. Key derivation: A issues private key queries for vectors. When querying on a vector v = (v_1, ..., v_n) ∈ (Z_p)^n, A can request a private key as long as ⟨v, x⟩ ≠ 0. The challenger selects random exponents λ_1, λ_2, {r_i, φ_i}_{i=1}^n ∈ Z_p to generate the re-encryption key REKey_v, and sets λ̂_1 = μa + λ_1 and λ̂_2 = μa + λ_2, where μ = 1/2. The re-encryption key components K_{1,i}, K_{2,i}, K_{3,i}, K_{4,i} are then generated, and the K_A and K_B elements are computed in turn; computing both X and Y allows the challenger to derive K_A. The challenger issues the private key PK_v for the queried vector. Challenge ciphertext: To generate the challenge ciphertext, the challenger selects random elements s_1, s_3, s_4 ∈ Z_p. The challenger then computes A = g^{s_2} = g^c and B = g^{Ψ s_1}, and for all i = 1, ..., n, the ciphertext components C_{1,i}, C_{2,i}, C_{3,i}, C_{4,i} are computed. The challenger then computes D = Z^{−Ψ} · e(g, g^c)^ω · M_0. Under the Decisional BDH assumption, Game 1 and Game 2 are indistinguishable since, if Z = e(g, g)^{abc}, the challenge ciphertext is distributed as in Game 1, while, if Z is a randomly chosen element of G_T, the challenge ciphertext is distributed as in Game 2.
Implementation and Performance Analysis
In this section, we provide details of the implementation of our system and evaluate its performance. Experiments were designed and several useful parameters were measured. In our system, users (data owners included) are registered on the blockchain network, which involves aggregating information pertaining to a specific user. Users are categorized as specified by the data owner. Each user is then given a public and private key pair, which is associated with their details and used in requesting and accessing data.

We implemented the blockchain system on a private Ethereum blockchain network. Ethereum is a programmable blockchain platform that utilizes the robust nature of Solidity (a state-based scripting language). An application was designed in Python that connects to each data owner and performs the proxy re-encryption scheme on the data. This application synchronizes with the blockchain using the JSON-RPC (JavaScript Object Notation-Remote Procedure Call) library. Once the blockchain is notified about a data request, queries are sent to the cloud server, and the data are filtered and sent to the blockchain. Re-encryption is either performed or not, based on the user type.
Experiment 1
In this first experiment, we measured the time it takes to register a user (both data owner and data user) on the blockchain network. To register, the user sends its details to the blockchain, and membership keys are given to the user. We measured the delay incurred in mining this transaction. Variations over 40 runs of this scenario were simulated, and the average registration delay was obtained. The results indicate an average delay of 13.94 s, which is not far off the 13 s for block generation in Ethereum networks. The experimental results are shown in Figure 2.
Experiment 2
In this second experiment, the impact of proxy re-encryption was measured. A flow chart, shown in Figure 3, was designed that describes the data processing as data are requested by a user. As soon as a data request is made, the blockchain network checks whether the user is a legitimate member of the network. If the check is successful, it sends a notification to the cloud server, which then filters and retrieves the data before sending them back to the blockchain network. After receiving the data, the blockchain checks the user type. For a primary user, the blockchain delivers the data and proceeds to mine the address, which becomes a transaction. For a secondary user, the proxy is called upon and re-encrypts the data before handing them out, after which the transaction is also mined. The experimental results are shown in Figure 4. The tests were run 40 times, and it was found that an end-to-end data process without re-encryption (as described in the flow chart) takes an average of 30.18 s to complete. Similarly, an average of 47.73 s was recorded for a process involving re-encryption. Consequently, adding re-encryption to the scheme increased the delay by 58.15%.
1. Collusion resistance: Our proposed scheme prevents collusion attacks in the sense that the re-encrypted data are divided into two parts, one stored on the blockchain network and the other stored on the cloud. Because the blockchain network and the cloud server work in tandem, a data user must first obtain the part stored on the blockchain before obtaining the other half from the cloud. As a first-level security check (usually performed before decryption), a data user must prove its membership to the blockchain network's verification unit before gaining access to the data. A revoked user is deprived of this right because its membership keys have been completely removed from the network, so the user becomes unknown to the network. Even if a revoked user colludes with the cloud server for access to data, the cloud server still has to provide the user's details to the blockchain processing node for the necessary checks to be made. With collusion attacks prevented, the confidentiality of the data is preserved.
2. Fine-grained access control: User access is effectively managed by the implementation of the ABE scheme. The utilization of the inner-product encryption scheme enables fine-grained access control to data. The data owner specifies which attribute set or rights a data user enjoys; therefore, to access data, the attribute set must match the private key set. Selective delegation is also possible due to the weight (information type) set by the data owner. Furthermore, depending on the level of trust between the data owner and the user(s), decryption of either all or some data can be delegated selectively to the user(s).
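The inner-product match that drives this fine-grained control can be illustrated with a toy sketch (hypothetical code, not the paper's actual scheme): a key issued for vector v opens exactly those ciphertexts whose attribute vector x satisfies ⟨x, v⟩ = 0 mod p.

```python
# Toy check of the inner-product predicate behind the access control:
# decryption succeeds iff the attribute vector x and the key vector v
# are orthogonal modulo the group order p. The prime below is purely
# illustrative; a real scheme uses a large group order.
P = 2**13 - 1

def predicate_matches(x, v, p=P):
    """Return True iff <x, v> = 0 (mod p), i.e. the key can decrypt."""
    assert len(x) == len(v)
    return sum(xi * vi for xi, vi in zip(x, v)) % p == 0

v = [1, -2, 1]                       # policy vector fixed by the data owner
print(predicate_matches([2, 3, 4], v))   # 2 - 6 + 4 = 0 -> True
print(predicate_matches([1, 1, 2], v))   # 1 - 2 + 2 = 1 -> False
```

This is only the matching rule; in the actual construction the check happens implicitly inside the pairing computation, without revealing x.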
Conclusions
In this paper, an inner-product proxy re-encryption scheme that ensures efficient and secure access to IoT data is presented. IoT data are encrypted according to a given access policy and shared with the various data users, thereby addressing the data-sharing problem. We incorporated a blockchain network whose processing node acts as the proxy server. A user can access data when it is a registered member of the network, with the verification performed by the blockchain network. The proxy also re-encrypts the data by transforming the policy set in the process of sharing the data. The blockchain network works in tandem with the cloud server to ensure a collusion-resistant scheme. Our approach also achieves fine-grained access control to data. Experimental results show that proxy re-encryption increased the delay, but the utilization of a blockchain kept a record of all interactions between entities and eliminated the need for a trusted third party. Improving the efficiency of our scheme is the focus of our future work. We also plan to include a detailed smart contract algorithm and more experimental results in future work.
"Computer Science",
"Engineering"
] |
Neuromorphic processor-oriented hybrid Q-format multiplication with adaptive quantization for tiny YOLO3
Deep neural networks (DNNs) have delivered unprecedented achievements in the modern Internet of Everything society, encompassing autonomous driving, expert diagnosis, unmanned supermarkets, etc. It continues to be challenging for researchers and engineers to develop a high-performance neuromorphic processor for deployment in edge devices or embedded hardware. DNNs’ superpower derives from their enormous and complex network architecture, which is computation-intensive, time-consuming, and energy-heavy. Due to the limited perceptual capacity of humans, accurate processing results from DNNs require a substantial amount of computing time, making them redundant in some applications. Utilizing adaptive quantization technology to compress the DNN model with sufficient accuracy is crucial for facilitating the deployment of neuromorphic processors in emerging edge applications. This study proposes a method to boost the development of neuromorphic processors by conducting fixed-point multiplication in a hybrid Q-format using an adaptive quantization technique on the convolution of tiny YOLO3. In particular, this work integrates the sign-bit check and bit roundoff techniques into the arithmetic of fixed-point multiplications to address overflow and roundoff issues within the convolution’s adding and multiplying operations. In addition, a hybrid Q-format multiplication module is developed to assess the proposed method from a hardware perspective. The experimental results prove that the hybrid multiplication with adaptive quantization on the tiny YOLO3’s weights and feature maps possesses a lower error rate than alternative fixed-point representation formats while sustaining the same object detection accuracy. Moreover, the fixed-point numbers represented by Q(6.9) have a suboptimal error rate, which can be utilized as an alternative representation form for the tiny YOLO3 algorithm-based neuromorphic processor design. 
In addition, the 8-bit hybrid Q-format multiplication module exhibits low power consumption and low latency in contrast to benchmark multipliers.
Introduction
The neuroscientists' efforts to explore the human brain's computational model lay a solid foundation for implementing intelligent perception and detection in electronic devices in the modern Internet of Everything society. In neuroscience, the communication theory of neuronal signals is vital for advancing the mathematical models and very large-scale integration (VLSI) circuit development of complex neural networks. Neurons mainly consist of dendrites, a soma, an axon hillock, an axon, axon terminals, etc. The neuron is responsible for capturing and transferring signals across the entire body, while the synapse acts as the bridge for neuron-to-neuron communication. Specifically, as shown in Fig. 1, once the incoming stimulus exceeds a threshold, the dendrites transform the chemical signals released by another neuron into electrical impulses that are conveyed down the axon to the axon terminal. Researchers in mathematics and electrical engineering attempt to imitate the functioning of neurons and synapses by employing mathematical models and logic gates based on this information-exchange theory from the following two standpoints. (1) Functional emulation: utilizing electronic components to emulate the neuron's or synapse's function rather than its actual architecture, typically represented by hardware accelerators for convolutional neural networks. (2) Neurobiological mimicry: mimicking the brain's models, such as the Hodgkin-Huxley model and the signal transmission of neurons, via integrated circuits, which holds great promise for imitating human-brain learning. Memristor-based spiking neural networks are an emerging research subject in this domain.
Neuromorphic computing, also referred to as brain-inspired computing, is an interdisciplinary field combining electronic engineering and neuroscience. Neuromorphic computing aims to mimic the structures and functions of the human brain by deploying silicon transistors. Originating in the 1980s, it emulates the biological functions of the human brain using electrical circuits [1]. Neuromorphic computing is distinguished from conventional computing with the von Neumann architecture by its intimate relationship to the structure and parameters of neural networks and its use of advanced neural network models to imitate the processing of the human brain [2,3]. Deep neural networks (DNNs) have become the soul of neuromorphic computing in recent years with the emergence of machine learning. Spiking neural networks, in particular, play a crucial role in propelling neuromorphic computing forward, both in terms of the algorithm (neuron model) [4,5] and the hardware (circuit architecture) [6]. Neuromorphic computing, based on high-precision neural networks, new semiconductor materials [7,8], or optimal circuit architectures [9], is the crucial technology for achieving neuromorphic processor designs with low power consumption, high reliability, and low latency for modern industrial society. Conventional neuromorphic computing is constructed using the following assessment criteria and methods:
• Lightweight: Typically, high-performance DNNs are composed of sophisticated network architectures and a multitude of parameters, which poses substantial hurdles for neuromorphic processor designs with on-chip memory. How to load the entire set of weights into on-chip memory is the key difficulty to be addressed in neuromorphic computing. Current studies investigate lightweight neural networks that exploit parameter compression techniques such as weight/feature map sparsification and quantization [10].
• Low latency: Real-time industrial applications (e.g., autonomous driving and unmanned aerial vehicles) require a short response time from the neuromorphic processor; otherwise, substantial safety hazards may be posed to human beings. Employing parallel processing technologies [11] and approximate computing [12] can effectively reduce the processor's latency.
• Energy efficiency: One of the goals of Industry 5.0 is to decrease carbon dioxide emissions to prevent the depletion of natural resources. Neuromorphic computing intends to circumvent the von Neumann bottleneck of traditional processors, which consume large amounts of power by shuffling data between memory and processor. Temporal and spatial on-chip memory design [13] and emerging semiconductor materials [14] can efficiently reduce the processor's energy consumption.
As indicated in Table 1, a vast variety of neuromorphic chips, including TrueNorth [15], Tianjic [16], Loihi/Loihi2 [17,18], Neurogrid [19], etc., have been developed in contemporary academia and industry. TrueNorth incorporates 4096 neuromorphic cores comprising 5.4 billion transistors to achieve the functionality of a neuromorphic processor, yet its power consumption is only 63 milliwatts for real-time object detection with 400 × 240 video inputs; it is mainly utilized for inference. Distinct from the TrueNorth chip, Loihi is a neuromorphic processor that combines inference and training functions with 128 neuromorphic cores (14 nm process). With the same processing technology (28 nm process) as TrueNorth, Tianjic is developed as an inference chip with 156 functional cores. Meanwhile, the rapid evolution of DNNs delivers great opportunities and challenges to neuromorphic processors in a variety of applications such as autonomous driving [20-22], 6G network communication [23], intelligent medical diagnosis [24], and smart industrial automation [25]. (Fig. 1: Diagram of neuron and synapse. Information transfer occurs at the synapse, a junction between the axon terminal of the current neuron and the dendrite of the next neuron. The soma does not engage in the propagation of electrical signals but functions as the neuron's driving force to ensure its healthy operation.) Cutting-edge DNNs exhibit superior performance in various applications by expanding the number of layers or deploying complex network architectures. Despite their higher performance, DNNs pose significant challenges for embedded hardware development in mobile and edge applications due to their high computational complexity, high energy consumption, and massive memory demands. Furthermore, the accuracy of DNNs tends to be redundant in practical applications since human capability for error-prone perception is limited.
Therefore, compression of DNNs is imperative to facilitate the deployment of the neuromorphic processor in today's highly intelligent society.
Algorithm improvement and hardware approximation can accomplish DNN compression. The essence of DNN compression is to take advantage of approximate weights or feature maps, approximate arithmetic [26], or approximate circuits [27-29] to realize the convolution operations. Weight or feature map sparsification aims to eliminate the redundant weights or feature maps that contribute little to the accuracy of the DNNs. The typical sparsification approach is weight or feature map pruning, which can significantly reduce the number of weights or feature maps. Weight pruning removes redundant weights, while feature map pruning decreases both feature maps and weights [30]. Han et al. introduced deep compression to DNNs by deploying a connection-pruning approach, demonstrating that the connections can be reduced by 9× to 13× [31]. Recently, other pruning techniques have been proposed, such as random pruning [32,33] and channel pruning [34,35]. However, it is essential to retrain the neural network after removing the unnecessary weights or feature maps with pruning methods, which raises design challenges for neuromorphic processors. Singular value decomposition (SVD) of weights or feature maps, another sparsification strategy, discards the weights or feature maps associated with small singular values. Specifically, the weight or feature map matrix is decomposed into the product of two unitary matrices (the left and right singular vectors) and one diagonal matrix holding the matrix's singular values. The corresponding weights or feature maps are removed if their singular values are smaller than a pre-defined threshold [36].
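As a concrete illustration of the SVD-based sparsification described above (a generic NumPy sketch, not the cited method's code), singular values below the threshold are dropped and the weight matrix is rebuilt at reduced rank:

```python
# SVD-based weight compression sketch: keep only singular values above a
# threshold and reconstruct a low-rank weight matrix, trading a small
# approximation error for far fewer stored parameters.
import numpy as np

def svd_compress(W, threshold):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = int(np.sum(s > threshold))                    # singular values kept
    W_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k reconstruction
    return W_approx, k

rng = np.random.default_rng(0)
# A rank-4 matrix: all but four singular values are numerically zero.
W = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
W_approx, k = svd_compress(W, threshold=1.0)
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(k, rel_err)
```

Storing the two factors for rank k costs k·(64 + 64 + 1) values instead of 64·64, which is the memory saving the sparsification argument relies on.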
Since the SVD algorithm prefers large matrices, such as those in fully-connected layers, it performs poorly for object detection with tiny YOLO3. Another effective DNN compression approach, named knowledge distillation, is to refine a compact student model from a complex teacher model [37]. Knowledge distillation comprises score-based [38] and probability-based [39] distillation, according to the loss function definition. The student model generally presents equal or even better performance than the teacher model if the gap between them is small enough [40,41]. Although knowledge distillation is promising for DNN compression, it is still challenging to derive an effective student model from an intricate teacher model. Weight or feature map quantization, as an alternative approach to DNN compression, is assumed to be the most promising technique in the area of neuromorphic design owing to the following benefits:
• The quantization technique efficiently shortens the bit length of weights or feature maps, allowing fewer logic gates to be used to fulfill the arithmetic operations.
• The lack of sufficient layout space for on-chip memory is a major design bottleneck for neuromorphic processors based on tiny YOLO3. Deploying a short-bit representation format can lessen the memory requirement for storing the pre-trained weights, which in turn reduces the neuromorphic processor's power consumption due to fewer external memory accesses.
Quantization can be achieved from the following two perspectives: (1) training the DNNs using quantized weights or feature maps, which is generally utilized for both the training and inference stages; (2) offline quantization of weights or feature maps, which mainly contributes to the inference stage. Many studies on the topic of training with low-precision bits have been published [42-45]. Our work concentrates primarily on the inference of tiny YOLO3 with a low-precision fixed-point representation format (the second category), since retraining the DNN model is time- and power-intensive. In [46], the authors proposed an optimization algorithm based on quantization errors for determining the bit length of the feature maps in each layer; related quantization work appears in [48]. The weights or feature maps can be quantized from 32-bit floating-point numbers to 16-bit, 8-bit, 4-bit, 2-bit, and even 1-bit fixed-point representation formats [49-52]. However, as the bit length of the fixed-point representation decreases, the accuracy of the neural network drops, making it challenging to develop a high-performance neuromorphic processor with a short bit-length representation. Therefore, exploring low-bit representations of weights and feature maps while sustaining the algorithm's precision is meaningful for neuromorphic processor design. Since the representation range and resolution of fixed-point numbers are constrained by the lengths of the integer and fraction bits, the correct bit length of weights or feature maps is crucial for determining whether a fixed-point number can accurately represent a floating-point value. The accuracy degrades when the same bit length is assigned to the entire DNN. This article proposes a neuromorphic processor-oriented hybrid multiplication with adaptive quantization for tiny YOLO3, and illustrates the addition and multiplication between two 16-bit fixed-point values for tiny YOLO3's convolution operation.
Generally, using approximated fixed-point numbers to perform convolution often results in overflow problems and roundoff issues in some arithmetic operations, producing erroneous convolution results. Moreover, since the inputs of the convolutional layer in tiny YOLO3 are the previous layers' outputs except the first layer, it will introduce numerous errors to the entire neural network if the fixed-point numbers cannot correctly approximate the weights or feature maps. The proposed hybrid multiplication can effectively alleviate the overflow errors caused by addition or roundoff errors introduced by multiplication operations using approximated fixed-point weights and feature maps. In brief, the contributions of this study are briefly summarized as follows.
• This paper thoroughly illustrates the addition of 16-bit fixed-point numbers with adaptive quantization. The proposed sign-bit check approach can effectively reduce the overflow issues that accompany the addition operation.
• An optimal strategy for bit-length adjustment is proposed to mitigate roundoff errors. Because the product of two fixed-point numbers is longer than 16 bits, an appropriate bit-length adjustment adequately ensures the validity of the approximated results.
• Optimal and suboptimal representation formats of 16-bit fixed-point numbers have been attained for neuromorphic processor design by investigating the conversion error rate of the data (feature maps and weights) and the accumulated calculation error of the convolution.
• A hybrid multiplication module is presented to assess the hardware cost of the adaptive quantization technique, and the experimental results prove that the proposed multiplication module has low power consumption and low latency in comparison with the benchmark multipliers.
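A minimal behavioural sketch of the first two contributions (the sign-bit overflow check and the post-multiplication roundoff) might look as follows; the Q(2.13) split and the saturation policy are illustrative assumptions, not the paper's exact hardware logic.

```python
# Behavioural model of the two fixed-point safeguards, assuming Q(2.13):
# 1 sign bit, 2 integer bits, 13 fraction bits in a 16-bit word.
FRAC = 13
LO, HI = -(1 << 15), (1 << 15) - 1      # 16-bit two's-complement range

def wrap16(x):
    """Emulate 16-bit two's-complement wraparound."""
    return ((x + 0x8000) & 0xFFFF) - 0x8000

def add_q(a, b):
    """Sign-bit check: equal operand signs but a flipped result sign
    indicate overflow, so saturate instead of returning the wrapped sum."""
    s = wrap16(a + b)
    if (a >= 0) == (b >= 0) and (s >= 0) != (a >= 0):
        return HI if a >= 0 else LO
    return s

def mul_q(a, b):
    """The 32-bit product of two Q(2.13) values carries 26 fraction bits,
    so round (rather than truncate) it back down to 13 fraction bits."""
    rounded = (a * b + (1 << (FRAC - 1))) >> FRAC   # round half up
    return max(LO, min(HI, rounded))

half = 1 << (FRAC - 1)        # 0.5 in Q(2.13)
print(mul_q(half, half))      # 0.25 in Q(2.13) -> 2048
print(add_q(HI, 1))           # overflow detected -> saturates at 32767
```

Truncating instead of rounding in `mul_q` would introduce a systematic negative bias that accumulates across the millions of MAC operations per layer, which is why the bit-roundoff step matters.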
The remainder of this paper is organized as follows. Section 2 offers the preliminaries of tiny YOLO3's convolution operation. The details of the proposed hybrid multiplication with adaptive quantization are illustrated in Sect. 3, which includes the adaptive quantization algorithm, binary addition with the sign-bit check, and binary multiplication with the bit roundoff method. Section 4 describes the experimental results and discussion regarding the hybrid multiplication with adaptive quantization, and the conclusion is presented in Sect. 5.
Convolution of tiny YOLO3
As shown in Fig. 2, the DNNs' convolution is calculated by multiplying the intra-channel elements of weights and feature maps with the inter-channel elements and accumulating the results along the depth direction. Specifically, the convolution C between the weights W and the feature maps F is defined by

C = Σ_i w_i · f_i,

where C is the convolution result, and w_i ∈ W and f_i ∈ F are the weights and feature maps in each intra-channel, respectively. Convolution operations with quantized weights and feature maps can be performed across the layers, intra-channels, inter-channels, and depths of the DNNs. The convolution operation of tiny YOLO3 is implemented in two main steps: (1) feature matrix conversion (FMC) and (2) general matrix multiplication (GEMM). Concretely, the FMC converts the inputs to feature maps based on the window dimension of the filters, and the convolution is achieved via an element-by-element multiply-accumulate operation (MAC) between the weights and the feature maps. Tiny YOLO3 has 13 convolution layers and two types of filters (1×1 and 3×3 kernel size). Inputs must be converted into feature maps using FMC for the filters with 3×3 kernels, whereas no conversion is required for the filters with a 1×1 kernel. As shown in Table 2, the eighth, tenth, eleventh, and thirteenth layers deploy filters with a 1×1 kernel, while the other layers use filters with a 3×3 kernel.
Since tiny YOLO3 employs one-stride zero-padding, the width and height of the convolution's inputs and outputs are identical. The dimensions of the inputs and filters determine the dimension of the output feature maps. Assuming the dimensions of the inputs and filters are w × h × d (width, height, and depth of the input) and f_w × f_h × f_d (width, height, and depth of the filter), respectively, the dimension of the FMC output, d_fmc, can be calculated from these quantities. The depths of the feature map and the weight must be equal for the convolution operation to be implemented. Table 2 concisely summarizes the dimension of the output feature maps in each convolution layer of tiny YOLO3, revealing that the maximum amount of data in the feature maps exceeds 6 million elements (layer 2). A total of 23.765625 megabytes of memory is required if these feature maps are represented in the 32-bit floating-point format. However, the memory utilization is halved if they are represented in a 16-bit fixed-point format. The fixed-point numbers are represented in Q-format, denoted Q(L_FI · L_FR) or Q(L_FR), where the symbol '·' indicates the radix point, and L_FI and L_FR are the integer and fraction bit lengths of the fixed-point representation, respectively.
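As a quick sanity check of the memory figures above (the element count below is derived from the quoted 23.765625 MB, not stated explicitly in the text):

```python
# The quoted float32 footprint implies the layer-2 element count; halving
# the bytes per element shows the 16-bit fixed-point saving.
mb_quoted = 23.765625                   # float32 figure quoted in the text
elements = int(mb_quoted * 2**20 / 4)   # implied element count: 6,230,016
mb_fixed16 = elements * 2 / 2**20       # same tensor in 16-bit fixed point
print(elements, mb_fixed16)
```

The implied count (about 6.23 million elements) is consistent with the "over 6 million" figure for layer 2, and the 16-bit format needs 11.8828125 MB, exactly half.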
The implementation of GEMM includes two steps: (1) element-by-element multiplication (MUL); (2) summation of the multiplication result (ADD). As illustrated in Fig. 3, each element in the filter is multiplied by each element in the first row of the feature map, and the result is stored in memory.
The second element of the filter is then multiplied by each element in the second row of the feature map, and the product is added to the corresponding element in memory. The process continues until the filter's last element has been multiplied by every element in the last row of the feature map; the addition operation is then performed on the accumulated sums, completing the convolution between the first filter and the feature maps. In general, the filters of tiny YOLO3 form a 4-dimensional tensor (M × w × h × d), with each filter's size f_n equal to w × h × d. By the rules of matrix multiplication, the output dimension of GEMM is M × f_k (refer to Fig. 3). Table 3 shows that the GEMM of tiny YOLO3 involves 8841794 16-bit fixed-point addition operations and 2782480896 16-bit fixed-point multiplication operations, which is the bottleneck for real-time object detection.
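The FMC-plus-GEMM decomposition can be illustrated on a toy single-channel case. This is a sketch with our own function names, using 'valid' cross-correlation (as DNNs compute it) rather than tiny YOLO3's padded layers; it only shows that the two computation paths agree:

```python
def conv2d_direct(inp, filt):
    """Direct 'valid' convolution (cross-correlation, as DNNs compute it)
    of a 2-D single-channel input with a 2-D filter; illustrative only."""
    H, W, fh, fw = len(inp), len(inp[0]), len(filt), len(filt[0])
    return [[sum(inp[i + di][j + dj] * filt[di][dj]
                 for di in range(fh) for dj in range(fw))
             for j in range(W - fw + 1)]
            for i in range(H - fh + 1)]

def conv2d_gemm(inp, filt):
    """Same result via FMC + GEMM: flatten each receptive field into a
    column, then take its dot product (MAC) with the flattened filter."""
    H, W, fh, fw = len(inp), len(inp[0]), len(filt), len(filt[0])
    wvec = [filt[di][dj] for di in range(fh) for dj in range(fw)]
    out = []
    for i in range(H - fh + 1):
        row = []
        for j in range(W - fw + 1):
            col = [inp[i + di][j + dj] for di in range(fh) for dj in range(fw)]
            row.append(sum(a * b for a, b in zip(wvec, col)))  # MAC
        out.append(row)
    return out

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, -1]]
print(conv2d_direct(x, k))                       # [[-4, -4], [-4, -4]]
assert conv2d_direct(x, k) == conv2d_gemm(x, k)  # both paths agree
```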
A simple way to convert a floating-point number to a fixed-point number is to multiply the floating-point number by the scaling factor and round, X_fixed = INT(X_float × 2^L_FR), where INT() rounds its argument to the nearest integer, and X_float and X_fixed are the floating-point and fixed-point numbers, respectively. As an instance, the fixed-point number for −2.89037 in Q(2.13) is INT(−2.89037 × 2^13) ≈ −23678; the floating-point number −2.89037 is therefore represented by the binary 1010001110000010₂. The "Appendix" provides the pseudo-code for the format conversion between floating-point and fixed-point formats and their corresponding binary representations.
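A minimal sketch of this conversion in Python (the helper names are ours), reproducing the −2.89037 example from the text:

```python
def float_to_fixed(x, l_fr):
    """Quantize a float to a fixed-point integer in Q(l_fi.l_fr):
    multiply by the scaling factor 2**l_fr and round to the nearest integer."""
    return round(x * (1 << l_fr))

def to_twos_complement_bits(v, nbits=16):
    """Two's-complement binary string of a signed integer."""
    return format(v & ((1 << nbits) - 1), '0{}b'.format(nbits))

v = float_to_fixed(-2.89037, 13)      # Q(2.13) scaling factor is 2**13
print(v, to_twos_complement_bits(v))  # -23678 1010001110000010
```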
3 Hybrid Q-format multiplication with adaptive quantization proposal
Adaptive quantization for tiny YOLO3
As shown in Fig. 4, suppose different fixed-point representation formats are employed across the DNN's layers when designing a multi-layer neuromorphic processor. In this case, the different arithmetic logic units (ALUs) in each layer must be controlled independently. Moreover, since the ALU outputs of the previous layer are the current layer's inputs, a bit post-processing circuit for the feature maps is required to keep the two layers' data representation formats consistent. The M ALU modules shown in Fig. 4 share the control signal, and each module can be attached directly, without the bit post-processing circuit, if each layer and channel of weights and feature maps adopts an adaptive fixed-point representation format. Hence, to tackle these challenges, this paper proposes a fully adaptive quantization proposal to improve the neuromorphic processor design. Typically, an inequality of the form L_FI ≥ ⌈log₂|X_fixed|⌉ is used to limit the range of the integer bit length (L_FI) of a fixed-point value, where L_b is the total bit length of the fixed-point representation; Eq. (1) constrains the integer length for positive and negative numbers, respectively. However, this inequality becomes trivial when the result of the logarithm operation is less than or equal to −1. As an illustration, the constraint becomes L_FI ≥ −8 when X_fixed equals 0.00390625. It can be observed that the logarithm output at which the expression remains meaningful is −1, because the integer bit length must be non-negative (L_FI ≥ 0). This paper introduces an adaptive quantization (ADQ) method that flexibly determines the integer and fraction bit lengths to solve this issue. The integer bit length of a fixed-point number X_fixed is defined by the following equation, where r = 2⁻¹ − 2^(1−L_b). Since the dynamic range of L_b satisfies L_b ≥ 2, it can be deduced that r ≥ 0. The symbol floor[·] denotes the floor function, which returns the largest integer less than or equal to its input.
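The ADQ rule can be sketched as follows. The paper's closed-form expression is not fully legible in this extraction, so the sketch assumes L_FI = max(0, floor(log₂|x|) + 1), which reproduces every Q-format quoted later in the text; the offset r = 2⁻¹ − 2^(1−L_b) guards the rounding boundary and is omitted here for clarity:

```python
import math

def adq_integer_bits(x, l_b=16):
    """Integer bit length L_FI chosen by adaptive quantization (ADQ).

    Assumed reconstruction: L_FI = max(0, floor(log2|x|) + 1), clamped so
    that L_FI >= 0. Matches the Q(0.15), Q(2.13), Q(6.9), and Q(7.8)
    assignments quoted in the paper."""
    if x == 0:
        return 0
    return max(0, math.floor(math.log2(abs(x))) + 1)

# 16-bit word: 1 sign bit + L_FI integer bits + (15 - L_FI) fraction bits
for x in (0.00390625, -2.89037, 49.58947, 86.405121):
    fi = adq_integer_bits(x)
    print(x, '-> Q({}.{})'.format(fi, 15 - fi))  # Q(0.15), Q(2.13), Q(6.9), Q(7.8)
```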
Binary addition with sign-bit check
When adding two fixed-point numbers, the fundamental requirement is that the addend's radix point align with the augend's; an incorrect alignment produces an erroneous result. Aligning two binaries with different L_FI yields different bit lengths for the integer and fraction components of the addend and augend. A sign-extension method circumvents this inconsistency: prepend "1"s before the sign bit of a negative number and "0"s before the sign bit of a positive number. Because negative numbers are stored in memory in two's-complement form, sign extension does not change their value; likewise, extending a positive number with "0"s has no effect on it. It should be noted that a carry generated in front of the sign bit does affect the addition result. For two fixed-point numbers, the result format is L_FI^(c) = max(L_FI^(a), L_FI^(b)) and L_FR^(c) = max(L_FR^(a), L_FR^(b)), where max() selects the maximum of its arguments, and L_FI^(c) and L_FR^(c) are the integer and fraction bit lengths of the addition result. For instance, the sum of two 16-bit fixed-point numbers represented in Q(0.15) and Q(2.13) is expressed in the Q(2.15) format. Since the 18-bit length of Q(2.15) is inconsistent with 16 bits, the two least significant bits are usually discarded, and the addition result is represented in practice by the Q(2.13) format.
The issue of bit overflow frequently occurs in binary addition, leading to inaccurate representations of the result. The carry out of the sign bit (most significant bit: MSB) is closely associated with the location of the radix point; in other words, retaining or discarding the overflow bit modifies the integer and fraction bit lengths of the fixed-point number. Adding two numbers with different sign bits cannot overflow. Therefore, the first step in judging whether overflow occurs in the addition of two fixed-point numbers is to check whether the two numbers' sign bits agree. In general, overflow happens when the two operands share a sign bit but the MSB of the addition result differs from it. As shown in Fig. 5a, we propose a sign-bit check approach to solve the overflow issue.
In this case, an overflow bit is added before the MSB of the addition result, and its value matches the sign bit of the addend or augend. The L_FI must then be increased, since an extra bit is added before the radix point: L_FI = L_FI + 1. The pseudo-code for the overflow check can be found in Algorithm 1. Conversely, the overflow bits can be discarded directly if the sign of the addend or augend matches the MSB of the addition result. Because fixed-point numbers are stored in two's-complement format, it is unnecessary to keep every sign bit before the radix point, and the L_FI is closely related to the number of discarded sign bits. As shown in Fig. 5b, two redundant sign bits can be discarded and the following 16-bit binary retained to represent the addition result, which effectively enhances the representation resolution. In this case only one sign bit needs to be reserved, the other two bits are removed, and the integer bit length becomes L_FI = L_FI − 2. Algorithm 1 shows the pseudo-code of binary addition with the overflow-check technique.
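The sign-bit check can be sketched as follows. The return convention (result word plus an L_FI delta) is our own, and the word widths are illustrative:

```python
def add_with_overflow_check(a, b, nbits=16):
    """Add two nbits-wide two's-complement words in the same Q-format and
    apply the sign-bit check described above. Returns (result_word, dL_FI):
    if both operands share a sign but the truncated sum's MSB differs,
    overflow occurred, so the carried-out bit is kept, one LSB is dropped,
    and the integer bit length grows by one (dL_FI = +1)."""
    mask = (1 << nbits) - 1
    msb = lambda v: (v >> (nbits - 1)) & 1
    raw = (a & mask) + (b & mask)
    s = raw & mask
    if msb(a & mask) == msb(b & mask) and msb(s) != msb(a & mask):
        return (raw >> 1) & mask, 1   # radix point moves one bit left
    return s, 0

# 0x7000 + 0x7000 overflows a 16-bit word: same bits back, L_FI grows by one
print(add_with_overflow_check(0x7000, 0x7000))  # (28672, 1), i.e. 0x7000 with L_FI + 1
print(add_with_overflow_check(0x0001, 0x0002))  # (3, 0), no overflow
```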
In brief, the addition of two 32-bit floating-point numbers is accomplished by the following steps: (1) quantize the 32-bit floating-point numbers to 16-bit fixed-point numbers and align the radix points of addend and augend; (2) extend the sign bit of the number with the smaller L_FI and fill zeros into the empty fraction bits of the number with the larger L_FI; (3) perform the bit-by-bit addition of addend and augend; (4) check for overflow and adjust the integer or fraction bit length of the result. The Appendix provides the pseudo-code of binary addition for fixed-point representations. Figure 6 illustrates the addition of the floating-point numbers (−0.746783, −2.89037) implemented with the adaptive quantization method.
Following the approach above, the 32-bit floating-point numbers are quantized to the 16-bit fixed-point numbers −0.746783 → 1010000001101001 in Q(0.15) and −2.89037 → 1010001110000010 in Q(2.13). According to the overflow-check principle mentioned above, the overflow bit can be discarded, since the sign bit of the addition result is consistent with those of the addend and augend. As shown in Fig. 6, the addition result is expressed as 100.010111001110001 when the overflow bit is discarded. In summary, the addition result is represented in the Q(2.13) format as 1000101110011100 (scaling factor: 2^13).
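The worked example can be verified numerically. The helper below is a sketch, with Python's arithmetic right shift standing in for the explicit sign-extension step:

```python
def q(x, l_fr):
    """Quantize a float to a signed fixed-point integer with l_fr fraction bits."""
    return round(x * (1 << l_fr))

# Quantize the operands: -0.746783 -> Q(0.15), -2.89037 -> Q(2.13)
a = q(-0.746783, 15)   # -24471 (bit pattern 1010000001101001)
b = q(-2.89037, 13)    # -23678 (bit pattern 1010001110000010)

# Align the radix points: move a from 15 to 13 fraction bits. Python's
# arithmetic right shift performs the sign extension implicitly.
a_aligned = a >> 2

total = a_aligned + b
print(total, format(total & 0xFFFF, '016b'))  # -29796 1000101110011100
print(total / (1 << 13))   # -3.63720703125, close to -0.746783 + -2.89037
```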
Binary multiplication with bit roundoff
In contrast to addition, multiplication does not require alignment of the radix point; rather, the proper use of sign extension is vital to fixed-point multiplication, and identifying the sign bit of the product is a crucial step in establishing its accuracy. Binary multiplication differs from decimal multiplication only in the sign-bit (MSB) multiplication: if the multiplier's sign bit is "1", i.e., the multiplier is negative, the corresponding partial product is represented in two's-complement format. Binary multiplication first calculates the partial products and then sums them all to determine the final product. The integer and fraction bit lengths of the product are the sums of the operands', L_FI^(p) = L_FI^(a) + L_FI^(b) and L_FR^(p) = L_FR^(a) + L_FR^(b), plus the sign bits. In brief, the length of the product has three parts: the integer bits, the fraction bits, and the sign bits; since each signed fixed-point number carries a sign bit, the last component comprises two sign bits. In practice, the multiplication of two N-bit numbers generates a (2N − 1)-bit product, whereas Eq. 6 indicates a 2N-bit product containing two identical sign bits (one from sign extension). Consequently, one sign bit must be discarded by shifting the radix point one bit to the left.
Since the bit length of a fixed-point multiplication between two N-bit fixed-point binaries is 2N, roundoff error inevitably governs the accuracy of the product. It is therefore vital to retain the significant bits and discard the non-dominant bits to increase the accuracy of convolution operations. To reduce the impact of roundoff errors on object detection, we propose the bit roundoff approach, which discovers the optimal bit sequence for a product. The bit roundoff strategy aims to increase the chance of selecting the more significant bits while removing redundant sign bits. It is worth mentioning that bit roundoff also affects the L_FI, because the position of the sign bit changes during bit selection. The L_FI with bit roundoff is defined as L_FI^r = L_FI − N_discard, where L_FI^r is the updated integer bit length and N_discard is the number of bits discarded during the roundoff calculation. As shown in Fig. 7, the bit length of the product of two 4-bit binaries is 8 bits.
Different locations of the multiplier's or multiplicand's radix point result in the selection of distinct binary sequences. If the radix point of the multiplicand is fixed in the middle of the binary sequence (Q(1.2)) and the radix point of the multiplier is set to Q(1.2), Q(2.1), and Q(3.0) in turn, different product sequences are obtained: 0100_Q(0.3) = 1100_Q(1.2) × 1111_Q(1.2), 0100_Q(0.3) = 1100_Q(1.2) × 1111_Q(2.1), and 0100_Q(1.2) = 1100_Q(1.2) × 1111_Q(3.0). As introduced before, L_FI = 3, 4, 5 for the multiplications Q(1.2) × Q(1.2), Q(1.2) × Q(2.1), and Q(1.2) × Q(3.0), respectively. As shown in Fig. 7, N_discard is 3, 4, and 4 for each computation, accordingly. Therefore, the products can be represented in the Q(0.3), Q(0.3), and Q(1.2) formats if 4-bit memory is available to store the product. If sign-bit discarding leaves too few binaries to fill the product, zeros must be appended starting from the last bit; as an illustration, one zero bit should be appended to the product if 5-bit memory is available. Algorithm 2 shows the pseudo-code of bit roundoff for the multiplication of fixed-point numbers. The details of fixed-point multiplication are explained using the same numbers as in the fixed-point addition example (−0.746783 × −2.89037). As shown in Table 4, since the multiplication of 1010000001101001 and 1010001110000010 generates a 31-bit binary sequence, all partial products are extended to 32 bits using sign-extension bits.
It is worth mentioning that the partial product for the sign-bit (16th) row is represented in two's-complement format because the multiplier is negative; in other words, the product between the MSB of the multiplier and each bit of the multiplicand is converted to two's-complement format.
The leftmost two bits of the product are sign bits, and the redundant sign bit (the leftmost bit) can be eliminated by left-shifting the product one bit. The multiplication result between 1010000001101001 and 0101111110010111 is therefore 01000101000100101010000010100100. The multiplication of Q(0.15) and Q(2.13) can be represented by Q(3.28); on account of the product's one-bit left shift, L_FR gains an extra bit while L_FI loses one bit, so the result can be expressed in Q(2.29). The rightmost 16 bits can be omitted if no overflow occurs in the addition of partial products, and the multiplication result is 0100010100010010 in Q(2.13).
In summary, the multiplication of fixed-point numbers is accomplished by the following steps: (1) fill the empty bits of each partial product with zeros; since the valid bits of a partial product begin at the corresponding multiplier bit position, the remaining positions must be zero-filled; (2) calculate the partial products using the bitwise "and" operation and extend the sign bit; (3) convert to two's-complement format: if the multiplier is negative, the partial product of the multiplier's sign bit with each bit of the multiplicand is represented in two's-complement format; (4) sign extension: the multiplication of two 16-bit fixed-point numbers generates a 32-bit product, while each raw partial product is only 16 bits long, so the remaining bits must be filled by sign extension; (5) compute the sum of all partial products and shift the product one bit to the left. The function mulFixed in Algorithm 3 gives the details of binary multiplication for fixed-point representations.
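A compact sketch of these steps, using integer arithmetic in place of the explicit bit-by-bit partial products that Algorithm 3 handles, reproduces the 16'h4512 result from the worked example:

```python
def mul_fixed(a, b, fr_a, fr_b, nbits=16):
    """Fixed-point multiply following the steps above, on Python integers:
    the full product carries fr_a + fr_b fraction bits and one redundant
    sign bit; a 1-bit left shift removes that sign bit, and the top nbits
    are kept (bit roundoff). Returns (kept_word, fraction_bits_of_word)."""
    prod = a * b                        # 2*nbits-wide product, two sign bits
    prod <<= 1                          # drop the redundant sign bit
    kept = prod >> nbits                # keep the most significant nbits
    fr_out = fr_a + fr_b + 1 - nbits    # fraction bits of the kept word
    return kept, fr_out

# -0.746783 (Q(0.15)) times -2.89037 (Q(2.13)), quantized as in the text
p, fr = mul_fixed(-24471, -23678, 15, 13)
print(hex(p), fr)     # 0x4512 13 -> Q(2.13), matching the paper
print(p / (1 << fr))  # 2.158447265625, close to (-0.746783) * (-2.89037)
```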
Results and discussion
4.1 Algorithm verification

Figure 8 provides a comprehensive statistical analysis of the adaptive quantization of tiny YOLO3's weights. Tiny YOLO3 has 8858734 weights, with maximum and minimum values of 400.63385009765625 and −17.461894989013672, respectively. A total of 8850306 weights are represented in the Q(0.15) format, accounting for 99.9% of the weights in tiny YOLO3.
Evaluation of Conversion Errors

The GEMM performs the convolution operation of tiny YOLO3 between feature maps and weights, so evaluating the conversion errors of these feature maps and weights is crucial. According to the dimensions of the feature maps in each convolution layer shown in Table 2, this section evaluates the percentage of each Q-format representation for the feature maps of every convolutional layer and for the weights; the detailed calculation method is illustrated in Algorithm 5. The densities of the feature maps (from layer 1 to layer 13) and the weights are described in Fig. 9, which shows that most of the feature maps and weights lie in the range −1 to 1 (accounting for 60–90%).
In other words, the majority of the feature maps and weights can be represented by Q(0.15) under adaptive quantization conversion, and only a small amount of data requires the Q(1.14) and Q(4.11) formats. In total, 20723456 feature-map elements and 8858734 weights (around 30 million parameters) are employed to evaluate the conversion from 32-bit floating-point to 16-bit fixed-point numbers with the adaptive quantization approach. The conversion error rate f (the error per element in the corresponding convolutional layer) is used to assess this conversion; it is defined as f = ||X_fixed − X_float||₂ / n, where n is the number of floating-point values converted by the adaptive quantization approach, || ||₂ denotes the Euclidean norm, and x_fixed^(i) ∈ X_fixed and x_float^(i) ∈ X_float are the fixed-point and floating-point numbers, respectively. If the integer bit length is sufficiently long, the representable range of the fixed-point number expands at the expense of resolution; converting from floating-point to fixed-point values with high resolution or a wide dynamic range always incurs rounding errors. It is therefore imperative to adopt adaptive quantization to explore the optimal fixed-point representation. To better highlight the comparison results, the conversion error rates are transformed by a log10 operation, f → log10(f); note that the longer the histogram bar, the smaller the conversion error rate. The conversion error rates of adaptive quantization are much smaller than those of any other Q-format representation for the feature maps and weights of tiny YOLO3. To further explore a suitable representation format of weights and feature maps for neuromorphic processor design, Fig. 10 depicts the optimal and suboptimal L_FI for all weights and feature maps.
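A sketch of this comparison on a few of the values quoted in the text; the per-element normalization below is our reading of the garbled formula, and the value list is illustrative, not the paper's 30-million-parameter evaluation:

```python
import math

def conversion_error_rate(floats, l_fr_list):
    """Per-element conversion error rate. Assumed reconstruction:
    f = ||x_hat - x||_2 / n, the Euclidean norm of the dequantization
    error divided by the element count n."""
    err2 = 0.0
    for x, l_fr in zip(floats, l_fr_list):
        x_hat = round(x * (1 << l_fr)) / (1 << l_fr)  # quantize, dequantize
        err2 += (x_hat - x) ** 2
    return math.sqrt(err2) / len(floats)

vals = [-0.746783, -2.89037, 49.58947]
adaptive = conversion_error_rate(vals, [15, 13, 9])  # per-value ADQ formats
uniform = conversion_error_rate(vals, [9, 9, 9])     # one fixed Q(6.9) format
print(adaptive < uniform)  # True: adaptive formats reduce the roundoff error
```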
The experimental results prove that the adaptive quantization on 16-bit fixed-point numbers exhibits a minimum conversion error rate, and it can be considered an optimal representation for 32-bit floating-point numbers. Besides, the suboptimal solution represented by Q(6.9) has a relatively low accumulated error rate for all feature maps and weights of tiny YOLO3.
In addition, the accumulated errors of the convolution results for each filter are calculated to evaluate the arithmetic errors of adaptive quantization on 16-bit fixed-point numbers. As mentioned before, since the number of convolutions per layer is M × f_k, the total number of convolutions for the accumulated-error evaluation is 6164275. The results also show that, without adaptive quantization, the lowest conversion error rates for weights and feature maps are concentrated around L_FI = 6. The convolution results for all representation formats are evaluated to search for the optimal representation format of tiny YOLO3. The optimal and suboptimal representation formats for all feature maps and weights are summarized in Fig. 10, which demonstrates that adaptive quantization performs best for the conversion from 32-bit floating-point to 16-bit fixed-point numbers. Figure 11 shows the error rate of the GEMM operation with adaptive quantization, demonstrating that the adaptive-quantization representation format yields a low computation error rate.
Moreover, Fig. 12 provides the accumulated conversion error rates for different representation formats of the feature maps and weights in each layer, illustrating that the adaptive-quantization format and Q(6.9) are the optimal and suboptimal solutions, respectively. Figure 13 presents the recognition results of tiny YOLO3 with different representation formats.
The experimental results shown in Fig. 13a, b, c, d, l, m, n, o, and p illustrate that objects cannot be detected with the Q(0.15), Q(1.14), Q(2.13), Q(3.12), Q(11.4), Q(12.3), Q(13.2), Q(14.1), and Q(15.0) representations, respectively. Only parts of the objects (compared with the floating-point recognition result in Fig. 13r) are detected in Fig. 13e, f, j, and k. The Q(6.9), Q(7.8), and Q(8.7) representation formats show correct detection results, and the adaptive-quantization representation shows readily acceptable detection results (refer to Fig. 13q). These experiments show that the adaptive quantization algorithm not only achieves the minimum conversion error rate and minimum convolution error in tiny YOLO3 but also delivers the same detection result as 32-bit floating-point numbers while using a 16-bit fixed-point representation format.
In addition, as shown in Fig. 14, the statistics of the recognition results indicate that the conversion (overflow) error dominates the recognition results as L_FI decreases, while the roundoff error becomes increasingly significant as L_FI increases.
Numbers represented with a large L_FI cover a wide representation range and have a small conversion error, while numbers represented with a small L_FI have high resolution and a small roundoff error in convolution computation. Investigating an effective method to balance L_FI is therefore essential when designing a neuromorphic processor. Although the errors of the Q(6.9), Q(7.8), and Q(8.7) representations are larger than those of adaptive quantization, these formats can also serve as alternatives for converting floating-point numbers to fixed-point numbers in neuromorphic processor design.
Evaluation of Optimal Representation Format for Tiny YOLO3

The Microsoft common objects in context (MS COCO) 2014 and 2017 validation datasets are employed to search for the optimal representation format of tiny YOLO3. During training, 35504 samples are extracted from COCO-2014 to train the neural network; the remaining 5000 samples of COCO-2014 are therefore used to verify the network's performance without reusing training data. Similarly, the COCO-2017 dataset contains an additional 5000 samples available for tiny YOLO3's performance evaluation. Since tiny YOLO3 is trained with the 32-bit floating-point representation format, the first task is to explore the maximum weight distribution in each layer. The distribution of the feature maps varies widely across input data, so this paper explores an average reference integer to determine the fixed-point representation format for tiny YOLO3. The benchmark COCO-2014 (5000 samples) and COCO-2017 (5000 samples) validation datasets are selected to investigate the optimal representation format: the feature maps in the different layers are compared one by one to find the maximum elements, and the feature maps' density distribution is obtained using the kernel density estimation (KDE) approach. The experimental results show that the maximum density distribution of the feature maps is almost identical for the two databases in every layer. The maximum feature-map values are 86.405121 and 88.912598 for the COCO-2014 and COCO-2017 datasets, respectively, so the largest feature map is represented in the Q(7.8) format. Although Q(7.8) can cover all the feature maps of tiny YOLO3, the density at Q(7.8) is relatively low. In addition, the precision of a fixed-point DNN is determined by the roundoff error of the fractional part and the overflow error of the integer part.
Thus, if the fixed-point representation cannot cover all the feature maps when average or minimum fixed-point formats are used, overflow errors arise in the integer part; conversely, using the largest fixed-point format to cover all the numbers introduces roundoff errors in the fractional part. Q(7.8) can serve as the fixed-point representation format of tiny YOLO3, but only as a suboptimal solution. The comprehensive distribution of the maximum feature maps for both the COCO-2014 and COCO-2017 datasets is illustrated in Fig. 15.
The two databases, COCO-2014 and COCO-2017, deliver nearly identical optimal reference integers for adaptive quantization (49.50958 and 49.58947), and both values can be represented in the Q(6.9) fixed-point format. From this analysis it can be concluded that the optimal fixed-point representation format of tiny YOLO3 is Q(6.9).
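The format choice from a maximum value can be sketched as a simple ceiling-of-log₂ rule, consistent with the Q(7.8) and Q(6.9) conclusions above; the helper name and the clamping at zero are ours:

```python
import math

def format_for_max(max_val, nbits=16):
    """Q-format whose integer part just covers max_val: L_FI is the ceiling
    of log2(max_val) (clamped at 0), and the remaining nbits - 1 magnitude
    bits go to the fraction."""
    l_fi = max(0, math.ceil(math.log2(max_val)))
    return l_fi, nbits - 1 - l_fi

print(format_for_max(88.912598))  # (7, 8): largest single feature map -> Q(7.8)
print(format_for_max(49.58947))   # (6, 9): average reference integer  -> Q(6.9)
```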
Performance Analysis of tiny YOLO3

The mean average precision (mAP) is one of the standard criteria for evaluating the performance of DNNs. This experiment compares tiny YOLO3's performance under the optimal 16-bit fixed-point representation (Q(6.9)) and the 32-bit floating-point representation format. The network's high accuracy benefits from the 32-bit floating-point representation, but that format requires complicated circuit control and interface connections among the layers to realize a multi-layer neuromorphic processor-oriented design. The fixed-point quantization technique affords a reliable solution for tiny YOLO3's hardware optimization, reducing memory utilization and circuit complexity. The COCO-2014 and COCO-2017 datasets are again deployed to assess tiny YOLO3's performance with the Q(6.9) fixed-point representation format. Figure 16 illustrates the mAP of tiny YOLO3 using the Q(6.9) and 32-bit floating-point representation formats, demonstrating that the two show almost the same mAP at each IoU threshold (floating-point: [32.48, …]@0.5; Q(6.9): [32.46, 36.28]@0.5).
In addition, the mAP differences between the Q(6.9) and 32-bit floating-point representation formats are in the range of ½À0:003; 0:002. The above result is adequate to validate that the Q(6.9) representation format has the equivalent mAP as the 32-bit floating-point representation form in the tiny YOLO3's performance evaluation. Meanwhile, the 16-bit fixed-point representation format can save half of the memory space and dramatically reduce the circuit design to realize its multi-layer neuromorphic processor.
Hardware verification
A hybrid multiplication module is designed to validate the hardware cost of the proposed method. Specifically, as shown in Fig. 17, the weights and feature maps represented in floating-point format are converted into integer binaries (X_INT^W → X_B^W, X_INT^F → X_B^F) according to Algorithm 4, and their corresponding integer bit lengths (L_FI^W, L_FI^F) are determined using Eq. 3. The proposed hybrid Q-format multiplication module takes both the integer bit lengths and the N-bit fixed-point binaries as inputs. The magnitude of N is designated by the application requirement and can be 16 bits, 8 bits, or another value; if N is known, the input bit width of L_FI^W and L_FI^F is log₂(N). In this section we develop a hybrid Q-format multiplication module that accommodates varying bit lengths via the bit roundoff technique. Developing a uniform-length representation format for the multiplication module is crucial, since most current neuromorphic processors deploy highly parallel general-purpose processing elements to emulate complicated DNN models, as illustrated in Table 1. The bit length of the module's product equals the input bit length of the weights and feature maps, both N bits. As described in Sect. 3.3, since the multiplication of N-bit fixed-point binaries yields a 2N-bit result, the proposed module employs log₂(2N) bits to express the product's integer bit length (L_FI^out). The redundant extended sign bits can be eliminated without impacting the computation accuracy thanks to the bit roundoff block embedded in the multiplication module.
The proposed multiplication module is synthesized with Fujitsu 55 nm complementary metal-oxide-semiconductor (CMOS) technology. Figure 18 depicts a post-synthesis simulation of the hybrid Q-format multiplication, with an operating frequency of 100 MHz and a voltage of 1.2 V. As shown in Table 4, the signed binary product of 16'hA069 and 16'hA382 is 32'b00100010100010010101000001010010 → 32'h22895052. Since the product of Q(0.15) and Q(2.13) can be represented in Q(3.28), both "0"s at the MSB of 32'h22895052 are sign bits. Shifting one bit to the left removes the redundant sign bit and increases the number's resolution: 32'b00.100010100010010101000001010010 → 32'b0.1000101000100101010000010100100. Selecting the first 16 bits as the product yields 16'h4512. Since there is only a 1-bit sign extension, the fixed-point multiplication of 16'hA069_Q(2.13) and 16'hA382_Q(2.13) likewise demands only a one-bit left shift, with the radix point located after the fifth bit. In the simulation, we synthesize five distinct types of multiplication to verify the module's hardware cost: 32-bit × 32-bit, 16-bit × 16-bit, 8-bit × 8-bit, 4-bit × 4-bit, and 2-bit × 2-bit. Table 5 illustrates the post-synthesis results of the hybrid multiplication module's power consumption (dynamic power and static power), area, and delay. Figure 19 gives a detailed analysis of the post-synthesis results. Specifically, as shown in Fig.
19a–e, the internal power dominates the module's total power consumption: 58.9%, 59.6%, 58.6%, 58.9%, and 63.7% for the 32-bit × 32-bit, 16-bit × 16-bit, 8-bit × 8-bit, 4-bit × 4-bit, and 2-bit × 2-bit multiplications, respectively. By comparison, the leakage-power and switching-power ratios across the multiplication types lie in the ranges 1.1–5.5% and 30.8–40.1%, respectively. The 32-bit × 32-bit multiplication consumes 3.37, 21.27, 51.74, and 191.2 times more area than the 16-bit × 16-bit, 8-bit × 8-bit, 4-bit × 4-bit, and 2-bit × 2-bit multiplications, respectively; Fig. 19f offers a normalized area comparison among the various multiplications. As shown in Fig. 19g and h, the 32-bit multiplication module (maximum power and area: 1805.5 µW and 658441 µm²) requires 4.454, 32.63, 97.27, and 627.6 times as much energy as the 16-bit, 8-bit, 4-bit, and 2-bit multiplication modules, respectively. The path between the L_FI output registers and their pins causes a maximum delay of 65.31 ps for all types of multiplication modules (refer to Fig. 19i). Table 6 further contrasts the 8-bit hybrid Q-format multiplication (HQM) module with benchmark multipliers in terms of power, delay, area, and power-delay product (PDP).
As shown in Table 6, the power consumption of the benchmark multipliers ranges over [0.2 mW, 164.8 mW], which is [3.614, 2978.062] times the HQM's power consumption. The delay of the benchmark multipliers ranges over [0.62 ns, 16.69 ns], which is [9.493, 255.55] times the latency of our proposed multiplication module. Owing to the different CMOS technologies adopted, the synthesized area of the HQM is [2.207, 97.705] times greater than those of the existing multipliers ([316.81 µm², 14024 µm²]). Despite the increase in circuit area, the PDP of the HQM is reduced by a factor of roughly [62.533, 56112.725] compared with the benchmark multipliers. In summary, the hybrid multiplier module proposed in this work offers considerably lower power consumption and latency than conventional multipliers.
Conclusion
In this paper, a neuromorphic processor-oriented hybrid multiplication strategy with an adaptive quantization method is proposed for the convolution operation of tiny YOLO3. The lengths of the integer bits and the fraction bits of 16-bit fixed-point representations are adaptively determined based on the range of the 32-bit floating-point numbers, the overflow condition, and the length of the roundoff bits. The experimental results illustrate that the adaptive quantization of weights and feature maps maintains the same object detection accuracy while effectively reducing conversion and roundoff errors between 32-bit floating-point and 16-bit fixed-point representations. In addition, the optimal representation format (Q(6.9)) of 16-bit fixed-point values has been identified as a reference for neuromorphic processor design. Moreover, a hybrid multiplication module with low power consumption and low latency is also designed, laying a solid foundation for the development of neuromorphic processors.

Fig. 19 Post-synthesis results of the hybrid Q-format multiplication module
Appendix: pseudo codes for simulation Algorithm 4 provides the pseudo-codes for the conversion from fixed-point number to binary.
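Although Algorithm 4 itself is not reproduced here, the conversion is easy to prototype. The following sketch (our own illustrative code, not the paper's pseudo-code; the function names and the saturate-on-overflow choice are assumptions) converts a real number to a 16-bit two's-complement bit string in the Q(6.9) format singled out above, and back:

```python
def to_q_format(value: float, int_bits: int = 6, frac_bits: int = 9) -> str:
    """Quantize `value` to a signed fixed-point Q(int_bits.frac_bits) number
    and return its two's-complement binary string.

    Total width = 1 sign bit + int_bits + frac_bits = 16 bits for Q(6.9).
    """
    width = 1 + int_bits + frac_bits
    scaled = round(value * (1 << frac_bits))           # round to nearest LSB
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    scaled = max(lo, min(hi, scaled))                  # saturate on overflow
    return format(scaled & ((1 << width) - 1), f"0{width}b")


def from_q_format(bits: str, frac_bits: int = 9) -> float:
    """Inverse conversion: two's-complement bit string back to a float."""
    raw = int(bits, 2)
    if bits[0] == "1":                                 # negative number
        raw -= 1 << len(bits)
    return raw / (1 << frac_bits)
```

For instance, `to_q_format(1.5)` yields `'0000001100000000'` (768 = 1.5 · 2^9), and round-tripping any in-range value is exact up to the 2^-9 quantization step.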
Acknowledgements This work was supported in part by JSPS KAKENHI under Grant JP21K17719, and in part by the New Energy and Industrial Technology Development Organization (NEDO) and the Center for Innovative Integrated Electronic Systems (CIES) consortium.
Data availability The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Conflict of interest The authors declare that they have no conflict of interest with respect to the research, authorship and/or publication of this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Handling Handles II: Stratification and Data Analysis
In a previous work [1], we proposed an integrability setup for computing non-planar corrections to correlation functions in N=4 super-Yang-Mills theory at any value of the coupling constant. The procedure consists of drawing all possible tree-level graphs on a Riemann surface of given genus, completing each graph to a triangulation, inserting a hexagon form factor into each face, and summing over a complete set of states on each edge of the triangulation. The summation over graphs can be interpreted as a quantization of the string moduli space integration. The quantization requires a careful treatment of the moduli space boundaries, which is realized by subtracting degenerate Riemann surfaces; this procedure is called stratification. In this work, we precisely formulate our proposal and perform several perturbative checks. These checks require hitherto unknown multi-particle mirror contributions at one loop, which we also compute.
Introduction
Like in any perturbative string theory, closed string amplitudes in AdS 5 × S 5 superstring theory are given by integrations over the moduli space of Riemann surfaces of various genus. Like in any large-N c gauge theory, correlation functions of local single-trace gauge-invariant operators in N = 4 SYM theory are given by sums over double-line Feynman (ribbon) graphs of various genus. By virtue of the AdS/CFT duality, these two quantities ought to be the same. Clearly, to better understand the nature of holography, it is crucial to understand how the sum over graphs connects to the integration over the string moduli.
Our proposal in [1] provides one realization. It can be motivated as a finite-coupling extension of a very nice proposal by Razamat [2], built up on the works of Gopakumar et al. [3], which in turn relied on beautiful classical mathematics by Strebel [4], where an isomorphism between the space of metric ribbon graphs and moduli spaces of Riemann surfaces was first understood. 1 Let us briefly describe some of these ideas. Figure 1 is a very inspiring example, so let us explain a few of its features. The figure describes four strings interacting at tree level, i. e. a four-punctured sphere (in the figure, one of the punctures is at infinity). The black lines are sections of the incoming strings. Close to each puncture, the string world-sheet behaves as a normal single string, so here the black lines are simple circles. They are the lines of constant τ for each string. These lines of constant τ need to fit together into a global picture, as shown in the figure. Note that there are four special points, the red crosses, which can be connected along critical lines (the colorful lines), across which we "jump from one string to another". These critical lines define a graph. There is also a dual graph, drawn in gray. 2 This construction creates a map between the moduli space of a four-punctured Riemann sphere and a class of graphs, as anticipated above.
These cartoons can be made mathematically rigorous. For each punctured Riemann surface, there is a unique quadratic differential φ, called the Strebel differential, with fixed residues at each puncture, which decomposes the surface into disk-like regions -the faces delimited by the colorful lines [4] (see the appendices in [2] for a beautiful review). The red crosses are the zeros of the Strebel differential. The line integrals between these critical points, i. e. the integrals along the colorful lines are real, and thus define a (positive) length for each line of the graph. In this way the graph becomes a metric graph. (The sum over the lengths of the critical lines that encircle a puncture equals the residue of the Strebel differential at that puncture by contour integral arguments.) By construction, the critical lines emanating from each zero have a definite ordering around that zero. This ordering can equivalently be achieved by promoting each line to a "ribbon" by giving it a non-zero width; for this reason the relevant graphs are called metric ribbon graphs.
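In formulas (up to normalization conventions, which vary in the literature), the length of a critical line e and the constraint mentioned above read:

\[
\ell_e \;=\; \int_e \sqrt{\phi(z)}\,\mathrm{d}z \;\in\; \mathbb{R}_{+}\,,
\qquad
\sum_{e\,\ni\, p} \ell_e \;=\; r_p\,,
\]

where the second sum runs over the critical lines that encircle the puncture p, and r_p denotes the prescribed residue of the Strebel differential at p.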
Such metric ribbon graphs, like the one on the right of Figure 1, also arise at zero coupling in the dual gauge theory. There, the number associated to each line is nothing but the number of propagators connecting two operators along that line. These numbers are thus integers in this case, as emphasized in [2]. Note that the total number of lines getting out of a given operator is fixed, which is the gauge-theory counterpart of the above contour integral argument.
As such, it is very tempting to propose that we fix the residue of the Strebel differential at each puncture to be equal to the number of fields 3 inside the trace of the dual operator. 4 Then there is a discrete subset of points within the string moduli space where those integer residues are split into integer subsets, which define a valid gauge-theory ribbon graph. By our weak-coupling analysis, it seems that the string path integral is localizing at these points. Note that the graphs defined by the Strebel differential change as we move in the string moduli space, and that all free gauge-theory graphs nicely show up when doing so, such that the map is truly complete. The jump from one graph to another is mathematically very similar to the wall-crossing phenomenon within the space of 4d N = 2 theories [12].
What about finite coupling? Here is where the hexagons come in. The gray lines in Figure 1 typically define a triangulation of the Riemann surface (since the colored dual graph is a cubic graph). The triangular faces become hexagons once we blow up all punctures into small circles, such that small extra segments get inserted into all triangle vertices, effectively converting all triangles into hexagons. In order to glue together these hexagons, we insert a complete basis of (open mirror) string states at each of the gray lines. The sum over these complete bases of states can be thought of as exploring the vicinity of each discrete point in the moduli space, thus covering the full string path integral.

2 In this example, both the graph and its dual graph are cubic graphs, but this is not necessarily true in general. 3 The "number of fields" is inherently a weak-coupling concept, which could be replaced by e.g. the total R-charge of the operator. 4 Note that until now the value of the residue remained arbitrary. Indeed, the map between the space of metric ribbon graphs Γ_{n,g} and the moduli space of Riemann surfaces M_{n,g} conveniently contains a factor of R^n_+, as M_{n,g} × R^n_+ ≅ Γ_{n,g}, so we can think of the space of metric ribbon graphs as a fibration over the Riemann-surface moduli space. Fixing the residues of the Strebel differential to the natural gauge-theory values simply amounts to picking a section of this fibration.
For correlation functions of more/fewer operators, and/or different worldsheet genus, the picture is very similar. What changes, of course, is the number of zeros of the Strebel differential, 5 that is, the number of hexagon operators we should glue together. In the example above, we had four red crosses, that is, four hexagons. This number is very easy to understand. Topologically, a four-point function can be thought of as gluing together two pairs of pants, and each pair of pants is the union of two hexagons. To obtain a genus-g correlation function of n closed strings, we would glue together 2n + 4g − 4 hexagons. We ought to glue all these hexagons together and sum over a complete basis of mirror states on each gluing line. Each hexagon has three such mirror lines, as illustrated in Figure 1, and each line is shared by two hexagons, so there will be a (3n + 6g − 6)-fold sum over mirror states. 6 This is admittedly a hard task, but, until now, there is no alternative for studying correlation functions at finite coupling and genus in this gauge theory. So this is the best we have thus far. 7 For higher genus, i.e. as we venture into the non-planar regime, there is a final and very important ingredient called the stratification, which appeared already in the context of matrix models [15,16], and which gives this paper its name. It can be motivated from gauge theory as well as from string theory considerations. From the gauge theory viewpoint, it is clear that simply drawing all tree-level graphs of a given genus and dressing them with hexagons and mirror states cannot be the full story: as we go to higher loops in the 't Hooft coupling, there will be handles formed by purely virtual processes, which are not present at lower orders. So including only genus-g tree-level graphs misses some contributions. One naive idea would be to include, at a given genus, all graphs which can be drawn on surfaces of that genus or less.
But this would be no good either, as it would vastly over-count contributions. The stratification procedure explained in this paper prescribes precisely which contributions have to be added or subtracted, so that (we hope) everything works out. From a string theory perspective, this stratification originates in the boundaries of the moduli space. We can have tori, for example, degenerating into spheres, and to properly avoid missing (or double-counting) such degenerate contributions, we need to carefully understand what to sum over.

5 The zeros of the Strebel differential may vary in degree. The number of zeros equals the number of faces of the (dual) graph, whereas the sum of their degrees equals the number of hexagons. 6 Note that we should also sum over the lengths associated to the gluing lines. These lines always connect two physical operators, with the n constraints that the sum of lengths leaving each puncture equals the length (charge) of the corresponding physical operator, such that one ends up with a (2n + 6g − 6)-dimensional sum, which is the appropriate dimension of the string moduli space. For instance, for n = 4 and g = 0 we have a two-fold sum, which matches nicely with the two real parameters of the complex position of the fourth puncture on the sphere, once the other three positions are fixed. 7 Of course, there are simplifying limits. In perturbation theory, most of these sums collapse, since it is costly to create and annihilate mirror particles. Hence, the hexagonalization procedure often becomes quite efficient, see e.g. [13]. At strong coupling, the sums sometimes exponentiate and can be resummed, see e.g. [14]. And for very large operators, the various lengths that have to be traversed by mirror states as we glue together two hexagons are often very large, projecting the state sum onto the lowest-energy states, thus also simplifying the computations greatly, as in [1].

In more conventional string perturbation theory,
we are used to continuous integrations over the moduli space, where such degenerate contributions typically amount to measure-zero sets, which we can ignore. But here -as emphasized above and already proposed in [2] -the sum is rather a discrete one, hence missing or adding particular terms matters.
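As an aside, the various countings quoted above (hexagons, mirror lines, and the dimension of the sum) all follow from the Euler characteristic and are easy to cross-check mechanically; the following little sketch (our own helper functions) verifies them for a range of (n, g):

```python
def num_hexagons(n: int, g: int) -> int:
    # A genus-g n-point correlator glues together 2n + 4g - 4 hexagonal patches.
    return 2 * n + 4 * g - 4

def num_mirror_edges(n: int, g: int) -> int:
    # Each hexagon has 3 mirror edges, and each edge is shared by two hexagons.
    return 3 * num_hexagons(n, g) // 2

def moduli_dimension(n: int, g: int) -> int:
    # Real dimension of the moduli space of genus-g surfaces with n punctures.
    return 6 * g - 6 + 2 * n

# Tree-level four-point function: four hexagons, six mirror lines.
assert num_hexagons(4, 0) == 4 and num_mirror_edges(4, 0) == 6

# After imposing the n length constraints (one per puncture), the number of
# independent bridge lengths matches the moduli-space dimension 2n + 6g - 6.
for n in range(2, 7):
    for g in range(0, 4):
        assert num_mirror_edges(n, g) - n == moduli_dimension(n, g)

# Genus-one four-point function: the torus is tiled by eight hexagons.
assert num_hexagons(4, 1) == 8
```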
All in all, our final proposal can be summarized in equation (2.2) below, where the seemingly innocuous S operation is the stratification procedure, which is further motivated and made precise below, see e. g. (2.17) for a taste of what it ends up looking like.
In the end, all this is a plausible yet conjectural picture. Clearly, many checks are crucial to validate this proposal, and to iron out its details. A most obvious test is to carry out the hexagonalization and stratification procedure to study the first non-planar quantum correction to a gauge-theory four-point correlation function, and to compare the result with available perturbative data. That is what this paper is about.
Developing the Proposal
In the following, we introduce our main formula and explain its ingredients in Section 2.1. In the subsequent Section 2.2, we explain the summation over graphs at the example of a four-point function on the torus. Section 2.3 and Section 2.4 are devoted to the effects of stratification.
The Main Formula
Recall that in a general large-N_c gauge theory with adjoint matter, each Feynman diagram is assigned a genus by promoting all propagators to double lines (pairs of fundamental color lines). At each single-trace operator insertion, the color trace induces a definite ordering of the attached (double) lines. By this ordering, the color lines of the resulting double-line graph form well-defined closed loops. Assigning an oriented disk (face) to each of these color loops, we obtain an oriented compact surface. The genus of the graph (Wick contraction) is the genus of this surface. Counting powers of N_c and g_YM^2 for propagators (∼ g_YM^2), vertices (∼ 1/g_YM^2), and faces (∼ N_c), taking into account that every operator insertion adds a boundary component to the surface, absorbing one power of N_c into the 't Hooft coupling λ = g_YM^2 N_c, and using the formula for the Euler characteristic, we arrive at the well-known genus expansion formula [17] for connected correlators of (canonically normalized) single-trace operators O_i:

\[ \langle O_1 \cdots O_n \rangle \;=\; \sum_{g=0}^{\infty} N_c^{\,2-2g-n}\, G^{(g)}_{1,\dots,n}(\lambda) \,. \tag{2.1} \]

Here, G^{(g)}_{1,...,n}(λ) is the correlator restricted to genus-g contributions. Via the AdS/CFT duality, the surface defined by Feynman diagrams at large N_c becomes the worldsheet of the dual string with n vertex operator insertions.
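The genus of a given Wick contraction can be computed exactly as described: trace the closed color loops of the double-line graph and apply the Euler formula. A minimal sketch (our own encoding, representing the ribbon graph by permutations on half-edges) could look as follows:

```python
from itertools import chain

def genus(rotation, pairing):
    """Genus of the ribbon graph encoded by:
    - rotation: list of half-edge cycles, one per vertex (operator), giving the
      cyclic ordering induced by the color trace;
    - pairing: dict matching each half-edge to its partner (one propagator).
    The color loops (faces) are the cycles of h -> sigma(pairing[h])."""
    # sigma: next half-edge counterclockwise around the same vertex
    sigma = {}
    for cycle in rotation:
        for i, h in enumerate(cycle):
            sigma[h] = cycle[(i + 1) % len(cycle)]
    # count faces by following the closed color loops
    seen, faces = set(), 0
    for h in chain.from_iterable(rotation):
        if h not in seen:
            faces += 1
            while h not in seen:
                seen.add(h)
                h = sigma[pairing[h]]
    V, E = len(rotation), len(pairing) // 2
    chi = V - E + faces        # Euler characteristic = 2 - 2g
    return (2 - chi) // 2

# Planar "theta" graph: two operators joined by three parallel propagators.
assert genus([[0, 1, 2], [3, 4, 5]], {0: 5, 5: 0, 1: 4, 4: 1, 2: 3, 3: 2}) == 0
# One vertex with two interleaved self-contractions: the figure-eight on a torus.
assert genus([[0, 1, 2, 3]], {0: 2, 2: 0, 1: 3, 3: 1}) == 1
```

The second example shows how interleaving two contractions in the cyclic order forces a handle: the same two edges, paired without interleaving, would close three color loops instead of one.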
The purpose of this paper is to give a concrete and explicit realization of the general large-N_c genus expansion formula (2.1) for the case of N = 4 super Yang-Mills theory. The proposed formula is based on the integrability of the (gauge/worldsheet/string) theory, and should be valid at any order in the 't Hooft coupling constant λ. The general formula reads

\[ G^{(g)}_{1,\dots,n}(\lambda) \;=\; \mathbb{S} \sum_{\Gamma,\; g(\Gamma)=g} \frac{1}{|\mathrm{Aut}(\Gamma)|} \prod_{b} d_b^{\,\ell_b} \int_{\psi_b \in M_b} W(\psi_b) \prod_{a} H_a \,. \tag{2.2} \]

Let us explain the ingredients: The operators Q_i we consider are half-BPS operators, which are characterized by a position x_i, an internal polarization α_i, and a weight k_i,

\[ Q_i \;=\; \mathrm{tr}\,\big(\alpha_i \cdot \Phi(x_i)\big)^{k_i} \,. \tag{2.3} \]

Here, Φ = (Φ_1, . . . , Φ_6) are the six real scalar fields of N = 4 super Yang-Mills theory, and α_i is a six-dimensional null vector. We start with the set Γ of all Wick contractions of the n operators in the free theory. Each Wick contraction defines a graph, whose edges are the propagators. We will use the terms "graph" and "Wick contraction" interchangeably. By the procedure described above, we can associate a compact oriented surface to each Wick contraction, and thereby define the genus g(Γ) of any given graph Γ. Importantly, the edges emanating from each operator have a definite ordering around that operator due to the color trace in (2.3). Next, we promote each graph Γ to a triangulation Γ̂ in two steps: First, we identify ("glue together") all homotopically equivalent (that is, parallel and non-crossing) lines of the original graph Γ. The resulting graph is called a skeleton graph. We can assign a "width" to each line of the skeleton graph, which equals the number of lines (propagators) that have been identified. Each line of the skeleton graph is called a bridge b, and the width of the line is conventionally called the bridge length ℓ_b. There is a propagator factor d_b^{ℓ_b} for each bridge. By definition, each face of a skeleton graph is bounded by three or more bridges.
In a second step, we subdivide faces that are bounded by m > 3 bridges into triangles by inserting m − 3 further zero-length bridges (ZLBs). Using the formula for the Euler characteristic, one finds that the fully triangulated graph Γ̂ has 2n + 4g(Γ) − 4 faces.
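The face count follows directly from the Euler characteristic: a triangulated surface with V = n vertices satisfies 3F = 2E (every face has three edges, every edge borders two faces), so

\[
n - E + F \;=\; 2 - 2g\,, \qquad E = \tfrac{3}{2}\,F
\quad\Longrightarrow\quad
F \;=\; 2n + 4g - 4\,, \qquad E \;=\; 3n + 6g - 6\,,
\]

in agreement with the counting of hexagons and mirror lines in the introduction.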
For each bridge b of the triangulated skeleton graph Γ̂, we integrate over a complete set of states ψ_b living on that bridge, and we insert a weight factor W(ψ_b). The weight factor measures the charges of the state ψ_b under a superconformal transformation that relates the two adjacent triangular faces; it thus depends on both the cross ratios of the four neighboring vertices and on the labels of the state ψ_b. The worldsheet theory on each bridge is a "mirror theory", which is obtained from the physical worldsheet theory by an analytic continuation via a double-Wick (or 90 degree) rotation. States in this theory are composed of magnons with definite rapidities u_i ∈ R and bound-state indices a_i ∈ Z_{≥1}. A complete set of states is given by all Bethe states, where each Bethe state is characterized by the number m of magnons, their rapidities {u_1, . . . , u_m}, their bound-state indices {a_1, . . . , a_m}, and their su(2|2)^2 flavor labels (A, Ȧ). The integration over the space M_b of mirror states hence expands to

\[ \int_{\psi_b \in M_b} \;=\; \sum_{m=0}^{\infty} \frac{1}{m!} \sum_{a_1,\dots,a_m \geq 1} \int \prod_{i=1}^{m} \frac{\mathrm{d}u_i}{2\pi}\, \mu_{a_i}(u_i)\, e^{-\tilde{E}_{a_i}(u_i)\,\ell_b} \,, \tag{2.4} \]

where μ_{a_i}(u_i) is a measure factor, Ẽ_{a_i}(u_i) is the mirror energy, ℓ_b is the length of the bridge b, and the exponential is a Boltzmann factor for the propagation of the mirror particles across the bridge.
Finally, each face a of the triangulated skeleton graph Γ̂ carries one hexagon form factor H_a, which accounts for the interactions among the three physical operators Q_i, Q_j, Q_k as well as the mirror states on the three edges b_1, b_2, b_3 adjacent to the face. It is therefore a function of all this data:

\[ H_a \;=\; H_a\big(Q_i, Q_j, Q_k;\, \psi_{b_1}, \psi_{b_2}, \psi_{b_3}\big) \,. \tag{2.5} \]

The hexagon form factor is a worldsheet branching operator that inserts an excess angle of π on the worldsheet. It has been introduced in [9] for the purpose of computing planar three-point functions, and has later been applied to compute planar four-point [5,6] and five-point functions [8]. Our formula (2.2) is an extension and generalization of these works to the non-planar regime. Notably, all ingredients of the formula (2.2) (measures μ_a(u), mirror energies Ẽ_a(u), and hexagon form factors H) are known as exact functions of the coupling λ, and hence the formula should be valid at finite coupling. The hexagon form factors are given in terms of the Beisert S-matrix [20], the dressing phase [21], as well as analytic continuations among the three physical and the three mirror theories on the perimeter of the hexagon [9]. Unlike the general genus expansion (2.1), the formula (2.2) nicely separates the combinatorial sum over graphs and topologies from the coupling dependence, since the sum over graphs only runs over Wick contractions of the free theory. At any fixed genus, the list of contributing graphs can be constructed once and for all. The dependence on the coupling λ sits purely in the dynamics of the integrable hexagonal patches of worldsheet H and their gluing properties.
Finally, we have the very important stratification operation indicated by the operator S in (2.2). The basic idea, already anticipated in the introduction, is that the sum over graphs mimics the integration over the string moduli space, which contains boundaries. At those boundaries, it is crucial to avoid missing or over-counting contributions, especially in a discrete sum such as we have here. Despite its innocuous appearance, it is perhaps the most non-trivial aspect of this paper and is discussed in great detail below; the curious reader can take a quick peek at (2.17) below.
In the remainder of this paper, we will flesh out the details of the formula (2.2), test it against known perturbative data at genus one, and use it to make a few higher-loop predictions.
Polygonization and Hexagonalization
The combinatorial part of the prescription is to sum over planar contractions of n operators on a surface of given genus. We refer to this step as the polygonization. This task can be split into three steps: (1) construct all inequivalent skeleton graphs with n vertices on the given surface (excluding edges that connect a vertex to itself), (2) sum over all inequivalent labelings of the vertices and identify each labeled vertex with one of the operators, and (3) for each labeled skeleton graph, sum over all possible distributions of propagators on the edges (bridges) of the graph that are compatible with the choice of operators, such that each edge carries at least one propagator. Maximal Graphs on the Torus. In the following, we will construct all inequivalent graphs with four vertices on the torus. To begin, we classify all graphs with a maximal number of edges. All other graphs (including those with genus zero) will be obtained from these "maximal" graphs by deleting edges. The maximal number of edges of a graph with four vertices on the torus is 12. Graphs with 12 edges cut the torus into 8 triangles.
For some maximal graphs, the number of edges drops to 11 or 10; such graphs include squares involving only two of the four vertices. Once we blow up the operator insertions to finite-size circles, all triangles will become hexagons, all squares will become octagons, and more generally all n-gons will become 2n-gons. We classify all possible maximal graphs by first putting only two operators on the torus, and by listing all inequivalent ways to contract those two operators. This results in a torus cut into some faces by the bridges among the two operators. Subsequently, we insert two more operators in all possible ways, and add as many bridges as possible. We end up with the 16 inequivalent graphs shown in Table 1. Let us explain how we arrive at this classification: Two operators on the torus can be connected by at most four bridges. It is useful to draw such a configuration as follows:

[figure]  (2.6)

where the box represents the torus, with opposing edges identified. The four bridges cut the torus into two octagons. Placing one further operator into each octagon and adding all possible bridges gives case 1.1 in Table 1. When both further operators are placed in the same octagon, there are two inequivalent ways to distribute the bridges; these are the cases 1.2.1 and 1.2.2 (here, the fundamental domain of the torus has been shifted to put the initial octagon in the center). Since each edge in general represents multiple propagators, we also need to consider cases where the two further operators are placed inside the bridges of (2.6). Placing one operator in one of the bridges and the other operator into one of the octagons gives case 1.3 in Table 1. Placing both operators in separate bridges gives cases 1.4.1 and 1.4.2. Placing both operators into the same bridge yields cases 1.5.1, 1.5.2, and 1.5.3. Finally, placing the third operator inside one of the octagons and the fourth operator into one of the bridges attached to the third operator results in case 1.6.
Next, we need to consider cases where no two operators are connected by more than three bridges (otherwise we would end up with one of the previous cases). Again we start by putting only two operators on the torus. Connecting them by three bridges cuts the torus into one big dodecagon, which we can depict in two useful ways:

[figures]  (2.7)

In the right figure, opposing bridges are identified, and we have shaded the two operators to clarify which ones are identical. Placing the two further operators into the dodecagon results in the three inequivalent bridge configurations 2.1.1, 2.1.2, and 2.1.3 in Table 1.
Placing one operator into one of the bridges in (2.7) results in graph 2.2. We do not need to consider placing both operators into bridges, as the resulting graph would not have a maximal number of edges (and thus can be obtained from a maximal graph by deleting edges). Finally, we have to consider cases where no two operators are connected by more than two bridges. In this case, it is easy to convince oneself that all pairs of operators must be connected by exactly two bridges. We can classify the cases by picking one operator (1) and enumerating the possible orderings of its bridges to the other three operators (2,3,4). It turns out that there are only two distinguishable orderings (up to permutations of the operators): (2,3,2,4,3,4) and (2,3,4,2,3,4). In each case, there is only one way to distribute the remaining bridges (such that no two operators are connected by more than two bridges):

[figures]
These are the graphs 3.1 and 3.2 in Table 1. This completes the classification of maximal graphs. In Appendix B.1, we discuss an alternative way (an algorithm that can be implemented for example in Mathematica) of obtaining the complete set of maximal graphs for any genus and any number of operator insertions.
Non-Maximal Polygonizations. In the above classification of maximal graphs, each edge stands for one or more parallel propagators. In order to account for all possible ways of contracting four operators on the torus, we also have to admit cases where some edges carry zero propagators. We capture those cases by also summing over graphs with fewer edges. All of these can be obtained from the set of maximal graphs by iteratively removing edges. When we remove edges from all maximal graphs in all possible ways, many of the resulting graphs will be identical, so those have to be identified in order to avoid over-counting.
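Step (3) of the polygonization, combined with the zero-propagator cases just described, amounts to enumerating non-negative integer bridge lengths with fixed sums at each operator. A brute-force sketch (our own helper, practical only for small weights):

```python
from itertools import product

def bridge_assignments(edges, weights):
    """All assignments of propagator numbers l_b >= 0 to the edges (given as
    pairs of vertex indices) such that the propagator endpoints at each
    vertex i sum to the operator weight k_i.
    Edges with l_b = 0 correspond to deleted bridges of the skeleton graph."""
    k_max = max(weights)
    result = []
    for lengths in product(range(k_max + 1), repeat=len(edges)):
        sums = [0] * len(weights)
        for (i, j), l in zip(edges, lengths):
            sums[i] += l
            sums[j] += l
        if sums == list(weights):
            result.append(lengths)
    return result

# Four weight-2 operators contracted along a square: the solutions are
# l_01 = l_23 = t and l_12 = l_30 = 2 - t for t = 0, 1, 2.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert len(bridge_assignments(square, (2, 2, 2, 2))) == 3
```

In a realistic implementation one would of course organize this sum per skeleton graph and quotient by graph isomorphisms, but the constraint structure is exactly the one shown here.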
Hexagonalization. The next step in our prescription is to tile all graphs of the polygonization with hexagon form factors, which we refer to as the hexagonalization of the correlator. For many of the maximal graphs, the hexagonalization is straightforward, as every face has three edges connecting three operators, giving room to exactly one hexagon. But some maximal graphs, and in particular graphs with fewer edges, include higher polygons, which have to be subdivided into several hexagons. A polygon with m edges (and m cusps) subdivides into m − 2 hexagons, which are separated by m − 3 zero-length bridges (ZLBs). In this way, the torus with four punctures always gets subdivided into eight hexagons. Later on, each of these hexagons will be dressed with virtual particles placed on the mirror edges or bridges, which will generate the quantum corrections to the correlator under study, and which we refer to as sprinkling. The general counting of the loop order involved in a general sprinkling is illustrated in Figure 2.
Let us illustrate the hexagonalization with an example. Take the maximal graph 1.1 of Table 1, and remove the horizontal lines in the middle, as well as the diagonal lines connecting the lower operator with the lower two corners. The resulting graph is depicted in Figure 3. It has eight edges that divide the torus into four octagons. Each octagon gets subdivided into two hexagons by one zero-length bridge, as shown in Figure 4. In this case, the hexagonalization meant nothing but reinstating the deleted bridges as ZLBs. We can now draw the hexagon decomposition in a way that makes the hexagonal tiles more explicit. This results in the hexagon tiling shown in Figure 5.
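The bookkeeping of this subdivision can be checked mechanically: the graph of Figure 3 has four faces, each bounded by four bridges, and the trivial helper below (our own code) confirms the eight hexagons and the triangulation condition.

```python
def tile(face_sizes):
    """Given the number of bridges bounding each face of a skeleton graph,
    return (number of hexagons, number of zero-length bridges) after
    subdividing every m-sided face into m - 2 hexagons using m - 3 ZLBs."""
    hexagons = sum(m - 2 for m in face_sizes)
    zlbs = sum(m - 3 for m in face_sizes)
    return hexagons, zlbs

# Figure 3: eight edges cut the torus into four octagons (4 bridges each).
hexagons, zlbs = tile([4, 4, 4, 4])
assert (hexagons, zlbs) == (8, 4)

# Consistency: after inserting the ZLBs the tiling is a triangulation,
# so the total number of edges must equal 3F/2.
assert 8 + zlbs == 3 * hexagons // 2
```

A fully triangulated maximal graph, with eight triangular faces, gives `tile([3] * 8) == (8, 0)`: again eight hexagons, now with no ZLBs needed.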
Dressing a skeleton graph such as the one in Figure 3 with ZLBs is not unique: Each octagon has two diagonals that we could choose to become ZLBs. The final answer will be independent of this choice. This property of the hexagonalization is called flip invariance [5]. Hence we can choose any way to cut bigger polygons into hexagons.
Ribbon Graph Automorphisms and Symmetry Factors.
When we perform the sum over all graphs and all bridge lengths on the torus (or higher-genus surface), we need to multiply some graphs by appropriate symmetry factors. The graphs we have been classifying are ribbon graphs. In order to understand the symmetry factors, we will take a closer look at the formal definition of these ribbon graphs. A ribbon graph is an ordinary graph together with a cyclic ordering of the edges at each vertex. More formally, ribbon graphs are defined through pairing schemes: Let V be a collection of non-empty ordered sets V_j, where each vertex V_j represents one of the operators, and the elements V_{ji} label the (half-)bridges attached to operator j. The degree of V_j is the number of bridges attached to the operator. A pairing scheme P is a partition of all the half-bridges V_{ji} into pairs, each pair forming one edge (bridge). P defines a ribbon graph, but also specifies a marked beginning of the ordered sequence of edges (bridges) attached to each vertex. Pairing schemes are promoted to ribbon graphs by the natural action of the group of orientation-preserving isomorphisms

\[ G \;=\; \prod_{k=1}^{m} S_{n_k} \ltimes \big(\mathbb{Z}/k\mathbb{Z}\big)^{n_k} \,. \tag{2.9} \]

Here, n_k is the number of vertices of degree k, m is the maximal degree, S_{n_k} permutes vertices of the same degree, and (Z/kZ)^{n_k} rotates vertices of degree k. Each orbit G.P of the group action defines a ribbon graph. In other words, a ribbon graph Γ associated with a pairing scheme P is the equivalence class of P with respect to the action of G.

Figure 2 (caption): Loop-order counting for a sprinkling with n_i mirror particles on the three bridges (of lengths ℓ_i) around a hexagon, where n_3 is the largest number (or tied for largest): the cost is g^{n_1 ℓ_1 + n_2 ℓ_2 + n_3 ℓ_3} times g^{(n_3 − n_2 − n_1)^2} if n_3 > n_2 + n_1, times g^1 if n_3 ≤ n_2 + n_1 and n_1 + n_2 + n_3 is odd, and times g^0 if n_3 ≤ n_2 + n_1 and n_1 + n_2 + n_3 is even. For the depicted example, this gives O(g^{9+9+16}) when all mirror bridges have zero length. To estimate at which loop order a given sprinkling pattern will start contributing, we can focus on each hexagon. We absorb in each hexagon one half (i.e. the square root) of the measures and mirror-particle propagation factors of the three adjacent mirror edges. We can then estimate the loop order of a given populated hexagon by noting that this object has residues where particles decouple among themselves. For example, the middle hexagon in the bottom picture must cost no coupling, since it contains residues where all particles annihilate, leaving an empty hexagon whose expectation value is just 1. In other words, in this example, what costs (a lot of!) loops is to create the particles in the surrounding hexagons; once they are created, they can freely propagate through the middle hexagon (e.g. following the interior of the dashed regions), and that costs no coupling at all. The general loop counting is presented for completeness at the top. See also [22].
Typically an element of the group (2.9) maps a given pairing scheme P to a different pairing scheme P′ (by permuting vertices and/or shifting the marked beginnings of the ordered sequences of edges/bridges at each vertex/operator). However, some group elements may map a pairing scheme P to itself. If Γ is a ribbon graph associated with a pairing scheme P, then the subgroup of (2.9) preserving P is called the automorphism group Aut(Γ) of Γ. 13 Assigning a positive real number to each edge of a ribbon graph promotes it to a metric ribbon graph. The number assigned to a given edge is called the length of that edge. Therefore, a graph with assigned bridge lengths is a metric ribbon graph (with integer edge lengths). The notion of automorphism group extends to metric ribbon graphs in an obvious way.
In the sum over graphs and bridge lengths, we need to divide the contribution of each graph with assigned bridge lengths (metric graph) by the size of its automorphism group. These are the symmetry factors mentioned at the beginning of this subsection.
Let us illustrate the idea with an example. Consider the following rather symmetric ribbon graph with eight edges, with all bridge lengths set to one: In the left picture, the graph is represented by an arbitrarily chosen pairing scheme, where the beginnings/ends of the edge sequences at each vertex are indicated by the small blue cuts. The second picture shows the pairing scheme obtained by applying an isomorphism g ∈ G that cyclically rotates all vertices by two sites. In the second step, we shift the cycles along which we cut the torus in order to represent it in the plane. As a result, we see that the pairing scheme after applying g is the same as the original pairing scheme on the left. Thus this graph has to be counted with a symmetry factor of 1/2 (there is no other non-trivial combination of rotations that leave the graph invariant, and hence the automorphism group has size 2). If we increase the bridge length on two of the edges to two, we find the following: As can be seen from the pictures, applying the same group element to the original pairing scheme results in a different pairing scheme that cannot be brought back to the original by any trivial operation. In this case, the automorphism group is trivial, and the graph has to be counted with trivial factor 1.
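The automorphism counting in this example can be mimicked by a small brute-force computation. The sketch below is a toy illustration (not one of the graphs above, and the helper name `automorphism_count` is ours): it encodes a pairing scheme as a set of paired half-edges and counts the elements of the group (2.9), vertex permutations combined with cyclic rotations, that preserve it.

```python
from itertools import permutations, product

def automorphism_count(degrees, pairing):
    """Count elements of the group of vertex permutations (between
    equal-degree vertices) and per-vertex cyclic rotations that map
    the pairing scheme to itself.

    degrees: list of vertex degrees.
    pairing: set of frozensets of half-edges (vertex, position)."""
    n = len(degrees)
    count = 0
    for perm in permutations(range(n)):
        # S_{n_k}: only permutations preserving the degree profile
        if any(degrees[perm[v]] != degrees[v] for v in range(n)):
            continue
        for rots in product(*[range(k) for k in degrees]):
            # (Z/kZ)^{n_k}: rotate the cyclic order at each vertex
            def act(h):
                v, i = h
                w = perm[v]
                return (w, (i + rots[w]) % degrees[w])
            image = {frozenset(map(act, pair)) for pair in pairing}
            if image == pairing:
                count += 1
    return count

# Toy graph: two degree-4 vertices joined by four parallel edges,
# half-edge (0, i) paired with (1, 3 - i).
pairing = {frozenset({(0, i), (1, 3 - i)}) for i in range(4)}
print(automorphism_count([4, 4], pairing))  # -> 8
```

Dividing each graph's contribution by such a count implements the symmetry factors; for the toy graph above, 8 of the 32 group elements fix the pairing scheme.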
The symmetry factors can also be understood from the point of view of field contractions: When writing the sum over contractions as a sum over graphs and bridge lengths, we pull out an overall factor of k^4 that accounts for all possible rotations of the four single-trace operators. For some graphs and choices of bridge lengths, non-trivial rotations of the four operators can lead to identical contractions, which are thus over-counted by the overall factor k^4. This can be seen explicitly in the above example (2.10). Dividing by the size of the automorphism group exactly cancels this over-counting.
Stratification
The fact that we are basing the contribution at a given genus g on the sum over graphs of genus g is of course natural from the point of view of perturbative gauge theory: Each graph with assigned bridge lengths is equivalent to a Feynman graph of the free theory. Summing over graphs of genus g and over bridge lengths (weighted by automorphism factors) is therefore equivalent to summing over all free-theory Feynman graphs of genus g. All perturbative corrections associated to a given graph are captured by the product of hexagon form factors as well as the sums and integrations over mirror states associated to that graph. It is clear that this prescription cannot be complete, as it does not include loop corrections that increase the genus of the underlying free graph. It also omits contributions from disconnected free graphs that become connected after adding interactions. In other words, it does not include contributions from handles or connections formed purely by virtual processes. We can include such contributions by drawing lower-genus and disconnected graphs on a genus-g surface in all possible ways, and tessellating the genus-g surface into hexagons, including the handles not covered by the lower-genus graph. Weighting such contributions by the same genus-counting factor N^{2-2g-n} as the honest genus-g graphs, we include all virtual processes that contribute at this genus. In other words, the sum over graphs in (2.2) has to be replaced as in (2.12), where Σ_g is the set of all graphs, connected or disconnected, of genus g or smaller. For graphs whose genus is smaller than g, the symbol Γ ∈ Σ_g has to carry not only the information of the graph itself, but also of its embedding in the genus-g surface. The embedding can for example be encoded by marking all pairs of faces of the graph to which an extra handle is attached.
While this prescription solves the problem of capturing all genus-g contributions, it also spoils the result by including genuine lower-genus contributions. Namely, the loop expansion of the hexagon gluing (sum over mirror states) will also include processes where one or more extra handles (those not covered by the graph) remain completely void. Such void handles can be pinched. Pinching a handle reduces the genus, hence such contributions do not belong to the genus-g answer. However, we can get rid of these unwanted contributions by subtracting the same lower-genus graphs, but now drawn on a surface where a handle has been pinched. Pinching a handle reduces the genus by one, leaving two marked points on the reduced surface. For an n-point function, we hence have to subtract all n-point graphs drawn on a genus (g − 1) surface with 2 marked points. Such contributions naturally come with the correct genus-counting factor N^{2-2(g-1)-(n+2)} = N^{2-2g-n}. Hence we have to refine (2.12) to (2.13), where Σ^2_{g-1} is the set of all graphs of genus (g − 1) or smaller embedded in a genus (g − 1) surface, with two marked points inserted into any two faces of the graph (or both marked points inserted into the same face). This subtraction correctly removes all excess contributions from the first sum that have exactly one void handle. In contrast, the excess contributions with two void handles are contained twice in the subtraction sum, once for each handle that can be pinched. We have to re-add these contributions once by further refining (2.13) to (2.14), where now Σ^4_{g-2} is the set of all graphs of genus (g − 2) or smaller embedded in a genus (g − 2) surface, with two pairs of marked points inserted into any four (or fewer) faces of the graph. This procedure iterates, leading to the refinement RHS of (2.14) → (2.15). Under the degenerations discussed thus far, the Riemann surface stays connected.
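The alternating additions and subtractions follow the inclusion-exclusion pattern: a contribution in which v of the extra handles remain void is counted once by the first sum, subtracted binomial(v,1) times, re-added binomial(v,2) times, and so on, so its net coefficient is sum_j (-1)^j binomial(v,j), which equals 1 for v = 0 and vanishes for every v ≥ 1. A few lines of Python (ours, purely illustrative) confirm this counting:

```python
from math import comb

def net_coefficient(v):
    """Net multiplicity of a contribution with v void handles after the
    alternating stratification additions and subtractions."""
    return sum((-1) ** j * comb(v, j) for j in range(v + 1))

# Only configurations with no void handle survive:
for v in range(5):
    print(v, net_coefficient(v))  # 1 for v = 0, else 0
```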
There are also degenerations that split the Riemann surface into two components by pinching an intermediate cylinder. These degenerations, too, have to be subtracted in order to cancel unwanted contributions (which originate from disconnected propagator graphs, or from purely virtual "vacuum" loops). Such degenerations split a Riemann surface of genus g with n punctures into two components with genus g_1 and g_2 that contain n_1 and n_2 punctures, such that g_1 + g_2 = g and n_1 + n_2 = n. Each component carries one marked point that remains from pinching. Such contributions also come with the correct genus-counting factor (2.16). Again, the pinching process can iterate, splitting the surface into more and more components. 14 We will comment on this type of contributions at the end of Section 5 and in Appendix F.
Summing all possible degenerations with their respective signs, we arrive at the following final formula, which is a further refinement of (2.15): (2.17) Here, c counts the number of components of the surface, and the sum over τ runs over the set of all genus-g topologies with c components and n punctures, where (g_i, n_i, m_i) labels the genus, the number of punctures, and the number of marked points on component i. Finally, we sum over the set Σ_τ of all graphs Γ (connected and disconnected) that are compatible with the topology τ and that are embedded in the surface defined by τ in all inequivalent possible ways (Γ may cover all or only some components of the surface).
In the rightmost expression, we have defined the stratification operator S, which implements the refinement of adding and subtracting graphs on surfaces of genus ≤ g with marked points as just explained. It appears intricate as it stands, but we will see below that it is less complicated than it looks.
We motivated this proposal from gauge theory considerations. We could have arrived at the very same expression by following the string moduli space considerations explained in the introduction, by carefully subtracting the boundary of the discretized moduli space [15]. 15 Example. Let us illustrate the above construction with an important example. Consider the correlator for four equal-length single-trace operators Q_1, . . . , Q_4 that are chosen such that the fields in Q_1 cannot contract with the fields in Q_4, and the fields in Q_2 cannot contract with the fields in Q_3. Correlators of this type are studied throughout the rest of this paper. For such correlators, there is only one planar graph, shown in (2.19).
…component is a pair of pants (sphere with three punctures and/or marked points). This bound is saturated when we perform the reduction starting from a maximally disconnected planar graph that is embedded on the surface in a disk-like region (i. e. without any windings). For even n, a maximally disconnected planar graph has n/2 components, each consisting of two operators connected by a single bridge. In this case, the maximal degeneration consists of spheres that contain either one component of the graph and one marked point, or no part of the graph and three marked points. For odd n, a maximally disconnected planar graph has (n − 1)/2 components, where one of the components is a triangular three-point graph (because every operator has at least one bridge attached). In this case, the maximal number of degenerations is 3g + n − 4, resulting in 2g + n − 3 surface components.

15 The map between the moduli space and metric ribbon graphs induces a cell decomposition on the moduli space. The highest-dimensional cells are covered by graphs with a maximal number of edges. Cell boundaries are reached by sending some bridge length to zero. (The neighboring cell is reached by flipping the resulting ZLB and making its length positive again.) The moduli space M_{g,n} itself also has a boundary, which is reached when a handle (cylinder) becomes infinitely thin. In terms of ribbon graphs, this boundary is reached when all bridges traversing a cylinder reduce to zero size. The minimal number of bridges traversing a cylinder is two, hence the moduli space boundaries have complex codimension one. The highest-dimensional cells (bulk of the space) have complex dimension 3g + n − 3, which explains the maximal number of iterated degenerations. The alternating sign in (2.17) is also natural from this point of view.
At genus one, stratification requires that we include contributions from this graph drawn on a torus in all possible ways. An obvious way of drawing the planar graph on the torus is shown in (2.20) (the torus is drawn as a square; opposing sides of the square have to be identified). Pinching the handle of the torus leads back to the original graph drawn on the plane, with two marked points remaining where the handle got pinched, see (2.21). According to the stratification prescription, the contribution from (2.20) has to be added, whereas the contribution from (2.21) (right-hand side) has to be subtracted in the computation of the genus-one correlator. Of course there are many more ways to draw the planar graph on a torus. Finding all such ways amounts to adding an empty handle to the planar graph in all possible ways. This in turn is equivalent to inserting two marked points into the planar graph in all possible ways, which mark the insertion points of the added handle. In other words, we can find all ways of drawing the planar graph on the torus by drawing graphs of the type shown on the right-hand side of (2.21). The two marked points can either be put into faces of the original graph, as in (2.21), or inside bridges: a bridge stands for a collection of parallel propagators, hence it can be split in two by an extra handle. Going through all possibilities, we find the seven types of contributions listed in Table 2.
In the table, we have listed unlabeled graphs, which have to be summed over inequivalent labelings. One may wonder why we have not included a variant of case (1) where the two marked points are "inside" the planar graph. In fact, this other case is included in the sum over labelings of case (1): Putting the two marked points "inside" the graph is equivalent to turning the graph (1) "inside out", which amounts to reversing the cyclic labeling of the four operators. Similarly for case (3), the case where the exterior marked point sits inside the central face is included in the sum over labelings.
We will see below that mirror particle contributions may cancel propagator factors of the underlying free-theory graph. We therefore have to also sum over graphs containing propagators that are ultimately not admitted by the external operators. From an operational point of view, this is equivalent to only restricting the operator polarizations at the very end of the computation. For operators of equal weight but generic polarizations, the only planar four-point graph besides (2.19) is the "tetragon graph" (2.22). Putting this graph on a torus in all possible ways, we find eight inequivalent cases, listed in Table 3 and labeled (8)–(15). For the graph (2.22), all faces are equivalent. Therefore, it is clear that all ways of placing one or two marked points into the several faces are equivalent (up to operator relabelings). We therefore include only one representative of all these variants. As for the cases listed in Table 2, the stratification prescription requires that the unprimed contributions should be added, while the primed contributions should be subtracted. Thus far, we have accounted for pinchings where the handle of the torus becomes infinitely thin. However, for cases (1), (7), (8) and (11) there is another way to pinch, where one separates the whole torus from the graph, leaving an empty torus with one marked point, and the graph on a sphere with one marked point inside the face that previously contained the torus. These cases are labeled (1′′), (7′′), (8′′) and (11′′) in Table 2 and Table 3, and have to be subtracted as well.
For connected graphs, these two types of degenerations are all that can occur at genus one, since these are the only types of degenerations a torus admits, as illustrated in Figure 6 and Figure 7. Disconnected graphs do not contribute to any computation in this paper, and hence are not considered here.
To summarize, the effect of stratification at genus one, for correlators of the type considered here, is that the sum over genus-one graphs has to be augmented by a sum over the unprimed graphs (with positive sign) and sums over the primed and double-primed graphs (with negative sign) of Table 2 and Table 3, as in (2.23), where S^(i), S^(i′), and S^(i′′) stand for the full contributions (sums over bridge lengths and mirror states) of the respective graphs. Note that, by construction, the genus-one stratification formula (2.23) is sufficiently general to hold for half-BPS operators Q_i of arbitrary polarizations α_i (but equal weights k_i).
Subtractions
We now explain how to compute the contributions from graphs associated with the degenerate Riemann surfaces, namely the (i′)'s and (i′′)'s in Table 2 and Table 3.
Marked Points as Holes in Planar Diagrams. The first step of the computation is to better understand what the marked points (⊗'s in Table 2) represent. For this purpose, it is useful to look at the corresponding Feynman graphs in the double-line notation. An example Feynman diagram that contributes to a stratification subtraction is depicted in Figure 6. Although drawn on a torus, it is essentially a planar diagram, and therefore corresponds to a degenerate Riemann surface. After the degeneration of the torus (see Figure 6(b)), the pinched handle becomes two red regions as shown in Figure 6(c), which are the faces of the original planar diagram. We thus conclude that, at the diagrammatic level, inserting two marked points on the sphere amounts to specifying two holes/faces of all planar Feynman graphs. For a planar graph G with F faces, there are Binomial(F, 2) = F(F − 1)/2 different ways of specifying two holes in two different faces of the graph. Thus the contribution of a graph with two marked points in different faces (denoted by G^{2⊗}) is given in terms of the contribution of the original graph G as in (2.24), where F is the number of faces in G. This provides a clear diagrammatic interpretation of the marked points, but it does not immediately tell us how to compute them using integrability, since one cannot in general isolate the contributions of individual Feynman diagrams in the integrability computation. To perform the computation, we need to relate them to yet another object that we discuss below.
The key observation is that the same factor F(F − 1)/2 appears when we shift the rank of the gauge group: Consider the planar Feynman diagram G in U(N_c) N = 4 SYM, and change the rank from N_c to N_c + 1. Since each face of the planar diagram gives a factor of N_c, the shift of N_c produces the change (2.25) in the final result. This offers a reasonably simple way to compute the contribution from the degenerate Riemann surface: Namely, we just need to 1. Take the planar result and shift the rank of the gauge group from N_c to N_c + 1.
2. Expand it at large N_c and read off the 1/N_c^2 correction. With this procedure, one can automatically obtain the correct combinatorial factor without needing to break up the planar results into individual Feynman diagrams.
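The two steps can be checked symbolically: a planar graph with F faces scales as N_c^F at fixed g_YM, and expanding (N_c + 1)^F at large N_c yields the relative corrections F/N_c and F(F − 1)/(2 N_c^2), i.e. exactly the combinatorial factors that count one and two marked points. A sketch with sympy (the function name is ours):

```python
import sympy as sp

Nc = sp.symbols("N_c", positive=True)

def marked_point_factors(F):
    """Shift N_c -> N_c + 1 in a planar contribution N_c**F and read off
    the coefficients of the subleading powers of N_c."""
    shifted = sp.expand((Nc + 1) ** F)
    one_mark = shifted.coeff(Nc, F - 1)   # relative 1/N_c term
    two_marks = shifted.coeff(Nc, F - 2)  # relative 1/N_c**2 term
    return one_mark, two_marks

for F in range(2, 7):
    f1, f2 = marked_point_factors(F)
    assert f1 == F and f2 == F * (F - 1) // 2

print(marked_point_factors(4))  # -> (4, 6)
```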
Before applying this to our computations, let us add some clarifications: Firstly, when we shift N_c to N_c + 1, we keep the Yang-Mills coupling constant g_YM fixed, not the 't Hooft coupling constant λ = g_YM^2 N_c. Put differently, we must shift the value of λ when we perform the shift of N_c. Secondly, the planar correlators to which we perform the shift must be unnormalized: If we normalize the planar correlators so that the two-point function is unit-normalized, the shift of N_c will no longer produce the correct combinatorial factor dependent on F.
It is now straightforward to evaluate the contribution from degenerate Riemann surfaces explicitly. The planar connected correlator for BPS operators of weights (lengths) k_i admits the following expansion, where c is a coefficient independent of N_c and λ. We thus conclude that the correlator G^{2⊗}_{k_1,...,k_n} with two extra marked points inserted into two different faces in all possible ways is given by (2.28). Once we get this formula, we can then normalize both sides, since the normalization for BPS operators does not depend on λ. So far, we have been discussing the degeneration in which a handle degenerates into a pair of marked points. The other type of degeneration, in which the surface is split in two by pinching an intermediate cylinder, is exemplified in Figure 7. As shown in this figure, this type of degeneration produces a single marked point on the planar surface. Therefore, the analogue of (2.24) in those cases reads, where again F is the number of faces in the Feynman graph G. The combinatorial factor F in this case can also be related to the shift of N_c; namely, it corresponds to the O(1/N_c) term in the expansion (2.25). We therefore conclude that the correlator with a single extra marked point is given by (2.30). In total, the subtraction for a correlator on the torus at order O(λ^k) is given as follows, where (subtraction) denotes the subtraction piece while (planar) is a planar correlator.

[Figure 7 caption: …Figure 6, there is yet another class of degenerations, which produces a sphere with a single marked point. They correspond to the diagrams shown in (a), which degenerate into (c) as depicted above. The red region in (c) corresponds to a marked point.]
Decomposition into Polygons at One Loop. The formula above computes the full k-loop subtraction all at once. However, it is practically more useful to decompose the subtraction into the contributions associated with individual tree-level diagrams, so we can observe cancellations with other contributions more straightforwardly. This can be done rather easily by generalizing the argument we just presented: As shown in Table 2, the degeneration of a Riemann surface with a tree-level graph leads to polygons (i. e. faces) with one or two marked points. 16 To evaluate these polygons, we just need to keep in mind that each polygon admits the expansion The overall factor N c comes from the fact that the edges of the polygon constitute a closed index-loop. Although we do not normally associate such a factor with each polygon, here it is crucial to include that factor 17 to count the faces correctly.
The rest of the argument is identical to the one before: Shifting N_c to N_c + 1 and reading off the 1/N_c and 1/N_c^2 terms, we get (2.33). Here (polygon)^⊗ and (polygon)^{2⊗} denote the contributions from a polygon with one or two marked points respectively. Using the fact that the O(λ^0) term for each polygon is just unity, 18 one can also write an explicit weak-coupling expansion. These formulae will be used intensively below.
Worldsheet Interpretation. Let us end our discussion on the subtraction by mentioning the worldsheet interpretation of the marked points. This is more or less obvious from the way we performed the computation: Shifting the rank of the gauge group from N c to N c + 1 amounts to adding a probe D3-brane in AdS. It is well-known that the probe brane sitting at some finite radial position z describes the Coulomb branch of N = 4 SYM, in which the gauge group is broken from U(N c + 1) to U(N c ) × U(1). In our case, we are not breaking any conformal symmetry, and therefore the probe brane must sit at the horizon of AdS (z = ∞ in Poincaré coordinates). This suggests that the marked points that we have been discussing correspond to boundary states describing the probe brane at the horizon. Furthermore, our computation (2.30) implies that the n-point tree-level string amplitude with an insertion of a hole is related to the same amplitude without insertion as 19 It would be interesting to verify this prediction by a direct worldsheet computation. Let us finally add that, although the argument above gives a worldsheet interpretation of the marked points, it does not explain why such boundary states are relevant for the analysis of the degenerate worldsheet. It would be desirable to find a worldsheet explanation for this, which does not rely on the Feynman-diagrammatic argument presented in this section.
Dehn Twists and Modular Group
The backbone of our formula (2.2) is a summation over (skeleton) graphs. When we construct the complete set of graphs on a surface of given genus, we implicitly identify graphs such as the two shown in (2.36) as identical. This makes perfect sense from a weak-coupling perturbative point of view: Wick contractions only carry information about the ordering of bridges around each operator, not about the particular way in which the graph is embedded in a given surface. Hence the two graphs (2.36) are identical as Feynman graphs. Modding out by such twists is also natural from the string-worldsheet perspective. The summation over graphs represents the integration over the moduli space of complex structures of the string worldsheet. The "twists" mentioned above are called Dehn twists. More formally, a Dehn twist is defined as an operation that cuts a cylindrical piece (the neighborhood of a cycle) out of a Riemann surface (the worldsheet), performs a 2π twist on this piece, and glues it back in, see Figure 8. Such Dehn twists leave the complex structure of the Riemann surface invariant, and hence should be modded out when integrating over the moduli space. In fact, Dehn twists are isomorphisms that are not connected to the identity. They form a complete set of generators for the modular group (mapping class group) for surfaces of any genus and with any number of operator insertions (boundary components). 20 Since all Dehn twists act as identities in the moduli space as well as on Feynman diagrams, it is natural to mod out by Dehn twists at all stages of the computation. While modding out by Dehn twists is natural and straightforward in the summation over free-theory graphs (as we have been doing implicitly), it has non-trivial implications for the summation over mirror states, especially for the stratification contributions.
By their nature, all stratification contributions contain non-trivial cycles that do not intersect with the graph of propagators: For the terms that get added, non-trivial cycles can wind the handles not covered by the graph, and for the terms that get subtracted, non-trivial cycles can wind around the isolated marked points (see Figure 8 for examples). Obviously, performing a Dehn twist on a neighborhood of such cycles neither alters the graph itself, nor its embedding in the surface. But once we fully tessellate the surface by a choice of zero-length bridges (and dress them with mirror magnons), such Dehn twists will alter (twist) the embedding of those bridges (ZLBs) on the surface. For example, the two graphs are related by a Dehn twist on a vertical strip in the middle of the picture, which only acts on the zero-length bridges (dashed lines). Since we anyhow do not sum over different ZLB-tessellations, but rather just pick one choice of ZLBs for each propagator graph, it looks like such twists need not concern us. However, notice that one can always transform a Dehn-twisted configuration of ZLBs back to the untwisted configuration via a sequence of flip moves on the ZLBs. As long as all participating mirror states are vacuous, these flip moves are trivial identities. However, as soon as we dress the ZLBs (and other bridges) with mirror magnons, flip moves will non-trivially map (sets of) excitation patterns, i. e. distributions of mirror magnons, to each other. Hence we have the situation that a given distribution of mirror magnons on a fixed choice of ZLB-tessellation might secretly be related to another distribution (or set of distributions) of magnons on the same, but now Dehn-twisted ZLB-tessellation. 
Since part of our interpretation of the sums over mirror magnons is that they probe the neighborhood of the discrete point in the moduli space represented by the underlying propagator graph, it seems natural to identify distributions of mirror magnons that are related in the way just described. We are therefore led to add the following element to our prescription: Among all mirror-magnon contributions that are related to each other via Dehn twists followed by sequences of bridge flips, take only one representative into account. In other words, all mirror-magnon contributions that are related to each other via Dehn twists and sequences of bridge flips are identified. (2.38) The one-loop evaluation of all relevant stratification contributions in Section 5 will lend quantitative support to this prescription.
Multi-Particles and Minimal Polygons
We think of a polygon as the inside of the face of a larger Feynman diagram, with the outer edges being propagators in that diagram. Depending on whether we blow up the physical operators or not, the same polygon can be either thought of as an n-gon (with n mirror edges), or a 2n-gon (with n mirror edges and n physical edges), as illustrated in Figure 9c. When we do blow up the physical operators we speak of hexagonalizing the polygon, otherwise we say that we triangulate it. In the hexagonalization picture, every other edge of each hexagon is formed by a segment (in color space) of a physical operator.
In the triangulation picture, the physical operators sit at the cusps of the triangles. Of course, both pictures describe the very same thing, as indicated in Figure 9c. There can be non-zero-length bridges in the interior of the polygon, as indicated in Figure 9b. When computing the expectation value of a polygon, we triangulate/hexagonalize it and insert mirror particles at all the mirror edges. When these edges are non-zero-length bridges, this is more costly at weak coupling, as indicated in Figure 9b, so the expectation value of such polygons breaks down into polygons where all internal bridges have zero length. We call such polygons minimal polygons. For large bridges, this decomposition holds up to a large number of loops. In this paper, we focus only on such minimal polygons, such as the one in Figure 9a. A minimal polygon can be hexagonalized in different ways, as illustrated in Figure 9a, and an important consistency condition is that all these tessellations ought to give the same result. Three further examples are illustrated in Figure 10. The first was considered in [5], the second in [8], and the third will be discussed later in this paper. By conformal and R-symmetry, minimal polygons can only be functions of spacetime cross ratios and cross ratios formed out of the internal polarizations. In this paper, we focus on four-point functions, and will use the familiar variables (3.1).

Figure 9: (a) An example of a minimal polygon. A minimal polygon is by definition a polygon that when triangulated/hexagonalized only contains zero-length bridges. This means that all internal mirror edges contribute at one-loop order if one inserts a mirror particle on them. It can be hexagonalized in several different ways, and all ways of doing so should give the same integrability result when summing over mirror particles. (b) A general polygon may have zero-length and non-zero-length bridges, and it can be divided into minimal polygons. Inserting mirror particles in non-zero-length bridges is more costly at weak coupling. (c) Two different ways of defining a polygon with physical operators on its edges. It is possible to shrink the operators to points or to blow them up to finite size. In the first case the surface is triangulated (only mirror edges), and in the second case it is hexagonalized (as many physical as mirror edges).
Variables
For cross ratios of the internal polarizations, we similarly choose (3.2). In the following, we will consider more general minimal polygons that depend on n external operators. However, we will restrict all operators to lie in the same plane, in spacetime as well as in the internal polarization space, as this is sufficient for our purposes. For every choice of four operators, we can form spacetime and polarization cross ratios exactly as in (3.1) and (3.2), and an n-point polygon in these restricted kinematics depends on (n − 3) sets of such cross ratios. 21

21 In the plane, distances factorize as x^2_{ab} = x_{a,b} x̄_{a,b}, and the R-charge inner products do the same, y_a · y_b = y_{a,b} ȳ_{a,b}. As such, when we deal with functions of cross ratios made out of four physical and R-…

Figure 11: A tessellation of the dodecagon can contain paths where a mirror particle propagates through four different hexagons, as illustrated in the last graph in the second line. In another tessellation, a particle can propagate for at most three hexagons, as illustrated in the second example. Equating both, we can read off the larger propagation (three-particle) contribution from the smaller ones (two-particle and one-particle), as shown in (3.5).
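The displays (3.1) and (3.2) referred to here are not reproduced in this text. In the standard conventions for four-point cross ratios (which may differ from the paper's by relabelings of the operators), they read:

```latex
z\bar z=\frac{x_{12}^{2}\,x_{34}^{2}}{x_{13}^{2}\,x_{24}^{2}}\,,\qquad
(1-z)(1-\bar z)=\frac{x_{14}^{2}\,x_{23}^{2}}{x_{13}^{2}\,x_{24}^{2}}\,,\qquad
x_{ab}^{2}\equiv(x_a-x_b)^{2}\,,
\\[4pt]
\alpha\bar\alpha=\frac{(y_1\cdot y_2)(y_3\cdot y_4)}{(y_1\cdot y_3)(y_2\cdot y_4)}\,,\qquad
(1-\alpha)(1-\bar\alpha)=\frac{(y_1\cdot y_4)(y_2\cdot y_3)}{(y_1\cdot y_3)(y_2\cdot y_4)}\,.
```

With such conventions, restricting all operators to a plane factorizes each cross ratio into holomorphic and antiholomorphic pieces, as in footnote 21.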
One-Loop Polygons and Strings from Tessellation Invariance
To fully compute a 2n-gon vacuum expectation value, we should insert any number of mirror particles at all hexagon junctions and integrate over their rapidities. At one-loop order, things simplify: According to the loop counting shown in Figure 2, we only need to sum over multi-particle strings, which are associated to paths that connect one hexagon to another, never passing twice through the same hexagon. To construct the corresponding multi-particle string, we insert exactly one mirror particle wherever the path intersects a mirror edge. In sum, the one-loop 2n-gon is obtained by picking a tessellation of one's choice, and summing over all multi-particle one-loop strings on that tessellation. See Figure 11 for an example.
Each mirror edge joins two hexagons into an octagon involving four operators. Hence two cross ratios are associated to each mirror edge in a natural way.
When dealing with such quantities, we often use the obvious short-hand notation f(z) to indicate f(z, z̄, α, ᾱ), see for example (3.8) below. The corresponding polarization cross ratios are defined accordingly. With these definitions, we denote the contribution of a multi-particle one-loop string traversing n mirror edges as M^(n)(z_1, . . . , z_n), where the variables z_i parametrize the cross ratios associated to the n mirror edges as in (3.3), and we are suppressing the obvious dependencies on z̄_i and the polarization cross ratios. By exploiting the above-mentioned invariance under tessellation choice, one can determine the contribution from any multi-particle string M^(n) from the knowledge of the one- and two-particle contributions alone. As an illustration, consider the dodecagon example in Figure 11. In the second tessellation, only two-particle strings appear, while for the first tessellation, the sum includes a contribution with three particles. Equating both sums, we can relate the three-particle contribution to the one- and two-particle strings as in (3.5). Here, the variables z_1, z_2, and z_3 parametrize the cross ratios associated to the three mirror edges of the first tessellation in Figure 11 (from right to left). Hence, M^(1)(z_1) equals the first contribution in Figure 11, M^(1)(z_2) equals the second contribution, and so on. 22 In the above expression, it is implicit that the other, suppressed variables undergo the same substitutions as the z_i variables, e.g.
where we have, by slight abuse of notation, used (α_i, ᾱ_i) to parametrize the polarization cross ratios. Using the explicit known results for one and two particles [5,8], we find an explicit expression for the three-particle one-loop string. 22 The cross ratios appearing in its argument are defined as in (3.3). Here, the main building block function m(z) is given in terms of the one-loop conformal box integral, and it satisfies the important identities (3.11). Note that there is another type of three-particle contribution besides the one discussed above. It appears in an "alternating" tessellation of the same dodecagon: The "alternating cusp" three-particle string can be derived in the same way as the "common cusp" string, by equating the alternating tessellation to one of the two tessellations shown in Figure 11. By playing with tessellations of higher 2n-gons in a similar way, we can derive, in the fashion described above, all multi-particle one-loop contributions, and therefore also all higher polygon one-loop expectation values, in terms of contributions involving only one-particle and two-particle strings. Writing the latter in terms of the building block function m(z) via (3.7), the resulting expression for a general 2n-gon is remarkably simple; it is given in (3.13). We illustrate the formula in Figure 12 for the example of a decagon. In writing (3.13), we cyclically identified the operator labels, namely n + 1 ≡ 1 mod n. The sum runs over all possible pairs of non-consecutive edges at the perimeter, [i, i + 1] and [j, j + 1]. 23 Roughly speaking, the sum in (3.13) corresponds to a summation of all possible gluon-exchange diagrams that one can draw inside the n-point graph. 24 This general result can actually be proved by induction, as illustrated in Figure 13.

22 A convenient choice of operator positions to obtain the arguments of all contributions is
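While the explicit formula for m(z) is not reproduced here, the one-loop conformal box integral underlying it has a standard dilogarithm representation. The sketch below assumes the common normalization Φ^(1)(z, z̄) = [2 Li₂(z) − 2 Li₂(z̄) + log(z z̄) log((1 − z)/(1 − z̄))] / (z − z̄) and checks two of its basic properties numerically:

```python
import cmath

def li2(z, terms=200):
    # dilogarithm Li_2 via its Taylor series; adequate for |z| < 1
    return sum(z**n / n**2 for n in range(1, terms))

def box(z, zb):
    # one-loop conformal box integral Phi^(1) in a common normalization (assumption)
    return (2*li2(z) - 2*li2(zb)
            + cmath.log(z*zb) * cmath.log((1 - z)/(1 - zb))) / (z - zb)

z = 0.3 + 0.4j
v = box(z, z.conjugate())
# antisymmetric numerator over antisymmetric denominator => symmetric in z <-> zbar
assert abs(v - box(z.conjugate(), z)) < 1e-10
# for zbar = conj(z) (Euclidean kinematics) the box is real
assert abs(v.imag) < 1e-10
```

The two assertions reflect the symmetry under exchanging z and z̄ and the reality of the box in Euclidean kinematics; the specific identities (3.11) used in the text are not reproduced here.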
Tests and Comments
We conclude this section with some further checks and comments.
Flip Invariance
We have assumed tessellation invariance to derive the 2n-gon formula (3.13). Consistently, the result makes no reference to a particular tessellation, hence it is manifestly invariant under tessellation choice.
Order Invariance
We can think of each multi-particle string contribution as a mirror-particle propagation. The direction of propagation ought to be irrelevant, provided we properly read off the cross ratios for the associated process as in (3.3). This translates into an identity among the M^(n), which we can indeed verify using the explicit formulas.

24 This does not mean that each m(z) is given by the corresponding gluon-exchange diagram, since m(z) should also know about the scalar contact interaction. What is true is that each m(z) contains the corresponding gluon-exchange contribution. The correspondence between the function m(z) and perturbation theory was made more precise in [7]: m equals a YM-line exchange in an N = 2 formulation of N = 4 SYM. We will explore this point further in Appendix E.

Figure 13: Proof of (3.13) by induction for an even number of external edges. For an odd number, a proof can be found in a similar way. The combination in the first line amounts to the statement that all strings in such symmetric tessellations can probe zero, one, or two outer triangles. In order to probe more than two triangles, the string would have to bifurcate. All possible strings are of course contained in the first sum, but there is an obvious over-counting, which is removed by the last two terms.
Reduction to Known 2n-Gons
For the octagon (n = 4), there are two different pairs of non-consecutive edges: [1,2], [3,4] and [4,1], [2,3]. It is easy to see that these two contributions lead to m(z) and m(z^{-1}), respectively. Therefore, we recover the previous result [5]. Similarly, one can check that our formula reproduces the result for the decagon (n = 5). In this case, there are five different pairs of non-consecutive edges, and they correspond to the five terms in the decagon result of [8] represented in Figure 12:
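The pairs entering the 2n-gon formula (3.13) can be enumerated directly; a perimeter with n edges has n(n − 3)/2 pairs of non-consecutive edges. A small sketch (the zero-based edge labeling is our own convention):

```python
def nonconsecutive_edge_pairs(n):
    # edge i runs between cusps i and i+1 (mod n); two edges are
    # "non-consecutive" when they share no cusp
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if j - i != 1 and (i, j) != (0, n - 1)]

print(len(nonconsecutive_edge_pairs(4)))  # → 2  (octagon: two terms)
print(len(nonconsecutive_edge_pairs(5)))  # → 5  (decagon: five terms)
```

This reproduces the two terms of the octagon and the five terms of the decagon quoted in the text.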
OPE Limit
Starting from the dodecagon, one should be able to recover the result for the decagon by taking the limit z_3 → 0. This can easily be seen by using the properties (3.11). Since the result is manifestly flip-invariant, any OPE limit is essentially equivalent and is well behaved.
Figure 14: A dodecagon and its cross ratios. Collapsing x_{i+1} → x_i eliminates a slice (a hexagon) in the figure. The double limit x_{i+2} → x_{i+1} → x_i reduces a 2n-gon to a 2(n − 2)-gon. Mirror-state propagations in such polygons are reduced accordingly. From a form factor point of view, the corresponding sums collapse into the coinciding rapidity region.
Extremal and Next-to-Extremal Correlators
The n-point extremal and next-to-extremal correlators have non-renormalization properties [25]. Using our conjectural form of the 2n-gon contribution, one can verify that the one-loop corrections are zero for those kinds of correlators, see Appendix E for details of the planar case.
Decoupling Limit
We can reduce multi-particle strings to strings involving fewer steps by collapsing hexagons in the tessellation. For example, if we take x_4 → x_3 in Figure 14, we reduce the dodecagon to a decagon, and correspondingly the three-particle contribution reduces to a two-particle contribution. If we further send x_5 → x_4 → x_3, we reduce it further to an octagon, and we end up with a single-particle contribution. When taking these limits, some cross ratios diverge and others vanish. For example, x_4 → x_3 corresponds to z_1/z_2 → 0 with z_1 z_2 = −w_1 fixed. In this limit, we indeed find (3.16), in perfect agreement with the above expectations. From the integrability/form-factor point of view, this limit corresponds to the so-called decoupling limit, where consecutive rapidities are forced to become equal, and the corresponding hexagons collapse into measures and disappear. 25 Similarly, we find analogous reductions for x_{i+2} → x_i and many other similar relations at higher points.
Pinching at One Loop
Another nice limit of any polygon is the one where cusps i and i + 2 go to the same position. When doing so, they pinch the edge ending at cusp i + 1 and basically remove it, as illustrated in Figure 15. This limit removes all traces of the operator which got sandwiched between cusps i and i + 2, see (3.18). This identity is actually quite powerful and very useful for us. For four-point functions, for instance, all cusps are located at one of the four possible space-time insertions, so there will naturally be many repetitions of labels, which can be reduced with this rule.
One-Loop Octagons
Below, we will need the expressions for one-loop octagons, hence we will quote them here. The one-loop octagon was computed in [5]. Due to the dihedral symmetry of the one-loop polygons (3.13), permutations of the four corners generate only three independent functions, corresponding to the orderings 1-2-4-3, 1-2-3-4, and 1-3-2-4 of the four operators around the perimeter of the octagon. Permutations of the four operators are generated by the following variable transformations: Using the identities for the conformal box integral, as well as the identity (3.11) for the building block function m(z), we find for the three independent functions:
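The count of three independent functions follows from the dihedral symmetry alone: the 4! = 24 labelings of the corners fall into 24/8 = 3 classes under the dihedral group of the square. A quick sketch:

```python
from itertools import permutations

def dihedral_classes(n):
    # canonical representative: lexicographic minimum over all
    # rotations and reflections of the cyclic ordering
    def canonical(p):
        p = list(p)
        variants = []
        for r in range(n):
            rot = p[r:] + p[:r]
            variants.append(tuple(rot))
            variants.append(tuple(reversed(rot)))
        return min(variants)
    return {canonical(p) for p in permutations(range(1, n + 1))}

print(sorted(dihedral_classes(4)))
# → [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4)]
```

The three representatives are exactly the orderings 1-2-3-4, 1-2-4-3, and 1-3-2-4 quoted in the text.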
Integrability
At this point, we have derived the multi-particle contributions at one-loop order, starting from the one- and two-particle contributions using flip invariance. An obvious follow-up question is whether the result agrees with the integrability computation. In fact, we compute the three-particle contribution using integrability in Appendix D, using the weak-coupling expansions of Appendix C, and it agrees with the result of this section. This lends additional support for the correctness of the 2n-gon formula (3.13). The multi-particle integrands are huge and complicated, and we were not able to compute the multi-particle contributions in general. It would be interesting to study these integrands systematically.

Figure 16: "Loops" and "spirals" naively start contributing at tree and one-loop order, by the loop counting of Figure 2. They appear very difficult to evaluate from hexagons.
Beyond Polygons
While we can compute any one-loop string that is bounded by a polygon via the formula (3.13), there are further excitation patterns that, by the loop counting shown in Figure 2, could contribute at one-loop order. Namely, all stratification graphs (Table 2 and Table 3) contain non-trivial cycles that do not intersect the graph. Hexagonalizing the surface with zero-length bridges, strings of excitations can wrap the cycle to form "loops" or "spirals", see Figure 16. These types of contributions seem very difficult to compute from hexagons. At the same time, it appears very plausible that they are related to simpler configurations by Dehn twists. Since we are not able to honestly evaluate these contributions, we will have to resort to a (well-motivated) prescription to avoid them. We will come back to this point in Section 5.
Data
Let us now introduce the data which we will later use to check our proposal. Computing correlators in perturbation theory is a hard task in the planar limit, and an even harder task beyond the planar limit, hence there is not that much data available. We will use here results from the nice works of Arutyunov, Penati, Santambrogio and Sokatchev [26,27], who studied an interesting class of four-point correlation functions of single-trace half-BPS operators (2.3). The authors of [26,27] studied the case where all operators have equal weight k. In this case, the contributions to the correlator can be organized by powers of the propagator structures. They further specialized to operator polarizations for which the tree-level correlator takes a simple form. The functions F_{k,m} constitute the quantum corrections that multiply the respective propagator structures, and they only depend on the conformally invariant cross ratios (3.1). Expanding in the coupling, we finally isolate the functions F^(ℓ)_{k,m} against which we will check our integrability computations in later sections. The one-loop and two-loop contributions F^(1)_{k,m}(z, z̄) and F^(2)_{k,m}(z, z̄) have been computed in [26,27] at the full non-planar level. Two key ingredients appear in their result. The first are the conformal box and double-box functions, which we can represent pictorially as in (4.7). At two loops, the color factor C_c as well as three other color factors C_a, C_b, and C_d appear. The one-loop correlator is expressed in terms of a single color factor C_1. The various color factors differ from (4.6) only in the distribution of structure constants f^{ab}_c on the four single-trace operators. Due to supersymmetry, the loop correction functions can be written as in (4.8). 28 In terms of color factors and box integrals, the functions F_{k,m} read as in [26,27], where all color factors C_i depend on k and m. We have used an obvious shorthand notation. In order to compare with our integrability predictions, we need to explicitly evaluate the color factors.
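In a commonly used convention (an assumption on our part; the precise definitions are fixed in (4.1) and the surrounding equations), the propagator structures and the decomposition of the correlator read schematically:

```latex
d_{ij}=\frac{y_i\cdot y_j}{x_{ij}^{2}},\qquad
X=d_{12}\,d_{34},\quad Y=d_{13}\,d_{24},\quad Z=d_{14}\,d_{23},
```
```latex
G_k\;\sim\;\sum_{m} X^{m}\,Y^{k-m}\Big[\text{(free part)}+\sum_{\ell\geq 1} g^{2\ell}\,F^{(\ell)}_{k,m}(z,\bar z)\Big],
```

with the polarizations of [26,27] chosen such that structures involving Z drop out of the data (the Z → 0 limit discussed in Section 5).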
This turns out to be a fun yet involved calculation, which we did in two steps. First, we explicitly performed the contractions with Mathematica for different values of k and m; for some coefficients up to k = 8, for others up to k = 9. Expanding the color factors to subleading order in 1/N_c, the results for the subleading color coefficients are displayed in Table 4. Depending on the algorithm, the computation can take very long (up to ∼1 day on 16 cores for a single coefficient at fixed k and m) and becomes memory intensive (up to ∼100 GB) at intermediate stages. 29 The leading coefficients are straightforwardly computed [26,27]. Secondly, we used the fact that, by their combinatorial nature, it is clear that the various color factors should be polynomials in k and m (up to boundary cases at extremal values of k or m). By looking at all ways in which the propagators among the four operators can be distributed on the torus, one finds that the polynomial can be at most quartic. 30 Any closed formula for these color factors therefore has to be a quartic polynomial in k and m. A general polynomial of this type has 15 coefficients. Matching those against the (overcomplete) data points in Table 4 yields the desired formulas for the color factors. The color factor (4.6), for instance, takes a relatively involved form for an SU(N_c) gauge group, in which the last line would be absent for the U(N_c) theory. Further details and explicit expressions for all relevant color factors are presented in Appendix A. Putting all these ingredients together, we finally obtain the desired one-loop and two-loop expressions shown in Table 5. These are the results of [26,27], explicitly expanded to include the first non-planar correction, which can be directly matched against our integrability computation. Leading terms of order N_c^{−2} form the planar contribution, whereas terms of order N_c^{−4} constitute the first non-planar correction.
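The polynomial-matching step can be sketched as follows. A general quartic in (k, m) has 15 monomials, and sampling on a classical unisolvent set of 15 points (the principal lattice {(k, m) : k + m ≤ 4}) determines it uniquely. Here a hypothetical stand-in polynomial plays the role of the actual color factors (which are given in Appendix A):

```python
from fractions import Fraction

monos = [(a, b) for a in range(5) for b in range(5) if a + b <= 4]
assert len(monos) == 15  # a general quartic in (k, m) has 15 coefficients

def color_factor(k, m):
    # hypothetical stand-in; the real color factors are in Appendix A
    return 2*k**4 - 3*k**2*m + 5*m - 7

# sample on the principal lattice, which is unisolvent for total degree <= 4
pts = [(k, m) for k, m in monos]
A = [[Fraction(k**a * m**b) for a, b in monos] for k, m in pts]
y = [Fraction(color_factor(k, m)) for k, m in pts]

def solve(A, y):
    # exact Gauss-Jordan elimination over the rationals
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, y)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [v - M[r][c]*w for v, w in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

coeffs = dict(zip(monos, solve(A, y)))
nonzero = {mono: c for mono, c in coeffs.items() if c != 0}
print(nonzero)  # → {(0, 0): -7, (0, 1): 5, (2, 1): -3, (4, 0): 2}
```

Working over exact rationals avoids the rounding issues a floating-point fit would introduce; with more than 15 data points, as in Table 4, the system is overdetermined and serves as a consistency check.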
All dependence on k and m is explicitly shown, via r = m/k − 1/2. The variables s, t, and s_±, as well as the various combinations of double-box functions F^(2), are defined in (4.15), (4.11), and (4.16). We show the result for gauge group U(N_c), since this is what we will match with our integrability computation. We have highlighted the box integrals (red), the planar terms (purple), as well as terms that only contribute at extremal values of m (blue). The expression for such boundary terms for F^(2) is deferred to Table 6.
Table 6: Boundary terms for F^(2)_{k,m} at extremal values of m, see Table 5. This is what we will compare with our integrability computation. Corresponding expressions for gauge group SU(N_c) as well as further details are given in Appendix A. The expressions in Table 5 are written in terms of the variables z, z̄, and k, as well as the combinations (4.15). Besides the box integrals (4.4), (4.5), and (4.11), combinations of double-box integrals F^(2)_{B,±} occur, defined in (4.16). We have suppressed the arguments (z, z̄) of all box functions for brevity.
The formulas are written such that crossing invariance is manifest under the crossing transformation.

29 Very likely, the performance can be greatly improved by using more specialized and better-scaling tools such as Form.

30 This fact is best understood by looking at Table 8 and (6.10) below.
and hence crossing invariance of G_k (4.2) is equivalent to a corresponding statement for the functions F_{k,m}. Because of the transformations F^(1) → sF^(1), F^(2) → sF^(2), F_{1−z} → sF, as well as the fact that all functions (4.16) with +/− subscript are even/odd under the crossing x_1 ↔ x_4, it is clear that the expressions in Table 5 are indeed crossing invariant.
Remark.
One immediate observation is that (up to an overall numerical prefactor) the coefficient of the double-box integral F^(2)(z, z̄) in the two-loop function F^(2)_{k,m} equals the coefficient of the single-box integral F^(1)(z, z̄) in the one-loop function F^(1)_{k,m}. As we shall see below, this fact has a straightforward explanation from the perspective of the integrability computation. In short, the one-loop function is a sum of terms where only a single polygon (surrounded by non-zero-length bridges) is excited. At two loops, the term proportional to F^(2)(z, z̄) stems from the same sum of terms, where now the single polygon is excited to two loops. This pattern likely extends to higher loops.
Contribution from Stratification
Here, we want to evaluate the stratification contributions at genus one listed in Table 2 and Table 3 at one-loop order. That is, we want to evaluate the contributions S^(i), S^(i′), and S^(i″) in (2.23). As we have seen in Section 3.1, the one-loop expression for any hexagonalization is given by the sum over all "one-loop strings", where every one-loop string is a path that starts inside any hexagon, ends in any other (or possibly the same) hexagon, and that crosses any number of zero-length bridges, but no non-zero-length bridge. Every crossing of any bridge by the path creates one excitation on that bridge. For every closed, simply connected polygon, the number of such one-loop strings is finite. For the graphs in Table 2, it is clear that a one-loop string can wind a cycle of the torus (or a marked point) any number of times, and hence there is an infinite number of one-loop strings. For example, the following magnon patterns all start contributing at one-loop order (for the loop counting, see Figure 2): Here, each of the red dots stands for a mirror magnon, and we have also indicated (in gray) a path that connects them. At present, we do not have the technology to compute one-loop strings that form closed cycles, or that cross any edge more than once (we call such strings "spirals"). However, it is reasonable to assume that almost all one-loop string contributions will either be projected out by our Dehn-twist prescription (2.38), or cancel between the torus contributions (i) and their pinched degenerations (i′) and (i″) shown in Table 2 and Table 3. Our working assumption is that all one-loop strings that either form closed loops, or cross any bridge more than once, will either be projected out by Dehn twists, or cancel with the stratification subtractions (or sum to zero). We will therefore not take such contributions into account.
Another limitation that we are facing is the mapping among magnon configurations under flipping zero-length bridges. Even after dropping one-loop strings that cross bridges more than once, there remain configurations that look related through Dehn twists and bridge flips (for example all contributions in (5.1)). Flipping any number of zero-length bridges should leave the total contribution of the graph invariant, but it will non-trivially map magnon configurations to each other. This map is technically quite involved, and we have not evaluated it except in the simplest cases (a single magnon on a single bridge) [5]. What we will assume is the following identification: Consider a one-loop string of excitations traversing an otherwise empty handle across a number of zero-length bridges. Imagining the string of excitations as a continuous path, performing a Dehn twist on such a handle adds a cycle to the path (string of excitations), as well as to all zero-length bridges that also traverse the handle. Subsequently performing flip moves of these zero-length bridges, we can restore the graph of zero-length bridges to what it was before the Dehn twist. Effectively, this operation adds a cycle to the path (string of excitations), and otherwise leaves the graph invariant. Among all one-loop excitation strings related by such operations, we only take one representative into account. For example, all one-loop strings shown in (5.1) are related by this operation, and hence we would take only one of them into account. Even though we cannot prove that all one-loop strings related under this operation indeed map to each other one-to-one under Dehn twists and flip moves, we will see in all examples below that one-loop strings related in this way indeed contribute identical terms.
To summarize, we will evaluate the stratification contributions at one loop using the following prescription: • Add up all one-loop strings that do not form closed loops and that do not cross any bridge more than once (in the same direction). 31 • Among all remaining excitation patterns, identify those that are related to each other via Dehn twists that act on the path that constitutes the one-loop string but leave the configuration of zero-length bridges invariant, and take only one representative of each class into account.
We cannot rigorously show that our prescription is correct, but we will see below that it produces the right answer. Given the limitations in our present computational ability, it is the best we can do.
In the following, we will consider the unprimed contributions (1)-(14) of Table 2 and Table 3. The primed contributions (i′) and (i″) that have to be subtracted were evaluated in Section 2.4. In order to evaluate the cancellations among primed and unprimed contributions, we will use the identities given in (2.34), which we reproduce here as (5.3). They immediately imply that at tree level the contributions (i) and (i′) (and (i″) for i = 1, 7, 8, 11) of Table 2 and Table 3 perfectly cancel each other, separately for each i = 1, . . . , 14. The first non-trivial effect of stratification therefore occurs at one loop, and we will evaluate the various contributions in the following, starting with the simplest case. Contribution (5). For graph (5), the only non-vanishing contributions can come from excitations of the two octagon faces that involve all four operators. But these faces are exactly replicated in case (5′), and hence the contributions S^(5) and S^(5′) perfectly cancel each other. This cancellation relies on the fact that polygons with one marked point at tree level equal the same polygons without insertions, as shown in (5.3).
Contribution (6).
This contribution works the same as contribution (5): The only non-vanishing one-loop contributions come from excitations in one of the two faces that involve all four operators, which are exactly replicated in contribution (6′), and therefore perfectly cancel.
Contribution (7). Due to the identity (5.3) for a polygon with two marked points, and the fact that a polygon with only two different operators receives no loop corrections, contribution (7″) vanishes. By the same arguments as for cases (5) and (6), the contributions S^(7) and S^(7′) perfectly cancel each other at one-loop order.
Contributions (8)-(12).
For the cases (8) to (12), all faces involve at most three out of four operators. Therefore, we do not expect corrections at one loop, and the result is simply the tree-level one. This in turn will be canceled by the subtractions.

Contribution (4). Next, we will consider case (4) of Table 2. Picking an operator labeling, and shifting the fundamental domain of the torus on which the graph is drawn, we can depict this contribution graphically; here, we have also indicated a choice of zero-length bridges across the handle not covered by the graph. Similar to case (5), we do not have to consider one-loop excitations of the other faces, as these are replicated in the pinched graph (4′), and thus manifestly cancel. Inside the face that wraps the torus, any non-vanishing one-loop excitation string will have to involve hexagons that touch all four operators. We have picked a tessellation that isolates operators O_3 and O_4 as much as possible, such that any potentially non-zero string will have to connect the hexagon that involves operator O_3 with the hexagon that involves operator O_4. The only potentially non-zero excitation strings that do not cross any bridge more than once are exactly the two leftmost contributions of (5.1). Here, each of the red dots stands for mirror particles, and we have also indicated (in gray) the path that connects them. The left excitation pattern is equal to the one-loop (clockwise) polygon(1, 2, 4, 2, 1, 3), which vanishes by pinching (all other one-loop excitation patterns in this polygon vanish, since they involve at most three out of the four operators). The excitation pattern shown on the right of (5.5) is related to the one on the left by a Dehn twist according to our working prescription (5.2), hence we should not take it into account. We can still evaluate this contribution in order to check the consistency of our prescription.
And indeed, the right one-loop string again equals the (Dehn-twisted) one-loop polygon(1, 2, 4, 2, 1, 3) and thus vanishes by pinching. Stratification requires that we subtract the contribution of graph (4′) in Table 2, which is obtained from (4) by pinching the handle not covered by the genus-zero graph. In fact, because two-operator polygons receive no loop corrections, the two-operator polygons with insertions of a single marked point also receive no loop corrections, and hence we trivially find that S^(4) − S^(4′) = 0.
Contribution (13). The case (13) will produce a vanishing contribution exactly by the same argument as in the previous case (4).
Contribution (14). Let us consider the case (14) of Table 3. We again pick a tessellation of the empty handle that isolates two operators as much as possible (in this case O_2 and O_3). Since a one-loop string can only be non-vanishing when it involves hexagons that together touch all four operators, the two string configurations above are the only potentially non-zero contributions. The other faces involve three operators and hence contribute at tree level only; they in turn will be canceled by the subtraction S^(14′). In addition to the excitation patterns shown above, we could have also considered other string configurations that could potentially contribute at one loop. But it is easy to see that these would unavoidably involve placing two excitations in the same bridge, forming a path that crosses that bridge twice in the same direction. By our prescription, we do not take these cases into account. The contributions (a) and (b) above are related by Dehn twists according to our prescription (5.2). Consistently, it is simple to see that they produce identical results. Namely, both cases evaluate to polygon(1, 2, 4, 3) at one loop, see (5.8). The subtraction S^(14′) does not produce any contribution at one loop, as all of its polygons involve only three operators. As a final step, we need to perform a sum over all non-equivalent labelings of the vertices. As the graph is drawn on a torus, there are twelve inequivalent labelings (the same graph on a sphere has only two inequivalent labelings): 1243, 2134, 1342, 2431, 1234, 2143, 1432, 2341, 1324, 3142, 1423, 3241. Summing the corresponding propagator factors, we find that S^(14) = 0, and hence trivially S^(14) − S^(14′) = 0 − 0 = 0. This case is different from all previous (and subsequent) cases in that the cancellation occurs among graphs with different labelings and bridge lengths.
Contribution (15). Picking a tessellation for graph (15) of Table 3, we find, similar to the previous cases, only two potentially non-zero one-loop contributions compatible with the first rule of (5.2), see (5.14). Again, we are dropping the string configurations involving two excitations placed on the same bridge, according to our prescription (5.2). For contribution (a) we find:

Contribution (3). Since the one-loop octagon with a marked-point insertion (Table 2) equals twice the same one-loop octagon with no insertion by (5.3), it is clear that (3) and (3′) perfectly cancel each other:

Contribution (2). Let us now list the possible one-loop excitation patterns for the stratification graph (2). Picking an operator labeling as well as a tessellation, we find:
Contribution (1).
Let us finally turn to case (1). Picking a particular tessellation, we find the following potentially non-zero excitation patterns: The contributions (b) and (c) require some comments: These contributions include two excitations on a single zero-length bridge. Even though we have thus far discarded excitation patterns with more than one excitation on any bridge, we want to argue that we should still include these contributions. All patterns with multiple excitations on a single bridge that we have excluded thus far had the form of a string of excitations that crossed a single bridge twice in the same direction. For the cases (b) and (c) in (5.29), the string of excitations crosses a bridge twice, but in opposite directions. As indicated at the beginning of this section, we postulate that such excitation patterns should be included. Next comes the question of computing these contributions. Because the excitation pattern spans such a large part of the graph, it cannot be localized inside a compact polygon. For case (b), the best we can do is to cut out the inside of the square formed by the propagator bridges, and to cut along the horizontal zero-length bridge that connects O_4 to itself. The result equals half the contribution of the planar graph, just as (a) = (a_f) did. Applying the same analysis to excitation pattern (c), but now flipping the horizontal bridge that connects O_4 to itself, we find that also the contribution (c) equals half the contribution of the planar graph. In total, under the above flip-invariance assumption, we thus find that the non-trivial part (without considering the internal polygon) of the stratification contribution (1) equals 3/2 times the contribution of the planar graph, or, equivalently, 3 times the contribution of the one-loop octagon. By the identities (5.3), we find that the non-trivial part of contribution (1′) evaluates to the one-loop octagon, and the non-trivial part of contribution (1″) gives two times the planar octagon.
Hence in the sum, we find that S^(1) − S^(1′) − S^(1″) = 0.
Summary and Result.
We have demonstrated in the preceding paragraphs that almost all stratification contributions S^(i), S^(i′), and S^(i″) are either zero, or directly cancel each other. We should stress that all cancellations among primed and unprimed contributions hold at the level of individual graphs with assigned bridge lengths and operator labelings: There is a one-to-one map between the bridges of graphs S^(i), S^(i′), and S^(i″) for fixed i. Therefore, for all graphs (i) and for any labeling of its operators as well as any distribution of propagators on the bridges of that graph (i.e. any choice of bridge lengths), there is a corresponding operator labeling and distribution of propagators on the bridges of the associated pinched graph (i′) (and (i″)). Hence the cancellations trivially extend to the full sum over all operator labelings and bridge lengths, for any value of the weight k.
The only remaining non-zero contributions from stratification at one-loop order are the terms (5.20) and (5.21), which both evaluate to (− polygon(1, 2, 4, 3)). We immediately note that their sum equals minus the one-loop contribution of the simple planar graph (2.19) on the sphere, which evaluates to 2 × polygon(1, 2, 4, 3). Also, because the stratification contribution stems from graph (2) in Table 2, it is clear that the sum over operator labelings and bridge lengths produces the same answer for the stratification as for the planar graph. We therefore conclude that the genus-one stratification contribution (2.23) at one-loop order equals minus the planar correlator. A graph whose faces are all hexagons framed by non-zero-length bridges gives no contribution at one loop. In order to evaluate the stratification result or, equivalently, the planar one-loop correlator (5.33), we have to sum over inequivalent operator labelings and bridge lengths. In this case, there are only three distinct labelings. Using the operator lineup in (5.32) and going clockwise (or equivalently going upwards in (5.19)), we have the possible orderings 1-2-4-3 (used above in the derivation of (5.20) and (5.21)), 1-4-2-3, and 1-2-3-4. Making use of the dihedral symmetry of the polygon function (3.13), summing over bridge lengths, and inserting the respective propagator factors, we thus find (5.35), where the sums run over p = 1, . . . , k − 1, because all bridges in the graph must be occupied by at least one propagator. Writing the internal polarization cross ratios α, ᾱ (3.2) in terms of the propagator structures X, Y, and Z (4.1), and plugging these expressions into (5.35), we recover the result for the planar one-loop correlator with the universal polynomial factor R due to supersymmetry [28]. We have computed the stratification contribution for arbitrary polarizations α_i.
In order to compare to the data presented in Section 4, we have to take the Z = 0 limit of the result. This computation shows the importance of summing over all tree-level graphs, even those containing Z propagator structures, and of taking the particular limit Z → 0 only at the end, for comparison with the available data. The reason is that, as we dress such graphs with mirror particles, the overall dependence on the propagator structures can differ from what it was at tree level. This comes about because the one-loop correction to the polygon itself carries a dependence on the R-charge cross ratios, see the expression (3.9) for the building block of the one-loop polygons. As a consequence, the dependence on Z of the tree-level configurations can get canceled at one-loop order, resulting in a contribution which is relevant for matching the Z = 0 data. Let us consider one further example for illustration. Take the following graph, where we have explicitly drawn the propagators, assigned labels to the vertices, and indicated the two faces in two different shades of gray. This graph amounts to the one-loop contribution polygon(4, 1, 3, 2) + polygon(4, 2, 3, 1, 3, 1, 3, 1).
(5.41)
After replacing the explicit expression for the corresponding polygon (3.13), we arrive at the result (5.42), which, after setting Z = 0, yields a non-zero contribution.
Comparison with Perturbation Theory. We have seen above that the only non-trivial stratification contribution to the correlator stems from graph (2). More specifically, it originates from the contributions (a) and (b) in (5.19). We will see that this matches beautifully with the expectation from gauge theory. Stratification is supposed to reproduce perturbative contributions to the genus-one correlator that stem from planar graphs in the free theory. At fixed k and m, that is, at fixed propagator structure X^m Y^{k−m}, there is only one planar graph. We are looking for one-loop decorations of this graph that contribute at subleading order in 1/N_c^2 (i.e. at genus one). All one-loop processes are N = 2 YM (super-gluon) lines between either two vertical or two horizontal propagators. Of course there are many more ways to connect the YM lines to two vertical propagators, but one can easily see that all contributions except the ones shown in (5.44) cancel each other, due to relative signs. The third and fourth figures in (5.44) have genus one, hence they are suppressed by one factor of 1/N_c^2 compared to the first two figures (which are planar). Also, the third and fourth figures carry a relative sign, since one structure constant is flipped compared to the first two figures. Hence we find that the surviving genus-one contributions match the stratification result, and can therefore be associated to graphs of the type (2) in Table 2.
Disconnected Graphs. Before ending this section, let us finally comment on a small subtlety: In addition to the graphs considered so far, one can in principle consider disconnected graphs drawn on a torus. Here, either both components can be planar, or one of them may have genus one. Clearly, by 1/N_c power counting, without interactions, neither case contributes at the same order as non-planar connected four-point graphs. However, much like the secretly planar graphs, we cannot simply discard them, since they can become of the same order in 1/N_c at high enough loop order, once they are dressed by a sufficient number of gluon propagators. Therefore, when performing the stratification procedure, we do in principle need to include them. Unfortunately, at the time of writing this article, we have not succeeded in evaluating the contributions from these graphs when both components are planar, owing to the existence of so many zero-length bridges. We thus assumed that their contributions at one loop vanish, once the subtraction and the Dehn twist are taken into account. We should nevertheless stress that this is a reasonable assumption: Firstly, in perturbation theory, it is clear that such graphs cannot give rise to non-planar contributions at one loop. This implies that the contribution from such disconnected graphs will be canceled by the subtractions, as was the case for (some of) the secretly planar graphs that we discussed in this section. (From a perturbation-theory point of view, one can actually argue that even the planar contribution from such graphs is zero; see the discussion in Figure 17.) Secondly, although we could not compute the contribution from disconnected graphs on a torus, we could show, using the stratification and the Dehn twist, that the contributions from disconnected graphs on a sphere vanish at one loop. This will be demonstrated in Appendix F.
Let us also emphasize that, although the computation is sometimes hard, the proposal we made is quite concrete, and could be tested given unlimited computational resources. It would be an important future task to complete the computation and prove or disprove the cancellation that we assumed.
Stratification Summary and Discussion. We carefully analyzed the fourteen contributions listed in Table 2 and Table 3, adding all the secretly planar graphs and subtracting all pinched surfaces. At the end of a laborious analysis, the punch line is amazingly simple: These terms almost cancel each other completely. (Only contribution (2) in Table 2 ends up not canceling!) In the end, the result is simply minus one times the planar result. In the light of such a simple result, one might wonder if all this stratification business, with all its involved considerations of moduli-space boundaries and subtleties, is a huge overkill. Could it be that, even at higher loops, the stratification ends up boiling down to some simple terms proportional to lower-genus contributions?
Definitely not! On the contrary, at sufficiently high loops, the stratification is in fact the most important contribution, since, for any given size of the external operators, the tree-level skeleton graphs only exist up to some fixed genus order. Higher-genus contributions are therefore given uniquely by the stratification procedure. Hence, if we consider the full 1/N_c expansion, the stratification contributes to all corrections and is the sole contributor starting at some genus order. As an example, for k = 2, we can only draw planar skeleton graphs, hence all higher-genus corrections to this correlator, starting already with the torus, will come uniquely from the stratification procedure! Given the simplicity of the final one-loop result (5.38), and the importance of the stratification at higher loops and higher genus, it is absolutely critical to streamline its analysis. For that, we will likely need to better understand the nature of the various exotic contributions, such as the spirals and loops discussed above.
Finite k Checks
We now proceed to test the integrability predictions against the data described in Section 4, starting with a few examples for finite k. At finite k, the relevant graphs are typically far from the maximal ones. As described earlier, they can be obtained by successively removing edges from the maximal graphs until each operator is connected by at most k bridges, discarding the duplicate ones on the way. On top of this, we should sum over all inequivalent labelings of the vertices and sum over all bridge length assignments such that each operator is connected by exactly k propagators. The statistics of the polygonization procedure for the five lowest k cases is summarized in Table 7. It is apparent that the number of graphs grows very quickly both with k and with the genus, and therefore we have resorted to a Mathematica code to generate them.
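The enumeration described above lends itself to a brute-force sketch. The paper's graphs were generated with Mathematica; the following illustrative Python function (our own sketch, with a hypothetical 4-cycle skeleton as the example input) enumerates assignments of positive bridge lengths such that each operator is connected by exactly k propagators:

```python
from itertools import product

def bridge_assignments(edges, k):
    """Enumerate positive bridge lengths on a skeleton graph such that
    each vertex is connected by exactly k propagators in total."""
    results = []
    for lengths in product(range(1, k + 1), repeat=len(edges)):
        totals = {}
        for (a, b), length in zip(edges, lengths):
            totals[a] = totals.get(a, 0) + length
            totals[b] = totals.get(b, 0) + length
        if all(v == k for v in totals.values()):
            results.append(lengths)
    return results

# The cyclic skeleton 1-2-4-3-1 (a toy 4-cycle example, not a graph from the paper):
cycle = [(1, 2), (2, 4), (4, 3), (3, 1)]
print(len(bridge_assignments(cycle, 2)))  # 1: every bridge carries one propagator
print(len(bridge_assignments(cycle, 3)))  # 2
```

On top of such length assignments, one would still sum over inequivalent vertex labelings, which is how the configuration counts of Table 7 arise.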
k = 2, 3
In the simplest example, k = 2, it turns out that one cannot draw any graph with the topology of a torus, since each operator can be connected by at most two bridges. The single connected graph with this constraint is depicted in (2.19). Therefore, the whole contribution comes from the stratification result (5.38), which in this case takes a particularly simple form. For the case of k = 3, we already encounter non-planar graphs, as depicted in Figure 18. After assigning labels to the vertices and lengths to the bridges compatible with the operators' R-charges, one generates 32 distinct configurations, as indicated in the corresponding entry of Table 7. Regardless of the assignments, the graphs (a), (b), and (d) of Figure 18 produce a vanishing contribution. The vanishing of the cases (a) and (b) can be established by successive use of the pinching limit of the polygon, as illustrated in (3.17). For example, consider the case (a), and label the vertices from 1 to 4 in clockwise order starting from the top left operator. There is a single face, corresponding to an icosagon (20-gon) bounded by the bridges. Taking into account the order of the vertices along this boundary, the one-loop contribution is given by polygon(1, 2, 1, 3, 4, 2, 1, 2, 4, 3). We now apply several pinching limits to reduce that sequence down to polygon(1, 3), which corresponds to a two-point function and hence vanishes by supersymmetry.
The graph (d) is decomposed into a hexagon and an octadecagon, and both vanish once we use the corresponding one-loop expression as given in (3.13).
The only non-trivial graph is (c), which produces a non-zero result. However, after summing over all labelings, those contributions simply cancel out. Therefore, the non-planar graphs do not contribute, and once again the final result comes entirely from the stratification contribution (5.38). For comparison with perturbative data, we now set Z = 0. The resulting expressions for the two cases considered here perfectly match the data shown in Table 5.
k = 4
The case k = 4 is significantly more involved than the previous ones. The number of non-planar graphs is 57, and they give 441 distinct physical configurations once operator labelings and bridge lengths are chosen. Let us consider one example in detail. Among the 441 graphs with assigned labels and bridge lengths, we have the following example, where each solid line now corresponds to a propagator. This graph is decomposed into two polygons: an octagon (dark gray) and a hexadecagon (light gray). Accounting for the corresponding propagators, we can simply use the expression for the corresponding polygons in (3.13) to get the final result. Alternatively, we observe that, using the pinching limit, the hexadecagon degenerates into an octagon. Plugging in the corresponding expression for the one-loop octagon from (5.37), we find that this graph produces the result (6.7). All other graphs are as straightforward to compute as this example. Upon summing over the 441 graphs and adding the stratification contribution (5.38), we recover the prefactor R, and the final result is given by (6.8). After setting Z = 0 and comparing with the data of Table 5 for k = 4, we again find perfect agreement.
k = 5
We have extended our analysis to the case k = 5, which involves 2760 distinct graphs. The procedure is no different from the previous cases, and we simply display the result of the summation over all those genus-one graphs, together with the stratification contribution. Once again, we recover the universal prefactor R (5.39).
When Z = 0 we again recover the perturbative result of Table 5.
To summarize the findings of this section: By summing over genus-one graphs and adding the stratification contribution determined in Section 5, we computed the four-point correlator for a generic polarization of the external BPS operators. We compared these results with data for the particular polarization studied in the literature, namely Z = 0, and found a perfect match in all cases, which strongly corroborates our proposal. The results at generic polarization, with Z ≠ 0, are new predictions of the hexagonalization procedure, which it would be nice to check against a direct perturbative computation.
k ≫ 1: Leading Order
Another interesting case that we will focus on in the following are the contributions F_{k,m} where both m and (k − m) are large, that is, we look at the limit k ≫ 1 with 0 < m/k < 1. In this regime, the four operators are connected by a parametrically large number O(k) of propagators. This implies that graphs where the propagators connecting any two operators are distributed on as many bridges as possible outweigh all other graphs by combinatorial factors. In other words, graphs where any bridge is filled with only a few (or zero) propagators are suppressed by powers of 1/k: The sum over distributions of n propagators on j bridges, with each bridge occupied at least once, grows as n^{j−1}/(j−1)! at large n. Hence only the maximal graphs of Table 1 contribute at leading order in 1/k, since all other graphs have fewer bridges. For these graphs, every face has room for exactly one hexagon, and thus all mirror magnons live on bridges with a large number O(k) of propagators, which means that all quantum corrections are delayed. However, in this work, we consider operator polarizations with (α_1 · α_4) = (α_2 · α_3) = 0, which do not admit propagator structures of the type Z ≡ (α_1 · α_4)(α_2 · α_3)/x_{14}^2 x_{23}^2, see (4.2). In other words, there are no contractions between operators 1 and 4, and none between operators 2 and 3. Hence, even at large k, the dominant graphs will leave room for zero-length bridges and thus admit quantum corrections already at one-loop order.
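The suppression argument above rests on a standard stars-and-bars count: distributing n propagators on j bridges with every bridge occupied can be done in C(n−1, j−1) ways, whose leading behavior at large n is n^{j−1}/(j−1)!, so each missing bridge costs a power of 1/n ∼ 1/k. A quick numerical illustration (ours, not from the paper):

```python
from itertools import product
from math import comb, factorial

def count_distributions(n, j):
    """Brute-force count of (n_1, ..., n_j) with n_i >= 1 and sum n_i = n."""
    return sum(1 for ns in product(range(1, n + 1), repeat=j) if sum(ns) == n)

n, j = 20, 3
assert count_distributions(n, j) == comb(n - 1, j - 1)  # stars and bars

# Leading large-n behavior: comb(n-1, j-1) ~ n**(j-1) / (j-1)!
ratio = comb(n - 1, j - 1) / (n ** (j - 1) / factorial(j - 1))
print(ratio)  # approaches 1 as n grows, so graphs with fewer bridges are 1/n-suppressed
```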
Before diving into the computation, let us quote for reference the leading and first subleading terms in 1/k of our data from Table 5, shown in (6.11) and (6.12) (subleading terms in gray).

Polygonization: Maximal Cyclic Graphs. Since there are no contractions between operators 1 and 4, nor between operators 2 and 3, we need to consider graphs where the four operators are cyclically connected, as in 1-2-4-3-1 (later we will see that non-cyclic graphs are also important). We can obtain all possible graphs of this type by deleting bridges from the maximal graphs listed in Table 1. Among all cyclically connected graphs, we only consider graphs where as many bridges as possible are filled. We will call those "maximal cyclic graphs". These are the only graphs that contribute at leading order in 1/k. All further graphs only contribute at subleading orders in 1/k, and can be obtained by setting further bridge lengths to zero. Starting from any of the 16 cases listed in Table 1, we can obtain cyclic graphs by grouping the four operators into two pairs and deleting all bridges that connect the members of either pair. Doing this in all possible ways for all 16 graphs, and discarding non-maximal as well as duplicate graphs, we end up with the complete set of maximal cyclic graphs A through Q displayed in Table 8. Several ways of deleting bridges (keeping the diagonal ones) lead to equivalent configurations, see (6.14), which we recognize as case B. The derivation of the further cases C through Q from the maximal graphs in Table 1 is given in Appendix B.2. The large-weight limit brings about another simplification: In Section 6.1 above, we saw that magnons carrying non-trivial R-charges may cancel Z propagator structures (4.1) such that the final result is free of Z's. Such cancellations cannot occur here, since all graphs of Table 8 dissect the torus into four octagons separated by large bridges, and such octagons do not leave enough room for Z propagator cancellations.
Hence we do not have to include graphs containing Z propagators.
Looking at the cases A through Q, we find that the bridge configurations of the cases A, C, D, E, F, H, I, J, N, and K imply a constraint on m: either m = 0 or m = k. Hence, even though no further bridges can be added to these graphs (under the cyclicity constraint), these cases are suppressed at large m and (k − m), and only the cases B, G, L, M, P, and Q remain (these were called B, A, C, D, E, and F in our previous publication [1]).

Table 9: All inequivalent operator labelings for the graphs that contribute to leading order in 1/k, together with their combinatorial factors according to (6.10). The order of the labels runs clockwise, starting at the top left operator in the graphs of Table 8.
For these graphs, we now have to consider all possible operator labelings, taking care that some seemingly different labelings in fact produce identical bridge configurations. In addition, each labeled graph comes with a combinatorial factor from the distribution of propagators on the various bridges according to (6.10). We list all inequivalent labelings for the relevant graphs, as well as their combinatorial factors, in Table 9. For case P, all operator labelings are equivalent. Beyond that, it has an extra symmetry: Every pair of operators is connected by a pair of bridges. Exchanging the members of all pairs simultaneously amounts to a cyclic rotation of the four operators and thus leaves the configuration invariant. This operation is an example of a graph automorphism, see the last part of Section 2.2, in particular (2.10). The naive sum over bridge lengths gives a combinatorial factor m^2 (k − m)^2, which thus has to be corrected by a factor of 1/2.
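The factor of 1/2 for case P is an instance of Burnside's lemma for a Z_2 automorphism acting on the bridge-length assignments. The following toy model is our own sketch (the four bridge pairs carrying splits of m, m, k−m, k−m propagators are a simplifying assumption, not the paper's exact bookkeeping); it counts orbits under the simultaneous swap of pair members:

```python
from itertools import product

def case_p_counts(m, km):
    """Toy model: four bridge pairs carrying splits of m, m, km, km
    propagators (each bridge >= 1).  The graph automorphism swaps the
    two members of every pair simultaneously."""
    def splits(n):
        return [(a, n - a) for a in range(1, n)]
    naive = fixed = 0
    for pairs in product(splits(m), splits(m), splits(km), splits(km)):
        naive += 1
        if all(a == b for (a, b) in pairs):
            fixed += 1  # assignments invariant under the swap
    orbits = (naive + fixed) // 2  # Burnside's lemma for a Z_2 action
    return naive, fixed, orbits

naive, fixed, orbits = case_p_counts(10, 8)
print(naive, orbits)  # 3969 1985: the orbit count approaches naive/2 at large m, k-m
```

Since symmetric assignments are rare (at most one even split per pair), the corrected count is naive/2 up to subleading terms, which is the 1/2 quoted above.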
Sprinkling: One- and Two-Loop Check.
The previous maximal cyclic graphs polygonalize the torus into four octagons each, generating some toroidal polyhedra. We represent their corresponding nets in Table 10 for easier visualization. The one-loop and two-loop computations can then be performed straightforwardly from a single particle sitting in the single ZLB of each octagon. Such contributions can be easily computed to any desired loop order using the ingredients of Appendix D (at one loop we can simply use the polygon function of Section 3.1). At one loop this is the only particle configuration contributing. At two loops, we have to consider in addition two virtual particles in different octagons, which essentially amounts to the one-loop octagon squared. The contribution of two virtual particles inserted in the same octagon turns out to be delayed to four loops as shown in Appendix D. The final step is then to sum over the labelings of the vertices, weighted by the combinatorial factors arising from the different ways of distributing the propagators among the bridges. Table 9 contains the details of these combinatorics. We have performed this calculation in [1] and found a perfect agreement with the large k data (6.11) and (6.12).
Sprinkling: Three-Loop Prediction. As far as we know, there is no non-planar perturbative data available at three loops. The planar case, however, was computed in [29].
Here, we are going to make a prediction for the three-loop result at leading order in large k using integrability. In principle, one can keep going and make predictions at arbitrary order in g^2, and it would be very interesting to try to re-sum the series.

Table 10: After completing the relevant skeleton graphs B, G, L, M, P, and Q with the missing ZLBs, we obtain complete hexagonalizations of the four-punctured torus. The outcome is that each configuration is decomposable into 8 hexagons, or 4 minimal octagons, using the terminology of Section 3.1. The distinct octagons are colored in white and gray. The colored edges correspond to the physical ones, with the operators being labeled as A, B, C, and D. The subscript in each edge label indicates to which of the eight hexagons the respective edge belongs. Later on, we will specify the labels A, B, C, and D of the operators. For each hexagonalization, the dashed lines correspond to the ZLBs, while the solid gray bridges have non-zero lengths.

At three loops, one has the following possible contributions:
1. Three-loop correction of the one-particle octagon.
2. Two mirror particles inserted at different octagons.
3. Three mirror particles inserted at three different octagons.
4. Multiple mirror particles inserted in the same octagon.
One can show that contribution 3 is only present for case P, because all other cases have only two octagons involving four operators, and an octagon involving only three (or two) operators vanishes, as the relevant cross ratio for gluing is either 0, 1, or ∞. Contribution 4 kicks in only at four loops (the case of two mirror particles on the same edge is computed in Appendix D) and is thus not relevant here.
The new ingredient for the three-loop computation, compared to the two-loop calculation performed in [1], is the one-particle mirror contribution in a ZLB expanded to three loops, given by (6.15) (see Appendix D for details), where F^(3) denotes the three-loop ladder integral. The one-loop and two-loop ladder integrals F^(1) and F^(2) are defined in (4.4) and (4.5) (see also the expression (D.22) in terms of polylogarithms), and the cross ratios z, z̄ are defined in (3.3).
Using the hexagonalized graphs of Table 10, the combinatorial factors of Table 9, and adding all mirror-particle corrections, one arrives at the three-loop prediction (6.17).
k ≫ 1: Subleading Order

At subleading order in 1/k, three kinds of graphs contribute: the graphs already used in the leading-order computation, the graphs obtained from the leading-order graphs by deleting one bridge, and the "deformed" graphs, which are graphs having one pair of propagators of type Z.
Leading Cyclic Graphs. The graphs B, G, L, M, P, and Q used in the leading-order computation also contribute at subleading order in large k. The integrability contribution is computed exactly as in the leading-order case; in particular, one uses the same set of hexagons of Table 10, but now considers the subleading contribution to the combinatorial factors given in (6.10), with n_0 = 1. Recall that, to obtain a final term with the propagator structure X^m Y^{k−m} at one-loop order, it is necessary to also consider the neighboring tree-level graphs with propagators X^{m−1} Y^{k−m+1} and X^{m+1} Y^{k−m−1}. This follows because the mirror particles carry R-charge and can change the propagator structure of a tree-level graph [1], as seen explicitly in the ratios of X, Y, and Z propagator factors in the prefactor of (6.15) rewritten via (5.36), see (6.18). One important remark is that, differently from the leading case, where the combinatorial factor of each graph is universal, in the subleading case the combinatorial factor changes when considering the neighboring graphs. As an example, Table 11 shows the combinatorial factors relevant for case B.
Subleading Cyclic Graphs. In addition to the leading-order graphs, there will be contributions from cyclic graphs that are obtained from the cases B, G, L, M, P, and Q by removing one of their bridges. Deleting a bridge in all possible ways and identifying identical graphs, we find seven inequivalent subleading cyclic graphs, see Table 12. The number of inequivalent labelings is indicated in the parentheses below each graph. The hexagonalization of the subleading cyclic graphs can be obtained from the hexagonalization of the leading cyclic graphs given previously, by replacing the corresponding line that was deleted in the process by a zero-length bridge. The final step is to add the mirror particles. In this case, we have one-, two-, and three-particle contributions, because there are four hexagons sharing bridges of zero length in a sequence. Thus, at one-loop order, the integrability computation uses the expressions for both the octagon and the dodecagon of (3.13). In addition, the relevant combinatorial factors can be read from the leading term of formula (6.10).

Table 12: The seven inequivalent graphs that are obtained by deleting one bridge from graphs B, G, L, M, P, or Q of Table 8. These graphs contribute at subleading order in k. The parentheses show the number of inequivalent labelings that each graph has.
Deformed Graphs. At subleading order in large k, there is room for so-called deformed graphs. The mirror particles carry R-charge; in other words, they depend on the R-charge cross ratios α and ᾱ, as seen for example in (6.18). Hence the final R-charge structure of a graph depends not only on the tree-level propagators, but also on the mirror particles. For example, graphs that include a propagator of the type Z ≡ (α_1 · α_4)(α_2 · α_3)/x_{14}^2 x_{23}^2 can give a final term free of Z's after the inclusion of the mirror corrections, which is thus compatible with our chosen polarizations (4.2) and gives a non-zero contribution in the limit Z → 0. We already encountered the same phenomenon when we performed checks at finite k. In the sum over graphs, we hence must include graphs with Z propagators.
Graphs containing one or more propagators of type Z (and otherwise only large bridges filled with many propagators) will be called deformed graphs. At one loop, the relevant deformed graphs include only one pair of Z propagators connecting two disjoint pairs of operators. We can classify all such graphs by starting with the set of maximal graphs listed in Table 1, declaring two of the bridges to become Z propagators, and deleting other bridges such that the graph becomes subleading in k. In the limit of large k that we consider, extremal graphs with m = 0 or (k − m) = 0 will not contribute. Taking into account that one of the bridges attaching to each operator in Table 1 will become a Z propagator, this means that we only need to consider the graphs 1.2.1, 1.2.2, 1.5.3, 2.1.1, 2.1.2, 2.1.3, 3.1, and 3.2. Starting with these, and deleting bridges / replacing bridges by Z propagators, we arrive at the set of inequivalent deformed graphs shown in Table 13. Alternatively, the graphs in Table 13 can be obtained by starting with the graphs B, G, L, M, P, and Q of Table 8, and inserting Z propagators as well as deleting one bridge in all possible ways.
After having determined all deformed graphs, the next step is the hexagonalization. This is done by adding bridges of zero length to the graphs and dividing them into eight hexagons. Due to the flip invariance of the mirror-particle corrections, any different set of zero-length bridges will give the same final result. In the case of the deformed graphs, multi-particle contributions show up, and we use the expressions for the octagon, decagon, and dodecagon of (3.13). In order to perform the integrability computation for the deformed graphs, one uses that α and ᾱ are determined by the equations (5.36). The limit Z → 0 is only taken after adding the mirror-particle corrections to a graph. Similar to the case of the leading cyclic graphs, to get a final term proportional to X^m Y^{k−m} at one loop, one has to consider the set of graphs corresponding to the tree-level terms X^{m−1} Y^{k−m} Z and X^m Y^{k−m−1} Z. Most of the graphs of Table 13 give a vanishing contribution. One example of a non-vanishing graph is the following:

Summary. The subleading integrability result is obtained by summing the three different kinds of contributions described above. The final result agrees with the perturbative data. It is possible to use the same steps to compute the predictions for the remaining orders in 1/k.
Conclusions
We performed detailed tests of our proposal for the application of the hexagon formalism to non-planar correlators at weak coupling. The basic strategy is the same as in the planar case: We first draw all possible tree-level diagrams on a given Riemann surface, dissect them into hexagonal patches, and glue those patches back together by summing over complete sets of intermediate (mirror) states. The key new idea that is essential in the non-planar case is the procedure called stratification: We first computed all contributions coming from tree-level graphs drawn on a torus, including the graphs that are actually planar. After doing so, we subtracted the contributions from degenerate Riemann surfaces, which in turn can be computed by taking the planar results and shifting the rank of the gauge group. The procedure was tested against available perturbative data, and the results agree perfectly. What we developed in this paper may be viewed as a bottom-up approach to constructing a new way of performing string perturbation theory, based on the triangulation of the worldsheet. The central object in our formalism is the hexagon, which is a branch-point twist operator on the worldsheet. The idea of using the twist operator for constructing higher-genus surfaces is not new; it was one of the motivations for Knizhnik to conduct detailed studies of twist operators [30]. It also showed up in other important contexts, such as the low-energy description of matrix string theory [31]. In this sense, the hexagon formalism is yet another instance of "old wine in new bottles", which we have been encountering multiple times in recent years. There are several obvious next steps. It would be important to extend the computation to higher loops, both in λ and in 1/N_c.
Also desirable would be to tie up several loose ends in our arguments: For instance, in the discussion in Section 5, we estimated the contribution from certain magnon configurations (5.30) by claiming that they are related to simpler configurations via Dehn twists and flip transformations. It would be nice to perform a direct computation of such configurations and show the flip invariance explicitly.
One practical obstacle for such computations is the complexity of the multi-particle integrands. Even for the two- and three-magnon contributions at one loop, which were studied in this paper, the integrands are horrendously complicated. Given the simplicity of the final answer, it would be worth trying to find a better way to organize the integrand. This will eventually be crucial if we are to perform more complicated and physically interesting computations, such as taking the strong-coupling limit and reproducing the supergravity answers. Another strategy is to avoid dealing with the complicated integrand for now, and to look for simplifying limits. In flat space, it was shown by Gross and Mende that high-energy string scattering takes a remarkably simple and universal form [32]. The results were later used by Mende and Ooguri, who succeeded in Borel-resumming the higher-genus contributions in the same limit [33]. In our context, the analogue of the high-energy limit would be played by large operator lengths (charges). As already observed in this paper, taking the large-charge limit simplifies the computation drastically. It is therefore interesting to analyze the limit in more detail, and possibly try to re-sum the 1/N_c corrections [18,19]. It would be even more exciting if we could make a quantitative prediction for the non-perturbative corrections by analyzing the large-order behavior of the 1/N_c expansion [34], which one could test against the direct instanton computation [35]. (Other instances of "old wine in new bottles" are the conformal bootstrap and the S-matrix bootstrap.)

The relation between the summation over graphs and the integration over the moduli space of Riemann surfaces deserves further study. As mentioned in the introduction, one big puzzle in this regard is the fact that the summation over graphs is discrete, while the moduli space is continuous. In the study of simple matrix models, such a discretization of the moduli space was attributed to the topological nature of the dual worldsheet theory. We should however note that the discretization can take place even in non-topological worldsheet theories, namely in the light-cone quantization of the DLCQ background [31,36]. This is in fact closer to our context since, in the generalized light-cone gauge, the length of the string becomes proportional to the angular momentum on S^5, which takes discrete values. To make more progress on these points, it would perhaps be helpful to study the recently proposed worldsheet action for the DLCQ background [37], which is suited for quantization in the conformal gauge, and to clarify how the conformal-gauge computation reproduces the light-cone-gauge expectation that the moduli space gets discretized.
As a final remark, let us emphasize that the results in this paper are just the first steps in the application of integrability to non-planar observables. Firstly, it would be interesting to understand other non-planar quantities, such as non-planar anomalous dimensions of single-trace operators, and anomalous dimensions of double-trace operators; see [7] for an important initial attempt. Secondly, although it is remarkable that integrability can reproduce non-planar quantities, the computation performed in this paper is almost as complicated as the direct perturbative computation, and as we include more and more mirror particles, we face the integrand challenges alluded to above. Is there something better we can do? Can we reformulate this formalism, for instance by combining it with the quantum spectral curve [39]? In fact, there are already two data points which indicate that the quantum spectral curve could be useful for analyzing correlation functions [40]. Whatever the upgraded formalism will be, we expect that the results in this paper will be useful in finding it.
A Details on Non-Planar Data
In (4.8)-(4.10), we represented the quantum corrections F_{k,m} (4.3) to the four-point correlator G_k (4.2) in terms of the conformal box (4.4) and double-box functions (4.5), as well as color factors C^1_{k,m} and C^i_{k,m}, i ∈ {a, b, c, d}. In the following, we will explain the color factors and their evaluation in more detail. We will also give further expressions for F_{k,m} as well as \hat F_{k,m}. The expressions depend on the choice of gauge group, and we will present results for both U(N_c) and SU(N_c). The two-loop color factors are [27]: Here, tr((a_1 . . . a_k)) ≡ tr(T^{(a_1} . . . T^{a_k)}) denotes a totally symmetrized trace of adjoint gauge-group generators T^a, without the 1/n! prefactor. In the above formulas, 0 ≤ m ≤ k − 2 for C^{1,b,c,d}_{k,m}, and 0 ≤ m ≤ k − 3 for C^a_{k,m}, whereas C^a_{k,k−2} ≡ 0. Pictorially, we can represent the color factors by graphs in which the big circles are the operator traces, the dots are structure constants, the thin lines are single color contractions, and the thick lines are multiple color contractions. For C^1, the horizontal thick lines stand for (m + 1) propagators, while the vertical thick lines stand for (k − m − 1) propagators. For the two-loop color factors C^a, C^b, C^c, and C^d, the horizontal lines stand for m propagators and the vertical lines stand for (k − m − 2) propagators.
Expanding the color factors to subleading order in 1/N_c (4.12), the leading coefficients (4.13) are straightforwardly computed [26,27]. The subleading coefficients are much harder to obtain; their computation is outlined in the following. The fusion and fission rules follow from the completeness relation. We set γ = 1 to match the normalization of [26,27]. The structure constants are normalized such that
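The completeness relation behind the fusion and fission rules can be verified numerically in a small case. The following sketch (in Python, rather than the Mathematica used for the actual computation) checks the SU(N_c) completeness relation Σ_a (T^a)_{ij}(T^a)_{kl} = γ(δ_{il}δ_{jk} − δ_{ij}δ_{kl}/N_c) for N_c = 2, together with the fission rule for traces that follows from it; the normalization T^a = σ^a/√2 is chosen here purely to realize γ = 1 and is an assumption of this sketch.

```python
import numpy as np

# Pauli matrices; T^a = sigma^a / sqrt(2) gives tr(T^a T^b) = gamma * delta^{ab} with gamma = 1.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / np.sqrt(2) for s in sigma]
N, gamma = 2, 1.0

# Left-hand side: sum_a (T^a)_{ij} (T^a)_{kl} as a rank-4 tensor.
lhs = sum(np.einsum("ij,kl->ijkl", t, t) for t in T)

# Right-hand side of the SU(N) completeness relation:
# gamma * (delta_{il} delta_{jk} - delta_{ij} delta_{kl} / N).
d = np.eye(N)
rhs = gamma * (np.einsum("il,jk->ijkl", d, d) - np.einsum("ij,kl->ijkl", d, d) / N)
assert np.allclose(lhs, rhs)

# Fission rule implied by completeness:
# sum_a tr(A T^a) tr(B T^a) = gamma * (tr(AB) - tr(A) tr(B) / N).
rng = np.random.default_rng(0)
A = rng.random((N, N)) + 1j * rng.random((N, N))
B = rng.random((N, N)) + 1j * rng.random((N, N))
fission = sum(np.trace(A @ t) * np.trace(B @ t) for t in T)
assert np.isclose(fission, gamma * (np.trace(A @ B) - np.trace(A) * np.trace(B) / N))
```

The same check runs for any N_c once a normalized generator basis is supplied; only the rank-2 case is spelled out here.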
Results of Contractions.
We have explicitly performed the contractions in (A.1) and (A.2) with Mathematica for various values of k and m, for some coefficients up to k = 8 and for others up to k = 9. The results for the subleading color coefficients are displayed in Table 4 (page 40). Depending on the algorithm, the computation can take a very long time (up to ∼1 day on 16 cores for a single coefficient at fixed k and m) and becomes memory intensive (up to ∼100 GB) at intermediate stages.
For gauge group SU(N_c), we find In all cases, there are more data points than degrees of freedom in the quartic polynomial ansatz. Moreover, one can convince oneself that the difference between the U(N_c) and SU(N_c) gauge groups should not depend on m, and should be at most quadratic in k. We can thus be fairly confident that the results are correct for general k and m.
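The consistency logic here (more data points than free coefficients of a polynomial ansatz) can be illustrated with a toy least-squares fit; the coefficients below are made up for illustration and do not reproduce any formula from the paper.

```python
import numpy as np

# Hypothetical quartic coefficients standing in for a color-factor formula C(k);
# the actual polynomials from the paper are not reproduced here.
true_coeffs = np.array([3.0, -2.0, 0.5, 1.0, -4.0])  # c4, c3, c2, c1, c0

def poly(k):
    return np.polyval(true_coeffs, k)

# Five unknowns, eight data points (k = 2..9): the system is overdetermined.
ks = np.arange(2, 10)
data = poly(ks)

# Vandermonde design matrix for the quartic ansatz c4*k^4 + ... + c0.
V = np.vander(ks, 5)
fit, residuals, rank, _ = np.linalg.lstsq(V, data, rcond=None)

assert rank == 5                      # the ansatz is uniquely determined
assert np.allclose(fit, true_coeffs)  # and the extra data points are consistent
```

If even one data point disagreed with the quartic ansatz, the residual of the overdetermined fit would be non-zero, which is exactly the consistency check invoked in the text.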
Analytic Check. We can perform an analytic check of the expressions (A.11)-(A.15) by studying the limit of large k with 0 < m/k < 1 fixed and finite. As outlined in the previous paragraph, we can organize the contractions in the color factors (A.1) and (A.2) at the first subleading order in 1/N_c^2 as a sum over graphs on the torus. At leading order in large k, only graphs with a maximal number of bridges contribute; all other graphs are combinatorially suppressed due to (6.10). The contributing graphs are exactly the ones listed in Table 8. For each of those graphs, we have to sum over all inequivalent labelings of the four operators, over all possible combinations of non-zero bridge lengths on the edges of the graph, and over all possible insertions of f^{ab}_e f^{cde} terms (expanded as in (A.10)). For each fixed configuration of bridge lengths, the sum over all planar contractions compatible with those bridge lengths (from the total trace symmetrizations) gives a factor k^4 from cyclic rotations of the four operators, times a factor (m + 1), which cancels the combinatorial denominators in (A.1) and (A.2). We will now go through the graphs of Table 8 and find the number of inequivalent labelings as well as the combinatorial factors from the summation over bridge lengths. The insertions of f^{ab}_e f^{cde} terms will be considered below. Case P: For this bridge configuration, all operator labelings are equivalent. There is one more symmetry: every pair of operators is connected by two bridges. Exchanging all such bridge pairs simultaneously leaves the configuration invariant (the operation is equivalent to a specific rotation of each operator, see also (2.10)).
The resulting over-counting in the naive sum over bridge lengths needs to be compensated by a factor of 1/2. For large m and k, the (naive) sum over bridge lengths gives m^2 (k − m)^2.
Case Q: As for Case P, all operator labelings are equivalent. This graph has no additional symmetry though. The sum over bridge lengths gives a factor m^2 (k − m)^2.
Now we come to the insertion of f^{ab}_e f^{cde} factors (called "f^2" in the following). The f^2 factors either attach to three of the four operators (for C^a and C^c), or to all four operators (for C^1, C^b and C^d). The bridge configurations A through Q all decompose the torus into four octagons. One octagon of case B and two octagons of case G involve only two of the four operators, hence they cannot accommodate an f^2 factor. All other octagons involve either three or all four operators. For all cases and all operator labelings, inserting an f^2 term into an octagon that involves only three operators produces a zero, since either none of the four trace terms in (A.10) contributes, or all of them contribute and sum to zero. Thus all non-trivial contributions have both f^2 factors inserted into octagons that involve all four operators. In all such insertions, only one of the four trace terms of (A.10) contributes, and the signs of those terms of the two f^2 factors always multiply to +1. The combinatorial factors from inequivalent f^2 insertions for the relevant cases are as follows. Cases B, G, L, M: In these cases, there are two four-operator octagons. For C^{a,b,d}, the two f^2 factors cannot be inserted into the same octagon, hence there are only two inequivalent ways to distribute them. For C^c, the two f^2 factors can also be inserted into the same octagon, hence there are four ways to distribute them.
Case P: In this case, each of the four octagons involves all four operators. Again, the two f^2 factors can be inserted into the same octagon for C^c, but not for C^{a,b,d}. Hence, there are 16 ways to distribute the f^2 terms for C^c, but only 12 ways to do so for C^{a,b,d}.
Case Q: In this case, each of the four octagons involves only three of the four operators, and hence there are no non-trivial f^2 insertions.
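The insertion counts quoted above are simple occupancy counts: two f^2 factors, treated as distinguishable (which reproduces the counts in the text), distributed over the four-operator octagons, with or without the exclusion of double occupancy. A minimal sketch:

```python
from itertools import product

def insertions(n_octagons, allow_same):
    """Number of ways to place two distinguishable f^2 factors into
    n_octagons four-operator octagons, optionally forbidding both
    factors from landing in the same octagon."""
    placements = [
        p for p in product(range(n_octagons), repeat=2)
        if allow_same or p[0] != p[1]
    ]
    return len(placements)

# Cases B, G, L, M: two four-operator octagons.
assert insertions(2, allow_same=False) == 2   # C^{a,b,d}: cannot share an octagon
assert insertions(2, allow_same=True) == 4    # C^c: sharing allowed

# Case P: all four octagons involve all four operators.
assert insertions(4, allow_same=False) == 12  # C^{a,b,d}
assert insertions(4, allow_same=True) == 16   # C^c
```

These are just n(n−1) and n^2 for n available octagons, matching the factors 2, 4, 12, and 16 in the text.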
Summarizing the above, at large k with m/k fixed and 0 < m/k < 1, we find the combinatorial structure displayed in Table 14. Multiplying all factors and summing all cases, we find:
One can indeed see that the above formulas reproduce the leading terms of (A.11)-(A.15). This match is an important cross-check both of the results (A.11)-(A.15) and of the classification of torus contractions in Table 8. Further results are displayed in Table 5 (page 41, with the definitions (4.15) and (4.16)). For gauge group SU(N_c), we find the following, where we have suppressed the arguments (z, z̄) of all box and double-box functions, and where (crossing) stands for s times the whole preceding expression with the replacements indicated. Using the transformations (4.19) and (4.20), it is easy to verify that the expressions above are invariant under crossing x_1 ↔ x_4. Due to supersymmetry, the quantum corrections to the correlator Q_1 . . . Q_4 contain a universal prefactor R (5.39) that is usually pulled out. In the bulk of this work, we have rather used the expansion (4.2) without R factored out, because it is better suited for comparison with our integrability-based computation. The relation between the different expansion coefficients F_{k,m} and \hat F_{k,m} is shown in (4.8). For completeness, we also state the perturbative results for \hat F_{k,m}. For gauge group U(N_c), the expressions are: whereas for gauge group SU(N_c): (A.31); note the definitions (4.15) and (4.16). It is easy to see that the above formulas obey crossing symmetry: under the crossing transformation, crossing invariance of (A.27) is equivalent to the relation below. Remark. From the above expressions, we note a relation which is equivalent to the statement that the equality of coefficients in front of F^{(1)} and F^{(2)} in Table 5 remains true for any N_c. This equality of the coefficients (up to overall numerical factors) of the ladder integrals F^{(ℓ)} at any ℓ-loop order can be understood from integrability: this term stems from the one-particle contribution, which at ℓ loops is proportional to F^{(ℓ)}. The prefactor of the single-particle excitation is given purely by graph combinatorics, which is independent of the loop order.
B.1 Bottom-Up Construction of All Graphs
In Section 2.2, we manually classified all maximal graphs on the torus (displayed in Table 1 on page 8). All other graphs can be obtained by deleting bridges from these maximal graphs. Here, we want to outline an algorithm that produces all graphs, maximal and non-maximal. The algorithm can be used for any genus and for any number of operators, but it can become very time-consuming.
The main step of the algorithm takes a list of graphs and adds to it all graphs obtained by inserting another bridge (one that is homotopically inequivalent to all previous bridges) into any of the graphs already in the list. The new bridge may attach to an operator in between two existing bridges, or it may split an existing bridge in two. Graphs related by rotations or relabelings of the operators or bridges are identified. Duplicate graphs as well as graphs exceeding the wanted genus are discarded. This step is iterated, starting with the "empty" graph with n vertices (operators) and no bridges. The algorithm stops once the iteration step generates no new graphs, and will have produced all inequivalent graphs with n vertices whose genus is equal to or lower than the wanted genus. The maximal graphs are the ones that exceed the wanted genus when any possible bridge is added.
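A stripped-down version of this iteration can be sketched as follows. The sketch enumerates loopless multigraphs on n labeled vertices up to relabeling, adding one bridge at a time and discarding duplicates via a canonical form; the genus bookkeeping (which requires tracking the cyclic order of bridges at each operator) is omitted, so the cut-off here is a plain bound on the number of bridges rather than the genus test used in the paper.

```python
from itertools import permutations, combinations

def canonical(edges, n):
    """Canonical form of a multigraph: the lexicographically smallest
    sorted edge multiset over all relabelings of the n vertices."""
    best = None
    for perm in permutations(range(n)):
        relabeled = tuple(sorted(tuple(sorted((perm[a], perm[b]))) for a, b in edges))
        if best is None or relabeled < best:
            best = relabeled
    return best

def grow(n, max_edges):
    """Iteratively insert one bridge (edge) at a time, starting from the
    empty graph, and discard duplicates.  A full implementation would also
    track the ribbon structure of the graph and discard graphs exceeding
    the wanted genus; both are omitted in this sketch."""
    levels = [{canonical((), n)}]
    while len(levels) <= max_edges:
        new = set()
        for g in levels[-1]:
            for a, b in combinations(range(n), 2):  # no self-bridges
                new.add(canonical(g + ((a, b),), n))
        levels.append(new)
    return levels

levels = grow(4, 2)
# Up to relabeling: 1 empty graph, 1 single-edge graph, and 3 two-edge
# graphs (a double edge, a path, and two disjoint edges).
assert [len(level) for level in levels] == [1, 1, 3]
```

The brute-force canonicalization over all n! relabelings mirrors the "graphs related by relabelings are identified" step; it is only practical for the small operator counts relevant here.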
B.2 Cyclic Graphs from Maximal Graphs
As explained in Section 2.2, all cyclic graphs are obtained from the set of maximal graphs in Table 1 by grouping the four operators into pairs and deleting edges that connect the members of each pair. In the following, we list the descendance of the cases C through Q from the maximal graphs 1.3 through 3.1. We have only kept inequivalent cyclic graphs, and have discarded cases that have a non-maximal number of bridges (the latter can all be obtained by deleting further bridges from the following maximal graphs).

Figure 19: The contribution of two particles in the same ℓ = 0 mirror edge. There are two hexagons involved, and we call them the left and right hexagons. By an explicit calculation, one verifies that it only contributes at four loops and beyond.
D.1 One-Particle Contribution with l = 0
Consider the hexagons H_1, formed by the operators O_{1,4,3}, and H_2, formed by the operators O_{1,2,4}, as on the left in Figure 19. The integrand for the one-particle mirror contribution for gluing the edge 1-4 with ℓ_{14} = 0 was given in [5]. It reads as below, where the cross-ratio z and the R-charge cross-ratio α are defined in (3.1) and (3.2). Using the weak-coupling expansions given in (C.1) and (C.2), one can find the integrand up to order g^6. The integral is done by residues, and one gets the one- and two-loop results used in our first paper [1] and the three-loop result given in (6.15).
D.2 Two Particles in the Same l = 0 Mirror Edge
This subsection is devoted to the computation of the two-particle contribution in the same mirror edge shown in Figure 19. It will be shown in particular that it contributes only at four loops. Recall that the a-th mirror bound state X_a is composed from the tensor product of two factors belonging to the a-th antisymmetric representation of su(2|2). A basis for this representation is given below (α_i = 3, 4), where (φ^1, φ^2, ψ^3, ψ^4) form an su(2|2) fundamental multiplet, a is called the bound-state index, and the dots stand for permutations. As discussed in [5,8], the basis above has to be modified for the hexagonalization procedure to reproduce the perturbative data. It is necessary to add so-called Z-markers to some of the basis states, and the prescription used here follows from the one given in appendix A of [8]. The addition of Z-markers has two consequences: they give a contribution to the weight factors, and when one moves and removes them using the rules given in [9], one can get factors of momenta. Note that a rigorous explanation for the Z-marker prescription is still lacking. The dressing of the basis states is as follows (the bar denotes antiparticles), where X^I_a(u) is a mirror magnon with bound-state index a and rapidity u, with I being a (flavor) index for the a-th bound-state representation, and γ denotes the mirror transform that transports excitations from one edge of the hexagon to the next. The values of t^I_u and t^J_v depend on the field content of the bound-state basis elements, and on whether one is considering the "+" or "−" dressing. The rules to find the values of the t_i are given below for the "+" dressing, where undotted/dotted labels are left/right su(2|2) fundamental indices, and the prescription is to average over the two different dressings at the end of the calculation.
Within a hexagon form factor h, one can move all Z-markers to the left, and then remove them via the rules (see Appendices C and F in [9]) χZ = e^{ip} Zχ and h|Z^n Ψ⟩ = z^n h|Ψ⟩, where χ is a fundamental magnon, and Ψ is a generic spin-chain state. When removing all Z-markers in this way, it is possible to show that, for any values of t^I_u and t^J_v, all momentum factors e^{ip} cancel each other.
The hexagon form factors are matrices in flavor space. In what follows, we are going to work in the string frame, where the non-vanishing components of the one-particle hexagon form factors are given below. The contribution from two particles in the same zero-length bridge is the result of the following integral, where the µ's are the measure factors, and the W's are weight factors associated to the particles, whose origin is a PSU(2,2|4) transformation that aligns the frames of the two hexagons [5]. In order to simplify the calculation of the matrix part (flavor sums), it is convenient to use the following identity to have both hexagon form factors with the same crossed arguments, where the superscript c indicates that the indices A and Ȧ of the excitations are swapped.
The precise values of the signs can be deduced from the crossing rules [9,41] for fundamental magnons χ. In particular, one has (−1)^{Ī} = (−1)^{scalars_{Ī} + ḟ_{Ī}}, (D.11) with ḟ_{Ī} the number of fermionic dotted indices. The weight factor W was computed in [5], and it was rewritten in [8] taking both the Z-marker prescription and the "+" and "−" dressings into account, where the angles were defined in (D.2), and the eigenvalues L_I and R_I of the generators L and R can be deduced from the action of these generators on the fundamental excitations (the dotted indices have opposite eigenvalues). As a consequence of the formulas above, one has the expression below. Here, F_{ab} contains the matrix part and the flavor-dependent part of the weight factor. It is given in terms of S, the mirror bound-state S-matrix [8]. In principle, one can use the unitarity of the S-matrix to simplify the expression above; however, one has to check that the weight factors do not spoil this simplification. Indeed, the S-matrix has a block-diagonal decomposition [42,43,8], and fixing the indices J_a and I_a, one can show that the resulting states, after the action of the S-matrix, have a non-vanishing inner product only with definite weight-factor eigenstates, so unitarity can indeed be used. As an example, let us select particular values of J_a and I_a corresponding to case Ia of [8], i.e. we have, for some k and l, the states below. As a consequence, all final states have precisely two φ^1's and the same total number of ψ^1's and ψ^2's. Thus they have non-zero inner products only with definite weight-factor eigenstates, and this selects only a particular non-trivial set of values for J_b and I_b.
Using the unitarity of the S-matrix, we obtain the expression below, where the factor of 1/2 is present because we are averaging between the "+" and "−" dressings, and the T^± are twisted transfer matrices, whose definition is given below. Substituting the expression for F_{ab} into (D.14), it only remains to evaluate the integral.
Using the weak-coupling expansions given in Appendix C, it is easy to see that this term contributes only at four loops. The integral is easily evaluated by residues, and at order g^8 it gives the result below for M^{(2)}_{same edge}(z, α), where F^{(1)}, F^{(2)} and F^{(3)} were given in (4.4), (4.5), and (6.16). Another representation for F^{(L)} is
D.3 The Three-Particle Contribution
Next, we compute the three-particle contribution, shown in both Figure 14 and Figure 20, using integrability. Note that this is a particular kind of three-particle contribution, as one can flip the line connecting the operators at positions x_1 and x_5 such that it connects the operators at positions x_6 and x_4 instead. These two kinds of three-particle contributions are related by flipping invariance, and it is possible to deduce one from the other.
To compute the three-particle contribution, it is necessary to evaluate four hexagon form factors h, to use three weight factors W for gluing the hexagons together, and to sum over three mirror bound-state basis elements X^I, whose bound-state indices are going to be denoted by a, b, and c. The three-particle contribution is given by the integral of [9].

Figure 20: The hexagons are glued together using three weight factors W (not shown in the figure). We have chosen to rotate some of the particles of the second and third hexagons by sequences of mirror transformations. The figure in the middle represents the contractions of the flavor indices, and the white circle with four lines denotes a mirror bound-state S-matrix. The last figure schematically shows the sum over the indices denoted by circles and squares. The sum is not a straight trace, but rather is weighted by the three weight factors W. We restrict ourselves to operators that lie in a common plane; in this case the weight factors are diagonal in the mirror state space. The result of the last figure is proportional to the three-particle matrix part. Note that it involves two mirror bound-state S-matrices, and, unlike in the two-particle contribution, the sum represented by the red lines includes elements that are non-diagonal in the su(2|2) preserved by the hexagon.
A naive basis, i.e. without the Z-markers, for the a-th mirror bound state X is given in (D.3). The dressing of the states by Z-markers with exponents t_i appearing below is found using the rules of (D.5). Notice that the values of t^I_1, t^J_2 and t^K_3 depend on the field content of the bound-state basis elements, and on whether one is considering the "+" or the "−" dressing. We have (D.24). Moving all Z-markers to the left and removing them, one gets some non-trivial factors of momenta that will contribute to the integrand. The expression above is equal to the following. A mirror particle-antiparticle pair is always created on a mirror edge shared by two hexagons. The particle is absorbed by one of the hexagons, the antiparticle by the other. The weight factor originates in the symmetry transformation needed to bring both hexagons to the same frame; this transformation acts non-trivially on the mirror particles as one moves them from one hexagon to the other. The expression for the weight factor was given in (D.12); here we give its expression for the case with more cross ratios in (D.27), and the charges of the fundamental excitations under the generators L and R are given in (D.13). In the expression (D.23), the hexagon form factors corresponding to the left and right hexagons have only one excitation each. These hexagons have a trivial dynamical part, and they contribute only with a possible sign that can be computed using a combination of the one-particle hexagon form factors given in (D.7). In addition, they imply that the excitations with rapidities u_1 and u_3 are both composed of transverse excitations only, and that their states are not changed by the scattering with the particle with rapidity u_2. As a matter of choice, we are going to mirror-rotate the two middle hexagon form factors before evaluating them.
One has, for the non-zero cases, the expressions below, with f_{J̄} the number of undotted fermionic indices in the set J̄. Notice that an important property of the dynamical factor of the hexagons, which will be used below, is the following. Collecting the expressions above, we obtain the integrand. The mirror bound-state S-matrix in the "hybrid" convention was derived in [8] by adapting the derivation of the physical bound-state S-matrix of [43]. The S-matrix has a block-diagonal form, and it can be organized into three cases, depending on the values (2, 1, 0, −1, −2) of the following charge (the superscripts 1 and 2 denote the first and the second bound state being scattered). A basis for each case can be found in [8]; the basis elements are functions of two parameters k and l that are related to the number of fields ψ^2 within the bound states. The S-matrices are denoted by H, Y, and Z for the cases I, II, and III respectively. Notice that the sum in F_{abc} has many terms, and each term involves a product of two S-matrix elements that, because of the sum in J corresponding to the u_2 rapidity, can be diagonal or non-diagonal, see Figure 20. Some of the terms do not contribute at one-loop order, and to select the ones that do contribute, one has to analyze the dependence of the S-matrix elements on g^2. Using the results of [8], we have, in a particular basis, (D.35). As an example, let us evaluate one of the contributing terms of F_{abc}, namely the term proportional to α_1 α_2 α_3 (where the α_i are the internal cross ratios, defined as in (3.3)). One can show that this term is obtained using the "+" dressing, and that it only involves diagonal S-matrix elements. We have the result at one-loop order. The mirror contributions were determined above, and it is now possible to compute the contributions from all graphs without making any assumption. It will be shown that this implies a refinement of the prescription for the sum over graphs of [5].
E.1.1 The Case of n O_{20′} Operators
The connected planar one-loop correlation function of n BPS operators of lengths k_i was computed perturbatively in [44]. The result is given below, where the summation over i, j, l, p is to be understood as follows: for every set of four different indices {i, j, l, p}, one has only three different terms in the sum, namely ijlp, iljp and ijpl. In addition, Disk denotes the tree-level correlation function with all the Wick-contraction lines contained inside a disk, with the operators listed in the first argument inserted on the boundary of the disk, respecting their cyclic order, and the operators in the second argument inserted inside the disk. In evaluating the function Disk, one also does not consider disconnected graphs. Notice that the four operators at the boundary of the disk are already connected to each other by interaction lines lying outside of the disk, and this has to be taken into account when classifying the disconnected graphs.
As an example, the graph where none of the operators at the boundary of the disk are contracted with the ones inside the disk is disconnected. Finally, we use the definition of m given in (3.9), with the cross ratios z_{ijlk} being defined as below. Notice that the function D is invariant under both a reflection and a cyclic rotation of its indices, due to the properties of the function m given in (3.11). This is consistent with the fact that there are only three terms in the summation (E.1) for every set of four indices.
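The counting of three inequivalent terms per set of four indices follows from exactly this symmetry: the 24 orderings of four distinct indices fall into 24/8 = 3 orbits under cyclic rotations and reflection. A quick check:

```python
from itertools import permutations

def dihedral_canonical(t):
    """Smallest representative of an index tuple under cyclic rotations
    and reflection -- the symmetries of the function D."""
    n = len(t)
    variants = []
    for s in (t, t[::-1]):               # identity and reflection
        for r in range(n):               # all cyclic rotations
            variants.append(s[r:] + s[:r])
    return min(variants)

orderings = set(permutations("ijlp"))
classes = {dihedral_canonical(t) for t in orderings}

# 24 orderings / 8 dihedral symmetries = 3 inequivalent terms,
# matching the representatives ijlp, iljp and ijpl in the text.
assert len(classes) == 3
```

Since no ordering of four distinct labels is fixed by a non-trivial dihedral element, every orbit has size 8, so the count 24/8 = 3 is exact rather than an average.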
Here, we are going to consider the restriction of the general formula (E.1) to k_i = 2 for all i = 1, . . . , n. Moreover, in order to compare the perturbative result with the integrability result, it is enough to consider the contribution to the sum in (E.1) coming from a definite set of four indices, say {1, 2, 3, 4} for definiteness. For this set of indices, the sum on the right-hand side reduces to three terms. As an example application of the formula above, let us consider the following five-operator case. The next step is to compare the above result with the integrability calculation. The integrability result can be obtained by using the formula for the 2n-gon given in (3.13). The argument of the function m appearing in that formula is given by the cross ratios z_{i,j} (E.9), where i and j label the operators in the polygon, with i ≠ j, i + 1 ≠ j and i ≠ j + 1 modulo n. The cross ratios z_{ijkl} were defined in (E.3), and they are related to the z_{i,j} by z_{i,j} = z_{i,j+1,i+1,j} (E.10). The connected tree-level graphs of n length-two BPS operators consist of polygons with n vertices. The integrability computation, assuming that disconnected tree-level graphs give zero contribution, consists of using the 2n-gon formula (3.13) for the tree-level connected polygons. Note that the internal and the external polygons give the same result, hence one gets a factor of two. The terms proportional to m(z_{1423}) are generated by polygons where (i, i + 1) = (1, 2) and (j, j + 1) = (3, 4) for some i and j, as a consequence of the relation (E.10). Summing over all possible polygons, it is not difficult to see that the integrability result agrees with the perturbative result of (E.8). The argument is similar for the other terms m(z_{ijkl}), and this proves the equality of the two computations.
E.1.2 The Case of n Arbitrary BPS Operators
We have shown above that the integrability result for the n-point function of length-two BPS operators agrees with the perturbative answer. The perturbative result was computed using the general result for correlation functions of BPS operators derived by Drukker and Plefka [44]. In principle, one can use a similar procedure as above to prove the equality for general BPS operators. However, the combinatorics for the general case are more complicated, and one has to take into account non-trivial cancellations between terms with different D(z_{ijkl}). We are going to argue that the integrability result agrees with the perturbative result by using the N = 2 off-shell superfield formulation of N = 4 SYM, as discussed in [45,7]. The N = 4 supermultiplet decomposes into an N = 2 supermultiplet and a hypermultiplet. When computing a correlation function of BPS operators, it is possible to restrict the polarization vectors to a certain subspace, and to treat the external operators as containing only hypermultiplets. The polarization vectors Y (we called these α_i for most of this work) were parametrized as functions of a complex parameter β_i in [7], as follows. Notice that the polarizations above give, for generic values of the parameters β_i, a non-zero inner product between two arbitrary polarization vectors. This property of the polarizations is important for the integrability computation, since when some of the inner products are zero, it is necessary to consider deformed graphs, as for example in the subleading computation of Section 6.2.2. By a direct computation, one has the relations below. The one-loop correlation functions in the N = 2 superfield formalism are computed by inserting N = 2 YM lines in all possible ways in all tree-level graphs, see [7] for details. Take two edges of a tree-level graph, one connecting the operators O_i and O_j and the other connecting the operators O_k and O_l.
Deleting the two propagators and respecting the cyclic order, one inserts the following function for computing the one-loop correction, where T_{ij;kl} is a color factor and d_{ij} = y_{ij}^2/x_{ij}^2.

Figure 21: The two kinds of non-1EI graphs that appear in the computation of a five-point function of three length-two and two length-three BPS operators. The perturbative result was reproduced by an integrability calculation in [8] without considering these graphs; in other words, their contributions must vanish. In this paper, we have computed the n-particle contributions and shown that these graphs indeed give a zero contribution.
Defining the following cross ratios (similar definitions apply to the R-charge cross ratios), it is possible to rewrite (E.13) as below, where the function m(z) is defined in (3.9); it is the same function appearing in the formula for the 2n-gon. It is possible to get rid of the minus sign above by using the last of the m-function identities given in (3.11), and by changing variables z_{ijkl} = 1 − z_{i,k}. In the integrability calculation, one hexagonalizes the tree-level graphs and corrects the tree-level result by adding the mirror-particle contributions. It follows from (E.16) that disconnected tree-level graphs give a perturbatively zero contribution at one-loop order, and the only tree-level graphs that one has to consider are connected graphs that decompose the sphere into a set of polygonal faces. It is hard to prove using integrability that disconnected graphs give a zero contribution at one loop, as the calculation involves loops and spirals; see Appendix F for details. However, using our prescription, it is possible to argue that they vanish, and all the integrability contributions from any planar graph can be calculated using the 2n-gon formula. Since the same function m appears in the 2n-gon integrability formula and in the perturbative building block F_{ij;kl} defined above, it is easy to see that the integrability result agrees with the perturbative result for general n-point functions of BPS operators. In particular, this implies the non-renormalization of the extremal and next-to-extremal correlation functions by integrability, as mentioned in Section 3.2.
E.2 On Non-1EI Graphs
The connected graphs were classified into two types in [5]: the one-edge-irreducible (1EI) graphs and the non-1EI graphs. By definition, 1EI graphs are graphs that do not become disconnected when the set of lines connecting any two operators is cut.

Figure 22: One non-1EI graph contributing to the five-point function of four length-two and one length-four BPS operators. Contrary to the original expectation, this graph gives a non-zero contribution in the integrability calculation. This requires a refinement of the prescription for summing over graphs of [5].

Typically, non-1EI graphs have more zero-length bridges, hence their calculations using integrability are
harder because they involve more multi-particle contributions. In this work, we have computed these integrability contributions, and we are in a position to evaluate all non-1EI planar graphs without making any assumption about them. We start by showing that the non-1EI graphs not considered in the analysis of the five-point function of three length-two and two length-three BPS operators done in [8] do indeed vanish. The graphs are shown in Figure 21. Considering that the five-point function lies in a plane, there are four spacetime cross ratios characterizing it (similarly for the R-charge cross ratios). They are given below. Using the properties of the function m given in (3.11), it is possible to show that the graphs of Figure 21 indeed give a zero one-loop contribution, and the comparison between integrability and the perturbative data of [8] is correct. In [5], the prescription for summing over graphs was to not include non-1EI graphs in the summation, because they were expected to vanish. Using this prescription, the four-point functions of arbitrary BPS operators and some five-point functions were computed using integrability, and the results agreed with perturbation theory. Nevertheless, the general case of n-point functions is more complicated, even at one loop. In Figure 22, we show an example of a non-vanishing one-loop non-1EI graph for the case of four length-two and one length-four BPS operators, as one can see by computing the graph using the 2n-gon expression (3.13) (it gives two times the one-particle contribution of the square). This result contradicts the assumption underlying the prescription, which therefore has to be refined. The correct prescription is to sum over all graphs, including both 1EI and non-1EI graphs. This gives the correct result for arbitrary one-loop planar correlation functions of BPS operators, as argued in the previous subsection using YM insertion lines.
Figure 23: On the left, we depict a disconnected graph drawn on a sphere, including its hexagonalization. The solid lines are non-zero-length bridges, while the dashed lines denote zero-length bridges. The red dots are a combination of magnons that can potentially contribute at one loop. On the right, we draw the corresponding subtraction graph, which is given by two disconnected spheres, each with one marked point.
F Contributions from Disconnected Graphs
In this appendix, we discuss disconnected planar graphs on the sphere and argue that their contribution to the planar four-point function vanishes at one loop (in agreement with perturbation theory). In fact, there is only one disconnected planar four-point graph; it is depicted in Figure 23, including its hexagonalization. Much like the secretly planar graphs discussed in the main text, this graph corresponds to a degenerate Riemann surface, namely a sphere which splits into two connected components. We therefore need to consider Dehn-twist identifications, as well as the subtraction of the degenerate case, in order to correctly evaluate its contribution. As shown in the figure, the graph has a cycle formed by the zero-length bridges, and one has to identify magnon configurations that are related by Dehn twists performed on this cycle. As in the case of the secretly planar graphs discussed in the main text, we conjecture that the net effect of the Dehn twist is to identify configurations that include closed magnon loops with the analogous configurations without any loops. For the tessellation we chose, the only configuration that does not contain a loop (and that "feels" all four operators) is the one depicted in Figure 23 (on the left). The contribution from this configuration is given by polygon(1, 3, 1, 2, 4, 2), which evaluates to zero owing to the pinching rule.
Having evaluated the contribution from the disconnected graph on the sphere, the next task is to evaluate the subtraction, which comes from two spheres, each with two operator insertions and a single marked point (on the right in Figure 23). As discussed in Section 2.4, their contributions are related to the one without marked points by a shift of the gauge group rank. Since the (planar) two-point functions do not receive loop corrections, this immediately shows that the contribution from the subtraction is zero for our case.
Therefore, in summary, we have (0 − 0) = 0, which shows that the disconnected graphs do not contribute at one loop, as claimed at the beginning of this section.
Fast and reliable pre-approach for scanning probe microscopes based on tip-sample capacitance
We present a fast and reliable pre-approach for scanning probe microscopes based on the tip-sample capacitance. The absolute tip-sample capacitance shows a generic behavior as a function of the distance, even though we measured it on several completely different setups. Insight into this behavior is gained via an analytical and computational analysis, from which two additional advantages arise: the capacitance measurement can be applied for observing, analyzing, and fine-tuning the approach motor, as well as for determining the (effective) tip radius. The latter provides important information about the sharpness of the measured tip and can be used not only to characterize new (freshly etched) tips but also to determine the degradation after a tip-sample contact/crash.
Introduction
Although Scanning Probe Microscopes (SPMs) have clearly demonstrated their power and are used in many different fields, their usability is still an issue. For example, compared to an electron-beam technique that can quickly deliver an image of the surface, the user of an SPM first has to bring the tip into close vicinity of the sample (pre-approach) while avoiding a resolution-destroying tip-sample contact (tip crash). This requires a careful approach system, which can last up to ∼100 min depending on the microscope, especially if the microscope does not provide optical access. Ideally, one would like to have a fast, robust, and general solution for the approach metrology that can be used in any type of SPM, independently of the design. In this paper we demonstrate a straightforward solution for all SPMs that work with a (semi)conductive tip and sample: the tip-sample distance can be accurately measured via the tip-sample capacitance, and this can be used for a quick and robust pre-approach. We also demonstrate that this technique can be applied in tuning-fork-based Atomic Force Microscopes (AFMs). Please note that a special class of SPM, the Scanning Capacitance Microscope (SCM), even uses the capacitance variation for imaging and/or spectroscopy [26-29].
For Scanning Tunneling Microscopes (STMs) with optical access, the total approach duration is often decreased to acceptable times by using the distance between the tip and its reflection in the sample during a manual pre-approach. In this way the tip-sample distance can be safely decreased to 60 μm before the user switches to any type of automatic approach. However, a fast and reliable manual pre-approach is not always possible, as design aspects of particular SPMs prevent the implementation of optical access (and even cameras). Typical examples are low-temperature STMs, where a closed cryostat, or at least heat shields, are required [30-32]. A solution for these microscopes is the implementation of absolute position readouts, which is often realized by measuring the capacitance between two cylinders that move with respect to each other. However, the position of the tip with respect to the sample still remains unknown, especially after a sample or tip exchange. As a result, the (first) approach with a new tip and/or sample usually takes a long time, as one uses the automatic approach right from the beginning to reliably prevent a tip-sample contact.
Finally, there are microscopes in which neither optical access nor a capacitive (or any other) readout system can be implemented [23]. For such systems, a pre-approach based on the tip-sample capacitance, as described in this paper, decreases the total approach time by about a factor of ten.
Faced with the problem that the exact surface position is unknown to within millimeters after cleaving the sample in a cryogenic dipstick setup, Schlegel et al. [33] found an elegant solution for their pre-approach by measuring the second derivative of the tip-sample capacitance during the approach. Their solution circumvents the determination of the absolute capacitance, which is far from trivial due to its extremely small value.
In this paper we take the next step and demonstrate that the tip-sample distance can be accurately measured by determining the absolute tip-sample capacitance. This not only enables a quick and robust pre-approach, but also delivers a tool for in situ tip-shape and sharpness characterization, as well as for measuring and fine-tuning the performance of the coarse-approach motor. Finally, we also demonstrate that this technique can be applied in tuning-fork-based Atomic Force Microscopes (AFMs).
We note here that our results partially combine well-established knowledge from different fields: electronics, nanoscale and tip-sample capacitance research, electronic tip-shape modeling, scanning capacitance microscopy, and scanning tunneling microscopy. To comprehensively provide the necessary background information, we review the most important aspects, thereby giving credit to the different fields.
In the first section of the paper we present an overview of how to accurately measure absolute capacitances in the femtofarad (fF) and attofarad (aF) regime. We show that there is no need for special electronics. Moreover, it will become clear that, by default, all STMs are optimized for tip-sample capacitance measurements. This insight can already be deduced from Ref. [34] from 2006, in which the authors achieved aF resolution (although not on an absolute scale).
In the second part, we describe measurements on various STMs and one AFM, ranging from home-built to commercially available systems. To demonstrate the accuracy of this technique, we use a precise automated capacitance bridge. It is remarkable that the same bridge had already been used by Kurokawa et al. [35] in 1998 to study the influence of the tip shape on the tip-sample capacitance. However, we also show that less expensive solutions work as well, depending on the specific information one would like to extract (e.g. only the utilization as a pre-approach).
Two automatic approach routines are commonly used: (1) after each coarse motor step, the feedback checks whether the tip-sample distance is within tunneling range; if this is not the case, the routine is repeated; (2) with a fully working feedback, the tip-sample distance is reduced continuously until a tunneling current is detected. Please note that the second method is significantly faster, but often leads to an (unrecognized) tip-sample contact when using analog feedback controllers. The reason for this is the integrator in the feedback. This integrator, usually realized as a capacitor, is fully charged to the power supply voltage (here assumed to be positive) during this process. As it integrates the error signal, a reduction of this charge requires a negative error signal, which is delivered only if the tip is closer to the sample than the requested tunneling-current set point. This means that, although the tip is already in tunneling conditions, the capacitor is still between zero and full positive voltage, leading to a further approach. Often this electronic circuit is not fast enough to prevent a tip-sample contact.
We will show that all measurements yield a generic curve when one plots the capacitance versus the tip-sample distance: it consists of a linear part for large distances and a steep increase for small distances. Similar observations have been obtained before [29,34-37]. In addition, however, we show that the absolute capacitance values are of the same order of magnitude (hundreds of fF), although measured with different tips and even on completely different microscopes! In the last part, we elaborate on the generic aspect of the tip-sample capacitance-versus-distance curve to extract detailed information on the tip geometry. As the tip-sample capacitance determines the resolution in scanning capacitance microscopy, one can find experiments [29,34,35,37,38], analytic descriptions [26,34-39], and finite element models [28,40] in the literature, dating back even to 1988 [26]. The growing complexity of the analytical descriptions originates from the desire to explain all measured curves with a general equation. However, the tip geometry is not known and has to be assumed. Only Kurokawa et al. [35] measured their tip shapes experimentally with electron microscopy and combined this information with their model. Building on this earlier work, we performed finite element as well as analytical calculations with the practical aim of disentangling the parameters of the geometric tip shape from the measured curves. We show that it is possible to determine the tip radius and sharpness in situ in the microscope, which provides an ideal tool for the user to judge the quality of the tip, e.g. after an undesired tip crash. The comparison of our finite element analysis results shows good agreement with the ball model [26] and its later refinement with a dihedral approximation [39]. However, it also becomes clear that the most simple model, the ball model of Kleinknecht et al. [26], fits the data best and is therefore, in practice, the most effective one to use.
Subfemtofarad capacitance measurement principles
Using the tip-sample capacitance for the pre-approach requires the capability to measure capacitances with a resolution smaller than one femtofarad. To demonstrate that the capacitance between the tip and the sample delivers an accurate, absolute measure for the tip-sample distance, we measured even with aF resolution. This has been achieved earlier by Fumagalli et al. [34] , however, only on a relative scale.
Measuring capacitances within the femtofarad range is not difficult, provided it is performed carefully. Various commercial electronics are available that are suitable for measuring in this capacitance range; usually, higher-end electronics allow more accurate and absolute measurements. As most SPMs are not designed for high-frequency applications, we limit ourselves to frequencies below 10 kHz.
It is crucial that the electronic connections leading to the capacitor are separately shielded, as one has to prevent the measurement of so-called stray capacitances. For example, two conductors that see each other have a stray capacitance, which adds an extra capacitance to the capacitance of interest. Note that two signal wires close to each other easily have capacitances of hundreds to thousands of femtofarads per centimeter [41].
The above explains why it is usually impossible to determine the tip-sample capacitance with a hand-held multimeter: due to stray capacitances, one measures values larger than a picofarad, although one expects (and we will show) that the tip-sample capacitances are in the femtofarad regime. The additional capacitance comes from the signal that goes via the shieldings of the conductors, see Fig. 1a. The proper and ideal solution is to apply an alternating-current (AC) signal to one side of the capacitance and measure the capacitive current with an amplifier that has a low input impedance on the other side of the circuit. A current-to-voltage (IV) converter is the most suited amplifier for this purpose. Please note that a dedicated IV-preamplifier (PreAmp) is inherently installed in every STM. This naturally makes an STM an ideal tool for measuring the tip-sample capacitance. The low input impedance of the PreAmp ensures that the potential difference between the input of the amplifier and the shielding is minimal, such that parasitic currents are minimized. The advantage of the PreAmp has also been noticed by Fumagalli et al. [34].
Figure 1: Working principles of capacitance measurements. a) Schematic of a capacitor with stray capacitances. The low impedance of the current measurement causes the stray capacitances to be negligible. Ideally, the shielding should be connected to ground at one single point in the setup, preferably shortly after the current measurement. b) The resolution and accuracy can be enhanced by using a reference capacitor and a lock-in. Matching this reference with the unknown capacitor results in a vanishing current, which describes the principle of a capacitance bridge.
When the signal from the IV-converter is compared to the reference voltage (V_ref) using quadrature detection (lock-in), the out-of-phase component (Y) gives a measure for the capacitance:

C = Y / (2π f G V_ref),    (1)

where G is the gain of the IV-converter and f the frequency of the reference signal, assuming that the frequency at which the capacitance measurement is performed is well below the bandwidth of the IV-converter. This concept for measuring the tip-sample capacitance has been applied by Lee et al. [42], Pingree et al. [43], and Fumagalli et al. [34]. The reproducibility of the measurement described above depends on (possible) changes in the setup, like the (dis)appearance of ground loops. The application of a reference capacitor not only offers a solution for this inaccuracy, it even enables absolute capacitance measurements. The solution involves incorporating the reference capacitor into the electronic measurement circuit in such a way that physical replugging of the cables is not necessary, although the reference capacitor can be switched on and off. An elegant way is to apply the inverted reference voltage over the reference capacitor before its signal is added to the signal of interest right in front of the PreAmp, see Fig. 1b. In this way, the reference capacitance is subtracted from the capacitance to be measured. If the reference capacitance exactly matches the capacitance of interest, the output is zero. Even if it does not match exactly, it is possible to determine the capacitance of interest from the measured (nonzero) signal by precise knowledge of the reference capacitor. Choosing a reference capacitor of the same order of magnitude as the capacitor of interest makes the output signal smaller and the end result more accurate.
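The relation between the out-of-phase component Y and the capacitance, C = Y / (2π f G V_ref), can be checked numerically. A minimal sketch (the function name is ours, not an instrument API), assuming the quadrature output Y = G · 2πf · C · V_ref described in the text:

```python
import math

def capacitance_from_lockin(Y, G, f, V_ref):
    """Capacitance from the out-of-phase lock-in component Y.

    The capacitive current has amplitude I = 2*pi*f*C*V_ref and is 90 degrees
    out of phase with the drive, so after the IV-converter (gain G in V/A)
    the quadrature output is Y = G * 2*pi*f * C * V_ref.
    """
    return Y / (G * 2.0 * math.pi * f * V_ref)

# Example: a 100 fF capacitor driven with V_ref = 1 V at f = 10 kHz,
# measured through a 1e7 V/A preamplifier.
G, f, V_ref = 1e7, 10e3, 1.0
C_true = 100e-15
Y = G * 2.0 * math.pi * f * C_true * V_ref  # simulated quadrature voltage
C_meas = capacitance_from_lockin(Y, G, f, V_ref)
assert abs(C_meas - C_true) < 1e-20
```

With these numbers Y is about 63 mV, comfortably measurable; a 10 aF capacitance under the same drive yields only a few microvolts after this gain, which illustrates why careful shielding against stray capacitances is essential.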
The previous paragraph describes the basic principles of a high-accuracy, low-frequency capacitance bridge. Most of the measurements in this paper were performed with an Andeen-Hagerling capacitance bridge (AH2550) [44], which automatically switches reference capacitances until the reference value is close to the capacitance of interest. The calibrated reference capacitors are kept at a constant temperature inside an internal oven. This guarantees that the measured capacitance values are of high accuracy and reproducibility. Kurokawa et al. [35] used a similar bridge to accurately characterize the capacitance of their tips, whose shape they had previously measured with an electron microscope.
However, as dedicated capacitance bridges can be rather expensive, we also present results measured with different instruments. The General Radio capacitance bridge [45] requires time-consuming, manual switching of the reference capacitors. However, if one only wants to use this bridge for a pre-approach, it is not necessary to zero the signal for each step of the coarse-approach motor. Instead, the reference capacitance is set to a certain desired threshold value. If the tip-sample capacitance passes the reference (i.e. the Y-signal on the lock-in passes zero, or the phase rotates by 180°), then one knows that the tip has entered the range where the automatic approach procedure should be turned on.
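For this threshold-based use of a fixed reference capacitor, one only needs to detect when the lock-in Y-signal crosses zero (equivalently, when the phase flips by 180°). A minimal sketch, assuming the Y-signal has been sampled into a list (the helper name is hypothetical):

```python
def passed_reference(y_samples):
    """Return True once consecutive Y-samples change sign, i.e. the
    tip-sample capacitance has passed the fixed reference capacitance."""
    return any(a * b < 0.0 for a, b in zip(y_samples, y_samples[1:]))

# Y shrinks toward zero and flips sign as the tip approaches the threshold:
assert passed_reference([0.8, 0.4, 0.1, -0.05])
assert not passed_reference([0.8, 0.4, 0.1])
```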
Finally, it is easily possible to determine the capacitance directly with dedicated STM electronics, as will be familiar to researchers who use STMs in spectroscopy mode. If, e.g., the tip is connected to ground via the PreAmp, one can apply an AC signal (e.g. 1 V at 10 kHz) to the sample and measure the current through the tip. After the current is converted to a voltage, a lock-in can be used to determine the out-of-phase component of the signal, from which the capacitance can be calculated using Eq. (1).
However, at all tip-sample distances that are larger than the corresponding tunneling regime, the signal is dominated by the current through the capacitance. Therefore, measuring only the amplitude of the signal is enough to determine the capacitance (and no quadrature measurement, like Lock-In, is needed). For example, just by applying the control electronics described in [46,47] , it is possible to measure ∼ 10 aF when applying an AC signal of 1 V and 10 kHz to the sample. This concept is applied by Schlegel et al. [33] , although they did not work out the absolute capacitances and focused only on the second derivative.
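The quoted sensitivity is easy to make plausible with a back-of-the-envelope estimate: at the drive values mentioned in the text (1 V, 10 kHz), a 10 aF capacitance draws a current amplitude that a standard STM current preamplifier can resolve. A sketch (the function name is ours):

```python
import math

def capacitive_current(C, V, f):
    """Amplitude of the AC current through capacitance C driven by an AC
    voltage of amplitude V at frequency f: I = 2*pi*f*C*V."""
    return 2.0 * math.pi * f * C * V

# 10 aF driven with 1 V at 10 kHz:
I = capacitive_current(10e-18, 1.0, 10e3)
# roughly 0.6 pA -- within the range of a standard STM current preamplifier
assert 0.5e-12 < I < 0.8e-12
```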
Results
To demonstrate the generality of our approach, we investigate various SPMs. We start with an STM that is equipped with an absolute position readout, such that one can directly measure the tip-sample capacitance as a function of the distance. After that, we repeat our measurements on systems without position readout and show that the tip-sample capacitance additionally provides an excellent way of determining the coarse-approach motor dynamics and reliability. Furthermore, we demonstrate the advantage of a fast and safe pre-approach on an STM with a less reliable approach motor and show that our method also works for a noncontact AFM [48] equipped with a tuning fork.
We start with the JPE-STM: a custom Magnetic Resonance Force Microscopy (MRFM) system that consists of a commercially available stage from JPE [49] with a home-built absolute capacitive position readout. For our purpose, we equipped this stage with an STM tip holder and a graphite (highly ordered pyrolytic graphite) sample. Applying the AH2550 capacitance bridge, it is possible to measure the absolute position with a precision below 100 nm. Fig. 2 shows the capacitance between the tip (including tip holder) and the sample as a function of the tip-sample distance. The curve in Fig. 2 can be used as a calibration of the tip-sample distance via the capacitance. This calibration holds even after a sample exchange, provided that the new sample has the same geometry. After a tip change, however, the calibration is usually lost. The influence of the tip on the capacitance-distance curve is explained in detail in Section 4. As the luxury of an absolute position readout is not present on most SPMs, a calibration like the one shown in Fig. 2 seems to be impossible. This is not fully true, as long as one is not interested in the absolute tip-sample distance in standard units. To demonstrate this, we performed a similar measurement on a commercial JT-STM [50], the result of which is shown in Fig. 3. Obviously, one still recognizes a relation between capacitance and distance; however, the distance is now defined in units of coarse-approach motor steps. Please note that, although the retract curve falls exactly on the approach curve, we applied 420 retract steps but 497 approach steps. Coarse-motor step sizes are usually not very well defined; therefore, the step size can only be defined as a statistical average. The step size of slip-stick motors can be directionally dependent due to a constant force pushing the slider towards one or the other direction, such as gravity or a spring.
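Once a calibration curve like the one in Fig. 2 has been recorded, an absolute distance follows from a measured capacitance by inverting the curve. A sketch with invented numbers (the calibration values below are illustrative, not the measured data):

```python
import numpy as np

# Hypothetical calibration: capacitance (pF) at known absolute distances (um),
# monotonically decreasing with distance, as in the measured curves.
distance_um    = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0, 200.0])
capacitance_pF = np.array([0.60, 0.45, 0.35, 0.32, 0.28, 0.26, 0.24])

def distance_from_capacitance(c_pF):
    """Invert the calibration curve by interpolation.
    np.interp requires increasing x, so both arrays are reversed."""
    return np.interp(c_pF, capacitance_pF[::-1], distance_um[::-1])

assert abs(distance_from_capacitance(0.45) - 1.0) < 1e-9
```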
To account for such an asymmetry, we rescaled the traces for retracting and approaching in Fig. 3 accordingly: it is striking that the curves fall on top of each other quite accurately. This fact, together with the smoothness of the curve (and its qualitatively similar shape to Fig. 2), indicates a reliable motor with linear behavior: the step size is constant over the whole range, although it differs between the approach and retract movements. We determined the step size for retracting and approaching via the calibrated piezo tube when the system was in the tunneling regime. Assuming that these values are representative for the whole measured range, the total distance that the motor traveled was 10 μm.
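The directional rescaling amounts to stretching the approach axis so that the 497 approach steps span the same normalized range as the 420 retract steps; combined with the ~10 μm total travel, this also yields the average step sizes. A sketch (helper names are ours):

```python
def rescale_approach(approach_steps, n_approach, n_retract):
    """Map approach step counts onto the retract step axis so that both
    traces span the same normalized range."""
    return [s * n_retract / n_approach for s in approach_steps]

# Numbers from the JT-STM measurement: 497 approach steps cover the same
# ~10 um of travel as 420 retract steps.
n_app, n_ret, travel_um = 497, 420, 10.0
assert abs(rescale_approach([n_app], n_app, n_ret)[0] - n_ret) < 1e-9

step_retract_nm = 1e3 * travel_um / n_ret   # average retract step, ~23.8 nm
step_approach_nm = 1e3 * travel_um / n_app  # average approach step, ~20.1 nm
assert 23 < step_retract_nm < 25 and 19 < step_approach_nm < 21
```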
Unfortunately, the average step size of most coarse-approach motors is not only directionally dependent but varies, in addition, with the precise position of the motor. This is due to imperfections of sliders and surfaces, wear, heavy use at certain positions of the travel range, and other position-dependent effects such as springs. This becomes clear from an experiment we performed on a heavily used Unisoku-STM [51], the results of which are shown in Fig. 4.
Figure 3: Tip-sample capacitance measured on a JT-STM: without an absolute height readout, the distance is measured in units of coarse-approach motor steps. As the retract curve overlaps exactly with the approach curve (after rescaling), this motor runs reliably over the complete travel range, although the average step size is, due to anisotropic forces, different for the two directions. We retracted 420 steps, while we needed 497 steps for the approach. Zero corresponds to a tip-sample distance of 10 nm, which we measured with the calibrated scan piezo. Using this piezo, we also calibrated the step sizes of the motor in the tunneling regime: extrapolating this, 420 retract steps correspond to approximately 10 μm. We used a commercially available PtIr tip and a 120 nm thick Au film on Si as a sample. The temperature during the experiment was 4.6 K.
Figure 4: Tip-sample capacitance measured on a Unisoku-STM: in both runs, we retracted 5490 motor steps and needed 8300 steps to get back. The starting point corresponds to a tip-sample distance of about 10 nm. Note that this motor runs reproducibly, as both runs fall almost perfectly on top of each other. However, it is obvious that the motor runs at different speeds at different positions of its travel range. A directional asymmetry is also present. We used a commercially available PtIr tip and a Cu(100) sample. These measurements demonstrate that our method can be applied to study motor performance and dynamics in general. The temperature during the experiment was 1.5 K.
To cancel the asymmetry caused by gravity, we applied a directional rescaling analogous to that in Fig. 3. Here, however, the retract and approach curves do not fall on top of each other. Strikingly, two consecutive experiments (runs) do show reproducibility, indicating that the step size does not change significantly in time for a given position of the travel range, although there is a huge variation between different positions. As an example, two regions are clearly visible in the approach direction. Our method not only makes it possible to tune the motor parameters until the motor moves with constant speed, it even demonstrates the capability to study coarse-approach motor dynamics in general.
The most rewarding application of the tip-sample capacitance measurement is probably its implementation for a fast, safe, and reliable pre-approach without optical access. Fig. 5 shows the results for the ReactorSTM [23]. The rather unique coarse-approach mechanism in this STM is realized via a sliding movement of the tip (with tip holder) over two guiding rods on the inside of the scanning piezo tube. Between movements, the tip is magnetically pulled to the guiding rods. Due to this special design, this motor shows nonlinear, and sometimes unpredictable, behavior, which is also reflected in the curves of Fig. 5. The combination of this less reliable motor and the absence of optical access often required long pre-approach times to safely find the tunneling regime.
Figure 5: Tip-sample capacitance measured on the ReactorSTM: the particular design of the approach-motor mechanism makes this motor less reliable compared to the approach motors of other STMs. The variation in the motor performance can be seen from three different retract curves. Still, it is possible to significantly shorten the total approach time, as indicated by the crosses, each of which represents an individual approach: based on earlier measured retract curves (runs 1-3), the user chooses a safe threshold capacitance. The approach motor is operated continuously, without extra interrupts, until the chosen threshold value is reached; this lasts only a few seconds. One then switches to the automatic safe (but slower) approach mode and counts the number of steps needed to reach the tunneling regime, which takes only a few tens of seconds. The crosses indicate the chosen threshold capacitance versus the number of steps needed to reach the tunneling regime. To increase the accuracy/statistics, one should measure the capacitance for a complete retract curve once in a while. The data were obtained for different PtIr tips on various (metallic) samples.
After a tip exchange, one first measures one (or several) retract curves, which can also be done at ambient conditions if that is more practical. From these curves, one chooses a threshold capacitance that one considers to be safe and yet close enough to the sample for a quick pre-approach. In the next step, one repeatedly runs the approach motor until the threshold value is reached. This happens within a few seconds. Then one switches to the automatic safe (but slower) approach mode and counts the number of steps that are needed to reach the tunneling regime. This procedure lasts only a few tens of seconds. The crosses in Fig. 5 indicate the chosen threshold capacitance versus the number of steps needed to reach the tunneling regime. Applying this way of approaching, the system could regularly be brought into the tunneling regime within only 10 minutes, while it usually took 60 minutes or more before. Experience shows that this method is insensitive to sample exchange as long as the samples are of comparable geometry. We expect that a complete approach (including the pre-approach) can be realized in less than a minute if one programs a dedicated routine for the control electronics used, provided that the motor can move fast enough.
Figure 6: Tip-sample capacitance measured on the tuning-fork-based AFM. The tuning fork has an electron-beam-induced deposited tip on one side. This result shows that our quick pre-approach method, which is based on capacitive measurements, is not only applicable to standard STMs.
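The two-phase procedure described above can be sketched as a control loop. All hardware callbacks below (capacitance readout, motor stepper, tunneling-current detector) are hypothetical stand-ins for the actual control electronics:

```python
def pre_approach(read_capacitance_pF, step_motor, detect_tunneling_current,
                 threshold_pF, max_fast_steps=100_000, max_safe_steps=10_000):
    """Two-phase pre-approach (sketch with hypothetical hardware callbacks).

    Phase 1: run the coarse motor continuously until the tip-sample
    capacitance passes the user-chosen safe threshold (a few seconds).
    Phase 2: switch to the slow, safe approach that checks for a tunneling
    current after every step. Returns the number of safe steps needed."""
    for _ in range(max_fast_steps):
        if read_capacitance_pF() >= threshold_pF:
            break
        step_motor()
    for n in range(max_safe_steps):
        if detect_tunneling_current():
            return n
        step_motor()
    raise RuntimeError("tunneling regime not reached")

# Toy hardware model for illustration: capacitance rises linearly with the
# step count, and tunneling sets in at step 250.
pos = {"steps": 0}
def read_c():
    return 0.3 + 0.001 * pos["steps"]
def step():
    pos["steps"] += 1
def tunnel():
    return pos["steps"] >= 250

n_safe = pre_approach(read_c, step, tunnel, threshold_pF=0.5)
assert n_safe == 50  # threshold reached at step 200, tunneling 50 safe steps later
```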
In the final example, we show that the capacitive approach is applicable beyond STM. To illustrate this, we performed a similar measurement using a noncontact AFM equipped with a quartz tuning fork (QTF) [52,61]. Using Electron Beam Induced Deposition (EBID [53]), a nano-sized Pt/C tip was grown on the prong of the tuning fork facing the sample. The length of the tip was ∼2.6 μm and its diameter ∼220 nm. The tip was first approached to the surface by measuring the shift in resonance frequency after every coarse-approach step. After the approach, the QTF was retracted in small steps and the capacitance between tip and sample was measured. The results, plotted in Fig. 6, show the same generic curve for the nano-sized tip as observed for the macroscopic STM tips. If one uses a non-conducting tip, one can still use the capacitance between the sample and one electrode of the QTF for the pre-approach.
In the above examples we showed how the tip-sample capacitance provides valuable information about the tip-sample distance. Even when the capacitance cannot be related to absolute length scales, it still provides information on the motor performance. Depending on the reliability of the motor, the capacitances can be converted into distances in units of motor steps. In any case, a reference capacitance can be chosen such that a fast and safe pre-approach can be realized until this value is reached. This method significantly saves time and minimizes the number of tip-crash events. In addition, detailed motor characterization and optimization is possible in this way.
Finite element analysis
The tip-sample capacitance measurements presented above all show a rather similar curve, with a linear behavior for large distances and a steep rise for small distances. Similar curves have been obtained before [29,34-37]. Moreover, the absolute scale of the values is approximately the same, with the capacitance changing by 5-15 fF over the last few tens of micrometers. The AFM is an exception to this, because the EBID-grown tip is very short and the prong of the quartz tuning fork forms a parallel-plate capacitor with the sample surface. Still, the shape of the curve looks similar and suggests a generic behavior, which raises the question: can we also understand the tip-sample capacitance curve as a function of the tip-sample distance?
Figure 7: Simple tip model. a) Schematic of the model (not drawn to scale) of a tip with radius W and length L, connected to a base plate with radius B. The end of the tip is conical with height H and truncated with a ball of radius R. The smallest distance between the tip (the apex) and the relatively large sample is denoted d. In our finite element simulations, the diameter of the tip wire (2W) is fixed to 0.25 mm. To calculate the capacitance in the simulation, we set the tip to a potential of 1 V and the sample to 0 V. Panels b) and c) show the equipotential lines of the simulation for the particular tip geometry at one distance for the JPE-STM: r denotes the radial direction of the geometry and z the vertical direction. The simulation was performed with COMSOL [54].
In order to address this question, we performed a Finite Element Analysis (FEA) [54] calculation and created a simple tip-sample model taking into account the cylindrical symmetry, see Fig. 7a. Note that other FEA models have been discussed before [28,40]; however, none of them included the tip holder. By simulating the electric field, shown in Fig. 7b and c, we can determine the capacitance. Finally, by using a parametric sweep over the distance d, i.e. successively recalculating the model, we generate a capacitance-distance curve. Furthermore, it is possible to determine the contributions of the tip holder (B), tip length (L), tip sharpness (H), tip-wire radius (W), and radius of the apex at the end of the tip (R), as we will describe later in more detail.
To get an estimate for reasonable values of these parameters we can start from a lower boundary, which is simply given by a parallel-plate capacitor: C_par causes the linear behavior of the total capacitance at large tip-sample distances, see Fig. 8. This additional capacitance comes from the tip holder, which, in good approximation, forms a parallel-plate capacitor with the combination of sample and sample holder. Its capacitance can easily be determined from the data far away from the sample, C_par = ε₀A_par/(L_par + d_max), and thus L_par = ε₀A_par/C_par − d_max, where d_max is the maximum tip-sample distance available in the data. C_par is drawn in blue in the graphs of Fig. 8 and the corresponding parameters are provided in Table 1. The remaining deviation at small distances comes from the tip itself and can be described with C_tip. Note that L is the real tip length, whereas L_par is the tip length one would infer if the whole capacitance curve were explained by a single parallel plate at distance L_par + d. In the following we discuss how L as well as the other parameters W, R, H, and B influence the capacitance-distance curve. We will show that it is possible to determine all these parameters such that the resulting fits closely resemble the measured data, see Fig. 8. Surprisingly, two branches of analytic descriptions for tip-sample capacitances can be found in the literature: the first and older ones [26,35] describe C_tip with a sphere, whereas the newer ones consider a cone with a sphere at its end [36,38,39]. In honor of the first description by Kleinknecht et al. [26], we follow this most simplified model to fit and analyze our data. This is fully justified, as we will show in a comparison in Section 5 that the other, more complicated models do not deliver better fits or insight.
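The far-field determination of A_par and L_par amounts to a straight-line fit. As an illustration, the following Python sketch recovers both from synthetic data, assuming the far-field branch follows an ideal parallel-plate law C(d) = ε₀A_par/(L_par + d); the geometric values are made-up placeholders, not the parameters of Table 1:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def fit_parallel_plate(d, C):
    """Recover the plate area A_par and offset L_par from far-field data,
    assuming C(d) = eps0 * A_par / (L_par + d): then 1/C is linear in d
    with slope 1/(eps0 * A_par) and intercept L_par/(eps0 * A_par)."""
    slope, intercept = np.polyfit(d, 1.0 / C, 1)
    A_par = 1.0 / (EPS0 * slope)
    L_par = intercept / slope
    return A_par, L_par

# Synthetic far-field data for an assumed geometry (illustrative values only).
A_true, L_true = 25e-6, 3e-3           # 25 mm^2 effective plate, 3 mm offset
d = np.linspace(10e-6, 100e-6, 50)     # 10 .. 100 um tip-sample distances
C = EPS0 * A_true / (L_true + d)       # ideal parallel-plate capacitance

A_fit, L_fit = fit_parallel_plate(d, C)
```

On such ideal data the fit reproduces the assumed area and offset essentially exactly; on real data, only the large-d portion of the curve (where C_tip is negligible) should be used.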
Describing the very end of the tip as a half sphere, the radius of this sphere, R, determines the capacitance-distance curve at small distances (d < R). In turn, it is possible to derive the radius of the apex from the measured capacitance-distance curves by using Eq. (3) [26,35].
Fig. 9. Effective tip radius, R_eff, versus tip-sample distance. Using our simulations, we varied the tip radius R, see Fig. 7, which determines the radius of the apex at the end of the tip. For small tip-sample distances R_eff converges to a constant value, which represents the real tip radius. This method provides the possibility to determine the end-of-tip radius (e.g. after a tip crash) in situ, in the microscope. For comparison we also plotted the JPE-STM as well as the JT-STM data. Note that the tip in the JT-STM had been crashed before, whereas no tip crash happened in the JPE-STM. This can also be seen from the data of the effective tip radii of the two microscopes.
For truly small distances (d ≪ R), R_eff converges to the real tip radius R, which we can compare with the R in our simulation that fits the measured data. At larger distances C_par contributes significantly to the slope of the capacitance-distance curve and therefore R_eff is greater than R. This can be seen in Fig. 9, in which we applied Eq. (3) to capacitance-distance FEA data calculated for different tip radii R. One sees that for d ≪ R, R_eff indeed converges to the set value of R. For completeness, we also plotted the measured data of the JPE stage (Fig. 2) and the JT-STM (Fig. 3) in Fig. 9. Although clearly different, both data sets fit the theory. The reason for the difference between these two data sets could be tip crashes as well as the different tip fabrication methods (see above).
It becomes clear from this comparison that the apex radius can easily be determined inside the setup, which provides a powerful tool to judge, e.g., whether the tip needs to be replaced after a tip crash. If one wants to model a measured tip, one should use the lowest measured value of R_eff.
Note that it is possible to determine the tip radius (and its sharpness) without knowledge of the cone height! This finding stands in contrast to previous conclusions [55].
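The in-situ radius determination can be sketched numerically. The paper's Eq. (3) is not reproduced in the text above; as a stand-in we assume the standard sphere-plane small-distance slope ∂C/∂d = −2πε₀R/d (a Kleinknecht-type approximation), which suggests defining R_eff(d) = −(d/2πε₀) ∂C/∂d. All numerical values below are illustrative:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def effective_radius(d, C):
    """Effective tip radius from a capacitance-distance curve, assuming
    the small-distance sphere-plane slope dC/dd = -2*pi*eps0*R/d
    (a Kleinknecht-type approximation; the paper's Eq. (3) may differ).
    R_eff converges to the true apex radius once d << R."""
    dCdd = np.gradient(C, d)  # numerical slope, handles nonuniform spacing
    return -d * dCdd / (2.0 * np.pi * EPS0)

# Synthetic curve: sphere term plus a weak, slowly varying background
# from the tip holder (all values are assumptions for illustration).
R_true = 100e-9                                   # assumed 100 nm apex
d = np.logspace(-9, -6, 200)                      # 1 nm .. 1 um
C_ball = -2 * np.pi * EPS0 * R_true * np.log(d)   # integrates the assumed slope
C = C_ball + 1e-16 * (1 - d / d[-1])              # 0.1 fF linear background

R_eff = effective_radius(d, C)
# R_eff approaches R_true at the smallest distances and grows above it
# once the background slope contributes, mirroring Fig. 9.
```

This mirrors the behavior described for Fig. 9: a plateau at the real radius for small d, and an apparent growth of R_eff once the holder background dominates the slope.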
Taking into account the above insight, we fitted the remaining geometric parameters of the tips of the JPE and the JT measurements. Table 1 shows the results. From these fits we learned about their dependencies: in the 1-100 μm tip-sample distance regime, L and B contribute in the same manner: they act mainly as an offset to the capacitance curve. As the total tip length can be determined rather accurately and is usually even similar for different microscopes, the main difference often comes from C_par, which is due to the specific tip-holder design (described by B). For the fit in Fig. 8 we set L to a fixed, realistic value of 3 mm and varied B as a fitting parameter. The second fitting parameter is the cone height H, which describes the macroscopic sharpness of the tip end. In the large-distance regime (1-100 μm), this sharpness mainly determines the general slope of the curve, such that this parameter can be determined independently. The last missing parameter is W, which is set by the tip wire used; 126 μm in our case.
In conclusion, to obtain the fits presented in Fig. 8, we first determined the real tip radius R (see Fig. 9) and then needed only an optimization of the geometric parameters B and H, which determine the offset and slope, respectively, in the large-distance range.
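This two-stage procedure (radius first from the near field, holder geometry from the far field) can be illustrated with a least-squares sketch. The model terms below are simplified stand-ins, not the paper's exact fit functions: a disc of radius B at distance L + d for the holder, and the approximate sphere-plane form 2πε₀R ln(1 + R/d) for the apex; here R is refitted instead of the cone height H, and all parameter values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

# Assumed stand-in model terms (not the paper's exact expressions):
def c_holder(d, B, L=3e-3):
    """Tip holder modeled as a disc of radius B at distance L + d."""
    return EPS0 * np.pi * B**2 / (L + d)

def c_apex(d, R):
    """Approximate sphere-plane capacitance of an apex of radius R."""
    return 2 * np.pi * EPS0 * R * np.log(1 + R / d)

d = np.logspace(-9, -4, 300)                           # 1 nm .. 100 um
C_meas = c_holder(d, B=2e-3) + c_apex(d, R=100e-9)     # synthetic "data"

# Step 1: fit the holder term on the far-field part of the curve,
# where the apex contribution is negligible.
far = d > 1e-5
(B_fit,), _ = curve_fit(c_holder, d[far], C_meas[far], p0=[1e-3])

# Step 2: subtract the holder background and fit the apex radius
# on the near-field part only.
near = d < 1e-7
C_tip = C_meas[near] - c_holder(d[near], B_fit)
(R_fit,), _ = curve_fit(c_apex, d[near], C_tip, p0=[50e-9])
```

Fitting sequentially, as in the paper, avoids the severe scale mismatch between the femtofarad holder term and the attofarad apex term that makes a simultaneous fit poorly conditioned.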
As a remark, please note that the values in Table 1 are not exactly representative of the geometry of the real tips and tip holders, especially as the geometry of real tip holders can be complicated. However, it is striking that this simple model generates two different curves that follow the capacitance-distance curves of two completely different measured systems remarkably well, see Fig. 9.
Despite this fact, a careful comparison between the simulated curve (red) and the measured data (black) in Fig. 8 reveals that the fitted capacitances are too low at small distances. Speculating on the reason, we suspect that the extra capacitance in the experimental data stems from roughness (imperfections) of the surface of the sphere, such as protrusions, that are not included in the model. The additional charge buildup at such protrusions is expected to be common for cut PtIr tips, due to the tendency of this material to form micro-tips when cut. How the capacitance is influenced by surface roughness can be calculated [56-58]. The reverse, however, namely how to calculate the roughness of the STM tip from the additional capacitance in the capacitance-distance curve, remains an interesting open question that is beyond the scope of this paper.
Analytical models
For the purpose of scanning capacitance microscopy, various analytical formulas have been developed that describe the (slope of the) capacitance as a function of tip-sample distance [26,34-39]. One of the earliest contributions [26,35] states that the variation of the capacitance, ∂C/∂d, comes mainly from the ball-shaped apex (with radius R) in the regime d ≪ R. For a ball approaching an infinite plane, this variation is given by Eq. (4) [26]. Realizing that a real tip consists of a combination of a ball with a cone, a refined formula, Eq. (5), was derived a decade later using a dihedral approximation [39]. The result is less straightforward, since it also involves the cone of the tip, described by its angle θ, i.e. tan θ = W/H, see Fig. 7. Please note that only the first term in the square brackets comes from the ball-shaped end of the tip. Moreover, following the derivation in Ref. [39], one realizes that Eq. (4) was used as a boundary condition for deriving the first term in Eq. (5). Since this term dominates at small distances, it is not at all surprising that Eq. (5) reduces to Eq. (4) in this regime (d ≪ R). Noticing that the tip radius influences the total capacitance only at small distances, at which the radius can be determined experimentally, the added value of Eq. (5) should be the description of the total capacitance at rather large distances (d ≥ R). Equipped with the complete FEA tip model, in which we can easily change the tip radii, we tested both analytical descriptions against the FEA model. Fig. 10 shows the result, in which the solid colored lines are for different radii obtained from the FEA calculations. Our results nicely match those published by Lányi et al. [55], who calculated the variation of the tip-sample capacitance for a tip with R = 100 nm. To evaluate the analytic theories, we fitted (dashed lines) our results with the ball model (Eq. (4)) in Fig. 10a, and with the dihedral-approximation model (Eq. (5)) in Fig. 10b. Comparing the fits, one realizes three important points: (1) As expected, there is little difference at small distances (compare the offset values at the y-axis); (2) the ball model describes straight lines, whereas the dihedral-approximation model curves "down" to lower values at distances d ∼ 1-10 R; (3) in contradiction, the FEA results curve "up" at large distances.
From this we can conclude that the dihedral-approximation model is not suited to describe the large-distance behavior [59]. The reason is that the cone ends at a certain height (see Fig. 7a) and that, from this point on, the tip should be described as a straight wire that ends in a capacitor plate (the shield). This means that fitting the cone angle directly with Eq. (5) is unreliable. As the ball radius is derived equally well from Eq. (4), there is no advantage in continuing to use Eq. (5). We therefore used Eq. (4) to determine the radius of the ball-shaped apex, see Section 4. Currently, if one needs to determine the cone angle, one should still create a realistic FEA model.
Conclusion
We showed that it is possible to determine the absolute distance between a tip and a sample via the capacitance between them. Although the capacitances are on the order of tenths to hundreds of femtofarads, the tip-sample separation can be measured reliably at both large-scale and nanometer distances. Measuring such low capacitances with high accuracy seems a difficult task. However, we showed that a low-input-impedance current-to-voltage converter, in combination with proper grounding and shielding, makes this task rather easy, as stray capacitances are eliminated in this way. Moreover, using STM control electronics it is possible to measure ∼10 aF (and even below). We measured the tip-sample capacitance versus distance on several different setups with different tips and samples and found a generic curve with even similar absolute values. Our analysis provides deeper insight and delivers additional benefit for the user, as it is possible to extract the tip shape and radius from these curves. We find, in contrast to earlier conclusions, that it is possible to determine the tip radius without knowledge of the height of the conical part of the tip. This is a powerful tool to assess the actual quality of a tip, whether it is freshly etched or has experienced a tip crash. We compared our FEA results with analytic theories and found that the simplest model, the ball-model approximation [26], delivers the best fit and should therefore be used in most cases. Probably the most important impact, however, is the implementation of a fast and reliable pre-approach for any type of SPM, especially for those without optical access, thereby significantly reducing the total approach time before imaging. Furthermore, it is possible to use the tip-sample capacitance as a characterization tool for the motor performance of the SPM: motor fine tuning, deterioration, and problem analysis can be performed in this way.
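The capacitive pre-approach amounts to stepping the coarse motor until the measured capacitance crosses a threshold taken from the calibrated C(d) curve. A hypothetical control loop, with all device functions, geometry, and numbers invented for illustration (a real implementation would use the instrument's own motor and capacitance-readout API):

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def pre_approach(read_capacitance, step_motor, c_stop, max_steps=1_000_000):
    """Step the coarse motor toward the sample until the measured
    tip-sample capacitance reaches c_stop, the value that the
    calibrated C(d) curve assigns to a safe hand-over distance."""
    for step in range(max_steps):
        if read_capacitance() >= c_stop:
            return step          # hand over to the fine (tunneling) approach
        step_motor()
    raise RuntimeError("capacitance threshold never reached")

# --- simulated stage standing in for real hardware (values invented) ---
state = {"d": 1e-3}                        # start 1 mm away from the sample
A_PAR, L_PAR = 25e-6, 3e-3                 # parallel-plate stand-in geometry

def read_capacitance():
    return EPS0 * A_PAR / (L_PAR + state["d"])

def step_motor(step_size=100e-9):          # 100 nm coarse-motor steps
    state["d"] -= step_size

c_stop = EPS0 * A_PAR / (L_PAR + 10e-6)    # threshold for d = 10 um
steps_taken = pre_approach(read_capacitance, step_motor, c_stop)
```

Because the capacitance is read before every step, the loop stops as soon as the calibrated threshold is crossed, without ever requiring optical access to the junction.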
Finally, the determination of the absolute tip-sample capacitance (including the tip holder) is crucial for a proper system characterization when working in the GHz regime [31]. The capacitance determines, in addition, the energy broadening of an STM when reaching the quantum limit at ultra-low temperatures [60].
Spiritual Jihad among U.S. Muslims: Preliminary Measurement and Associations with Well-Being and Growth
Religious and spiritual (r/s) struggles entail tension and conflict regarding religious and spiritual aspects of life. R/s struggles relate to distress, but may also relate to growth. Growth from struggles is prominent in Islamic spirituality and is sometimes referred to as spiritual jihad. This work’s main hypothesis was that in the context of moral struggles, incorporating a spiritual jihad mindset would relate to well-being, spiritual growth, and virtue. The project included two samples of U.S. Muslims: an online sample from Amazon’s Mechanical Turk (MTurk) worker database website (N = 280) and a community sample (N = 74). Preliminary evidence of reliability and validity emerged for a new measure of a spiritual jihad mindset. Results revealed that Islamic religiousness and daily spiritual experiences with God predicted greater endorsement of a spiritual jihad mindset among participants from both samples. A spiritual jihad mindset predicted greater levels of positive religious coping (both samples), spiritual and post-traumatic growth (both samples), and virtuous behaviors (MTurk sample), and less depression and anxiety (MTurk sample). Results suggest that some Muslims incorporate a spiritual jihad mindset in the face of moral struggles. Muslims who endorse greater religiousness and spirituality may specifically benefit from implementing a spiritual jihad mindset in coping with religious and spiritual struggles.
Introduction
Numerous studies have investigated the beneficial effects of religion and spirituality on health and well-being (Seybold and Hill 2001; Miller and Thoresen 2003). While religious and spiritual involvement can yield various benefits, it can also be a source of struggle. Religious and spiritual (r/s) struggles transpire when a person's beliefs, practices, or experiences regarding r/s matters cause conflict or distress (for reviews, see Exline 2013; Exline and Rose 2013; Pargament 2007; Stauner et al. 2016).
There are several forms of general r/s struggles (Exline et al. 2014). Divine struggles occur when one experiences negative thoughts or feelings about God. Demonic struggles involve concerns about being attacked by a devil or various forms of evil spirits. Interpersonal struggles refer to conflicts surrounding religious people, groups, or institutions. Moral struggles involve concerns about obedience to moral principles and guilt surrounding violations of those principles. Doubt-related struggles involve concerns about religious doubts and questions. Finally, ultimate meaning struggles involve concerns regarding a perceived absence of meaning or purpose in life (Exline et al. 2014).
Many individuals experience r/s struggles. For example, in a study of undergraduates from U.S. colleges and universities (Astin et al. 2005), a majority of first-year students reported occasionally feeling distant from God (65%) and questioning their religious beliefs (57%). Furthermore, recent studies have documented r/s struggles among diverse cultural and religious groups. For example, self-reports on the Religious and Spiritual Struggles (RSS) scale among Israeli-Jewish university students indicated that as many as 30% of students experience r/s struggles (Abu-Raiya et al. 2016). Religious and spiritual struggles have also been reported among broad samples of U.S. adults (Stauner et al. 2015/2016). Using a large, nationally representative sample of adults, Ellison and Lee (2010) examined troubled relationships with God, negative social encounters within religious contexts, and chronic religious doubt, and found that most people reported low levels of these struggles; nevertheless, the struggles were positively associated with psychological distress. Similarly, Abu-Raiya et al. (2015) found that, although participants reported low levels of r/s struggle on average, all forms of struggle were positively related to depressive and anxious symptomatology.
R/s struggles often imply tension and conflict regarding one's core beliefs and behaviors. Thus, it is not surprising that many studies have found r/s struggles to be linked with psychological distress (e.g., Ellison and Lee 2010; Exline et al. 2000). A meta-analysis on religious coping and psychological adjustment revealed a direct link between r/s struggles and indicators of distress such as anxiety, anger, and depression (Ano and Vasconcelles 2005). Such links with psychological distress have been found even after controlling for demographic variables such as race and socioeconomic status (Ellison and Lee 2010). R/s struggles have also been associated with greater thoughts of suicide (Exline et al. 2000), lower levels of life satisfaction (Abu-Raiya et al. 2016; Abu-Raiya et al. 2015), and less happiness, even after controlling for overall religiousness, personality factors, and social isolation (Abu-Raiya et al. 2015). Although there is not enough evidence to infer a causal relationship between r/s struggles and emotional distress, research suggests a strong connection between the two domains.
In contrast to the significant body of research on the distressing aspects of r/s struggles, relatively little attention has been given to the potential of r/s struggles to promote personal growth. The existing research on the relationship between r/s struggles and growth is mixed (for a review, see Pargament et al. 2006). Although some researchers have found a connection (Pargament et al. 2000), others have not (e.g., Phillips and Stein 2007), and some studies have even found negative links between struggle and growth (e.g., Park et al. 2009). The lack of consistent findings in the literature suggests that it may be the actual coping response to the r/s struggle, rather than the struggle itself, that predicts spiritual growth or decline (Exline and Rose 2013; Exline et al. 2017). Similarly, growth from struggle has been linked with positive religious coping (Exline et al. 2017), perception of a secure relationship with God (for a review, see Granqvist and Kirkpatrick 2013), integrating religion into everyday life (Desai 2006), having religious support (Desai 2006), and perceived support or intervention from God (Pargament et al. 2006; Wilt et al. 2017).
Although studies have demonstrated that r/s struggles can be linked with growth-related outcomes, more research needs to examine the growth processes that could accompany r/s struggles. Looking at the process of growth from a religious perspective, individuals may intentionally embrace the experience of struggle for a greater purpose, such as becoming closer to God or eliminating their perceived shortcomings; such struggles may thus be intentional in nature, undertaken for the purpose of spiritual growth. People of faith who desire to become more devoted believers may embrace struggle as a medium through which they can develop a stronger relationship with the Divine. Struggling for growth purposes is prominent in the religion of Islam and is sometimes referred to as spiritual jihad. Hence, a natural place to initiate an empirical investigation of such processes is within the context of the religion of Islam.
Spiritual Jihad: An Islamic Perspective
Much of the research conducted on r/s struggles has made use of predominantly Christian samples. The aim of the current project was to focus primarily on struggles and growth among Muslims, framed in terms of spiritual jihad. A brief review of relevant Islamic theology and psychological research will be addressed. The Arabic noun "jihad" is derived from the Arabic verb "jahada", which is translated as "struggle" or "hardship" (Al-Khalil 1986). Some traditions within Islam, such as the Sufi tradition, categorize jihad into two types: the greater and the lesser jihad. The greater jihad (al-jihad al-akbar), contrary to popular thought, refers to an internal spiritual struggle in the path of God against the various trials of life (Nizami 1997). On the other hand, the lesser jihad (al-jihad al-asghar) refers to an external endeavor for the sake of Islam (Al-Zabidi 1987). Examples of the lesser jihad include fighting for God's cause on the battlefield, stepping out of a conversation due to religious objections, or speaking out for God's sake. Notably, the lesser jihad (often simply referred to as "jihad") has become increasingly aligned with popular views of Muslims in recent years (Amin 2015; Afsaruddin 2013). The term jihad has particularly become associated with acts of terrorism, thereby promoting the notion that terrorism is a fundamental aspect of Islam (Turner 2007). Such interpretations of the term jihad not only ignore the majority of forms of the lesser jihad that are completely nonviolent, but also fail to acknowledge the meaning of the greater jihad for many Muslims. Islamic spirituality, as reflected largely in the Sufi heritage, considers the greater spiritual jihad a fundamental component of spiritual growth and development. Spiritual jihad is a process that requires a conscious effort in "struggling against the soul (al-nafs) for the sake of God" (Picken 2011). In Islam, the nafs is thought to be responsible for a wide variety of dangerous, unsocialized impulses; this psychological influence is roughly analogous to Freud's (1923/1962) concept of the id. For further information regarding the role of the nafs in the process of spiritual jihad, please request a copy of Saritoprak et al. (2018).
The ongoing journey of spiritual jihad may be a common experience among practicing Muslims. Numerous Qur'anic verses promote an intentional, continuous engagement in spiritual jihad, such as these: "And those who strive for us, we will surely guide them to our ways. And indeed, Allah is with the doers of good" (29:69), and, "The ones who have believed, emigrated, and striven in the cause of Allah with their wealth and their lives are greater in rank in the sight of Allah. And it is those who are the attainers [of success]" (9:20). Similarly, as narrated by Al-Bayhaqi (1996), upon returning victorious from the Battle of Badr, the Prophet Muhammad stated, "We have returned from the lesser jihad to the greater jihad." When his companions inquired about the greater jihad's meaning, the Prophet replied, "It is the struggle that one must make against one's carnal self (nafs)." As the Day of Judgment is one of the six articles of the Islamic faith, practicing Muslims often engage in a conscious examination of their nafs with the aim of bettering themselves as believers, in return not only for an eternal afterlife, but also for the sole sake of God. Thus, introspection regarding one's behaviors, words, and thoughts throughout life on earth promotes a sense of preparedness for the final Judgment and a path toward spiritual refinement.
Nevertheless, despite the theological emphasis on spiritual jihad within Islam, no study to date has examined the construct of spiritual jihad within the field of psychology. A review of the current literature on r/s struggles and growth indicates a gap in both the conceptualization and measurement of spiritual jihad. As a preliminary attempt to address this gap, the aim of the current article is to investigate the process by which individuals engage in spiritual jihad and the outcomes associated with such engagement.
Spiritual Jihad: Attributing Wrongdoings to the Nafs
Attribution theory (Weiner 1985) emphasizes the need to assign responsibility for events. In the face of certain events, people often look for information regarding why an event occurred, and this is especially true for unexpected and negative events. In such cases, people may think, "Why did this event occur?" or, "Why did I do what I did?" in attempting to explain why a particular incident took place. By seeking knowledge to explain certain outcomes, including successes and failures, the individual can learn to adapt their behavior accordingly in order to prevent or promote a certain incident in the future.
This line of research is relevant to the concept of spiritual jihad. Within a spiritual jihad framework, Muslims who are faced with certain desires or temptations may attribute such inclinations to their nafs. For example, one may think, "I have a sexual desire because my nafs wants it." Along similar lines, in the face of perceived wrongdoings or moral failures, a Muslim may think, "I engaged in the behavior because of the desires of my nafs", thereby attributing either thoughts or actions to such proclivities of the nafs. By attributing certain thoughts and behaviors to their nafs, Muslims incorporating a spiritual jihad approach into their life may be more likely to become aware of such inclinations in the future and engage in greater efforts to struggle against such desires. Speculatively speaking, cognitively separating the source of motivation for undesirable behaviors from one's own consciousness may help Muslims resolve cognitive dissonance and reject their unwanted impulses.
Spiritual Jihad and Positive Religious Coping
The mechanism of meaning-making may play a role in positive emotional experiences (Folkman 1997). Because religious and spiritual beliefs and practices may play a significant role in making meaning (e.g., Park 2012), they can also be a major component of the coping process (Pargament 1997). Religious coping has been proposed to serve five main functions: providing a sense of comfort in times of struggle, bringing a feeling of connectedness with others, bringing meaning to a distressing life experience, providing a framework for controlling events that are beyond one's direct personal control and resources, and providing help in making life transformations (Pargament et al. 2000). Additionally, religious coping may take both positive and negative forms.
Positive religious coping may involve being spiritually connected with the world and others, having a secure relationship with God, and/or finding a greater meaning in life (Pargament et al. 1998). On the other hand, negative religious coping methods may reflect religious/spiritual struggles such as being spiritually discontent, appraising a stressor as a punishment from God, viewing the stressor as an act of demonic forces, and/or being dissatisfied with other religious people or institutions (Pargament et al. 1998). Research has shown that negative and positive forms of religious coping can exhibit differing outcomes related to mental and physical health (e.g., Hebert et al. 2009; Trevino et al. 2010). For example, negative religious coping has been associated with greater symptoms of depression and lower quality of life, whereas positive religious coping has been linked with lower levels of psychological distress and greater well-being (Pargament et al. 1998).
Similarly, spiritual jihad may be framed as a form of positive religious coping. It may be a way in which some Muslims approach life experiences and a process that fosters making meaning of negative life events and coping in a proactive manner. In the face of adversity and struggle, Muslims may appraise the situation through a spiritual jihad-based interpretive lens. For example, they may regard a distressing life event as a test that will bring them closer to their faith, a test of their nafs that they must overcome, a way in which they can earn greater sawab (good deeds) for the afterlife, or an opportunity to ask for Divine forgiveness. Incorporating such a mindset may allow the individual to make meaning of their experience in a positive manner and may promote perceptions of spiritual growth. Within the writings of some Islamic scholars, spiritual jihad has been considered an essential component of spiritual growth (Al-Ghazali 1982; Al-Bursawi 1990). It requires a constant and conscious struggle against one's nafs with the aim of developing a closer relationship with God and becoming a more devout Muslim.
Spiritual Jihad: Implications for Virtues, Vices, and Well-Being
Spiritual jihad is not only intended to promote positive religious coping; it is also intended to promote virtues. From an Islamic perspective, there are several overarching themes rooted in the Qur'an and Sunnah of the Prophet that promote actively bettering oneself in the path of God through virtuous behavior. For the purpose of this study, we will focus on patience, gratitude, and forgiveness, with an emphasis on their potential links with spiritual and psychological well-being.
The cultivation of sabr (often translated from Arabic as "patience") is an essential component of the active engagement in spiritual jihad. Differing from the traditional understanding of the English word patience, in the Islamic tradition sabr can essentially be described as actively restraining oneself from wrongdoing, limiting objections and complaints in the face of calamities, and putting all trust in God (Khan 2000). To stay as close as possible to the original Arabic term, we present a nuanced reading of the word patience herein. One of the earliest examples of patience in Islamic history can be traced back to the time when the Prophet was being persecuted by the pagan Meccans. During such times of hardship, the Qur'anic verse, "And whoever is patient and forgives . . . indeed, that is of the matters [requiring] determination" (42:42-43), encouraged Muslims to maintain a steadfast approach and patiently endure wrongdoings in a forgiving and non-combative manner (Afsaruddin 2007). From a psychological perspective, approaching situations in a patient manner enhances resilience in times of hardship, thereby promoting better coping ability (Connor and Zhang 2006). The act of being patient involves a proactive approach to coping with negative emotions such as anger and frustration. Therefore, it may encourage a less hostile approach to life experiences, a positive perspective, and increased resilience in the face of adversity.
Gratitude, referred to as shukr in Arabic, is an essential aspect of Islamic spirituality. Gratefulness towards God and other people is reflected through one's appreciation and acknowledgement of the surrounding blessings. Gratitude is a manner through which one remembers God and brings a religious perspective on life to conscious awareness, which may be regarded as a vital component of spiritual jihad. Numerous themes of gratitude can be found in the Qur'an and hadith (sayings of the Prophet Muhammad). For example, an emphasis on gratitude is evident in sayings of the Prophet such as: "One who does not thank for the little does not thank for the abundant, and one who does not thank people does not thank God" (Al-Muslim 2006; hadith 2734). The psychological literature has considered gratitude to be part of one's larger framework of life that fosters noticing and appreciating the positive in the world (Wood et al. 2010). Gratitude has also been linked with less anger and hostility and with more warmth, altruism, and trust (Wood et al. 2008), in addition to greater happiness and positive affect (e.g., Emmons and McCullough 2003; Watkins et al. 2003).
The act of forgiving can be regarded as an inevitable aspect of one's spiritual jihad and holds a distinguished place in Islamic theology. As humans are vulnerable to sins, mistakes, and transgressions, forgiveness provides an opportunity for spiritual reformation. The act of forgiving fosters both one's relationship with God and with other humans. The Qur'an highlights both God's forgiveness and the act of forgiving others, as evident in the verse: "And let not those of virtue among you and wealth swear not to give [aid] to their relatives and the needy and the emigrants for the cause of Allah, and let them pardon and overlook. Would you not like that Allah should forgive you? And Allah is forgiving and merciful" (24:22). Within psychology, forgiveness has been studied as a positive and prosocial response to transgressions (for reviews, see Fehr et al. 2010; Riek and Mania 2012; Worthington 2005). Historically, researchers have found that individuals who tend to forgive others are more altruistic, caring, generous, and empathic (Ashton et al. 1998). More recent studies show that people who forgive are more likely to be in relationships described as "close", "committed", and "satisfactory" (Tsang et al. 2006). For a more detailed overview of virtues rooted in the Qur'an and Sunnah, please request a copy of Saritoprak et al. (2018).
Forgiveness can also take the form of self-forgiveness. Research has shown a positive association between self-forgiveness and perceived forgiveness from God (Martin 2008; McConnell and Dixon 2012). Feeling unforgiven by God may contribute to one's general view of the self (e.g., feeling unworthy) and/or of God (e.g., punitive and angry). Such experiences may form r/s struggles (Exline et al. 2017) and adversely impact an individual's spiritual and mental wellness. This possibility suggests another way in which forgiveness may facilitate growth: if taking a spiritual jihad mindset toward one's r/s struggles can help a person feel forgiven by God, that perception may then lead to self-forgiveness and allow healing to occur.
In addition to promoting virtuous behaviors, the greater jihad also fosters an active strife against the everyday malevolent temptations of the nafs as a means towards improving the self in the way of God. The individual must struggle to control sinful desires for the purpose of gaining God's favor and eternal Paradise, as evident in the verse, "But as for he who feared the position of his Lord and prevented the soul from [unlawful] inclination, then indeed, paradise will be [his] refuge" (79:40-41). Such striving can take form against the many evils the Qur'an and Sunnah put forward. For example, the Qur'an presents numerous verses on the consequences of exhibiting arrogance and pride, such as, "And do not turn your cheek [in contempt] toward people and do not walk through the earth exultantly. Indeed, Allah does not like everyone self-deluded and boastful" (31:18). Similarly, other vices are also cautioned against among the Qur'anic verses and the life of the Prophet. For example, the Qur'anic verse "So fear Allah as much as you are able and listen and obey and spend [in the way of Allah]; it is better for yourselves. And whoever is protected from the stinginess of his soul-it is those who will be the successful" (64:16) highlights the strife to deter oneself from sinful traits such as greed and stinginess. Similarly, the saying of the Prophet "Do not spy upon one another and do not feel envy with the other, and nurse no malice, and nurse no aversion and hostility against one another. And be fellow-brothers and servants of Allah" (Al-Bukhari 1990) discourages Muslims from vices such as envy and hatred.
The Present Study
We are not aware of any empirical studies that have examined spiritual jihad, a growth-oriented mindset that Muslims may bring to r/s struggles. Our aim was to attempt to assess the mindset associated with spiritual jihad and to begin to examine its associations with perceptions of personal growth (including spiritual and posttraumatic growth), well-being, and virtues among U.S. Muslims. Although a mindset of spiritual jihad could be brought to almost any type of r/s struggle, we began with an emphasis on moral struggles, because these are struggles in which an internal conflict against one's unwanted inclinations would be especially salient.
Hypotheses
We expected positive associations between endorsement of a spiritual jihad mindset and two indicators of religious engagement: general religiousness and daily spiritual experiences with God, while controlling social desirability. In response to a specific moral struggle, we hypothesized that greater endorsement of a spiritual jihad mindset would relate to higher levels of positive religious coping, spiritual growth, and posttraumatic growth, as well as lower levels of spiritual decline. In terms of general well-being, we expected that endorsement of the spiritual jihad mindset would be associated with greater life satisfaction and fewer symptoms of anxiety and depression. Finally, we predicted that endorsement of the spiritual jihad mindset would be associated with reports of more virtuous behaviors in terms of greater endorsement of traits related to patience, forgiveness, and gratitude. All hypotheses were preregistered with the Open Science Framework (Saritoprak and Exline 2017a, embargoed until 2021).
Participants and Procedure
We included participants from two samples. The first was an adult Muslim sample (N = 280) obtained from Amazon's Mechanical Turk (MTurk) website. The second was an adult Muslim community sample (N = 74). To obtain the community sample, we contacted Muslim leaders throughout Northeast Ohio via email and asked them to forward an invitation to members of their congregations. All participants completed a battery of questionnaires assessing predictor and outcome variables related to spiritual jihad. Participants read the consent form prior to initiating the questionnaires and received a small monetary incentive for their participation (MTurk participants received $3; community participants received $10 to mitigate recruitment difficulty).
Table 1 summarizes demographic information for both samples. Both samples were composed mostly of Middle Eastern participants, with the median age for both samples falling in the range of early to mid thirties. The MTurk sample included a larger percentage of U.S.-born participants than the community sample, in addition to more participants identifying as single. In terms of English language proficiency, both samples were composed predominantly of native English speakers, followed by advanced English speakers.
Measures
Table 2 (which appears at the start of the Results section) lists descriptive statistics (means, standard deviations, ranges) for all study variables. For a brief description of all of the measures, please see Appendix B.
Demographic questionnaire. Participants completed a demographic questionnaire. The items provided further information on participants' genders, ages, religious/spiritual traditions, ethnicities, places of birth, relationship statuses, years of residence in the United States, and degrees of proficiency in the English language.
We initially developed a 16-item measure to examine the extent to which participants endorse a spiritual jihad interpretive framework in reference to a specific struggle. Note that spiritual jihad is our technical term for the Islamic concept; items did not use the term "jihad" to avoid unwanted connotations. Items were sent to academic scholars in the field of Islamic spirituality in order to establish content validity. The three scholars provided feedback regarding the content of items. Feedback from the scholarly experts primarily involved suggestions towards developing a working definition of the term spiritual jihad, translating Arabic terminology, and rewording items to better align with an Islamic framework. Participants were instructed to rate each item on a seven-point scale (1 = strongly disagree, 7 = strongly agree) pertaining to how they viewed a specific moral struggle they recently encountered. Sample items included "It is a test that will make me closer to God" and "It is a desire of my nafs that I must work against." Reverse-scored items such as "The struggle has no meaning for me" and "Allah plays no role in my struggle" were also included in the measure to address issues of response biases (e.g., acquiescence). As detailed in the results section, an exploratory factor analysis was conducted to evaluate the structure of the measure, and one item was dropped as a result of that analysis. The current study provided initial tests of this new measure's reliability and validity. See Appendix A for the complete measure.
Religious coping was measured with select, abbreviated (three-item) subscales from the Religious Coping Questionnaire (RCOPE; Pargament et al. 2000). The RCOPE consists of subscales assessing coping responses to stressful experiences within a religious context, including Benevolent Religious Appraisal (e.g., "Thought the event might bring me closer to God"), Active Religious Surrender (e.g., "Did my best and turned the situation over to God"), Seeking Spiritual Support (e.g., "Looked to God for strength, support, and guidance"), Religious Focus (e.g., "Prayed to get my mind off problems"), Religious Purification (e.g., "Asked forgiveness for my sins"), Spiritual Connection (e.g., "Looked for a stronger connection with God"), and Religious Forgiving (e.g., "Sought help from God in letting go of my anger"). Subscale average scores and an overall average score were examined.
Islamic religiousness was measured with the five Islamic Dimensions subscales of the Psychological Measure of Islamic Religiousness (PMIR; Abu Raiya et al. 2008): Beliefs Dimension (e.g., "I believe in the Day of Judgment"), Practices Dimension (e.g., "How often do you fast?"), Ethical Conduct-Do Dimension (e.g., "Islam is the major reason why I honor my parents"), Ethical Conduct-Do Not Dimension (e.g., "Islam is the major reason why I do not drink alcohol"), and Islamic Universality Dimension (e.g., "I identify with the suffering of every Muslim in the world"). An average score was obtained from each subscale, in addition to an overall average score, in order to measure levels of Islamic religiousness.
Daily spiritual experiences were measured with the Daily Spiritual Experiences Scale (DSES; Underwood and Teresi 2002). The DSES examines spiritual experiences such as a perceived connection with the transcendent (e.g., "I feel God's presence"). Our focus was on the first 15 items, which were presented in the form of a six-point scale (1 = never, or almost never, 6 = many times a day). The word "Allah" was substituted for "God" for the purpose of the current study. An overall average score was obtained, with larger scores indicating greater perceived closeness with Allah.
The short form of the Post-Traumatic Growth Inventory (PTGI-S; Calhoun and Tedeschi 1999) assessed, with 13 items (e.g., "A willingness to express my emotions"), the extent to which participants perceived themselves as having grown from their reported crisis. Ratings were averaged.
Spiritual growth and decline were measured via abbreviated versions of the Spiritual Growth (e.g., "Spirituality has become more important to me") and Spiritual Decline (e.g., "In some ways I have shut down spiritually") subscales of the Spiritual Transformation Scale (STS; Cole et al. 2008). A shortened version of the STS (eight items), using the highest-loading items from each subscale, was administered for the current study, with permission from the scale author. Similar shortened forms have been used in other published studies of religious/spiritual struggles (Exline et al. 2017; Wilt et al. 2016). Participants were asked to rate their degree of agreement regarding spiritual growth and decline on a seven-point scale (1 = not at all, 7 = very true). An overall average score was calculated for both subscales.
The five-item Satisfaction with Life Scale (SWLS; Diener et al. 1985) was used in order to measure satisfaction with life (e.g., "So far I have gotten the important things I want in life"). Participants responded to items on a seven-point scale (1 = strongly disagree, 7 = strongly agree). An overall score was obtained from all five items, including reverse-scored items, with higher scores indicating greater self-reported life satisfaction.
Generalized anxiety was measured with the Generalized Anxiety Disorder seven-item scale (GAD-7; Spitzer et al. 2006). The GAD-7 assesses generalized anxiety symptoms by asking participants to report their frequency of anxiety-related concerns (e.g., "Worrying too much about different things") on a four-point scale ranging from 0 (not at all) to 3 (nearly every day). Scores were summed.
Depressed mood was assessed with the Center for Epidemiological Studies of Depression Short Form (CES-D-10; Radloff 1977), which includes 10 items (e.g., "I was bothered by things that usually don't bother me"). Participants responded to statements measuring depressive symptoms in the past week on a four-point scale ranging from 0 (rarely or none of the time) to 3 (all of the time). Ratings were summed.
Dispositional gratitude was measured with the Gratitude Questionnaire-Six Item Form (GQ-6; McCullough et al. 2002). Participants responded to six items addressing gratefulness (e.g., "I have so much in life to be thankful for"). Items were answered on a seven-point scale (1 = strongly disagree, 7 = strongly agree). Item ratings were summed.
A general tendency to forgive was measured with the Heartland Forgiveness Scale (HFS; Thompson and Snyder 2003), a self-report questionnaire with 18 items (e.g., "Learning from bad things that I've done helps me get over them"). Participants responded on a scale ranging from 1 (almost always false of me) to 7 (almost always true of me). An overall scale score was calculated from ratings of the 18 items, including reverse-scored items.
Patience was measured with the 3-Factor Patience Scale (3-FPS; Schnitker 2012). The scale comprises 11 items (e.g., "I am able to wait-out tough times"). A composite patience score was calculated by summing ratings of all items, including reverse-scored items.
Social desirability was measured with the five-item short form of the Marlowe-Crowne Social Desirability Scale (MCSDS; Reynolds 1982). Items (e.g., "No matter whom I am talking to, I am always a good listener") were rated true or false. The MCSDS has exhibited good internal consistency and test-retest reliability in prior research (Reynolds 1982). Ratings were summed, including reverse-scored items, with higher scores indicating greater endorsement of socially desirable responses.
Descriptive Statistics
Frequency and descriptive statistics for the demographics and main variables were examined for the MTurk and community samples. Participants were asked to skip any questions they felt uncomfortable answering. The ability to skip items resulted in increased missing data and lower sample sizes for various variables, particularly within the community sample. In the interest of validity, we eliminated participant responses reporting no current moral struggles and/or responding in incomprehensible ways to qualitative items (MTurk, n = 39; community, n = 12). Preliminary analyses were performed to examine any violations of the assumptions of approximate normality. Negligible violations of normality (defined provisionally as skew and excess kurtosis ≤ 1) were observed within the MTurk sample. However, substantial violations of normality were observed (in spiritual decline, Islamic religiousness, gratitude, and daily spiritual experiences) within the community sample. In this sample, the distribution of spiritual decline had a skewness of 1.11 and kurtosis of 0.32 (i.e., excess kurtosis, which is ordinary kurtosis − 3; we refer only to this excess kurtosis throughout this report). Islamic religiousness had a skewness of −1.63 and kurtosis of 3.48. Gratitude had a skewness of −1.43 and kurtosis of 1.78. Daily spiritual experiences had a skewness of −1.01 and kurtosis of 1.19. Square root transformations (except daily spiritual experiences, which was squared) reduced skew and kurtosis to less than one in magnitude for all four variables.
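The normality screen described above (flagging variables with |skew| or |excess kurtosis| above 1, then transforming) can be sketched as follows. This is not the authors' code; the data and the shape parameter are hypothetical, and the example illustrates only a positively skewed variable fixed by a square-root transform, as was done for spiritual decline.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_ok(v, cutoff=1.0):
    """Provisional rule from the text: |skew| and |excess kurtosis| <= 1."""
    return abs(skew(v)) <= cutoff and abs(kurtosis(v, fisher=True)) <= cutoff

# Hypothetical positively skewed scores, standing in for spiritual decline.
rng = np.random.default_rng(0)
decline = rng.gamma(shape=1.5, scale=1.0, size=1000)

# Square-root transform, appropriate for positive skew; negatively skewed
# variables would first be reflected (or squared, as for daily spiritual
# experiences in the text).
decline_t = np.sqrt(decline)
print(normality_ok(decline), normality_ok(decline_t))
```

Note that `kurtosis(..., fisher=True)` returns excess kurtosis (ordinary kurtosis minus 3), matching the convention adopted in the text.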
Table 2 provides descriptive statistics for the main variables. Mann-Whitney U tests evaluated the evidence for any tendency of either population to score higher or lower than the other on each variable. A Benjamini and Yekutieli (2001) correction maintained α = 0.05 across this set of dependent pairwise comparisons. Specifically, in comparison to those in the MTurk sample, participants in the community sample endorsed higher levels of incorporating a spiritual jihad mindset when approaching struggles. Similarly, they reported greater religiousness and higher levels of daily spiritual experiences and life satisfaction. Those in the community sample were also significantly more likely to endorse dispositions toward forgiveness and gratitude. Finally, the community sample participants indicated lower levels of spiritual decline.
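The comparison procedure above (per-variable Mann-Whitney U tests with a Benjamini-Yekutieli correction across the family of tests) can be sketched as below. The group data, means, and variable names are hypothetical; only the test-and-correct workflow mirrors the text.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# Hypothetical scores for two of the study variables in each sample.
mturk = {"religiousness": rng.normal(4.0, 1.0, 241),
         "life_satisfaction": rng.normal(4.2, 1.2, 241)}
community = {"religiousness": rng.normal(4.8, 1.0, 62),
             "life_satisfaction": rng.normal(4.9, 1.2, 62)}

# One two-sided Mann-Whitney U test per variable.
pvals = [mannwhitneyu(mturk[v], community[v], alternative="two-sided").pvalue
         for v in mturk]

# method="fdr_by" is the Benjamini-Yekutieli step-up procedure, which
# controls the false discovery rate under arbitrary dependence.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
```

The BY correction is more conservative than Benjamini-Hochberg, which is why it is the appropriate choice for the dependent comparisons described in the text.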
Exploratory Factor Analysis
All 16 items from the spiritual jihad mindset questionnaire (MTurk sample) were entered into an exploratory factor analysis using ordinary least squares estimation from a polychoric correlation matrix. (A factor analysis was not conducted with the community sample data due to the small sample size.) The Kaiser-Meyer-Olkin overall measure of sampling adequacy was 0.92, indicating excellent factorability. Bartlett's sphericity test of the polychoric correlation matrix rejected the null hypothesis (χ²(120) = 2340, p < 0.001), further supporting the factor analysis. The first and second eigenvalues (6.5 and 1.4, respectively) substantially exceeded the others (eigenvalues 3-16 < 0.5), which did not differ meaningfully from each other or from resampled eigenvalues in parallel analysis (all differences < 0.3; see Figure 1). This test indicated that a two-factor model accounts for the majority of variance (51%) with optimal efficiency and parsimony.
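The parallel analysis used above to choose the number of factors can be sketched as follows. This is a simplified illustration on simulated data: it compares eigenvalues of a Pearson correlation matrix against eigenvalues of random normal data of the same shape, whereas the authors worked from a polychoric matrix (which requires a specialized estimator not reproduced here). The loading pattern and sample are hypothetical.

```python
import numpy as np

def parallel_analysis(data, n_resamples=100, seed=0):
    """Retain factors whose observed eigenvalues exceed the mean
    eigenvalues of random normal data of the same n-by-p shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_resamples, p))
    for i in range(n_resamples):
        noise = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    return int(np.sum(observed > random_eigs.mean(axis=0)))

# Simulated two-factor data: 280 respondents, 16 items; items 0-9 load on
# factor 1 and items 10-15 on factor 2 (loadings 0.8, unique noise 0.6).
rng = np.random.default_rng(3)
f = rng.standard_normal((280, 2))
loadings = np.zeros((16, 2))
loadings[:10, 0] = 0.8
loadings[10:, 1] = 0.8
items = f @ loadings.T + 0.6 * rng.standard_normal((280, 16))
```

Running `parallel_analysis(items)` on this simulated structure recovers the two factors, mirroring the logic behind the two-factor decision reported above.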
Examination of direct oblimin-rotated factor loadings revealed one item (i.e., "I believe this struggle is ultimately weakening my faith") that had a weak factor loading (λ = 0.38). This item was dropped from the overall measure, which improved its average interitem correlation (∆r = 0.03). A second factor analysis of the remaining 15 items revealed that two factors explained 54% of the variance. This model fit the data acceptably (Tucker-Lewis index = 0.908, RMSEA = 0.085, root mean square of residuals corrected for degrees of freedom = 0.05). The first factor (∑λ² = 5.40) explained 36% of the variance, and the second factor (∑λ² = 2.72) explained 18% of the variance. Table 3 shows all items' factor loadings, which exhibit fairly simple structure (all primary λ > 0.5, all secondary |λ| < 0.2). Overall, these results were compatible with the theoretical framework proposed in development of the measure, although a second factor was not anticipated. Conceptually, we interpreted the two factors as endorsing a spiritual jihad mindset (SJM) and rejecting a SJM, respectively. These factors correlated negatively and strongly (r = −0.50, p < 0.001).
Table 3. Summary of the exploratory factor analysis of the spiritual jihad measure using ordinary least squares estimation from a polychoric correlation matrix and direct oblimin rotation. Boldfaced text indicates items assigned to each factor.
Internal Consistency
Results from the factor analysis were used to generate subscales. Estimates of omega total were calculated for the factor-analytically derived subscales. The Endorsing a Spiritual Jihad Mindset subscale revealed excellent internal consistency (ω total = 0.91). The Rejecting a Spiritual Jihad Mindset subscale revealed good internal consistency (ω total = 0.82). With both subscales combined after reversing the coding of responses to items on the Rejecting a Spiritual Jihad Mindset subscale, the total measure revealed excellent internal consistency (ω total = 0.92). This total score is presented as a composite that represents the overall consistency of responses with a spiritual jihad mindset-both endorsing and not rejecting it-rather than as a unidimensional latent factor, since the factor analysis indicated greater complexity than that. Item-total correlations (calculated from the polychoric correlation matrix with corrections for item overlap and scale reliability) were between 0.48 and 0.78. If the complexity was due to acquiescence bias or true ambivalence or indifference, then the distinctions between these possibilities were largely set aside in the composite score, which would have represented each of these configurations as middling scores. To best enable multiple interpretive perspectives, many of the analyses below are examined in reference to both the two subscales and the total composite score. Each score was calculated by coding response options as consecutive integers (1-7) and averaging responses across items. Items on the Rejecting a Spiritual Jihad Mindset subscale were reverse-coded for the purposes of calculating composite scores.
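The composite-scoring rule described above (reverse-code the Rejecting-SJM items on the 1-7 scale, then average across all items) can be sketched as below. The function name, item counts, and responses are hypothetical; only the scoring arithmetic follows the text.

```python
import numpy as np

def sjm_composite(responses, reject_idx, scale_min=1, scale_max=7):
    """responses: (n_participants, n_items) ratings on a 1-7 scale.
    reject_idx: column indices of Rejecting-SJM items to reverse-code."""
    r = np.asarray(responses, dtype=float).copy()
    # On a 1-7 scale, reverse-coding maps a rating x to 8 - x.
    r[:, reject_idx] = (scale_min + scale_max) - r[:, reject_idx]
    return r.mean(axis=1)

# A respondent who strongly endorses the SJM items (7) and strongly
# rejects the Rejecting-SJM items (1) gets the maximum composite of 7.
scores = sjm_composite([[7, 7, 7, 1, 1]], reject_idx=[3, 4])
```

As the surrounding text notes, this composite represents both high endorsement and high rejection of opposite-keyed items as middling scores, so ambivalent and indifferent response patterns are indistinguishable in it.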
Spiritual Jihad, Daily Spiritual Experiences, and Islamic Religiousness
Associations of the spiritual jihad mindset with Islamic religiousness and daily spiritual experiences were estimated as Pearson product-moment correlations (Table 4). As predicted, results within the MTurk sample revealed that incorporating a spiritual jihad mindset correlated significantly and positively with Islamic religiousness and daily spiritual experiences. Similar results were found among participants in the community sample. The spiritual jihad mindset composite was regressed onto Islamic religiousness (β = 0.35, t(262) = 4.41, p < 0.001) and daily spiritual experiences (β = 0.37, t(262) = 4.18, p < 0.001) simultaneously (adjusted R² = 0.44) using iteratively reweighted least squares estimation (by default a bisquare redescending score function with other defaults suggested in Koller and Stahel 2017), revealing independent predictive effects. This model's residuals approximated a normal distribution (|skew| and |kurtosis| = 0.11) and passed a test of independence (H₀: no first-order autocorrelation; Durbin-Watson d = 2.2, p = 0.210). A Breusch-Pagan test retained the null hypothesis of homoskedasticity (χ²(2) = 0.70, p = 0.703). The variance inflation factor (VIF = 2.3) indicated minimal multicollinearity. Effects appeared roughly linear, though exploratory analysis of a third-order polynomial model suggested positive quadratic (β = 0.19, t(262) = 4.24, p < 0.001) and cubic (β = 0.10, t(262) = 3.42, p < 0.001) effects of Islamic religiousness could partly explain and reduce its linear effect (β = 0.20, t(262) = 1.80, p = 0.073) while improving the model fit significantly (robust Wald χ²(2) = 19.3, p < 0.001; ∆R²adj. = 0.04). Despite this model's robustness to high-leverage and outlying observations, the curvilinear effects seemed to reflect the influence of a few very low scores in both Islamic religiousness and the spiritual jihad mindset, which strengthened their positive relationship at low levels of both factors. The sparseness of data at these low levels and the exploratory nature of this model precluded confident interpretation of curvilinear effects, and the model's close resemblance to a linear relationship above low levels favored the originally predicted model of simple main effects.
Partial correlation analysis was used to explore the relationship between Islamic religiousness and endorsing a spiritual jihad mindset, while controlling scores on the Marlowe-Crowne Social Desirability Scale within the MTurk and community samples. There was a strong, positive partial correlation between Islamic religiousness and endorsing a spiritual jihad mindset when controlling social desirability (MTurk sample: r(264) = 0.61, p < 0.001; community sample: r(43) = 0.69, p < 0.001). Similar results were found between Islamic religiousness and participants' composite spiritual jihad mindset score (MTurk sample: r(264) = 0.60, p < 0.001; community sample: r(43) = 0.70, p < 0.001).
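A partial correlation of the kind used above can be computed by residualizing both variables on the control variable and correlating the residuals. The sketch below uses simulated data in which the two focal variables are related only through the control, so the partial correlation collapses toward zero while the raw correlation stays positive; all variable names and effect sizes are hypothetical.

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y after removing the linear effect
    of the control variable z from both (residualization)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(4)
sd = rng.standard_normal(2000)            # stand-in for social desirability
relig = 0.6 * sd + rng.standard_normal(2000)
sjm = 0.6 * sd + rng.standard_normal(2000)

raw = float(np.corrcoef(relig, sjm)[0, 1])
partial = partial_corr(relig, sjm, sd)
```

In the study the partial correlations remained strong after controlling social desirability, the opposite of this contrived case, indicating that the religiousness-SJM link was not merely an artifact of socially desirable responding.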
Spiritual Jihad and Religious Coping
Correlations between a spiritual jihad mindset and various forms of positive religious coping (as measured by subscales of the RCOPE) were investigated (see Table 5). As expected, there were moderate to strong, positive correlations between incorporating a spiritual jihad mindset and positive religious coping subscales, with high levels of a spiritual jihad mindset associated with higher levels of all forms of positive religious coping within both the MTurk and community samples, indicating strong support for the hypotheses. Similar results were found in regard to participants' composite spiritual jihad mindset scores. Consistently, rejecting a spiritual jihad mindset was significantly negatively correlated with all forms of positive religious coping within both samples (except religious purification coping in the community sample: r(61) = −0.23, p = 0.07).
Spiritual Jihad, Growth, and Decline
As expected, significant, fairly strong, positive correlations with post-traumatic growth were found for the spiritual jihad mindset endorsement subscale and composite score in both the MTurk and community samples (Table 4). Also as expected, in the MTurk sample, a significant, moderate, negative correlation was found between the spiritual jihad mindset composite and spiritual decline, whereas rejecting a spiritual jihad mindset was positively associated with spiritual decline. Though negative in valence as hypothesized, these same correlations in the community sample between one's spiritual jihad mindset scores and spiritual decline did not differ from zero significantly.
Discussion
The goal of the present study was to investigate the process of approaching moral struggles with a spiritual jihad mindset among Muslims living in the United States, and the outcomes associated with incorporating such a mindset. One aim was to create a new measure to assess the construct of spiritual jihad. Participants were obtained from two samples: an online platform (MTurk) and a community sample. The following sections examine key findings of the current study, in addition to research and practical implications, and limitations and directions for future research.
Key Findings
The results of the current study provided preliminary support for the Spiritual Jihad Measure. An exploratory factor analysis revealed a two-factor solution (Endorsing SJ Mindset, Rejecting SJ Mindset). Both subscales showed good to excellent internal consistency. The two subscales and the total composite scale provided complementary results regarding associations. Though we reported results using both the individual subscales and the composite scale for completeness, we suggest using the composite scale to measure respondents' overall consistency with the spiritual jihad mindset in general applications of this measure. Internal consistency was still very good when the subscales were combined, and the inclusion of both Endorsing SJ and Rejecting SJ items may help to mitigate any influence of acquiescence bias on total scores. However, this scoring system conflates general non-endorsement (i.e., low scores on both subscales) with ambivalence (high scores on both), which might represent legitimate perspectives on the spiritual jihad mindset rather than acquiescence bias. The moderate correlation between the subscales implies that such perspectives may not be rare. Therefore, methodologists and any researchers with interests in ambivalence toward the spiritual jihad mindset or the potential for acquiescence bias in its measurement should consider the endorsement and rejection factors separately or within a bifactor model.
The findings of the present study revealed that Islamic religiousness and daily spiritual experiences both significantly predict incorporating a spiritual jihad mindset when Muslims face moral struggles, even when controlling social desirability. These close associations between greater religious devotion and a spiritual jihad mindset are consistent with the construct of spiritual jihad, which implies a conscious effort in striving to become a more devout Muslim by working against the temptations and desires of the nafs. Furthermore, the results indicated that Muslims in both samples who endorsed higher levels of a spiritual jihad mindset were more likely to make use of positive religious coping. For example, they were more likely to see stressors as beneficial for them or to view stressors as part of God's plan. The findings provided strong support for the hypotheses in the current study.
A further key finding was that Muslims in both samples who endorsed a spiritual jihad mindset when faced with moral struggles also reported greater levels of perceived spiritual and post-traumatic growth. Importantly, the results remained significant even after controlling Islamic religiousness, implying that a spiritual jihad mindset may contribute additional unique variance in Muslims' perceived spiritual and post-traumatic growth experiences. Although research on the relationship between r/s struggles and growth is mixed, the current findings add preliminary evidence to suggestions in the literature that the actual response to the r/s struggle, rather than the struggle itself, may be what predicts spiritual growth or decline (Exline and Rose 2013; Exline et al. 2016; Wilt et al. 2017). Similar results emerged in regard to the association between a spiritual jihad mindset and perceived spiritual decline. As expected, Muslims in the MTurk sample who were more likely to endorse a spiritual jihad mindset were also less likely to endorse perceived spiritual decline. However, this relationship was not clear for participants in the community sample.
In terms of mental health outcomes, results revealed negative associations between participants' spiritual jihad mindset scores and their levels of anxious and depressive symptoms, as expected, within the MTurk sample. However, these results should be interpreted with caution and will need further investigation, as the associations were weak, and no significant correlations were found within the community sample. Given that moral struggles themselves are usually associated with distress (see, e.g., Exline et al. 2014), these results suggest that endorsement of a spiritual jihad mindset may not play a large role in attenuating this overall level of distress. It is important to note that the measures of anxiety and depression used here are not specific to the struggle situation, and instead represent a broader picture of recent mental health symptoms. As such, it makes sense that their associations with the struggle-specific endorsement of the spiritual jihad mindset would be modest in magnitude. In addition, it is of course possible that a person might see a struggle as personally beneficial (i.e., leading to growth) without necessarily experiencing immediate, widespread mood benefits from this mindset. This same logic may also help to explain the (unexpected) lack of conclusive evidence for an association with life satisfaction.
Finally, Muslims who were more likely to endorse a spiritual jihad mindset were found to also endorse greater levels of virtue traits such as gratitude, patience, and forgiveness, as we predicted, but only in the MTurk sample. The lack of associations within the community sample may be a result of devout Muslims portraying themselves with greater humility when asked about virtues. On the other hand, these Muslims may be more likely to be honest regarding their negative inclinations or be more aware of their lower self-tendencies, potentially due to having very high moral standards. Granted, these are only speculations; these issues can be addressed systematically in future studies with supplementary measures such as implicit or behavioral assessments of virtues or morality.
Implications for Research and Practice
The proposed psychological construct of spiritual jihad and the associated findings of the present study have noteworthy implications for both research and practice. First and foremost, spiritual jihad is a construct that had never before been studied in the field of psychology. As a result of the current study, researchers can begin to learn more, not only about Islamic spirituality, but also about the emerging field of Islamic psychology, in a quantifiable manner. The proposed new measure also showed good internal consistency. In addition, by correlating with variables such as Islamic religiousness, daily spiritual experiences, spiritual growth, post-traumatic growth, forgiveness, patience, and gratitude, the measure shows preliminary evidence of validity for future use. Second, although we chose not to use the term jihad within the measure itself, the study may begin to highlight the importance of a more positive and beneficial understanding of the term jihad, a term that can often be misunderstood by non-Muslims and/or Muslims practicing in extremist manners.
Third, the results indicate the importance of considering Muslims' religious beliefs and practices within therapeutic settings. The practice of spiritual jihad can be brought to attention within the therapeutic setting when working with Muslim clients who may identify themselves as practicing. This may specifically be important for practicing Muslims experiencing struggles related to their religion and spirituality. Fourth, the findings of the study add further evidence that r/s struggles do not necessarily result in only negative psychological outcomes. In circumstances such as those of Muslims who apply a spiritual jihad mindset to their moral struggles, perceived growth may follow. Finally, the results from the current study suggest the possibility of some parallels between Muslims and those of other faith traditions, as many faith traditions are likely to emphasize the idea of seeing moral struggles as a personal challenge that can lead to growth. Further similar constructs may be researched with Christians and other groups residing in the United States (see, e.g., Saritoprak and Exline 2017b). Though Islam may be unique and distinct in certain beliefs and practices, it also shares great overlap with other traditions, specifically Abrahamic traditions, which may open doors for greater cross-cultural research of theory and practice.
Limitations and Future Directions
It is important to note several limitations of the current study. First, we aimed to develop a self-report measure of a spiritual jihad mindset, in addition to evaluating the newly developed measure's reliability and validity. Self-report measures have limitations such as susceptibility to participants responding in biased ways, participants lacking adequate introspective ability to respond accurately, and participants interpreting items in unintended manners. Second, to the best of our knowledge, the construct of spiritual jihad has never been empirically assessed prior to the current study. Hence, the reported findings are preliminary and should be interpreted with caution. Third, the presented data were cross-sectional. Hence, results do not indicate any causal inferences regarding the construct of spiritual jihad. In future research, it will be important to conduct research regarding Muslims and spiritual jihad with longitudinal analyses, and it may be feasible to develop and test experimental interventions.
Fourth, the community sample was local and smaller than intended, which limited the conclusiveness and generalizability of results within the group. In addition, some community sample distributions deviated more from normality, which may have biased hypothesis tests in that sample. Subsequent studies should focus on gathering larger samples from the community, in addition to gathering clinical samples to investigate the role of spiritual jihad among Muslims seeking mental health treatment. Fifth, it is important that future research focuses on more refined and nuanced predictors and outcomes associated with a spiritual jihad mindset. For example, what factors may mediate or moderate the relationship between Islamic religiousness and having a spiritual jihad mindset?
Additionally, future studies that utilize different research methods such as qualitative analyses and implicit or behavioral measurement can provide further tests of the hypotheses considered here. It is also recommended that researchers translate the Spiritual Jihad Measure into other languages in order to promote greater applicability for non-English speaking Muslims within or outside of the United States. Similarly, it will be important for researchers to modify the measure with regard to its specific terminology that is grounded within an Islamic framework, with the aim of better accommodating other theistic and nontheistic religious orientations. Finally, it will be important for future studies with larger sample sizes to conduct confirmatory factor analyses of the measure.
4. I believe that through this struggle, my iman (faith) will become stronger.
5. I have been thinking of my struggle as a trial through which I will become a better Muslim.
6. I view the struggle as means of earning more thawāb (good deeds) for the afterlife.
7. I know that there is khair (good) in the struggle because there is khair (good) in everything Allah does.
8. The struggle is an opportunity for me to seek Allah's forgiveness.
9. I tend to think that the struggle is for my best interest because Allah is al-Alim (all-knowing).
10. I believe that the struggle is a way in which I can understand my imperfect human nature.
11. I do not see the struggle as part of my spiritual growth (reverse).
12. The struggle has no meaning for me (reverse).
13. There is no place for Islam in my struggle (reverse).
14. I do not view the struggle as means to become closer to Allah (reverse).
15. Allah plays no role in my struggle (reverse).
Table 1 .
Descriptive statistics for demographics.
Table 2 .
Descriptive statistics and differences between MTurk and community samples for main study variables.
Reported p values are adjusted using the Benjamini and Yekutieli (2001) correction for inflation of the false discovery rate (FDR).
Table 4 .
Pearson product-moment correlations between the Spiritual Jihad Measure and main variables.
Table 5 .
Pearson product-moment correlations between the Spiritual Jihad Measure and Forms of Positive Religious Coping.
item measure to examine the extent to which participants endorse a spiritual jihad interpretive framework in reference to a specific moral struggle. Participants were instructed to rate each item on a seven-point scale (1 = strongly disagree, 7 = strongly agree); a sample item is "I have been thinking of my struggle as a test that will make me closer to Allah."
Gaussian Regression Models for Evaluation of Network Lifetime and Cluster-Head Selection in Wireless Sensor Devices
The paper presents a model predictive approach for evaluating network lifetime and cluster head selection for a wireless sensor network. The dynamic parameters of a wireless sensor network are collected using Smart Mesh IP Power and performance calculator. The study considers a machine learning approach to combine clustering with the optimal routing protocol. The hop depth, advertising, number of Motes, backbone, routing, reporting interval, payload size, downstream frame size, supply voltage, and path stability are the predictors, and the current consumption, data latency, and build time are the response variables to establish the models for estimating the power and performance of the network. The remaining energy in each node, distance from the base station, and data transmission rate are the predictors, and the priority of the cluster head is the response variable to establish models for achieving an optimal routing path in a wireless sensor network. The standard tree, Support Vector Machine, Ensemble, and Gaussian process regression models for lifetime estimation are analyzed in comparison with the Smart Mesh IP tool, and the models for cluster head selection are investigated in comparison with ANFIS based models. This novel approach concentrates on the effect of various dynamic parameters on network lifetime prediction.
• Join duty cycle - how much time a searching mote spends listening for a network vs. sleeping.
• Downstream bandwidth - affects how quickly motes can send data.
• Number of motes - contention among many motes simultaneously trying to join for limited resources slows down joining with collisions.
• Mote join state machine timeouts and path stability - the user has little or no control over these.
• Network topology -Mesh networks are self-healing, while star and tree networks have a single point of failure.
• Recovery time -if one of the nodes is powered down, time taken by the network to re-establish the full mesh or recover all other nodes for uninterrupted data delivery without degradation in the Quality of Service (QoS) metric.
The Internet of Things (IoT) connects devices to the internet via the IP protocol. Low energy consumption and low power operation become critical for IoT devices as they operate on batteries or harvest energy from the environment. Predicting the energy consumption and the device lifetime is thus essential for selecting the most suitable technology, communication protocols and finding the optimal configuration parameters in a network.
A. BACKGROUND STUDY AND LITERATURE SURVEY
The operating temperature and discharge current values influence the energy stored in battery devices. Software- and hardware-based approaches are used to estimate the state of charge and voltage of batteries, using analytical battery models and electrochemical cells to implement energy-aware policies. In the literature, studies have evaluated the cost of complex algorithms in terms of memory usage, power consumption, and execution time in low-power MCUs. The cyclical behaviour of WSN nodes is assumed, and an open-loop computation is used to study the behaviour of the battery [4]. Routing protocols choose the correct route from cluster head to base station. The objective of routing is to realize network scalability and to improve the data transfer and energy efficiency of WSNs. Energy-efficient routing protocols are classified based on network structure, communication model, topology, and reliable routing. Based on the network structure, routing protocols are classified as flat, hierarchical, and location-based protocols. In flat network architecture protocols like Sensor Protocol for Information via Negotiation (SPIN), Directed Diffusion, and Rumor Routing, the nodes follow a standard rule for data transmission. In hierarchical networks, the Cluster Heads (CHs) are responsible for communicating with the base station. In location-based networks, each node is equipped with GPS, and sleep mode schemes are incorporated. Geographic Adaptive Fidelity (GAF), Geographic and Energy Aware Routing (GEAR), and SPAN are routing protocols based on location.
Clustering is a solution used to solve network partitioning that arises because of the limited capacity of battery nodes [5]. Low Energy Adaptive Clustering Hierarchy (LEACH) is the most famous hierarchical routing protocol, where the cluster head (CH) is selected on a rotation basis based on a probabilistic threshold value, and only CHs are allowed to send the information to the base station (BS). Some of the drawbacks of LEACH include improper distribution of energy, non-reflection of remaining energy in nodes and unidentified CHs after some iteration.
LEACH (Low Energy Adaptive Clustering Hierarchy) was proposed to guarantee balanced energy utilization and to enhance the efficiency of WSNs by partitioning the network into multiple clusters and rotating the Cluster Head (CH) role randomly [6]. LEACH is a Medium Access Control (MAC) protocol based on the Time Division Multiple Access (TDMA) method. The two main stages of the LEACH algorithm are the setup phase and the data transfer phase. The setup phase includes cluster selection, TDMA schedule creation, and cluster configuration. In the setup phase, a sensor node becomes a cluster head if the random number it draws is less than the threshold value defined by Eq. (1):

T(n) = P_L / (1 - P_L * (r mod (1/P_L))) if n is in C, and T(n) = 0 otherwise,   (1)

where P_L is the percentage of CHs in each epoch, r is the present round, and C is the set of sensor nodes that have not yet been CH in the last 1/P_L rounds. Once CHs are chosen, the nodes join the cluster heads depending on specific metrics. The different metrics based on which CHs may be selected are (1) residual energy, (2) centralization, (3) mobility, (4) energy efficiency, and (5) distance. Once clusters are established, the CHs send out a TDMA schedule to let the nodes recognize their time slot for sending data to the CHs. After the fusion of data by the CHs, the data is forwarded to the sink using Code Division Multiple Access (CDMA) codes to avoid collisions [7]. The data transfer stage routes the data to the base station using either single-hop or multi-hop techniques. The advantage of LEACH is that the nodes remain in sleep mode until their turn to send data. The disadvantage of LEACH is that, with a random selection of CHs, the number of cluster heads cannot be guaranteed in each round. Also, as the remaining energy in each node is not considered, nodes with low residual energy and high residual energy have the same chance of becoming cluster heads. CHs use single-hop transmission to direct data to the BS, making LEACH unsuitable for extensive networks.
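The threshold-based election rule of Eq. (1) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation; the function and parameter names (`leach_threshold`, `elect_cluster_heads`, `draw`) are assumptions introduced here.

```python
import random

def leach_threshold(p, r):
    # Eq. (1): T(n) = p / (1 - p * (r mod 1/p)) for nodes still eligible
    # in the current epoch (nodes not in C get threshold 0 and never fire)
    epoch = round(1.0 / p)
    return p / (1.0 - p * (r % epoch))

def elect_cluster_heads(eligible_nodes, p, r, draw=random.random):
    # each eligible node draws a uniform random number and becomes a CH
    # if the draw falls below the round-r threshold
    t = leach_threshold(p, r)
    return [n for n in eligible_nodes if draw() < t]

heads = elect_cluster_heads(list(range(100)), p=0.1, r=0)
```

Note how the threshold rises as the epoch progresses (at r = 1/P_L - 1 it reaches 1), which is what guarantees every node serves as CH once per epoch.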
Different authors [8], [9] have surveyed various descendants of the LEACH protocol, such as LEACH-C, MM-LEACH, TL-LEACH, Stable Election Protocol (SEP), V-LEACH, and Modified LEACH (MOD-LEACH). Table 1 shows the performance of various LEACH algorithms in terms of the number of data packets delivered to the base station (BS), the first dead node, and the total energy dissipated [10], [11].
In LEACH-B, there is a uniform number of CHs, determined by the global number of nodes in the network and the proportion of CHs. The algorithm considers the remaining energy after the first round and shows an improvement in network lifespan over LEACH.
Intelligent LEACH (I-LEACH) elects CHs based on the remaining energy and node locations. However, the CH aggregates the collected data to reduce the cost of supplementary data transmission, which is not practical for nodes that receive different data.
The residual energy of a node is

E_r = E_current / E_max,   (2)

where E_max is the initial energy of the node and E_current is its residual energy. The distance factor from the base station to the CH is

D = d_bs / d_far,   (3)

where d_bs denotes the distance between a node and the BS, and d_far is the distance from the farthest node in a cluster to the BS. To extend the network lifetime and the scalability, the functions described in Eqs. (2) and (3) are incorporated and multiplied by the probability function.

The LEACH protocol uses the energy model of Heinzelman et al. [12]. Energy consumption at each node depends on the size of the data packet and the distance from the source node. For transmitting an l-bit data packet from a sensor node to a receiver at distance d, the total energy consumption of a sensor node is

E_Tx(l, d) = l * E_elec + l * eps_fs * d^2, if d < d_0
E_Tx(l, d) = l * E_elec + l * eps_mp * d^4, if d >= d_0

For receiving an l-bit data packet at a sensor node, the energy consumed by the receiver node is

E_Rx(l) = l * E_elec

E_elec is the energy dissipated per bit during the execution of the transmitter or receiver circuit; eps_fs and eps_mp are the amplification coefficients of the transmission amplifier for the free-space and multi-path models, respectively; and the threshold transmission distance is d_0 = sqrt(eps_fs / eps_mp).
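The first-order radio model above translates directly into code. This is a minimal sketch; the numeric constants are typical values used with the Heinzelman model in the literature and are illustrative assumptions, not values taken from this paper.

```python
import math

# first-order radio model parameters (typical illustrative values,
# not taken from this paper)
E_ELEC = 50e-9       # J/bit, electronics energy per bit
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier coefficient
EPS_MP = 0.0013e-12  # J/bit/m^4, multi-path amplifier coefficient
D0 = math.sqrt(EPS_FS / EPS_MP)  # threshold distance d_0

def tx_energy(l_bits, d):
    # transmit energy: free-space (d^2) model below d_0, multi-path (d^4) above
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def rx_energy(l_bits):
    # receive energy depends only on the packet size
    return l_bits * E_ELEC
```

With these constants d_0 is roughly 87 m, so a CH-to-BS link across a large field typically pays the steep d^4 cost, which is why CH rotation matters.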
1) FINDING THE OPTIMAL NUMBER OF CLUSTER HEADS K
For N sensors divided into C clusters, the energy consumed by a cluster head per frame is

E_CH = l * E_elec * (N/C - 1) + l * E_DA * (N/C) + l * E_elec + l * eps_mp * d_toBS^4

where E_DA is the energy consumed in data aggregation and d_toBS is the average distance from the cluster head nodes to the base station. The energy consumed by a non-cluster-head node to transmit its packet to the cluster head is

E_nonCH = l * E_elec + l * eps_fs * d_toCH^2

where d_toCH is the average distance from the non-cluster-head nodes to their cluster head, R is the radius of the network, and M^2/C is the area of each cluster. The total energy dissipated by a cluster is

E_cluster = E_CH + (N/C) * E_nonCH

and the total energy dissipated per frame is

E_total = C * E_cluster

The optimal number of cluster heads is obtained by differentiating E_total with respect to C and setting the derivative to zero.

Elshrkawey et al. [13] discuss an enhanced schedule based on Time Division Multiple Access (TDMA), together with augmented energy balancing in clusters among all sensor nodes, to reduce energy consumption and prolong the network lifetime of a WSN. A sensor node is considered a cluster head if its random number is less than a threshold value defined using factors such as the remaining energy of the sensor node, the distance of the sensor node to the base station, and the number of times the node has been selected as a cluster head.

SEP (Stable Election Protocol) [14] can be applied to heterogeneous networks where a fraction m of the nodes have an additional energy factor alpha. The probability of these advanced nodes becoming CHs is

P_adv = P * (1 + alpha) / (1 + alpha * m)

An increase in the number of advanced nodes results in an increased stability period and network lifetime; throughput is also increased due to the two levels of heterogeneity. TEEN [15] has two threshold levels: a hard threshold and a soft threshold. Nodes turn on their transmitters whenever the sensed attribute's value becomes equal to or greater than the hard threshold, and the data is conveyed to the CHs.
Thereafter, a node transmits again only if the difference between the sensed value and the previously transmitted value is greater than or equal to the soft threshold. Energy consumption and throughput are thereby reduced, so the network lifetime and stability period are improved compared with other protocols.
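TEEN's two-threshold transmission rule can be captured as a single predicate. This is an illustrative sketch of the rule described above; the function name and the `None` sentinel for "nothing sent yet" are assumptions introduced here.

```python
def teen_should_transmit(sensed, last_sent, hard_t, soft_t):
    # TEEN rule: a node transmits only once the sensed value reaches the
    # hard threshold, and thereafter only when the value has changed from
    # the last transmitted value by at least the soft threshold
    if sensed < hard_t:
        return False
    if last_sent is None:  # first crossing of the hard threshold
        return True
    return abs(sensed - last_sent) >= soft_t
```

The hard threshold suppresses uninteresting readings entirely, while the soft threshold suppresses near-duplicate readings, which is where the energy saving comes from.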
Sharma et al. [16] have used residual energy as a factor in cluster head selection. A radial basis function network model and an Artificial Neural Network (ANN) are used for the cluster head selection problem. Improved performance is observed in the number of alive nodes, total energy consumption, cluster head formation, and the number of packets transferred to the base station and cluster heads, compared with the LEACH and LEACH-C algorithms.
Han et al. [17] have discussed a Clustering Protocol based on a Meta-heuristic Approach (CPMA) that focuses on cluster head selection based on the Harmony Search Algorithm, with the aim of reducing total energy dissipation. The CPMA uses the Artificial Bee Colony algorithm to optimize crucial parameters.
Seyyedabbasi et al. [18] have developed the HEEL algorithm, in which the cluster head is selected based on node energy, the energy of the node's neighbours, the number of hops, and the number of links to neighbours; it shows improvement compared to Nr-LEACH, ModLEACH, LEACH-B, LEACH, and the PEGASIS energy-aware clustering scheme.
Aslam et al. [19] proposed a novel method integrating a multi-objective function for charging via a wireless portable charging device with sensor node training for data routing, carried out using clustering and reinforcement learning. The techniques used in our paper, such as SVM and KNN, have previously only been proposed as future research directions and have not been implemented for lifetime prediction or the selection of cluster heads.
Different performance metrics of a clustering algorithm include:

i. Total Energy Consumption (E_total): the total energy consumed in the network after k rounds of data gathering from the area of interest,

E_total(k) = sum over i = 1..N of E_i,k

where E_i,k is the total energy consumed by node i after k rounds of data gathering and N is the total number of nodes in the network.

ii. Number of alive nodes, N_alive(k): the total number of nodes whose residual energy is greater than the threshold energy after a specified number of data-gathering rounds k,

N_alive(k) = sum over i = 1..N of 1(E_i,residual > E_threshold)   (14)

iii. Network lifetime: the number of data-gathering rounds that a WSN carries out until the first node death.

A comparison of the energy consumed by different wireless protocols, namely IEEE 802.15.4/e, Bluetooth Low Energy (BLE), the IEEE 802.11 power-saving mode, IEEE 802.11ah, LoRa, and SIGFOX, has been carried out based on the power required in the sleep, idle, transmit, and receive modes and the duration of each state, using an analyzer [20]. The results showed that BLE obtained the best network lifetime at all traffic intensities. At ultra-low traffic intensities, LoRa obtained the third-best network lifetime.
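The three clustering metrics above are simple aggregations and can be sketched directly. This is an illustrative sketch; the function names and the list-of-rounds representation are assumptions introduced here.

```python
def total_energy(energy_per_node):
    # metric i: E_total = sum of energy consumed by every node after k rounds
    return sum(energy_per_node)

def alive_nodes(residual_energy, e_threshold):
    # metric ii: count nodes whose residual energy still exceeds the threshold
    return sum(1 for e in residual_energy if e > e_threshold)

def network_lifetime(residual_by_round, e_threshold=0.0):
    # metric iii: number of completed rounds before the first node death,
    # given per-round lists of residual energies
    for r, residuals in enumerate(residual_by_round):
        if any(e <= e_threshold for e in residuals):
            return r
    return len(residual_by_round)
```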
In the literature [21]-[28], energy consumption models take the transmission power, the distance between two nodes, the packet size, and the path loss as parameters to predict battery lifetime. These approaches model the behaviour of the physical layer and do not realistically reflect the operation of duty-cycled IoT devices. The topology of all networks considered in these works is a star.
The importance of Machine Learning (ML) in WSNs, owing to the dynamic nature of networks, is presented in [29]. Maddikunta et al. [30] predicted battery life using various regression models and obtained a predictive accuracy of 97%. The predictors used in that work include the beach name, water temperature, turbidity, transducer depth, water height, wave period, and measurement timestamp.
Artificial Intelligence is unlocking software solutions, such as ML approaches, in battery systems to reduce fabrication and development costs while improving performance metrics. Data-driven models with ML algorithms can be used to predict the state of charge and remaining useful life of batteries. ML techniques can be applied to dynamic wireless sensor networks to improve the adaptiveness and ability of networks to respond quickly and efficiently without compromising the quality of service.

Support Vector Machine (SVM) regression is a non-parametric method that relies on kernel functions to perform classification and regression tasks [31]. A Lagrangian function is constructed as the objective by introducing non-negative multipliers alpha_n and alpha_n* for each training point x_n with response y_n:

L(alpha) = (1/2) sum_i sum_j (alpha_i - alpha_i*)(alpha_j - alpha_j*) G(x_i, x_j) + eps * sum_n (alpha_n + alpha_n*) - sum_n y_n (alpha_n - alpha_n*)

where the Gram matrix G(x_i, x_j) depends on whether the kernel function is linear, polynomial, or Gaussian. The minimization is subject to the constraints

sum over n = 1..N of (alpha_n - alpha_n*) = 0   (16)
0 <= alpha_n, alpha_n* <= C for all n   (17)

where C is the box constraint that controls the penalty imposed on data points lying outside the margin and prevents overfitting.

The function used to predict new values is

f(x) = sum over n = 1..N of (alpha_n - alpha_n*) G(x_n, x) + b

Each Lagrange multiplier is updated at every iteration until the convergence criterion is met. Ensemble learning is an ML and statistical technique that combines different ML algorithms to improve predictive performance; here, a Least-Squares Boosting (LSBoost) method that minimizes the mean squared error is used.
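The SVR prediction function f(x) = sum_n (alpha_n - alpha_n*) G(x_n, x) + b described above can be sketched directly once the multipliers are known. This is an illustrative sketch with hand-set multipliers, not a trained model; the function names and the `gamma` kernel width are assumptions introduced here.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=0.5):
    # Gaussian (RBF) kernel entry: G(a, b) = exp(-gamma * ||a - b||^2)
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

def svr_predict(x, support_x, alpha_diff, b, gamma=0.5):
    # f(x) = sum_n (alpha_n - alpha_n*) G(x_n, x) + b,
    # where alpha_diff holds the differences (alpha_n - alpha_n*)
    return sum(ad * gaussian_kernel(sx, x, gamma)
               for sx, ad in zip(support_x, alpha_diff)) + b

# toy example: two support vectors with hand-set multiplier differences
support_x = [np.array([0.0]), np.array([1.0])]
alpha_diff = [1.0, -0.5]
y_hat = svr_predict(np.array([0.5]), support_x, alpha_diff, b=0.1)
```

Note that far from all support vectors the Gaussian kernel decays to zero, so the prediction reverts to the bias b.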
Gaussian Process Regression (GPR) is a probabilistic, non-parametric model [32]. For a training set {x_i, y_i}, the GPR model is

y_i = h(x_i)^T beta + f(x_i) + eps_i

where f is a Gaussian process with zero mean evaluated at each input x_i, h(.) is the set of basis functions that projects the inputs into feature space, beta are the basis-function coefficients, and the noise eps_i has variance sigma^2. While training a GPR model, the basis-function coefficients, the noise variance sigma^2, and the hyperparameters of the kernel function are estimated.
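For a zero-mean GP with kernel k and noise variance sigma^2, the posterior mean at test points is k_*^T (K + sigma^2 I)^(-1) y, which is only a few lines of linear algebra. This is a minimal numpy sketch of GPR prediction (squared-exponential kernel, 1-D inputs, no basis functions), not the paper's MATLAB implementation; the function names are assumptions introduced here.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    # squared-exponential kernel: k(x, x') = exp(-(x - x')^2 / (2 * length^2))
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise_var=1e-6, length=1.0):
    # posterior mean of a zero-mean GP: k_*^T (K + sigma^2 I)^{-1} y
    K = rbf_kernel(x_train, x_train, length) + noise_var * np.eye(len(x_train))
    K_star = rbf_kernel(x_train, x_test, length)
    return K_star.T @ np.linalg.solve(K, y_train)

x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(x_train)
mu = gp_posterior_mean(x_train, y_train, np.array([1.0, 1.5]))
```

With a small noise variance the posterior mean nearly interpolates the training targets, which is the behaviour Bayesian-optimized GPR exploits later in the paper.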
Selecting an appropriate ML model alone is insufficient for obtaining excellent performance; tuning the model's arguments before the learning process, known as hyperparameter tuning, is also required. Bayesian optimization is an effective hyperparameter optimization tool.
One of the major issues encountered in machine learning models is the bias-variance trade-off. Bias is the error introduced by the difference between the model's predictions and the actual data. High bias means the model has learned a function that fails to capture the relationship between the input and output data; low bias means the model has learned a function that captures this relationship well. Variance is the amount by which the model's performance varies across different data sets: low variance means the model's performance does not vary much with different data sets, while high variance means it varies considerably. A well-trained model should have both low variance and low bias; this is known as a good fit.

Overfitting: during the training phase, the model can learn the training data in such detail that it creates a complex function that maps almost the entire input data to the output data correctly, with very little or no error. The model shows low error (bias) during the training phase but fails to show similar accuracy on test or unseen data (high variance). Underfitting: during the training phase, the model may fail to learn the complex relationships in the training data and produce an overly simple function, so simple that it yields a large prediction error (high bias).
The RMSE on the training data should be roughly the same as the RMSE on the test data. Techniques for reducing overfitting include increasing the amount of training data, reducing model complexity, early stopping during the training phase, L1 and L2 regularization, and dropout for neural networks. Techniques for reducing underfitting include training for longer, increasing model complexity, increasing the number of features, removing noise from the data, and increasing the number of training epochs.
Regularization is a technique that makes slight modifications to the learning algorithm so that the model generalizes better. In L1 regularization, a penalty term containing the absolute values of the weights is added to reduce the complexity of the model:

Cost = sum_i (y_i - yhat_i)^2 + lambda * sum_j |w_j|

In L2 regularization, a penalty term containing lambda times the squared weight of each feature is added to reduce the complexity of the model, giving the ridge regression objective:

Cost = sum_i (y_i - yhat_i)^2 + lambda * sum_j w_j^2

Due to the addition of this regularization term, the values of the weight matrices decrease, on the assumption that a neural network with smaller weight matrices yields a simpler model; this also reduces overfitting to a considerable extent. The design of energy-balanced and energy-efficient routing protocols is required to increase the lifetime of wireless sensor nodes. Hierarchical clustering protocols extend the network lifetime by dividing nodes into multiple clusters. Some clustering algorithms from the literature are listed in Table 2.
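The shrinkage effect of the L2 penalty is easy to demonstrate with the closed-form ridge solution w = (X^T X + lambda I)^(-1) X^T y. This is an illustrative numpy sketch on synthetic data, not part of the paper's pipeline; the data-generating weights and seed are assumptions introduced here.

```python
import numpy as np

def ridge_weights(X, y, lam):
    # closed-form L2 (ridge) solution: w = (X^T X + lam * I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

w_small = ridge_weights(X, y, 0.01)   # weak penalty: close to least squares
w_big = ridge_weights(X, y, 100.0)    # strong penalty: shrunken weights
```

Increasing lambda monotonically shrinks the weight norm, trading a little bias for lower variance, which is exactly the overfitting remedy described above.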
B. CONTRIBUTION AND PAPER ORGANIZATION
In this paper, ML methods are used to i) predict the CHs and an optimum number of nodes in a network ii) forecast the energy consumed of IoT nodes by considering the dynamic nature of the networks. The highlights of the paper include • Comparison of the machine learning-based cluster head selection model with ANFIS based models.
• Analysis of the effect of various dynamic parameters on network lifetime prediction.
• Machine Learning based cluster head priority is combined with modified threshold sensitive Stable Election Protocol (TSEP) for cluster head selection.
• A comparison of various protocols like TEEN, SEP, LEACH and Machine Learning based TSEP (ML-TSEP) is carried out in terms of the average energy of each node and the number of dead nodes.
• This work contributes a novel approach to combining clustering with the optimal routing protocol. The paper is organized as follows: Section II describes the data-driven and model-predictive approach for combining the clustering and routing protocols in wireless sensor networks. The results for lifetime prediction and cluster head selection using ML are presented in Section III. A comparison of the different ML techniques and their performance metrics is also carried out in this section. The concluding remarks are outlined in Section IV.
II. DATASET FOR THE MODEL PREDICTIVE WIRELESS SENSOR NETWORK
The dataset for lifetime prediction is developed using the SmartMesh IP tool [17], as shown in Fig. 1. A sensitivity study of various network parameters and their dependency on the total current consumption of the network is also carried out using the generated data (Figs. 2-4).
III. MODEL PREDICTIVE APPROACH FOR OPTIMAL ROUTING PATH AND LIFETIME PREDICTION
A WSN consists of a network manager and several motes. Proper configuration of the network interfaces can address a wide range of sensor applications, trading off speed against power consumption. Each mote represents a location where the sensor can send and receive data. The network manager builds and maintains the network and makes the sensor data available to data collection applications. Some motes can communicate directly with the manager, while others must route their data through other motes. Turning off network advertising and reducing downstream communication can reduce the network's power consumption, thereby doubling the battery life of the nodes. Configuring the nodes as a mesh network and configuring all battery-powered nodes to be non-routing can also result in a battery life greater than ten years. Non-routing nodes behave as leaf nodes that do not advertise and never route data. Setting the backbone mode on at the manager reduces the data latency of the network. Fig. 5 shows a WSN obtained from the SmartMesh IP calculator. Here we consider a WSN consisting of 200 sensor nodes installed on one floor of a building. The network is divided into four occupancy zones, each with its own Passive Infrared (PIR) occupancy sensors, two LED luminaires, and motorized window blinds [39], [40].
The selection of CHs with appropriate clustering protocols is another crucial aspect of enhancing the network lifetime of IoT nodes. Optimal CHs are selected to obtain efficient routing in a multi-hop communication network. Fig. 6 shows the block diagram for the optimal routing path of the network. In the work presented in [41], a fuzzy-based LEACH protocol was developed to obtain a priority value for the CH based on the initial energy, the distance from the base station, and the data transmission rate. Using the fuzzy-based LEACH, the input-output training dataset for the ANFIS-based LEACH is developed. The same dataset is used for training the machine learning models. The predictors of the machine learning model are the remaining energy of the nodes, the data transmission rate, and the distance from the base station. Various machine learning models, including Gaussian Process Regression, Support Vector Machine, ensemble, and decision tree, are deployed using the dataset. The detailed pseudocode for cluster head priority using Gaussian Process Regression (GPR) with Bayesian optimization is illustrated in Table 3. Once the optimal cluster heads are selected, those sensors transfer data to the cloud.
The power and performance predictor considers the network topology, data report rates, packet size, supply voltage, and packet success rate as inputs and predicts the average current consumption, data latency, and network build time. Fig. 7 shows the block diagram for the network lifetime prediction model. The model used for predicting the current consumption, data latency, and build time of the WSN uses ten predictors: hop depth, advertising, number of motes, backbone, routing, reporting interval, payload size, downstream frame size, supply voltage, and path stability. Five-fold cross-validation is performed on the model to mitigate overfitting and to obtain a reasonable accuracy estimate on each fold. In k-fold cross-validation, the data is partitioned into k disjoint sets; the model is trained on k - 1 of the sets and tested on the remaining one. The process is repeated for k iterations, and the accuracy score is computed. The developed model is used to evaluate the dependency of power and performance on the various parameters.
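The k-fold procedure described above can be sketched in a few lines. This is an illustrative numpy sketch using an ordinary least-squares model as a stand-in for the paper's regression models; the synthetic data and function names are assumptions introduced here.

```python
import numpy as np

def k_fold_rmse(X, y, k=5):
    # manual k-fold cross-validation: train a least-squares linear model on
    # k-1 folds, evaluate RMSE on the held-out fold, and average over folds
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        resid = X[test] @ w - y[test]
        scores.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 0.5, -0.3, 2.0])  # noiseless linear target
rmse = k_fold_rmse(X, y, k=5)
```

Because every point is held out exactly once, the averaged score estimates generalization error from the full data set rather than a single train/test split.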
A network consisting of 200 nodes is placed randomly in a region of 100 x 100 sq. m, and the base station is placed in the center. The parameters used in the MATLAB simulation are shown in Table 4. In the proposed Machine Learning-based Threshold Sensitive Stable Election Protocol (ML-TSEP), a node's probability of becoming a CH is decided by the machine learning model. In TSEP, two levels of heterogeneity are considered, and the transmission of data from a sensor node to a CH takes place based on a threshold defined in terms of the following quantities:

T(n): the threshold defined in the LEACH algorithm
E_re: residual energy of the sensor node
E_in: initial energy of the sensor node
E_avg: average energy of the sensor nodes in the current round
d_toBSav: average distance of the sensor nodes to the base station
d_toBSn: distance of the sensor node to the base station
CH_s: the number of times the node has been selected as a cluster head
Nb_n: the number of neighbours of node n
G: the set of sensor nodes that have not been cluster heads

The steps involved in the proposed method are summarized as follows.

Data gathering: for lifetime prediction, the data is collected from the SmartMesh IP tool, and for cluster head priority, the data is collected from the fuzzy-based model.
Data preprocessing to remove outliers and deleting duplicates The features most affecting the lifetime are identified for the lifetime prediction model.
Build machine learning models using a Decision tree, Support Vector Machine, Ensemble, and Gaussian Process Regression Analyze the performance metrics of the models and identify the best model Hypertuning of the parameters using Bayesian optimizer Validation of the lifetime prediction model using test data obtained from SmartMesh IP tool.
Comparison of the results (Mean Squared Values) of Machine Learning based and ANFIS based cluster head priority.
Machine Learning based cluster head priority is combined with modified Threshold Sensitive Stable Election Protocol (ML-TSEP) for cluster head selection. The threshold value of the modified TSEP is given by (23) A comparison of various protocols like TEEN, SEP, LEACH Machine Learning based Threshold Sensitive Stable Election Protocol (ML-TSEP) is carried out in terms of the average energy of each node and the number of dead nodes.
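The cluster-head election step can be sketched as follows. The exact form of the modified threshold, Eq. (23), is not reproduced in the text, so scaling the LEACH-style threshold by the ML-predicted cluster-head priority, as done below, is only a hypothetical illustration:

```python
import random

def leach_threshold(p, r):
    """Classic LEACH threshold T(n) for nodes in G (not yet cluster heads)."""
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def ml_tsep_threshold(p, r, ch_priority):
    # Hypothetical modified threshold: the LEACH-style value scaled by the
    # ML-predicted cluster-head priority in [0, 1] (a stand-in for Eq. (23)).
    return leach_threshold(p, r) * ch_priority

def elect_cluster_heads(nodes, p, r, rng):
    """A node in G becomes CH when a uniform draw falls below its threshold."""
    return [n["id"] for n in nodes
            if n["in_G"] and rng.random() < ml_tsep_threshold(p, r, n["priority"])]
```

With p = 0.1, the unscaled threshold starts at 0.1 in round 0 and rises as the epoch progresses, so every eligible node is eventually elected within 1/p rounds; the ML priority then biases the election toward nodes the model rates as better cluster-head candidates.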
A. LIFETIME PREDICTION MODEL USING ML
The different steps involved in developing an ML model include data collection, data preprocessing, model development, training, hyperparameter optimization, testing and validation, as depicted in Fig. 8.
The different performance metrics used for evaluating the regression model include root mean squared error, R-squared, mean absolute error, prediction speed and training time.
Mean Absolute Error (MAE) is the average of the absolute differences between the predicted and actual values, given by (24): MAE = (1/n) Σ |Y_i − Ŷ_i|, where Y_i are the actual output values and Ŷ_i the predicted output values. The mean squared error (MSE) is the average of the squared differences, MSE = (1/n) Σ (Y_i − Ŷ_i)².
R-squared expresses to what extent the variance of one variable explains the variance of the second variable; the higher the R-squared value, the better the model. As there is more than one independent variable, linear regression is not used for predictive analysis. Table 5 shows the RMSE and performance metrics for the lifetime prediction model, and Fig. 9 shows the predicted and actual responses for the different algorithms.
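The metrics above can be computed directly from the prediction errors; a small Python sketch (the function name and output layout are our own):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R-squared for a regression model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return {
        "MAE": float(np.mean(np.abs(err))),            # average absolute deviation
        "MSE": mse,                                    # average squared deviation
        "RMSE": float(np.sqrt(mse)),                   # same units as the response
        "R2": 1.0 - float(np.sum(err ** 2)) / ss_tot,  # fraction of variance explained
    }
```

Note that R² compares the model's squared error against a constant-mean baseline, which is why a value close to one indicates a good fit regardless of the units of the response.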
The models are validated against actual and predicted current consumption, as shown in Fig. 10. The actual measurement of current consumption is obtained from the SmartMesh IP power and performance calculator, against which the lifetime prediction model is validated. Table 6 shows the interaction between the features and the response variable, i.e., the dependency of current consumption on the various parameters, which helps reduce the dimensionality of the data and thereby the complexity of the model. It is seen that the number of motes, hop depth, and backbone most affect the current consumption of the wireless sensor network. Again, using 70% of the data for training, 15% for validation, and 15% for testing with the neural network training tool of MATLAB with Bayesian regularization, the mean square error and R-squared values shown in Fig. 11 are obtained. The best training performance is observed at the 102nd epoch, as shown in Fig. 7. Fig. 8 shows the predicted and actual responses at different iterations when trained using the neural network training tool. The Bayesian regularization technique minimizes squared errors and weights and optimizes the learning parameters, as shown in Fig. 9.
B. RESULTS: CH SELECTION USING ML
The RMSE values obtained from the ANFIS model and the various ML regression models are shown in Table 7. Fig. 15 shows the predicted vs. true response of the clustering model obtained using optimizable GPR. The results indicate that the R-squared value for this algorithm is close to one. Fig. 16 shows the minimum mean squared error (MSE) obtained using the GPR algorithm with Bayesian optimization.
For a Tadiran TL4903AA with a capacity of 2160 mAh, the variation in battery life with current consumption is shown in Fig. 17. A comparison of various protocols like TEEN, SEP, LEACH, and Machine Learning based Threshold Sensitive Stable Election Protocol (ML-TSEP) protocol is carried out in terms of the average energy of each node and number of dead nodes as shown in Fig. 18 and Fig. 19.
V. CONCLUSION
This research work combines intelligent clustering and routing protocols to improve the energy consumption and lifetime of wireless sensor nodes. In this work, the energy consumption, data latency, and build time of sensor nodes are predicted based on various parameters that affect the dynamic behaviour of WSNs, and the factors that most affect the response of the predictive model are identified. Predicting the lifetime of sensor nodes avoids the problem of constant battery replacement, particularly for sensor nodes deployed in remote areas. The factors that most affect network current consumption are hop depth, number of motes, and backbone. The results for lifetime prediction are validated against the results obtained from the SmartMesh IP tool. The GPR model for current consumption prediction shows significant improvement in RMSE, R-squared value, and MAE. Apart from this, the priority of CHs is predicted using ML techniques. The priority of a node to become cluster head acts as an input to the modified Threshold Sensitive Stable Election Protocol (ML-TSEP), which selects the cluster head and transmits the data from the sensor nodes to the CHs. The cluster head prediction based on GPR shows significant improvement in RMSE compared to the ANFIS model.

Since 1987, she has been with the Electrical and Electronics Engineering Department, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, India. Her research interests include lighting controls-technology and applications, soft computing, and control systems.
Dr. Kurian is a fellow of the Institution of Engineers, India, and a Life Member of professional bodies, such as the Indian Society of Lighting Engineers, the Indian Society for Technical Education, and the Systems Society of India.
Agalsidase alfa and agalsidase beta in the treatment of Fabry disease: does the dose really matter?
Fabry disease (FD) is a multiorgan, X-linked lysosomal storage disorder that particularly affects the heart, the kidneys, and the cerebrovascular system. 1 The treatment options for patients with FD include long-term enzyme replacement therapy (ERT) in addition to supportive management. Two recombinant enzyme formulations for the ERT of FD are available on the European market: agalsidase alfa (Replagal; Shire Human Genetic Therapies AB, Danderyd, Sweden) and agalsidase beta (Fabrazyme; Genzyme Corporation, Cambridge, MA). 1 Numerous clinical trials, observational studies, and registry data have provided evidence about the safety and efficacy of ERT 2 ; to date, however, there have been limited comparisons between the two agents, and no firm conclusion regarding their specific efficacy and safety can be made.
A viral contamination in the manufacturing process of Fabrazyme in June 2009 led to a global shortage of agalsidase beta. Recommendations to reduce the dosage of the drug were consequently published by the European Medicines Agency for patients receiving agalsidase beta; this obviously caused fear and concern among both patients and physicians. On the basis of an increased rate of serious adverse effects in patients administered reduced doses, 3 a subsequent European Medicines Agency report suggested restarting treatment with full-dose agalsidase beta or shifting patients to recommended doses of agalsidase alfa. Therefore, after a period of reduced dosage of agalsidase beta, many patients were switched to agalsidase alfa. This offered the unique opportunity to compare the two drugs, albeit indirectly, evaluating any clinical modifications or adverse events that occurred after the switch. In 2011, in a Dutch cohort of 35 patients with FD who continued agalsidase beta at reduced doses or were switched to agalsidase alfa after about 5 months of low-dose agalsidase beta, Smid et al. 4 showed that renal function, left-ventricular mass, symptoms of pain, and incidence of clinical events were not significantly altered during the shortage; quality of life was minimally but significantly affected in females in two subscales of the 36-Item Short Form Health Survey; more important, an increase in lyso-Gb3, a marker of disease involvement, was observed in males after 1 year of therapy at either low doses of agalsidase beta or a full dose of agalsidase alfa.
One year later, Tsuboi and Yamamoto 5 presented the results of an observational study involving 11 patients who switched from agalsidase beta (1 mg/kg every other week) to agalsidase alfa (0.2 mg/kg every other week): renal function, cardiac mass, and quality of life remained stable throughout the 12-month follow-up. Similarly, our group 6 evaluated the effect of such a switch in 10 patients with FD (7 males, 3 females) who were previously treated with agalsidase beta for at least 48 months. The results showed that renal function, cardiac mass assessed by magnetic resonance imaging, symptoms of pain, and health status scores remained stable throughout the 24-month followup period. More recently, Weidemann et al. 7 reported their experience during the agalsidase beta shortage that resulted in a change of treatment regimen in many patients. They assessed end-organ damage and clinical symptoms among 105 patients with FD who were previously treated with agalsidase beta (1.0 mg/kg every other week for ≥1 year) and who were arbitrarily assigned, on the basis of their symptoms, to continue their treatment regimen, to receive a reduced dose of agalsidase beta (0.3-0.5 mg/kg), or to be switched to the full dose of agalsidase alfa (0.2 mg/kg). No clinical event occurred after dose reduction or compound switch, as already observed by Tsuboi and Yamamoto 5 and us. However, Weidemann et al. 7 reported a significant deterioration of Fabry-related symptoms in both groups, a significant decline in glomerular filtration rate estimated using cystatin in the dose-reduction group and a significant increase in urinary albumin-to-creatinine ratio only in patients switched to agalsidase alfa. This result was stressed by Warnock and Mauer 8 in a recent editorial emphasizing that the dose of the drug "matters" in FD treatment and suggesting that the full dosage of agalsidase alfa could be too low to guarantee results as effective as those of agalsidase beta. 
However, there are some considerations to be taken into account when assessing the studies dealing with the shift to agalsidase alfa: because of their observational nature, the unavoidable selection of patients, the short follow-up, and the low number of events observed after ERT introduction, these studies have intrinsic limits that do not allow final conclusions about the efficacy and safety of agalsidase alfa to be made.
Indeed, no renal biopsy that could have demonstrated greater podocyte injury and/or new deposition of Gb3 in tubular cells was performed during the follow-up in shifted patients, 1 so such injury could not be assessed. Moreover, the increased levels of lyso-Gb3 described by Smid et al. 4 1 year after the shift were observed in patients previously treated with a low dose of agalsidase beta for 6 months; if dose matters, such a finding could be ascribed to the reduced dose of agalsidase beta and not to agalsidase alfa.
The observed twofold increase in the albumin-to-creatinine ratio, described by Weidemann et al. 7 after a 12-month treatment with agalsidase alfa, in the presence of a relatively stable cystatin-C-based glomerular filtration rate, could suggest a "true" worsening of renal function. However, the same patients had already shown a 2.7-fold increase in this parameter during the preceding year (i.e., while receiving the full dose of agalsidase beta). The characteristics and the clinical treatment of these patients may explain the progressive and continuous increase in albuminuria; in fact, it is interesting to note that these patients were less protected by renin-angiotensin system blockers, which were administered to only 24% of shifted patients compared with 58% of patients in the dose reduction group and 34% of patients receiving the full dose of agalsidase beta. Although such a difference was not significant, it is widely accepted that proteinuria does not respond solely to ERT, 1 and renin-angiotensin system blockers represent a critical stabilizing factor of proteinuria. 1 Therefore, a specific role for agalsidase alfa in worsening proteinuria should be reconsidered.
Finally, the significant increase in adverse events, such as gastrointestinal symptoms, pain attacks, or chronic pain, during agalsidase alfa treatment is difficult to interpret and quantify adequately. It is not possible to exclude that the anxiety caused by the drug shortage and by European Medicines Agency warnings led to the increased reporting of adverse events by patients and greater attention given to their diagnosis by physicians. This has probably overestimated the real incidence of these "subjective" symptoms. It is much more important to stress that, under agalsidase alfa treatment, "objective" targets, such as cardiac measures or neurologic involvement, were not affected, and the number of events remained stable despite the short observation period.
The recent data by Tsuboi and Yamamoto 9 support the safety of switching from agalsidase beta to agalsidase alfa at the approved doses, without loss of efficacy on organ involvement over a long-term period. They reported data from 11 patients switched from agalsidase beta to agalsidase alfa during a prolonged follow-up; in fact, clinical data were collected for 5 years-2 years before and 3 years after the switch. Their results showed that renal function remained stable during the last 3 years and that the improvements in cardiac mass, recorded 12 months after switching to agalsidase alfa, were maintained throughout the follow-up. Moreover, there was no significant difference in pain severity and quality-of-life parameters evaluated before and after switching.
Our recent data from 10 patients with FD who were previously switched to agalsidase alfa further support these results.
In fact, with the increased availability of agalsidase beta in the last quarter of 2012, five patients (three males) returned to full-dose agalsidase beta (1.0 mg/kg every other week) after a 30-month average treatment with agalsidase alfa, whereas the remaining five patients (four males) continued their ongoing therapy with agalsidase alfa (0.2 mg/kg). To date, the follow-up of these 10 patients averages 40 months after the first switch to agalsidase alfa. As in our previous study, 7 we evaluated renal function, selected cardiac parameters, pain symptoms, and patient health status at baseline (i.e., 20 months after the switch) and then either after 20 further months of continuous agalsidase alfa or 20 months after the switch back to agalsidase beta. There was no difference in age between the two groups (total mean, 43.5 ± 5.5 years) or in the estimated glomerular filtration rate (total mean, 91.1 ± 14.9 mL/minute), and all patients had a described mutation expressing the classic FD phenotype with severe multiorgan involvement, which makes unlikely the hypothesis that they had a stable or slowly progressing disease. Our data demonstrate that no clinical event occurred during the follow-up period in either group using the approved drug doses. Throughout the follow-up period, renal function remained stable in both groups, and no change was observed in the median urinary protein-to-creatinine ratio or in cardiac function assessed by left-ventricular ejection fraction and by changes in left-ventricular mass on cardiac magnetic resonance imaging, as compared with values before the shift. Finally, symptoms of pain and health status scores did not worsen during the follow-up. Agalsidase alfa was well tolerated throughout the observation period, and no clinical problem occurred after the reintroduction of agalsidase beta in patients who switched back.
Despite the exiguous number of patients involved in this observation, and considering that 80% of the patients treated with this drug were males, who are more prone to disease progression, these data offer further information about the safety and efficacy of agalsidase alfa. A recent report showed two cases of significant clinical improvement of severe adverse events occurring on an approved/reduced dose of agalsidase beta after the switch to agalsidase alfa. 10 Obviously, we need further information from all the centers involved in the switch policy.

DISCLOSURE

A.P. is a consultant for Genzyme Corporation and Shire Pharmaceuticals and has received investigator-initiated research support from Genzyme and Shire. These interests have been reviewed and managed by the University Federico II of Naples in adherence with its conflict of interest policies. The other authors declare no conflict of interest.
The correspondence between thermodynamic curvature and isoperimetric theorem from ultraspinning black hole
In this paper, a preliminary correspondence between the thermodynamic curvature and the isoperimetric theorem is established from a $4$-dimensional ultraspinning black hole. We find that the thermodynamic curvature of the ultraspinning black hole is negative, which means the ultraspinning black hole is likely to present an attractive interaction between its molecules phenomenologically, if we accept the analogical observation that the thermodynamic curvature reflects the interaction between molecules in a black hole system. Meanwhile, we obtain a general conclusion that the thermodynamic curvature of the extreme configuration of a super-entropic black hole has a (positive or negative) remnant approximately proportional to the reciprocal of the entropy of the black hole.
I. INTRODUCTION
A very interesting and challenging problem in black hole thermodynamics is the volume of a black hole. Although there are various versions of the black hole volume discussion [1-10], there is no unified description yet. In the problem of understanding the volume of black holes, especially AdS black holes, the application of the isoperimetric theorem deepens our mathematical understanding of black hole thermodynamics insofar as it places a constraint on the thermodynamic volume and entropy of an AdS (or dS) black hole [11,12]. The isoperimetric theorem is an ancient mathematical problem, which states that among all simple closed plane curves of a given length, the circle encloses the largest area. With the proposal of black hole area entropy (in the natural unit system, $S = A/4$, where $S$ is the entropy of the black hole and $A$ is the area of the event horizon) [13,14] and the introduction of the extended phase space [15], Cvetič, Gibbons, Kubizňák, and Pope creatively applied the theorem to AdS black hole systems and conjectured that, in general, for any $d$-dimensional asymptotically AdS black hole, its thermodynamic volume $V$ and entropy $S$ satisfy the reverse isoperimetric inequality [11]

$$\mathcal{R} = \left[\frac{(d-1)V}{\omega_{d-2}}\right]^{\frac{1}{d-1}} \left[\frac{\omega_{d-2}}{A}\right]^{\frac{1}{d-2}} \geq 1, \tag{1}$$

where $\omega_n = 2\pi^{(n+1)/2}/\Gamma[(n+1)/2]$ is the standard volume of the round unit sphere, and the equality is attained for the (charged) Schwarzschild-AdS black hole. Physically, the above isoperimetric ratio indicates that the entropy of black holes is maximized by the (charged) Schwarzschild-AdS black hole at a given thermodynamic volume. Up to now, the ratio has been verified for a variety of black holes with horizons of spherical topology and for black rings with horizons of toroidal topology [16]. A black hole that violates the reverse isoperimetric inequality, i.e., with $\mathcal{R} < 1$, is called a super-entropic black hole [17].
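As a quick consistency check of the ratio (1), the 4-dimensional Schwarzschild-AdS black hole, with the standard horizon area $A = 4\pi r_h^2$ and thermodynamic volume $V = \tfrac{4}{3}\pi r_h^3$, saturates the bound:

```latex
% Saturation of the isoperimetric ratio for 4d Schwarzschild-AdS (d = 4):
% here \omega_2 = 2\pi^{3/2}/\Gamma(3/2) = 4\pi.
\begin{align}
\mathcal{R}
  &= \left[\frac{(d-1)V}{\omega_{d-2}}\right]^{\frac{1}{d-1}}
     \left[\frac{\omega_{d-2}}{A}\right]^{\frac{1}{d-2}}
   = \left[\frac{3\cdot\tfrac{4}{3}\pi r_h^{3}}{4\pi}\right]^{\frac{1}{3}}
     \left[\frac{4\pi}{4\pi r_h^{2}}\right]^{\frac{1}{2}}
   = r_h\cdot\frac{1}{r_h} = 1 .
\end{align}
```

Rotation and charge deform the horizon geometry away from this round-sphere configuration, which is how a ratio below one can arise.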
One is (2 + 1)-dimensional charged Banados-Teitelboim-Zanelli (BTZ) black hole which is the simplest [18][19][20][21][22]. Another important super-entropic black hole is a kind of ultraspinning black hole [23][24][25].
Now turn to another important concept, the thermodynamic curvature. It is currently the most important physical quantity for studying the micro-mechanism of black holes phenomenologically from the axioms of thermodynamics. Its theoretical basis is thermodynamic geometry, which mainly uses the Hessian matrix structure to represent the thermodynamic fluctuation theory [26]. Hitherto, without an underlying theory of quantum gravity, the exploration of the microscopic structure of black holes is bound to some speculative assumptions. Owing to the well-established black hole thermodynamics, as an analogy analysis and a primary description, thermodynamic geometry should be regarded as a probe kit to phenomenologically or qualitatively extract certain information about the interactions of black holes. In this scene, one can regard the empirical observation from ordinary thermodynamics that negative (positive) thermodynamic curvature is associated with attractive (repulsive) microscopic interactions as also applicable to black hole systems [27]. Based on this empirical analogy, the primary microscopic information of the BTZ black hole, the (charged) Schwarzschild(-AdS) black hole, the Gauss-Bonnet(-AdS) black hole, higher-dimensional black holes, and other black holes has been explored.
In this paper, we shall calculate the thermodynamic curvature of 4-dimensional ultraspinning black hole and explore the correspondence between thermodynamic curvature and isoperimetric theorem of super-entropic black hole. First, the thermodynamic curvature of ultraspinning black hole has never been analyzed, so we want to fill this gap. Second, the isoperimetric ratio (1) has been simply an observation made in the literature, but no physical reason has been given for the bound.
Hence, we want to try to understand this isoperimetric ratio from the point of view of thermodynamic geometry. Third, in our previous work [22] on the thermodynamic curvature of the (2+1)-dimensional charged BTZ black hole, we gave a preliminary conjecture that when the isoperimetric ratio is saturated (R = 1), the thermodynamic curvature of an extreme black hole tends to infinity, while for super-entropic black holes (R < 1), the thermodynamic curvature of the extreme black hole goes to a finite value. In the present paper, through the analysis of the thermodynamic curvature of the only other known super-entropic black hole, we want to verify and refine the previous conjecture and establish a new correspondence, that is, the correspondence between the thermodynamic curvature and the isoperimetric theorem of AdS black holes.
II. THERMODYNAMIC PROPERTIES OF ULTRASPINNING BLACK HOLE
We start to demonstrate this procedure with the 4-dimensional Kerr-AdS black hole and write its metric in the standard Boyer-Lindquist form [8,23], where m is related to the black hole mass, l is the AdS radius, which is connected with the negative cosmological constant Λ via Λ = −3/l², and a is the rotation parameter.
To avoid a singular metric in the limit a → l, Refs. [23,24] define a new azimuthal coordinate ψ = φ/Ξ and identify it with period 2π/Ξ to prevent a conical singularity. After these coordinate transformations, and then taking the limit a → l, one can get the metric of the ultraspinning black hole [23,24], with the horizon r_h defined by ∆(r_h) = 0. In addition, because the new azimuthal coordinate ψ is noncompact, Refs. [23,24] choose to compactify it by requiring that ψ ∼ ψ + µ, with a dimensionless parameter µ. For this black hole, in order for the horizon to exist, the mass of the black hole must have a minimum, that is, there is an extreme black hole. Correspondingly, the first law of ultraspinning black hole thermodynamics is given in [23,24], where the basic thermodynamic properties, i.e., the enthalpy M, temperature T, entropy S, thermodynamic pressure P, thermodynamic volume V, angular momentum J, and angular velocity Ω, of the ultraspinning black hole are expressed in terms of the horizon radius r_h [23,24]. Meanwhile, the authors of Refs. [23,24] find that the above ultraspinning black hole is super-entropic, i.e., the relation between the entropy S and the thermodynamic volume V in Eq. (8) violates the reverse isoperimetric inequality (1).
We notice that the above first law (7) is mathematically problematic, as is the associated Maxwell relation $(\partial T/\partial P)_{S,J} = (\partial V/\partial S)_{P,J}$. Because the angular momentum J = Ml (also known in Ref. [23] as the chirality condition), the enthalpy M of the black hole is just a function of the entropy S and the pressure P. Hence we need to find a more suitable expression for the first law and the derived expressions for the temperature and volume. By inserting the chirality condition into Eq. (7), we can get the correct form of the first law of the ultraspinning black hole. Of course, we can then naturally verify the Maxwell relation $(\partial T/\partial P)_S = (\partial \tilde{V}/\partial S)_P$. Meanwhile, we can write the corresponding Smarr relation, which can also be derived from a scaling (dimensional) argument [60]. Next, let us check whether the ultraspinning black hole is still super-entropic in our new thermodynamic framework. Keeping in mind that the space is compactified via ψ ∼ ψ + µ, we have $\omega_2 = 2\mu$ [23]. For convenience, we set a dimensionless parameter x = l²/r_h². Consequently, the isoperimetric ratio can be written in terms of x. Now let us analyze the situation of the extreme black hole in our new thermodynamic framework.
• For the black hole thermodynamic system, the temperature and thermodynamic volume of the system should be non-negative (we mainly focus on these two physical quantities; the others are positive). The case of negative temperature and negative thermodynamic volume is beyond the scope of this paper, so we exclude it; in particular, a negative thermodynamic volume is not well defined in thermodynamics.
• For the ultraspinning black hole, the original extreme black hole corresponds to Eq. (6). There is a lower bound for the mass of the black hole. In short, the original black hole satisfies the condition 0 ≤ x ≤ 3. Under this condition, the temperature and thermodynamic volume are not negative, and the extreme black hole is at x = 3. But unfortunately, as mentioned earlier, the first law of thermodynamics Eq. (7) for the black hole is mathematically problematic.
• In our new thermodynamic framework, see Eqs. (9), (10), and (11), we guarantee the correct form of the first law of thermodynamics by introducing new expressions for the black hole temperature and thermodynamic volume. In order to ensure the non-negativity of these two thermodynamic quantities, we must require 0 ≤ x ≤ 2. Under this new condition, the first law of thermodynamics of the ultraspinning black hole is mathematically reasonable, but the cost is a change of the original extreme configuration of the black hole. Specifically, the new extreme black hole is at x = 2, corresponding to the new lower bound on the mass; this is different from the original extreme black hole structure, Eq. (6).
At 0 < x ≤ 2, we can easily prove that R ≤ 1, which implies that the ultraspinning black hole is still super-entropic in our new thermodynamic framework. When the value of x exceeds 2, the thermodynamic volume of black hole becomes negative, and the isoperimetric ratio is no longer applicable, so it is impossible to determine whether the ultraspinning black hole is super-entropic or not.
III. THERMODYNAMIC CURVATURE OF ULTRASPINNING BLACK HOLE
Now we start to calculate the thermodynamic curvature of the ultraspinning black hole, so as to verify the corresponding relationship proposed by Ref. [22] between the thermodynamic curvature and the isoperimetric theorem, and extract the possible microscopic information of the ultraspinning black hole completely from a thermodynamic point of view.
Considering an isolated thermodynamic system with entropy S in equilibrium, Ruppeiner [26-28] divided it into a small subsystem $S_B$ and a large subsystem $S_E$, with the requirement $S_B \ll S_E \sim S$. We know that in the equilibrium state, the isolated thermodynamic system has a local maximum of entropy $S_0$ at $x^\mu_0$. Hence, in the vicinity of the local maximum, we can expand the entropy S of the system in a series about the equilibrium state, where $x^\mu$ stand for some independent thermodynamic variables. Due to the conservation of the entropy of the equilibrium isolated system and the condition $S_B \ll S_E \sim S$, the expansion approximately becomes

$$\Delta S = S_0 - S \approx -\frac{1}{2}\,\frac{\partial^2 S_B}{\partial x^\mu \partial x^\nu}\,\Delta x^\mu \Delta x^\nu \equiv \frac{1}{2}\, g^S_{\mu\nu}\,\Delta x^\mu \Delta x^\nu,$$

where the so-called Ruppeiner metric is (here we omit the subscript B)

$$g^S_{\mu\nu} = -\frac{\partial^2 S}{\partial x^\mu \partial x^\nu}.$$

Now we focus on the system of the ultraspinning black hole and its surrounding infinite environment.
The black hole itself can be regarded as the small subsystem mentioned above. In light of the correct form of the first law of thermodynamics, Eq. (9), we can get the general form of the Ruppeiner metric for the ultraspinning black hole. In principle, according to the first law Eq. (9), the metric can be rewritten in the enthalpy representation as

$$g_{\mu\nu} = \frac{1}{T}\,\frac{\partial^2 M}{\partial X^\mu \partial X^\nu},$$

where $(X^1, X^2) = (S, P)$; in obtaining this form, we have used the first law of thermodynamics, Eq. (9). The above thermodynamic metric $g_{\mu\nu}$ is equivalent to the metric $g^S_{\mu\nu}$ in Eq. (17), but they have different representations: the metric $g^S_{\mu\nu}$ is in the entropy representation, while the metric $g_{\mu\nu}$ is in the enthalpy representation. Next, according to the specific form of the metric $g_{\mu\nu}$, we calculate the thermodynamic curvature, which is the "thermodynamic analog" of the geometric curvature in general relativity. By using the Christoffel symbols

$$\Gamma^{\alpha}_{\beta\gamma} = \frac{1}{2}\, g^{\mu\alpha}\left(\partial_\gamma g_{\mu\beta} + \partial_\beta g_{\mu\gamma} - \partial_\mu g_{\beta\gamma}\right)$$

and the Riemann curvature tensors

$$R^{\alpha}{}_{\beta\gamma\delta} = \partial_\delta \Gamma^{\alpha}_{\beta\gamma} - \partial_\gamma \Gamma^{\alpha}_{\beta\delta} + \Gamma^{\mu}_{\beta\gamma}\Gamma^{\alpha}_{\mu\delta} - \Gamma^{\mu}_{\beta\delta}\Gamma^{\alpha}_{\mu\gamma},$$

we can obtain the thermodynamic curvature $R_{SP} = g^{\mu\nu} R^{\xi}{}_{\mu\xi\nu}$. With the help of Eqs. (10) and (11) and the expressions for the entropy S and thermodynamic pressure P in Eq. (8), the thermodynamic curvature can be directly read off (Eq. (20)). In view of the thermodynamic curvature obtained above, some explanations are in order.
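As a sanity check of this computational chain (Christoffel symbols → Riemann tensor → curvature scalar), the following sketch applies the same formulas, in the standard sign convention (which may differ from the paper's by an overall sign), to the unit two-sphere metric, for which the scalar curvature is known to be 2; a thermodynamic metric $g_{\mu\nu}(S, P)$ would be substituted in exactly the same way:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])  # unit two-sphere metric
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                                         - sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd} in the standard convention
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    expr += sum(Gamma[a][m][c] * Gamma[m][b][d] - Gamma[a][m][d] * Gamma[m][b][c]
                for m in range(n))
    return sp.simplify(expr)

# Curvature scalar: R = g^{bd} R^a_{bad}
R_scalar = sp.simplify(sum(ginv[b, d] * riemann(a, b, a, d)
                           for a in range(n) for b in range(n) for d in range(n)))
```

Running the same pipeline on a two-dimensional thermodynamic metric yields the thermodynamic curvature as a function of the state variables, which can then be evaluated at the extreme configuration.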
• For the extreme black hole, i.e., x = 2, we can clearly observe that the thermodynamic curvature takes the finite negative value $R_{SP}|_{\rm extreme} = -57/S$.
• Because 0 < x ≤ 2, a short calculation shows that $R_{SP} < 0$. We can speculate that the ultraspinning black hole is likely to present an attractive interaction between its molecules, phenomenologically or qualitatively.
• Looking at the original extreme black hole, i.e., x = 3, one might intuitively expect the thermodynamic curvature to become infinite according to Eq. (20). In fact, in this case the basic thermodynamic metric (19) is no longer valid, because the first law (7) is pathological.
At present, the only known super-entropic black holes are the (2+1)-dimensional charged BTZ black hole and the ultraspinning black hole. According to our current analysis and the calculation for the charged BTZ black hole in the previous paper [22], we have, for the ultraspinning black hole, $R_{SP}|_{\rm extreme} = -57/S$, and for the charged BTZ black hole, $R_{SP}|_{\rm extreme} = 1/(3S)$. Hence, a universal relationship is

$$R_{SP}\big|_{\rm extreme} \propto \frac{1}{S}.$$

We know that the reverse isoperimetric inequality physically indicates that, at a given thermodynamic volume, the (charged) Schwarzschild-AdS black holes are maximally entropic. A super-entropic black hole is one whose entropy exceeds this maximal bound. For the (charged) Schwarzschild-AdS black hole, the thermodynamic curvature of the corresponding extreme black hole tends to infinity, which is verified by various simple static black hole solutions of pure Einstein gravity or higher-derivative generalizations thereof. Therefore, we can state the following corresponding relations: • For black holes with R = 1, the thermodynamic curvature of the corresponding extreme black hole tends to (positive or negative) infinity.
• For the black holes with R < 1, the thermodynamic curvature of the corresponding extreme black hole has a (positive or negative) remnant which is approximately proportional to 1/S.
• For the black holes with R > 1, the thermodynamic curvature of the corresponding extreme black hole also tends to (positive or negative) infinity.
We note that the last conjecture above, concerning the extreme behavior of the thermodynamic curvature of sub-entropic black holes (R > 1), requires further verification in the future. At present, we only argue that when the maximum entropy is exceeded, the thermodynamic curvature of the corresponding extreme black hole has a finite remnant, whereas at exactly the maximum entropy it tends to infinity. Naturally, one intuitively expects that when the entropy of the black hole is below the maximum, the thermodynamic curvature of the corresponding extreme black hole also tends to infinity.
IV. CONCLUSION AND DISCUSSION
In this paper, we investigate the thermodynamic curvature of the ultraspinning black hole by introducing the proper form of the first law (9). We find that the ultraspinning black hole is still super-entropic in our new thermodynamic framework, which is consistent with the result obtained in [23,24]. Meanwhile, the obtained thermodynamic curvature is negative, which means that the interaction between the molecules of the ultraspinning black hole is likely attractive, phenomenologically or qualitatively, if we accept the analogical observation that the thermodynamic curvature reflects the interaction between the molecules of a black hole system. Through the analysis of the extreme behavior of the thermodynamic curvature, we arrive at a general conclusion: the thermodynamic curvature of the extreme limit of a super-entropic black hole has a (positive or negative) remnant approximately proportional to 1/S. This is a very interesting result.
In our previous work [44], we analyzed the thermodynamic curvature of the Schwarzschild black hole and obtained $R_{\text{Schwarzschild}} = \pm 1/S_{\text{Schwarzschild}}$. This is very similar to the result obtained in the present paper. Is this a coincidence, or does it suggest that the excess entropy in the super-entropic black hole comes from the Schwarzschild black hole? This unexpected question needs further analysis and discussion.
Furthermore, in the future we need to confirm the conjecture for sub-entropic black holes, such as the Kerr-AdS black hole [11,61], STU black holes [61,62], the Taub-NUT/Bolt black hole [63], the generalized exotic BTZ black hole [20], the noncommutative black hole [64] and accelerating black holes [65]. The verification of this conjecture will help us to improve the correspondence between the thermodynamic curvature and the isoperimetric theorem, which is a very meaningful research direction.
Strain Engineering of Domain Coexistence in Epitaxial Lead-Titanate Thin Films
Phase and domain structures in ferroelectric materials play a vital role in determining their dielectric and piezoelectric properties. Ferroelectric thin films with coexisting multiple domains or phases often exhibit high sensitivity and enhanced physical properties. However, the control of coexisting multiple domains is still challenging, which necessitates theoretical prediction. Here, we studied the phase coexistence and domain morphology of PbTiO3 epitaxial films using a Landau-Devonshire phenomenological model and a canonical statistical method. Results show that PbTiO3 films can exist in multiple domain structures that can be diversified by substrates with different misfit strains. Experimental results for PbTiO3 epitaxial films on different substrates are in good accordance with the theoretical prediction, which shows an alternative route for further manipulation of ferroelectric domain structures.
Introduction
Phase and domain structures are crucial to the dielectric and piezoelectric responses of ferroelectric materials [1-3]; furthermore, the coexistence of multiple phases/domains exhibits unique structures and physical properties, opening a new window for sensitive mechanical sensors with higher dielectric and electromechanical responses [4,5]. To this end, tremendous efforts have been made to explore ferroelectric phase transitions and domain formation both theoretically and experimentally [6,7]. However, it is still a big challenge to achieve precise control of domain structures in ferroelectric epitaxial thin films [8]. It is therefore necessary to understand the mechanism of domain formation for further manipulation of the multiple domains and phases in ferroelectric thin films.
Such coexistence of multiple phases/domains in ferroelectric thin films has been investigated by controlling the misfit strain of the substrate and the growth conditions [4-6]. Lead-titanate (PbTiO3) ferroelectric thin films, usually in the typical tetragonal phase, can form c, a/c and a1/a2 domain structures [4]. The domain structures become diverse in films on different substrates, where they can change from an almost pure c state to a mixed c and a/c state, then a mixed a/c and a1/a2 state, and finally to an a1/a2 state as the substrate misfit strain increases from compressive to tensile [9-12]. Specifically, Li et al. illustrated the thickness dependence of the a1/a2 domain fraction of PbTiO3 films grown on a single substrate [9]. Langenberg et al. reported the thickness dependence and substrate-strain dependence of the domain morphology in PbTiO3 films and the effect of an applied electric field on the domain-structure distribution [10]. Nesterov et al. concluded that the domain pattern in epitaxial PbTiO3 films depends on the film thickness, miscut angle and growth speed [11]. Johann et al. found that the interface type, substrate symmetry and miscut direction affect the domain structures as well [12]. Damodaran et al. described the effect of substrate strain on the degree of competition near the phase boundary between the a/c and a1/a2 domain structures by experiments and phase-field simulations [8]. Surprisingly, Lu et al. obtained epitaxial PbTiO3 thin films that exhibit abnormal mechanical-force-induced large-area, non-local, collective ferroelastic domain switching near the critical tensile misfit strain [7]. This "domino-like" domain switching was attributed to the coexistence of a/c and a1/a2 nanodomains with a small potential barrier in between [13].
Based on the phenomenological Landau-Devonshire theory, the energy barrier between the multiple phases is greatly lowered by their structural competition [7,9-11]. In perovskite ferroelectrics with multiple phases, the strain/stress state can be complex, and monoclinic phases are usually formed under mechanical distortions [14]. The phase diagrams of perovskite ferroelectric films clarify that the multiple phases/domains are stabilized at the critical misfit strain [13,15,16]. These multiple domain structures near the phase boundary possess nearly degenerate energies and can coexist on a large scale [17], and their fractions depend on the difference of the coexisting energy potentials, which has a great impact on the formed domain structures [18]. Recently, thermodynamic analysis and phase-field simulations further confirmed that large piezoelectric and dielectric responses arise at morphotropic phase boundaries, which may be attributed to the dense domain walls [19].
As aforementioned, the multiple domains are crucial to the physical properties of ferroelectric materials, especially near critical points with coexisting phases/domains; however, the phase and domain distribution and the formation of the final domain structures in epitaxial ferroelectric thin films remain unclear. In this study, Landau's phenomenological theory was used to study the phase and domain coexistence in PbTiO3 epitaxial thin films [7]. The free energy of each coexisting domain state was compared with consideration of the misfit strain, domain size and temperature, which are vitally important for domain formation. The volume fractions of coexisting domain structures were also calculated and compared with experimental results.
Materials and Methods
PbTiO3 has a high phase-transition temperature (673 K) and a large spontaneous polarization of about P = 0.7 C/m². The films were epitaxially grown on single-crystal substrates with nearly the same lattice parameters; therefore, we assume that few defects were induced during film growth. PbTiO3 films are fabricated at 800-900 K by pulsed laser deposition, and the domain structure begins to form at about 600 K [20]. Therefore, we investigate the domain coexistence both at the temperature of domain initialization and at that of finalization. To study the polydomain coexistence in ferroelectric thin films, Landau theory with consideration of the polydomain mechanical interaction was adopted [21]. We assume that the film is epitaxially uniform with a transversely isotropic misfit strain and that the energy density is composed only of the Landau free energy and the elastic strain energy, without the influence of depolarization fields. The renormalized thermodynamic potential after the Legendre transformation of the Gibbs free energy can be expressed with respect to the primary order parameters, the polarization $P_i$ and the internal mechanical stresses $\sigma_i$ in the film [9,21], as follows: $F = a_1\left(P_1^2+P_2^2+P_3^2\right) + a_{11}\left(P_1^4+P_2^4+P_3^4\right) + a_{12}\left(P_1^2P_2^2+P_2^2P_3^2+P_3^2P_1^2\right) + a_{123}P_1^2P_2^2P_3^2 + a_{111}\left(P_1^6+P_2^6+P_3^6\right) + a_{112}\left[P_1^2\left(P_2^4+P_3^4\right) + P_2^2\left(P_3^4+P_1^4\right) + P_3^2\left(P_1^4+P_2^4\right)\right] + \cdots,$ where F is the total free energy; $a_i$, $a_{ij}$, $a_{ijk}$ are the linear and nonlinear dielectric stiffness coefficients; and $s_{ij}$ and $\sigma_i$, the elastic compliances and mechanical stresses appearing in the omitted elastic terms, respectively. Parameters for our calculations are taken from Ref. [22]. According to experiments, there are three types of domain structures in PbTiO3 films: pure c domain, a/c domain, and a1/a2 domain.
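For the simplest case, a single c-domain with $P_1 = P_2 = 0$, the expansion above collapses to $F = a_1 P_3^2 + a_{11} P_3^4 + a_{111} P_3^6$, and the spontaneous polarization follows from $dF/dP_3 = 0$. A minimal numerical sketch, using illustrative coefficients of the magnitude commonly quoted for PbTiO3 at room temperature (not the exact Ref. [22] parameter set, and without the misfit-strain renormalization of Equations (2)-(7)):

```python
import math

# Illustrative room-temperature-order coefficients (SI units);
# these are NOT the exact Ref. [22] values.
a1, a11, a111 = -1.7e8, -7.3e7, 2.6e8

def free_energy_c(P):
    """Landau polynomial along the c axis (P1 = P2 = 0)."""
    return a1 * P**2 + a11 * P**4 + a111 * P**6

# dF/dP = 2P (a1 + 2*a11*P^2 + 3*a111*P^4) = 0 gives the nonzero minimum:
Ps_sq = (-a11 + math.sqrt(a11**2 - 3.0 * a111 * a1)) / (3.0 * a111)
Ps = math.sqrt(Ps_sq)  # close to the ~0.7 C/m^2 quoted for PbTiO3
```

The same minimization, with the strain-renormalized coefficients $a_3^*$ and $a_{33}^*$ substituted for $a_1$ and $a_{11}$, yields the misfit-strain dependence used in the text.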
For the pure c domain structure, the expression of the total free energy and spontaneous polarization could be written as Equations (2) and (3) [23].
where $a_3^* = a_1 - 2Q_{12}u_m/(s_{11}+s_{12})$ and $a_{33}^* = a_{11} + Q_{12}^2/(s_{11}+s_{12})$, with $u_m$ the substrate misfit strain and $Q_{ij}$ the electrostrictive coefficients. Following the polydomain theory, the spontaneous polarization and the corresponding total energies for a/c domain structures can be calculated by using Equations (4) and (5) [21].
where $a_3^* = a_1 - Q_{12}u_m/s_{11}$ and $a_{33}^* = a_{11} + Q_{12}^2/(2s_{11})$. Similarly, the analytical expressions of the spontaneous polarization and the corresponding free energies for a1/a2 domains are given by Equations (6) and (7) [23].
where $a_1^* = a_1 - (Q_{11}+Q_{12})u_m/(s_{11}+s_{12})$ and $a_{11}^* = a_{11} + (Q_{11}+Q_{12})^2/[4(s_{11}+s_{12})]$. According to the canonical statistical method, the volume fraction of each domain structure in PbTiO3 films can be calculated from its thermodynamic probability. For each statistically equivalent ensemble i, the probability of the ensemble being at the energy level $G_i$ can be written as Equation (8), $\gamma_i = \exp\left[-(G_i - G_0)/(kT)\right]$,
where k and T are the Boltzmann constant and temperature, respectively; $G_i = F_i V_i$ is the energy of the i-th domain structure, with $F_i$ the free-energy density of the system and $V_i$ the corresponding domain-structure volume; and $G_0$ is the energy of the ground state. The existing volume fractions for each phase are $f_i = \gamma_i/(\gamma_c + \gamma_{ac} + \gamma_{aa})$, with $i = c, ac, aa$ referring to the possible c domain, a/c domain, and a1/a2 domain structures.
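The bookkeeping of Equation (8) and the fractions $f_i$ can be sketched numerically. The free-energy densities below are hypothetical placeholders (the real values come from Equations (2)-(7)); the sketch illustrates how sharply the fractions respond once the energy splitting $G_i - G_0$ exceeds kT:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def domain_fractions(free_energy_density, volume, temperature):
    """Boltzmann weights gamma_i = exp(-(G_i - G_0)/kT) with G_i = F_i * V_i and
    G_0 the lowest energy, then fractions f_i = gamma_i / sum(gamma)."""
    G = {name: F * volume for name, F in free_energy_density.items()}
    G0 = min(G.values())
    gamma = {name: math.exp(-(Gi - G0) / (K_B * temperature)) for name, Gi in G.items()}
    total = sum(gamma.values())
    return {name: g / total for name, g in gamma.items()}

# Hypothetical free-energy densities (J/m^3) for the c, a/c and a1/a2 states,
# and a nanoscale domain volume of 70 nm x 40 nm x 500 nm:
F_i = {"c": -4.00e8, "ac": -4.10e8, "aa": -4.10e8}
V = 70e-9 * 40e-9 * 500e-9
fractions = domain_fractions(F_i, V, temperature=600.0)
```

With these placeholder numbers, the degenerate a/c and a1/a2 states split the weight equally while the higher-energy c state is exponentially suppressed, which is the qualitative behavior discussed for the critical tensile strain.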
Since ferroelectric domain structures in PbTiO3 films initially form at around 600 K, it is necessary to investigate the domain distribution at high temperatures prior to the final state at room temperature [20]. We note that the intrinsic existence properties were studied without considering the interaction between multiple domain structures.
Results and Discussion
The total free energy of PbTiO3 films with respect to misfit strain at 600 and 300 K is shown in Figure 1a,b. The gray, orange and blue curves for the pure c domain, a/c domain and a1/a2 domain in Figure 1a,b were calculated by Equations (2)-(7), respectively. The domain fractions at 300 and 600 K in Figure 1c,d were calculated by Equation (8). Due to the great recent research interest in ferroelectric films under tensile strain, we here focus on the formation of multiple domains near the critical tensile strains, where the total free energies of the a1/a2 domain and the a/c domain are equal at critical strains of u_m = 0.46% at 300 K and u_m = 0.26% at 600 K. As shown in Figure 1c,d, the volume fraction of the a/c domain sharply decreases with increasing tensile strain, leading to an a1/a2-domain-dominated state at higher tensile strain. The coexistence region shrinks with decreasing temperature due to the larger energy difference between coexisting domain structures at lower temperatures. Besides, the pure c domain also coexists with a relatively small volume fraction under small tensile strain. It is worth noting that the existing volume fractions change with temperature. For instance, the a1/a2 domain dominates in the film with a misfit strain of u_m = 0.46% at high temperatures, while a1/a2 and a/c co-dominate at room temperature. Therefore, the initial domain structure could be frozen with slightly higher elastic energy until further external stimuli are involved. This is also the reason for the large-area, non-local, collective ferroelastic domain switching reported in PbTiO3 epitaxial thin films [7].
To further investigate the domain coexistence and domain evolution during film growth, we calculate the spontaneous polarizations of the a1/a2 and a/c domains at the two typical temperatures. Without loss of generality, the domain size was considered to be the same for each domain structure. The polarization-dependent total free energies in Figure 2a,c were calculated by using Equations (4) and (6) at different misfit strains and temperatures. The temperature-dependent polarizations under different misfit strains were calculated by Equations (5) and (7). Insets were calculated by using Equation (8). As shown in Figure 2a, the PbTiO3 film consists of coexisting a1/a2 and a/c domains under the critical strain of 0.26% at 600 K (red curves), while the a1/a2 domain dominates at 300 K due to its lower energy (blue curves). Since the spontaneous polarization decreases with increasing temperature, as shown in Figure 2b, these two domain structures more easily coexist at higher temperatures. With decreasing temperature, initially formed domains slightly grow with a considerable increase of elastic energy, which could also limit the transformation of the initial domain structure into the more stable domain structure at low temperatures. Such an initially inherited domain structure will be kept during the cooling process and becomes relatively unstable due to the emergence of more stable states.
Misfit strain at room temperature is an important factor for film design. For PbTiO3 thin films, the critical substrate tensile strain is 0.46% at 300 K, with a/c and a1/a2 domain coexistence as shown in Figure 2c. It is reasonable that spontaneous polarizations with similar values are easier to switch between. Therefore, films under a tensile strain of 0.46% will more easily host coexisting domains than those under a strain of 0.26%, as shown in Figure 2b,d.
We note that the spontaneous polarizations for these two typical domain structures are the same at about 650 K with a substrate strain of 0.46%, which could be a critical temperature for aggressive domain competition and domain size shrinkage due to the easy switching between existing domain structures.
It is reported that coexisting domain structures commonly exist in epitaxial lead-titanate thin films with a misfit strain ranging from −0.1% to 0.6% [9-12]. In order to investigate the domain coexistence, we first compare the free energy density of each possible domain under various misfit strains, as shown in Figure 3. The energy difference between existing domain structures is quite small at 600 K. Therefore, it is likely that each domain structure is aggressively competing to exist. Domain size could be determined by the competing intensity. We could speculate that the domain size of the initially formed domain structures is related to the free energy of the coexisting domains, and could be smaller if the competition is more aggressive.
To quantitatively examine the domain competition, we theoretically compare the energy difference between a1/a2 and a/c domains under various misfit strains at 600 K and 300 K, as shown in Figure 3a,b, respectively. The energy difference between a1/a2 and a/c domains is lower than 10 MJ/m³ for films with tensile strains at both temperatures, while for films with compressive strain, the energy difference at 300 K is much higher than that at 600 K, indicating that multiple domains preferentially exist in films under tensile strain. To examine the corresponding existing volume fractions, each domain volume is also a key factor. Although the domain size depends on each specific system, we here fix the domain thickness at 70 nm and the domain width at 40 nm to examine the effect of the domain length on the existing domain fraction. As shown in Figure 3c,d, the a1/a2 and a/c domains more easily coexist at higher temperatures with nearly equal total volume fractions. It is interesting to note that the domain fractions change little with temperature under misfit strains ranging from 0.46% to 0.6% (grey area in Figure 3c,d), which could be helpful for the construction of frustrated systems, since the competition of the existing domains can be maintained during the cooling process. The existing fraction changes sharply with domain size, as shown in Figure 3e,f. For a domain with a length of 500 nm and the above-mentioned thickness and width, the coexistence of a/c and a1/a2 domains can be maintained over a larger temperature range in the film under a misfit strain of 0.46% than under 0.26% during the cooling process.
To verify the theoretical prediction, we compare the results with experiments on (001)-oriented epitaxial PbTiO3 thin films grown by pulsed-laser deposition on substrates with different misfit strains. Epitaxial PbTiO3 films were grown at 670 °C in a dynamic oxygen pressure of 50 mTorr at a laser repetition rate of 10 Hz and a laser fluence of 1.9 J/cm², with the same conditions as reported in Ref. [7]. The film thickness is 70 nm, with an epitaxial Ba0.5Sr0.5RuO3 layer as the bottom electrode. As shown in Figure 4a-d, topographic images show distinct domain structures in films on different substrates. For the film on a (001)C-SrTiO3 substrate, which causes −0.1% compressive strain, a/c domains dominate, together with only about 5% a1/a2 domains. With an increase of tensile strain, the a/c domain fraction decreases and a1/a2 domains become dominant, in good accordance with the theoretical predictions shown in Figure 4e-h. For the film on (110)O-SmScO3, the fractions of a1/a2 and a/c domains are nearly equal, indicating the more aggressive competition of existing domains that can be described by the MPB (multiple phase boundary). Despite the slight mismatch between the theoretical prediction and the experimental result under 0.46% tensile strain, which may be due to more complicated domain-formation conditions in this system [11,20], our calculations can explain the role of multiple domains as a function of temperature, strain, and domain size.
PFM measurements were performed with conductive tips (PPP-NCLPt, NanoSensors, Neuchatel, Switzerland). The PFM imaging scan uses an alternating-current (AC) driving voltage of 1 V in dual AC resonance tracking (DART) mode. Here, we only use topographic images of the films to identify the domain fractions of the well-known tetragonal-based domain structures.
Conclusions
In summary, we studied the domain distribution in ferroelectric lead-titanate thin films using the Landau-Devonshire phenomenological model combined with the canonical statistical method. The coexistence and existing fractions of a/c and a1/a2 domains in PbTiO3 films were analyzed with consideration of epitaxial misfit strain, temperature and domain size. Our results are in good accordance with experimental results for PbTiO3 thin films grown on various substrates. This methodology opens a new window for manipulating ferroelectric domain structures toward ultrahigh-sensitivity devices and multi-state memories.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author.
The sequence of a gastropod hemocyanin (HtH1 from Haliotis tuberculata).
The eight functional units (FUs), a-h, of the hemocyanin isoform HtH1 from Haliotis tuberculata (Prosobranchia, Archaeogastropoda) have been sequenced via cDNA, which provides the first complete primary structure of a gastropod hemocyanin subunit. With 3404 amino acids (392 kDa) it is the largest polypeptide sequence ever obtained for a respiratory protein. The cDNA comprises 10,758 base pairs and includes the coding regions for a short signal peptide, the eight different functional units, a 3'-untranslated region of 478 base pairs, and a poly(A) tail. The predicted protein contains 13 potential sites for N-linked carbohydrates (one for HtH1-a, none for HtH1-c, and two each for the other six functional units). Multiple sequence alignments show that the fragment HtH1-abcdefg is structurally equivalent to the seven-FU subunit from Octopus hemocyanin, which is fundamental to our understanding of the quaternary structures of both hemocyanins. Using the fossil record of the gastropod-cephalopod split to calibrate a molecular clock, the origin of the molluscan hemocyanin from a single-FU protein was calculated as 753 +/- 68 million years ago. This fits recent paleontological evidence for the existence of rather large mollusc-like species in the late Precambrian.
The blue copper-containing respiratory protein hemocyanin occurs in molluscs as a ring-like decamer with a molecular mass of 4 MDa, consisting of a wall made up of 60 globular functional units (FUs) and an internal collar complex of either 10 or 20 functional units, depending on the species. In cephalopods, such decamers are the only hemocyanin quaternary structure observed, but in gastropods two decamers are assembled face to face to form the so-called didecamer. It is well established that the cephalopod and gastropod hemocyanin collar complexes differ considerably. Moreover, in some marine gastropods, didecamers plus decamers have been observed to form tube-like multidecamers of varying length (for reviews, see Refs. [1-3]). In these species, two immunologically, physicochemically, and functionally distinct hemocyanin isoforms occur, which were first described for the keyhole limpet Megathura crenulata, where they have been termed KLH1 and KLH2 (4,5).
The mass of a hemocyanin functional unit is approximately 50 kDa, and each carries a binuclear copper active site. The functional units are arranged as a linear sequence of seven or eight to form the 350-400-kDa polypeptide of molluscan hemocyanin, which represents the subunit (see Ref. 1). The eight functional units are structurally distinct and have been termed FU-a to FU-h, starting from the N-terminal functional unit. From Octopus dofleini hemocyanin (OdH), the complete amino acid sequence of the seven-FU subunit OdH-abcdefg is known (6). Moreover, the x-ray structure of a crystallographic dimer of functional unit OdH-g has been solved at 2.3-Å resolution (7). For the more complex gastropod hemocyanins, much data on disassembly, reassembly, and oxygen-binding behavior are available (for reviews, see Refs. [1-3]), and a 15-Å three-dimensional reconstruction of the didecamer has been produced from electron micrographs (8). However, when we started our sequencing work on the two isoforms of Haliotis tuberculata hemocyanin, HtH1 and HtH2, the primary structure of gastropod hemocyanin was only partially known, namely from FU-d and FU-g of Helix pomatia and FU-a of Rapana thomasiana hemocyanin (9-11). Recently, we published the cDNA sequences coding for HtH1-fgh (12) and for HtH2-defgh (13). The present study was designed to complete the entire cDNA sequence coding for HtH1 and thereby provide the first complete primary structure of a gastropod hemocyanin.
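The masses and FU counts quoted above are mutually consistent, as a quick bookkeeping sketch shows (using the round numbers from the text: ~50 kDa per FU, eight FUs per gastropod subunit, ten subunits per decamer, and a 60-FU wall):

```python
FU_MASS_KDA = 50          # approximate mass of one functional unit
FUS_PER_SUBUNIT = 8       # FU-a ... FU-h in a gastropod subunit
SUBUNITS_PER_DECAMER = 10

subunit_kda = FU_MASS_KDA * FUS_PER_SUBUNIT              # ~400 kDa, within the 350-400-kDa range
decamer_mda = subunit_kda * SUBUNITS_PER_DECAMER / 1000  # ~4 MDa ring

# FU accounting in the decamer: a 60-FU wall plus a 10- or 20-FU collar
total_fus = FUS_PER_SUBUNIT * SUBUNITS_PER_DECAMER       # 80 FUs in total
wall_fus = 60
collar_fus = total_fus - wall_fus                        # 20, the gastropod-type collar
```

For a seven-FU subunit such as OdH-abcdefg, the same arithmetic gives 70 FUs per decamer, i.e. a 60-FU wall plus the smaller 10-FU collar.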
EXPERIMENTAL PROCEDURES
Animals-The European abalone Haliotis tuberculata is a member of the phylogenetically rather ancient Archaeogastropoda. Animals were gifts from the Syndicat Mixte d'Equipement du Littoral, Blainville-sur-Mer, France, and Biosyn, Fellbach, Germany. The abalone were kept in a seawater aquarium at 17 °C and fed on brown algae.
Construction and Screening of cDNA Libraries-RNA was isolated from Haliotis tuberculata mantle tissue (1 g) using an RNeasy Maxi kit (Qiagen, Hilden, Germany) according to the instruction manual, followed by mRNA isolation using paramagnetic beads from Promega (Mannheim, Germany). Two cDNA libraries were constructed using the Lambda ZAP®-CMV cDNA synthesis kit from Stratagene (Heidelberg, Germany). Pooled RNA from 19 and 2 individuals, respectively, was applied. The first library stems from Keller et al. (12) and was oligo(dT)-primed; the second library was constructed in the present study, using random primers (Life Technologies, Inc.) in addition to specific HtH1 oligonucleotides derived from the cDNA encoding HtH1-e and HtH1-f. Screening was performed with digoxigenin-labeled DNA probes.
Isolation and Analysis of cDNA Clones-cDNA clones obtained from the cDNA libraries were isolated using the in vivo excision protocol from Stratagene. cDNA-containing plasmids were isolated using the QIAprep Spin Miniprep kit (Qiagen) and restricted with EcoRI and XhoI (Stratagene) for clones that were derived from the first cDNA library where the cDNA fragments were cloned in a directional manner. In the case of the random-primed cDNA library, cDNA clones were analyzed only by EcoRI restriction. cDNA clones with different restriction patterns were sequenced by Seqlab (Göttingen, Germany) using standard primers. Subsequent sequencing reactions on both strands were performed with specific oligonucleotides.
Computer Software-The obtained sequences were analyzed with the latest versions of CHROMAS, Translate Tool, ALIGN, Signal P (14), and TreeView (15); all of these programs are freely available on the Internet.
* This study was supported in part by Deutsche Forschungsgemeinschaft Grant Ma 843/4-3. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
‡ To whom correspondence should be addressed: Institut für Zoologie, Universität Mainz, D-55099 Mainz, Deutschland.
Linearization of the Trees and Time Estimations-Corrected pairwise distances were calculated with the PROTDIST program of the PHYLIP software package (16) and scaled in expected historical events per site (17). Distance matrices were imported into the Microsoft Excel for Windows 97 spreadsheet program, and the divergence times were inferred under the assumption that the gastropodan and cephalopodan hemocyanins diverged 520 million years ago (18).
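The time estimation described above is a simple proportional rescaling of corrected distances against a calibration point. A minimal sketch of that arithmetic follows; the distance values are hypothetical placeholders, not the actual PROTDIST output of this study:

```python
# Sketch of the linearized-tree time estimation: pairwise distances
# (expected substitutions per site) are rescaled into divergence times
# by calibrating against the gastropod-cephalopod split at 520 million
# years ago (Ref. 18). All distance values below are hypothetical.

CALIBRATION_MYA = 520.0     # gastropod-cephalopod split
d_calibration = 1.30        # hypothetical corrected distance for that split

def divergence_time(d_pair, d_cal=d_calibration, t_cal=CALIBRATION_MYA):
    """Scale a corrected pairwise distance into million years,
    assuming a constant molecular clock."""
    return d_pair * t_cal / d_cal

# hypothetical corrected distance between two isoform lineages
print(round(divergence_time(0.80)))  # → 320
```

Under a strict-clock assumption this is all the "linearization" amounts to; the standard errors reported in the paper come from averaging such estimates over the different functional units.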
RESULTS
From the two cDNA libraries we isolated five different cDNA clones, together encoding the previously unknown N-terminal fragment HtH1-abcde and the recently published C-terminal fragment HtH1-fgh (see Ref. 12) of the subunit of Haliotis hemocyanin isoform 1 (HtH1). Overlapping regions of about 300 bp of the different cDNA clones were analyzed to ensure that they represented a single mRNA. The complete cDNA encoding HtH1 comprises 10,758 bp (including a 3′-untranslated region of 478 bp plus a poly(A) tail of 18 bp); it is available from the EBI Data Bank (accession number Y13219). The 5′-terminal sequence encodes a short signal peptide, typical for a protein synthesized in the rough endoplasmic reticulum. This is compatible with our current knowledge of hemocyanin biosynthesis in the pore cells of Haliotis (19). Numerous attempts by reverse-transcriptase polymerase chain reaction or cDNA screening to obtain the 5′-untranslated region and the triplet ATG coding for the very first methionine of the subunit have been unsuccessful to date. We assume that there are still a few amino acids of the signal peptide missing, but from the N-terminal motif of the biochemically isolated HtH1 subunit obtained by direct protein sequencing (DNVVRKDVSHLTDDEVQ; see Ref. 12), it is clear that we have now obtained the complete sequence of the hemocyanin secreted into the hemolymph.
The amino acid sequence of the Haliotis hemocyanin subunit predicted from the cDNA sequence contains 3404 amino acids plus the signal peptide of 15 amino acids (Fig. 1). Identification of the latter was confirmed using the computer program Signal P (not shown). Because the only molluscan hemocyanin subunit completely sequenced previously, from Octopus, has a length of 2896 amino acids due to its lack of FU-h (6), the present Haliotis hemocyanin primary structure is the largest ever obtained for a respiratory protein, and it is certainly among the largest polypeptides in nature. The sequence is clearly substructured into eight homologous regions of 405-420 amino acids (Fig. 1), corresponding to the eight different functional units HtH1-a to HtH1-h identified at the protein level; HtH1-h carries a unique tail extension of ~95 additional amino acids (see also Ref. 12). The N-terminal partial sequences of the biochemically isolated HtH1 functional units obtained from direct Edman degradation (12) fit the present sequence 100%, which therefore conclusively identifies it as HtH1. Moreover, the ~300-bp sequence overlaps of the cDNA clones ensured that we did indeed analyze cDNAs encoding a continuous polypeptide chain. For each functional unit, some characteristics calculated from the sequence are shown in Table I. A multiple sequence alignment of the different functional units from HtH1 and OdH is shown in Fig. 2. For illustration of the structural aspects, a schematic representation of the x-ray structure of OdH-g is given in Fig. 3 (re-drawn from Refs. 6 and 7). The sequence identities and similarities, as calculated from a broader sequence alignment of molluscan hemocyanin functional units, are shown in Fig. 4, and a phylogenetic tree is presented in Fig. 5. Finally, Fig. 6 shows a time scale of the phylogenetic diversification of the sequenced molluscan hemocyanins as calculated from a percent accepted mutation matrix (17), assuming 520 million years ago for the gastropod-cephalopod split.
Significance of the Haliotis Hemocyanin Sequence-Both Octopus and Sepia hemocyanin have been studied in much detail (see Refs. 1, 20-22), but the established cephalopod reference hemocyanin is clearly that from Octopus, because it has been completely sequenced (6). However, cephalopod hemocyanins are restricted to decamers, and therefore their analysis is insufficient to explain the didecamers and multidecamers observed in other molluscan classes, notably the Gastropoda. In gastropods, the "classical" hemocyanin studied is that from Helix pomatia (see Ref. 1), but its sequence is only partially known (see Fig. 5). The recent biochemical characterization of Haliotis hemocyanin (12,13), in combination with the complete amino acid sequence presented here, now establishes HtH1 as the gastropod reference hemocyanin. Together with the partial sequence of HtH2 (13), which will soon be completed, questions of the biological significance and regulation of the two physicochemically and apparently physiologically distinct hemocyanin isoforms found in prosobranch species (see Refs. 4, 5, 23, and 24) can now be approached at the single amino acid level.
FIG. 1. Primary structure of the subunit of Haliotis hemocyanin isoform HtH1 as deduced from the cDNA sequence. The total number of amino acids including the signal peptide fragment (in italics) is 3419, with a calculated molecular mass of 393,102 Da and a theoretical pI of 5.73; the N terminus of the secreted protein is marked by an arrow. For each functional unit (indicated on the right), the copper A site is underlined, the copper B site is double underlined, and the copper-binding histidines are white letters in black boxes. The peptide bridges linking the functional units are shaded, as are the 13 potential N-linked carbohydrate attachment sites. Note that functional unit HtH1-c is devoid of potential sites for N-linked sugar chains. Shaded arrow, start of the tail extension of functional unit HtH1-h.
More importantly, the combined present and future data from Octopus and Haliotis hemocyanin will enable fundamental questions on the structure-function relationships of molluscan hemocyanin to be solved, which could not be efficiently addressed if only one of the two reference sequences was available. For example, the sequence of Octopus hemocyanin alone allowed the identification of amino acid residues that are conserved in all seven functional units (6), whereas together with the Haliotis hemocyanin sequence, residues can now be identified that have been specifically conserved in corresponding functional units over more than 500 million years but are different in the other functional unit types (Fig. 2). In addition, gastropod-and cephalopod-specific sequence motifs are now discernible (Fig. 2). Thus, efficient strategies for the analysis of specific functions of the different functional units (for example, with recombinant hemocyanin) can now be designed, which was impossible on the basis of the Octopus sequence alone. Moreover, tracing the as yet unclear path of the elongated subunit within the native quaternary structures is now greatly facilitated, because electron microscopical and biochemical data from the decameric cephalopod and the didecameric gastropod hemocyanin can now be combined on the basis of sequence alignments. Ultimately, the date of the evolutionary origin of molluscan hemocyanin, which was previously only roughly estimated (1,13), can now be traced with greatly improved accuracy by using the complete cephalopod and the complete gastropod hemocyanin primary structure in combination.
Localizing Potential Sugar Sites-The total molecular mass of the secreted polypeptide calculated from the sequence (therefore neglecting possible carbohydrate side chains) is 392 kDa. This is very close to the value of ~400 kDa measured in SDS-polyacrylamide gel electrophoresis for both HtH1 and KLH1 (12,24). From the multitude of conserved and variable structural features visible in Fig. 2, the potential N-linked carbohydrate sites (NXT/S) will now be discussed in more detail. Thirteen such sites exist in the sequence of Haliotis hemocyanin isoform HtH1 (Fig. 1), which, according to the x-ray structure of functional unit OdH-g from Octopus hemocyanin (Fig. 3), would all be accessible on the protein's surface. If they all carry an oligosaccharide side chain similar to that confirmed for OdH-g (~1 kDa; see Ref. 6), they would together increase the molecular mass of the Haliotis hemocyanin subunit to ~405 kDa.
The x-ray structure of functional unit OdH-g (7) consists of an α-helix-rich "core domain" and a "β-sandwich domain" rich in β-strands (Fig. 3). In OdH-g, between strands β2 and β3 of the core domain, a carbohydrate side chain is anchored that protrudes toward strand β12 in the β-sandwich domain. This view of the 3-dimensional structure has therefore been called the "carbohydrate face" (7). In functional unit HtH1-g a potential N-linked sugar site is present in a similar position (Figs. 2 and 3). Interestingly, in FU-a, FU-d, FU-e, and FU-f of both Haliotis hemocyanin isoform HtH1 and Octopus hemocyanin, a potential N-linked sugar site is present in the region of strand β12, suggesting that a carbohydrate chain anchored there is likely to decorate the carbohydrate face from the opposite direction (Figs. 2 and 3).
The second region in which some functional units contain potential N-linked carbohydrate sites lies in the core domain as a group of exposed loops (the connections between α2/α3, α5/β4, β4/β5, and α′11/α12; see Fig. 3). There, FU-d and FU-f of both species as well as HtH1-e and HtH1-g possess a second potential N-glycosylation site (Figs. 2 and 3). In HtH1-e, in the region corresponding to strands β2/β3, the sequence NPS is present but is probably blocked for glycosylation by the central proline (cf. Ref. 25). This suppression is interesting, because it avoids a second sugar chain in the carbohydrate face that would sterically interfere with the sugar chain potentially anchored to the accessible site in strand β12 (see Fig. 3). Another specific difference between Haliotis hemocyanin isoform HtH1 and Octopus hemocyanin concerns FU-b. Whereas functional unit OdH-b shows a single potential sugar site in the standard β12 position, in HtH1-b two such sites occur in the core domain, but none occur in the carbohydrate face (Figs. 2 and 3). Surprisingly, functional units HtH1-c and OdH-c both lack a potential N-glycosylation site (Fig. 2); in OdH-c the sequence NPT exists (in the position of helix α′6), but it should be masked by the central proline (cf. Ref. 25). Indeed, carbohydrate analysis of keyhole limpet hemocyanin has revealed that in contrast to all other functional units, KLH1-c is devoid of any sugar moiety, whereas KLH2-c lacks N-linked carbohydrates yet contains O-linked sugars (26). Functional unit HtH1-h has two sugar sites in quite unusual positions (Figs. 2 and 3), which has already been discussed in detail elsewhere (12,13). Significantly, Haliotis hemocyanin isoform HtH1 shows 13 potential N-glycosylation sites per subunit, whereas Octopus hemocyanin shows only seven (excluding the NPS/T sites). It should be noted that HtH1, but not Octopus hemocyanin, shows three NXC sequences in addition (in positions 1412, 1578, and 2956; see Fig. 2), which, according to Miletich and Broze (27), also might have the potential to bind carbohydrates. However, as deduced from the x-ray structure of functional unit OdH-g, all three sulfhydryl groups form a disulfide bond (Fig. 2) and are therefore unavailable for the glycosylation reaction.
TABLE I. Properties of the Haliotis hemocyanin isoform HtH1 and its functional units as predicted from the cDNA sequence. The cleavage points between the various functional units are somewhat arbitrary. *, values estimated from SDS-polyacrylamide gel electrophoresis of proteolytic fragments of HtH1 (12). The molecular mass discrepancies in the case of HtH1-a and HtH1-d suggest that the two proteolytic fragments did not correspond exactly to a single functional unit, respectively. Note that except for HtH1-c, 1000-2000 Da should be added per functional unit for carbohydrate side chains, as deduced from the situation in OdH-g (6,7).
In comparing the two hemocyanin isoforms from Haliotis, the subunit fragment HtH1-defgh contains ten potential N-linked carbohydrate sites, and HtH2-defgh contains only six (see Ref. 13 and this study). Such fundamental differences in glycosylation could well play a biological role; the counterparts of the two hemocyanin isoforms HtH1 and HtH2 in the keyhole limpet, KLH1 and KLH2, are differentially regulated physiologically. They prevail in the hemolymph for different periods, depending on the physiological condition of the animal, with KLH1 selectively disappearing from the hemolymph during starvation (see Refs. 4 and 5). In the case of Haliotis hemocyanin, the relative proportion of the two isoforms also varies considerably between individuals (12). The present sequence data suggest that the two hemocyanin isoforms found in the prosobranch gastropods might be selectively recognized and sequestered via their differential glycosylation.
Implications for the Quaternary Structure-Despite the 15-Å structure of the didecamer of keyhole limpet hemocyanin isoform KLH1 (8)
, the x-ray structure of functional unit OdH-g from Octopus hemocyanin (7), and a wealth of data from immunoelectron microscopy (e.g., Refs. 22 and 28) and dissociation/reassembly studies (e.g., Refs. 29-31), including recent ones on Haliotis hemocyanin (32), the exact topological position of the various functional unit types and the path of the ten identical subunits within the decamer of molluscan hemocyanin remain unclear.
In Octopus hemocyanin decamers, the internal central collar is composed of ten copies of FU-g, whereas the asymmetric collar-arc complex of gastropod hemocyanin decamers consists of ten copies of FU-gh (see Refs. 1, 8, 22, and 24). Topologically, the arc of KLH1 corresponds to the central collar of Octopus hemocyanin (cf. Refs. 8, 21, and 34). Because of the comparatively high structural conservation of FU-g between Haliotis and Octopus (Fig. 4), it is now strongly suggested that in Haliotis hemocyanin, FU-g forms a structure directly comparable with the collar in Octopus, and consequently, this would be the arc. This excludes the possibility that in gastropod hemocyanin, the arc is formed by a combination of FU-g and FU-h or by FU-h alone (the latter is also excluded by recent immunoelectron microscopy (35), showing FU-h in the collar of keyhole limpet hemocyanin). In contrast, it strongly supports the idea that the gastropodan arc contains only FU-g, leaving FU-h for the gastropodan peripheral collar. The distinctive primary structure of HtH1-h and HtH2-h (cf. Refs. 12 and 13) further supports this interpretation, because it indicates that these functional units form a structure within their respective decamer that has no parallel in Octopus hemocyanin and that this is the peripheral gastropodan collar.
It is well accepted that the cylinder wall of gastropod and cephalopod hemocyanin decamers consists of ten copies of the subunit fragment FU-abcdef (provided the functional unit nomenclature is adapted for Sepia; see Ref. 13), but details of the localization of these functional units within the wall are still uncertain (for a recent discussion, see Ref. 22). Our present results shed new light on this debate. A phylogenetic tree constructed from the currently available sequences of functional units shows, in a highly bootstrap-supported manner, that the six wall-forming functional units from Haliotis hemocyanin isoform HtH1 and Octopus hemocyanin form six discrete branches (Fig. 5). This means that functional units from Haliotis and Octopus that occupy the same position in the elongated subunit, and therefore carry the same designation, correspond to each other structurally. Indeed, if the hemocyanin sequences of Haliotis and Octopus are aligned and functional unit HtH1-h is chopped off at exactly the point where the Octopus hemocyanin ends (Fig. 2), the number of amino acids constituting the remaining fragment HtH1-abcdefg is 2897, which is surprisingly close to the value of 2896 for the whole Octopus hemocyanin. Only 19 gaps for a single amino acid and two gaps for an amino acid pair have to be introduced for a continuous alignment of both sequences (not shown). Moreover, as judged from the x-ray structure of functional unit OdH-g, most of these small gaps are not within α-helical or β-strand regions. This is all very strong evidence that the wall architecture is similar in these two phylogenetically distant molluscan hemocyanins, and consequently, it is highly unlikely that among the proposed models of subunit arrangement the parallel version will hold true for some hemocyanins and the anti-parallel version for others.
Because computer-processed 3-dimensional reconstructions derived from electron micrograph images strongly support an anti-parallel arrangement in the cases of KLH1 and Octopus vulgaris hemocyanin (8,21), the parallel models proposed from other evidence for Helix pomatia and Sepia officinalis hemocyanin (36) and later by our own group for KLH2 (28) are likely to be incorrect.
The newly available Haliotis hemocyanin sequence is especially stimulating in view of upcoming higher resolution 3-dimensional reconstructions of gastropod hemocyanin molecules, including a 12-Å structure of the HtH1 didecamer, which is already available in our laboratory.2 Using the present sequence from the Haliotis hemocyanin subunit and the x-ray structure of functional unit OdH-g, molecular modeling experiments are in progress to predict the tertiary structure of each functional unit, which could then be fitted into the HtH1 quaternary structure already obtained from electron microscopy. In this context, the data on possible glycosylation sites (Fig. 3) should also help, because glycosylated regions should be exposed to the free solvent and not directed toward a closely apposed functional unit. The next goal will then be to identify the locations of those amino acids that establish the intersubunit contacts within the decamer and didecamer.
Tracing the Evolution of Molluscan Hemocyanins and of the Phylum Mollusca-For the phylogenetic tree we chose an unrooted radial representation (Fig. 5), because no suitable outgroup is yet available; as shown by our previous work (13), the relationship to tyrosinase was found to be too remote for this purpose. In our recent phylogenetic analysis we found it impossible to resolve the evolutionary branching orders among the different functional units, indicating that they evolved very rapidly from their ancestral precursor (13). Indeed, even with the five additional functional unit sequences included, this aspect of the tree is not improved, with the branching pattern of the different functional unit types still being highly unstable (Fig. 5). However, the stable branches of the eight different functional units demonstrate that FU-a to FU-h existed individually long before gastropods and cephalopods separated, which according to fossil records was about 520 million years ago in the late Cambrian (18). This also means that in cephalopods, the lack of FU-h is the result of a secondary loss. The only exception is functional unit "a" from Rapana thomasiana hemocyanin (RtH2-a), which groups together with the functional units of type "g" of the other hemocyanins and, moreover, branches off from this line before the gastropod-cephalopod separation (Fig. 5), although Rapana is a prosobranch gastropod. This phenomenon has already been discussed (12,13), but in view of the present "correct" grouping of functional units "a" from Haliotis and Octopus hemocyanin together in one branch it is even more difficult to interpret. A recent immunochemical analysis of Rapana hemocyanin also revealed some unusual features (37). Functional unit RtH2-a has been analyzed by direct Edman degradation (11), and confirmation by cDNA sequencing might be appropriate in this confusing case, because if the sequence holds true, Rapana hemocyanin could become extremely interesting in terms of evolution.
Using the gastropod-cephalopod split to calibrate a molecular clock, from the structural divergences of the different functional units we calculated the HtH1-HtH2 split at 320 ± 60 million years ago (Fig. 6) and the prosobranch-pulmonate split at 359 ± 24 million years ago (Fig. 6); the latter corresponds to the occurrence of the first pulmonate fossils in the early Carboniferous (18). The common origin of the different functional units can now be dated back, with much better statistics than in the previous rough estimations (1,13), to 753 ± 68 million years ago (Fig. 6). We believe that this early event does indeed signal the birth of a protein for extracellular oxygen transport. The different functional units arranged in an elongated subunit are the prerequisite for hemocyanin oligomerization; in turn, oligomerization is required for hemocyanin to function efficiently as an extracellular blood oxygen carrier (for colloid-osmotic and rheological reasons as well as for establishing allosteric effects of the respiratory protein; see Ref. 38). On the other hand, extracellular oxygen carriers only make sense for comparatively large and complex animals equipped with an efficient circulatory system, and such animals are usually thought to have evolved in the early Cambrian, about 540 million years ago (cf. Ref. 39). However, recent calculations based on 22 nuclear genes suggest that the early metazoan divergence occurred about 830 million years ago (40). Moreover, the Ediacaran fossil Kimberella, measuring up to 14 cm in size and found in late Precambrian strata, has recently been reconstructed as a mollusc-like animal with a soft shell (41). This indicates that rather complex mollusc-like metazoans did indeed exist long before the "Cambrian explosion." The present phylogenetic tree suggests that an efficient hemolymph oxygen carrier was available for animals like Kimberella and supports the concept of a gradual evolution of the protostome phyla over hundreds of millions of years in the late Precambrian rather than their punctuated origin in the early Paleozoic.
FIG. 4. …(12,13) and the present study; the Octopus hemocyanin sequences were taken from Miller et al. (6). In the case of the similarity values, isofunctional exchanges are also considered. Note that each corresponding pair of the wall-forming functional units of the two hemocyanin isoforms from Haliotis that have now been sequenced (FU-d, FU-e, FU-f) shares 65-66% sequence identity. In contrast, the identity of the arc-forming functional units HtH1-g and HtH2-g is significantly higher (74%), and the identity of the collar-forming components HtH1-h and HtH2-h is somewhat lower (60%). In the phylogenetic tree this is illustrated by different branch lengths (see Fig. 5); it suggests that in the two hemocyanin isoforms of Haliotis, the molecular clocks of wall, arc, and collar run at different rates.
FIG. 5. Radial phylogenetic tree of molluscan hemocyanin functional units. This unrooted tree is based on a Clustal multiple alignment of the currently available complete functional unit sequences. They stem from hemocyanins of the prosobranch gastropod Haliotis tuberculata (HtH1, HtH2), the cephalopod Octopus dofleini (OdH), the pulmonate gastropod Helix pomatia (HpH), the prosobranch gastropod Rapana thomasiana (RtH2), and the cephalopod Sepia officinalis (SoH). For sources, see Refs. 6, 9-13, and 42. Because functional unit "h" from Sepia corresponds to functional unit "g" of other molluscan hemocyanins, it is termed here SoH[h]-g, as recently proposed (see Refs. 12 and 13). Bootstrap percentages are based on 1000 replicates. Bootstrap values (33) are shown only if >50. Note that with the exception of RtH2-a (see "Discussion"), topologically corresponding functional unit types group together to form eight distinct branches.
FIG. 6. Timescale of the evolution of the molluscan hemocyanins. A linearized tree was obtained on the basis of corrected protein distance data. The divergence times were estimated under the assumption that the Gastropoda and Cephalopoda diverged 520 million years ago (18). The bars represent the standard errors of the means. MYA, million years ago.
"Biology"
] |
Prediction Techniques on FPGA for Latency Reduction on Tactile Internet
Tactile Internet (TI) is a new internet paradigm that enables sending touch interaction information and other stimuli, which will lead to new human-to-machine applications. However, TI applications require very low latency between devices, as the system’s latency can result from the communication channel, processing power of local devices, and the complexity of the data processing techniques, among others. Therefore, this work proposes using dedicated hardware-based reconfigurable computing to reduce the latency of prediction techniques applied to TI. Finally, we demonstrate that prediction techniques developed on field-programmable gate array (FPGA) can minimize the impacts caused by delays and loss of information. To validate our proposal, we present a comparison between software and hardware implementations and analyze synthesis results regarding hardware area occupation, throughput, and power consumption. Furthermore, comparisons with state-of-the-art works are presented, showing a significant reduction in power consumption of ≈1300× and reaching speedup rates of up to ≈52×.
Introduction
Tactile Internet (TI) enables the propagation of the touch sensation, video, audio, and text data through the Internet [1]. TI-based communication systems will provide solutions to more complex computational problems, such as human-to-machine (H2M) interactions in real time [2,3]. Therefore, TI is a new communication concept that allows transmitting skills through the Internet [4]. Several applications are available in the literature, such as virtual and augmented reality, industrial automation, games, and education [5]. Currently, the system's latency is a major bottleneck for TI applications; therefore, it is necessary to guarantee very low latency, as demonstrated in [5][6][7][8]. Studies indicate that the latency tolerated by TI applications ranges from 1 to 10 ms in most cases, or up to 40 ms in specific cases. High latency can cause many problems, as stated in [7], such as cybersickness [9,10]. Several works have investigated methods to minimize the problems associated with latency in TI applications, as presented in [1,[11][12][13][14]]. The work shown in [15] provides a comprehensive survey of techniques designed to deal with latency and proposes prediction techniques as a solution to minimize the impacts caused by delays and loss of information. Thus, the system "hides the real network latency" by predicting the user's behavior; notably, this approach does not reduce the latency itself but predicts the system behavior, thus enhancing the quality of the user experience.
Many research areas, such as markets, industry, stocks, health, and communication, have used forecasting techniques over the years [16][17][18][19][20][21][22]. However, these techniques are often implemented in software, increasing the latency of computer systems within tactile links due to the high computational complexity of the techniques and the large datasets to be processed.
Systems based on reconfigurable computing (RC), such as field-programmable gate arrays (FPGAs), have been proposed to overcome the processing speed limitations of complex prediction techniques [23]. In addition, FPGAs enable the deployment of dedicated hardware, enhancing the performance of computer systems within the tactile system. Moreover, systems deployed with FPGAs proposed in the literature can reach 1000× speedup compared to software-based ones [24][25][26][27][28].
Therefore, we propose the parallel implementation of linear and nonlinear prediction techniques applied to the TI on reconfigurable hardware, that is, on FPGA. Hence, the main contributions of this work are the following:
• Parallel implementation of prediction techniques on FPGA without additional embedded processors.
• A detailed description of the modules implemented for the linear and nonlinear regression techniques on FPGA.
• A synthesis-based analysis of the system's throughput, area occupation, and power consumption, using data from a robotic manipulator.
• An analysis of fixed-point precision against floating-point precision used by software implementations.
Related Works
The use of RC for computationally complex algorithms is widely reported in the literature. Prediction techniques based on machine learning (ML), such as the multilayer perceptron (MLP), have been proposed to automatically assist the bandwidth allocation process on the server [29][30][31]. However, the presented systems are local and may not be scalable to more complex networks with higher traffic, due to the need for data from all communications to perform the techniques' configuration and training steps. Therefore, linear prediction techniques have been proposed in [32,33] to avoid packet loss or errors.
Numerous works applied to TI are software-based implementations, such as cloud applications [34][35][36]. Usually, these software-based approaches are slower compared to hardware-based ones, thus affecting the data processing time of prediction techniques. As a result, some proposals were deployed on FPGA to increase the performance of manipulative tools [37][38][39][40], requiring accurate feedback [41][42][43][44].
Prediction techniques deployed on hardware, such as FPGAs, can reduce the latency of computer systems. In [45], an FPGA-based implementation of a quadratic regression prediction technique is proposed. In [46], a technique to detect epistasis based on logistic regression is implemented on an FPGA combined with a GPU, achieving between 1000× and 1600× speedup compared to software implementations. In [47], an implementation of a probabilistic predictor on FPGA is proposed. Ref. [23] presented hardware area occupation and processing time results for various radial basis function neural network configurations. Meanwhile, [48,49] demonstrate the feasibility of implementing algorithms based on deep learning (DL) using an RC-based platform.
Few studies explore linear regression applied to signal prediction on FPGAs or predictors applied to TI systems. However, there are proposals for machine learning (ML) techniques on FPGA. As an example, [50] proposes an MLP architecture for real-time wheeze identification in auscultated lung sounds. The MLP training step is performed offline, and its topology contains 2 inputs, 12 neurons in the hidden layer, and 2 neurons in the output layer (2-12-2). The architecture uses a 36-bit fixed-point implementation on an Artix-7 FPGA, achieving a sampling time of 8.63 ns and a throughput of 115.88 Msps.
The work presented in [51] uses an MLP on FPGA to perform activity classification for a human activity recognition (HAR) system for smart military garments. The system has seven inputs, six neurons in the hidden layer, and five in the output layer (7-6-5). In addition, five versions of the architecture were implemented by varying the data precision. The analysis shows that the MLP designed with a 16-bit fixed-point format is the most efficient concerning classification accuracy, resource utilization, and energy consumption, reaching a sampling time of 270 ns while using about 90% of the embedded multipliers and achieving a throughput of 3.70 Msps.
Another MLP implemented on FPGA is proposed by [52] for real-time classification of gases with low latency. The MLP has 12 inputs, 3 neurons in the hidden layer, and 1 neuron in the output layer (12-3-1). In addition, the Levenberg-Marquardt backpropagation algorithm is used to perform offline training. The architecture was developed in Vivado using high-level synthesis (HLS) to reduce the development time and deployed on a Xilinx Zynq-7000 XC7Z010T-1CLG400 FPGA. Concerning the bit-width, a 24-bit signed fixed-point representation was used for the trained weight data, with 20 bits in the fractional part. Meanwhile, 16 bits (14 bits in the fractional part) were used to deploy the output layer using the TanH function. A sampling time of 539.7 ns (≈1.85 Msps) was achieved.
In [53], an MLP was implemented for automatic blue whale classification. The MLP had 12 inputs, 7 neurons in the hidden layer, and 3 in the output layer (12-7-3). The backpropagation algorithm was used for an offline training process. The trained weight data were deployed using fixed-point representation with a 24-bit maximum length. The output function adopted was the logistic sigmoid function. The architecture was developed on a Xilinx Virtex 6 XC6VLX240T and Artix-7 XC7A100T FPGAs, reaching a throughput of 27.89 Msps and 25.24 Msps, respectively.
Unlike the literature works discussed, we propose linear and nonlinear prediction techniques designed on hardware for TI applications to reduce the latency. The linear techniques proposed are predictions based on linear regression using the floating-point standard IEEE 754. In addition, four solutions for different ranges of the regression buffer are presented. Regarding the nonlinear techniques, an MLP-BP prediction technique is proposed, using fixed-point representation, performed with online training. The Phantom Omni dataset is used to validate the implementations and compare them to software versions implemented on Matlab.
Proposal Description
TI-based communication enables sending the sensation of touch through the Internet. The user, OP, interacts with a virtual environment or a physical tool, ENV, over the network. Figure 1 shows the general tactile internet system, with two devices interacting. The devices can be the most diverse, such as manipulators, virtual environments, and tactile or haptic gloves. The master device (MD) sends signals to the slave device (SD) during the forward flow. Meanwhile, the SD feedbacks the signals to the MD on the backward flow.
Each master and slave device has its subsystem, computational system, responsible for data processing, control, robotics, and prediction algorithms at each side of the communication process. MCS and SCS are the identifications for the master and the slave device computational systems, respectively. The total execution time of each of these blocks can be given by the sum of the individual time of each algorithm, assuming they are sequential.
The model adopted in this work considers that several algorithms constitute the computational systems, and each of them increases the system's latency. Thus, the prediction process should be implemented in parallel to the other algorithms embedded in the MCS and SCS. This consideration aims to decouple prediction techniques from other algorithms, simplify the analysis, and improve performance. Figure 1 presents a model that uses prediction methods in parallel with the computational systems. The prediction modules, identified as MPD and SPD, have the same signal inputs as their respective computational systems, signals q(n) and c(n). In this project, the predictions are performed on Cartesian values. The module MPD predicts a vector called q̂(n) upon receiving the input vector. This prediction has a processing time of t_mpd. Similarly, the SPD module predicts the ĉ(n) vector on the slave side, with a prediction processing time of t_spd. Figure 1. Block diagram illustrating the behavior of a generic Tactile Internet system that uses a parallel prediction method.
Prediction Methods
As shown in Figure 1, the modules responsible for the prediction system, called MPD and SPD, can be implemented in parallel with MCS and SCS computational systems. These prediction systems can execute nonlinear prediction methods (NLPM), linear prediction methods (LPM), or probabilistic prediction methods (PPM), as illustrated in Figure 2. We propose the implementation of linear regression and the multilayer perceptron with the backpropagation algorithm (MLP-BP).
As mentioned in the previous section, the system has two data streams, forward and backward, represented by the signal vectors c(n) and q(n). In this section, υ(n) represents the input samples, and υ̂(n) represents the predicted samples for these two vectors in both streams.
Each prediction module can implement different prediction methods that can be applied for both Cartesian and joint coordinates, as described in [54]. The implementations can replicate the same technique multiple times. A replication index, NI, can be used as a metric to define the hardware capacity to implement multiple techniques in parallel. The NI value may vary according to the degree of freedom of the virtual environment or robotic manipulator model.
Linear Regression
The linear regression prediction model uses a set of M past samples to infer the predicted data. It uses a set of observed pairs composed of the time marker, t_m, and the dependent variable, υ, that is, (t_m(1), υ(1)), (t_m(2), υ(2)), . . . , (t_m(M − 1), υ(M − 1)), (t_m(M), υ(M)). The regression is defined by Equation (1),

υ̂(n) = β_0(n) + β_1(n) t_m(n), (1)

where υ̂(n) is the predicted value of υ(n), β_0(n) is the linear (intercept) estimation coefficient, and β_1(n) is the angular (slope) estimation coefficient for the same estimated sample. The parameter estimation process uses the principle of least squares [55]. Equations (2) and (3) give the coefficients,

β_0(n) = ῡ(n) − β_1(n) t̄_m(n), (2)

β_1(n) = Σ_{i=1}^{M} (t_m(i) − t̄_m(n))(υ(i) − ῡ(n)) / Σ_{i=1}^{M} (t_m(i) − t̄_m(n))², (3)

where ῡ(n) and t̄_m(n) are the average values of the sample variables υ and t_m.
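As a concrete illustration, the least-squares estimation of Equations (1)-(3) can be sketched in a few lines of Python. The function name and buffer layout below are illustrative, not part of the hardware design:

```python
import numpy as np

def predict_lr(samples, t_next):
    """One-step linear-regression prediction from M past (t_m, v) pairs.

    samples: list of (t_m, v) observation pairs (the regression buffer).
    t_next:  time marker of the sample to predict.
    """
    t = np.array([p[0] for p in samples], dtype=float)
    v = np.array([p[1] for p in samples], dtype=float)
    t_bar, v_bar = t.mean(), v.mean()          # mean values, as in Eqs. (2)-(3)
    # Least-squares slope (beta_1) and intercept (beta_0)
    beta1 = np.sum((t - t_bar) * (v - v_bar)) / np.sum((t - t_bar) ** 2)
    beta0 = v_bar - beta1 * t_bar
    return beta0 + beta1 * t_next              # Eq. (1)

# Buffer of M = 4 noiseless samples on the line v = 2t + 1
buf = [(1, 3.0), (2, 5.0), (3, 7.0), (4, 9.0)]
print(predict_lr(buf, 5))                      # -> 11.0
```

The hardware in Section 4 realizes exactly these sums and the final multiply-add, with the division replaced by empirically chosen constants.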
Multilayer Perceptron Networks
Commonly, complex problems are solved with machine-learning-based solutions, such as artificial neural networks (ANN). The mathematical structure of the ANN is composed of processing units called artificial neurons. The neurons can operate in a parallel and distributed manner [56]. Hence, ANN solutions can exploit the high parallelism degree provided by FPGAs.
Architecture
Several applications based on neural networks use the MLP-BP architecture due to its ability to deal with nonlinearly separable problems [57]. Equation (4) represents the prediction function using the MLP technique, which uses B past samples of υ to generate the υ̂(n) value, as follows:

υ̂(n) = f(υ(n−1), υ(n−2), . . . , υ(n−B)), (4)

where υ(n−1), υ(n−2), . . . , υ(n−B) are the input values of the MLP and υ̂(n) is the MLP predicted output. Equation (5) presents a generic MLP with L layers, where each k-th (k = 1, . . . , L) layer can have N_k neurons with N_{k−1} + 1 inputs representing the number of neurons in the previous layer. The neurons from the k-th layer process their respective input and output signals through an activation function f_k(·). At the n-th sample, this function is given by

y_i^k(n) = f_k(x_i^k(n)), (5)

where y_i^k(n) (i = 1, . . . , N_k) is the i-th neuron output in the k-th layer, and x_i^k(n) can be defined as

x_i^k(n) = Σ_{j=0}^{N_{k−1}} w_{ij}^k(n) y_j^{k−1}(n), (6)

where w_{ij}^k(n) is the synaptic weight associated with the j-th input of the i-th neuron. Figure 3 illustrates the structure of an MLP ANN with L layers, and Figure 4 illustrates the i-th neuron in the k-th layer. Figure 4. Structure of a neuron (perceptron) with N_{k−1} + 1 inputs.
The f_k(·) function was defined as the rectified linear unit (ReLU) function, according to Equation (7):

f_k(x) = max(0, x). (7)

The backpropagation algorithm is the training algorithm used with the MLP.
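Equations (4)-(7) amount to a standard feed-forward pass. A minimal Python sketch follows; the 3-2-1 topology and random weights are hypothetical, and NumPy floating point stands in for the fixed-point hardware arithmetic:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # Eq. (7)

def mlp_forward(weights, inputs):
    """Forward pass of an L-layer MLP (Eqs. (4)-(6)).

    weights: list of (N_k x (N_{k-1} + 1)) matrices; column 0 holds the bias.
    inputs:  the B past samples v(n-1), ..., v(n-B).
    """
    y = np.asarray(inputs, dtype=float)
    for W in weights:
        x = W @ np.concatenate(([1.0], y))  # x_i^k(n): weighted sum with bias
        y = relu(x)                          # y_i^k(n) = f_k(x_i^k(n))
    return y

# Hypothetical 3-2-1 topology (B = 3 inputs, 2 hidden neurons, 1 output)
rng = np.random.default_rng(0)
w = [rng.standard_normal((2, 4)), rng.standard_normal((1, 3))]
print(mlp_forward(w, [0.1, 0.2, 0.3]))
```

Every layer here uses ReLU, matching the hardware described later, where the output layer also adopts the ReLU activation.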
Backpropagation Training Algorithm
The weights are updated with the error gradient descent vector. At the n-th iteration, the i-th neuron error signal in the k-th layer is defined by

e_i^k(n) = d_i(n) − y_i^k(n) (output layer), e_i^k(n) = Σ_j δ_j^{k+1}(n) w_{ji}^{k+1}(n) (hidden layers), (8)

where d_i(n) is the desired value, and δ_j^{k+1}(n) is the local gradient for the j-th neuron in the (k + 1)-th layer at the n-th iteration. Equation (9) describes the local gradient,

δ_i^k(n) = e_i^k(n) f′(x_i^k(n)), (9)

where f′(·) is the derivative of the activation function. The synaptic weights are updated according to

w_{ij}^k(n + 1) = w_{ij}^k(n) + η δ_i^k(n) y_j^{k−1}(n) − α w_{ij}^k(n), (10)

where η is the learning rate, α is the regularization or penalty term, and w_{ij}^k(n + 1) is the updated synaptic weight used in the next iteration.
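One online update of Equations (8)-(10) can be sketched for a single hidden layer, taking α = 0 and a linear output neuron for simplicity (the paper's output layer uses ReLU). This is an illustrative software analogue of the BPM, not the fixed-point circuit; the function names and the 3-4-1 topology are assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_d(x):
    return (x > 0).astype(float)  # derivative of ReLU

def bp_step(W1, W2, v_in, d, eta=0.01):
    """One online backpropagation update (Eqs. (8)-(10) with alpha = 0).

    W1: hidden weights (N1 x (B+1)); W2: output weights (1 x (N1+1)).
    v_in: B past samples; d: desired (actual) next sample.
    Weights are updated in place; returns the output error e(n).
    """
    a0 = np.concatenate(([1.0], v_in))            # input with bias term
    x1 = W1 @ a0                                  # hidden pre-activations
    y1 = relu(x1)
    a1 = np.concatenate(([1.0], y1))
    y2 = W2 @ a1                                  # linear output neuron
    e = d - y2[0]                                 # output error, Eq. (8)
    delta2 = e                                    # output local gradient
    delta1 = relu_d(x1) * (W2[0, 1:] * delta2)    # hidden gradients, Eq. (9)
    W2 += eta * delta2 * a1[np.newaxis, :]        # weight updates, Eq. (10)
    W1 += eta * np.outer(delta1, a0)
    return e
```

Repeated calls on the incoming sample stream perform the online training that the BPM module realizes in hardware.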
Implementation Description
We propose an architecture using a 32-bit floating-point (IEEE754) format for the linear prediction technique. Throughout this section, we use the notation [F32]. For the MLP prediction technique proposed, we designed an architecture with a fixed-point format (varying the bit-width). We use the notation [sT.W] to represent the fixed-point values, where s represents the sign with 1 bit, T is the total number of bits, and W the number of bits in the fractional part. Therefore, the integer part of signed variables is T − W − 1 bits long, while for unsigned variables it is T − W bits.
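The [sT.W] notation can be made concrete with a small quantizer. The helper below is illustrative only; the rounding and saturation policies of the actual hardware may differ:

```python
def to_fixed(value, T=18, W=14, signed=True):
    """Quantize a real value to the [sT.W] format: T total bits, W fractional bits."""
    scale = 1 << W
    q = round(value * scale)
    lo = -(1 << (T - 1)) if signed else 0
    hi = (1 << (T - 1)) - 1 if signed else (1 << T) - 1
    q = max(lo, min(hi, q))   # saturate on overflow
    return q / scale          # value actually represented in hardware

print(to_fixed(0.123456, T=18, W=14))  # quantized, resolution 2^-14 ≈ 6.1e-5
print(to_fixed(10.0, T=18, W=14))      # saturates at (2^17 - 1)/2^14 ≈ 7.99994
```

With s = 1 sign bit and W = 14 fractional bits, the signed [s18.14] format leaves T − W − 1 = 3 integer bits, i.e., a range of roughly ±8, which frames the MSE analysis of the fixed-point designs later on.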
Linear Regression
The hardware architecture implemented for the linear prediction technique based on linear regression follows Equations (1)-(3). All circuits in the structure use 32-bit floating-point precision.
The circuit shown in Figure 5 executes Equation (1). As can be observed, the circuit is composed of one multiplier and one adder. There are three input values (t_m[F32](n), β_0[F32](n), and β_1[F32](n)) and one output (υ̂[F32](n)). To perform Equation (2), we use one multiplier and one subtractor, as shown in Figure 6. The circuit shown in Figure 7 performs Equation (3). As can be seen, the circuit is composed of two multipliers, one subtractor, one cascading sum module (CS), and two constant values (C). The constant values, C, were obtained empirically to simplify the division in Equation (3). The circuit has two input values (υ[F32](n) and ῡ[F32](n)) and one output value (β_1[F32](n)).
The cascading sum (CS) module shown in Figure 7 is implemented by the generic circuits shown in Figure 8. The cascading sum is also used as an input to calculate the mean values of t_m[F32](n) and υ[F32](n), as shown by the circuit illustrated in Figure 9.
Multilayer Perceptron
The main modules that perform the multilayer perceptron with the backpropagation training (MLP-BP) and the multilayer perceptron with recurrent output (RMLP-BP) are shown in Figures 10 and 11, respectively. The hardware structures are similar. The main difference between them is that the first input signal of the RMLP-BP is a feedback of the output signal. As can be observed, there are two main modules called multilayer perceptron module (MLPM) and backpropagation module (BPM). Both modules implement the variables in fixed-point format.
The MLPM module for the MLP-BP proposal (Figure 10) has B inputs from previous instants of the υ variable. The MLPM for the RMLP-BP proposal (Figure 11) instead receives the fed-back output signal as its first input. The hidden layers of the network also use the structure described. As mentioned in Section 3.2.1, the output layer uses the ReLU activation function. Figure 13 shows its hardware implementation. The signal x_i^k[sT.W](n) is the input of the nonlinear function described in Equation (7). The linear combination of weights and hidden-layer outputs provides the neural network output.
Backpropagation Module (BPM)
The BPM defines the error gradient and updates the neurons' weights. The error gradient, e[sT.W](n), described in Equations (8) and (9), is performed by the circuits shown in Figure 14.
The circuit shown in Figure 15 calculates the MLP neurons' weights, as previously described in Equation (10). Table 1 summarizes the value used for each parameter in the MLP-BP and RMLP-BP hardware implementations. It is essential to mention that the training parameters were empirically defined.
Synthesis Results
This section presents synthesis results for the linear and nonlinear prediction techniques. Three key metrics are analyzed: area occupation, throughput, and power consumption. In this work, the throughput (R_s) has a 1:1 ratio with the clock frequency (in MHz). All synthesis results analyzed here use a Xilinx Virtex-6 xc6vlx240t-1ff1156 FPGA, with 301,440 registers, 150,720 6-input look-up tables (LUTs), and 768 digital signal processors (DSPs) that can be used as multipliers.
Firstly, we carried out analyses for the linear regression technique, varying the M value from 1 to 3, 6, and 9, implemented in a 32-bit floating-point format. Secondly, we present the synthesis values for the MLP-BP using signed fixed-point configurations with the following bit widths: 18.14, 16.12, and 14.10. Finally, we also provide an analysis increasing the number of implementations (NI) in parallel from 1 to 3 and 6, thus increasing the number of variables processed in parallel. Tables 2-4 show the synthesis results for the linear regression prediction technique with 1, 3, and 6 parallel implementations, respectively. The first column of each table highlights the M value. The second to seventh columns present the area occupation on the FPGA. The second and third display the number of registers/flip-flops (NR) and their percentage (PNR), and the fourth and fifth, the number of LUTs (NLUT) and their percentage (PNLUT). Finally, the sixth and seventh indicate the number of multipliers (NMULT) and their percentage (PNMULT). The last two columns show the processing time, t_s, in nanoseconds (ns), and the throughput, R_s, in mega-samples per second (Msps). To demonstrate the linear behavior of our hardware proposal, we provide a linear regression model for Table 4. Figures 16-18 show the NR, NLUT, and R_s results. It is essential to mention that linear regression models return a coefficient of determination called R². The R² rate represents the quality of the linear regression model, i.e., it quantifies the fraction of the data variance explained. Commonly, R² is expressed on a scale from 0% to 100% (or from 0 to 1 for normalized values). Concerning the NR, the plane f_NR(NI, M) can be described by f_NR(NI, M) ≈ −1439 + 510.7 × NI + 309.5 × M;
Linear Prediction Techniques
the coefficient of determination is R² = 0.8553.
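Planes such as f_NR(NI, M) and their R² values can be reproduced with ordinary least squares. The sketch below uses made-up (NI, M, NR) tuples, not the paper's synthesis data:

```python
import numpy as np

def fit_plane(ni, m, y):
    """Least-squares fit of y ≈ a + b*NI + c*M and its coefficient of determination R^2."""
    A = np.column_stack([np.ones_like(ni, dtype=float), ni, m])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)
    return coef, r2

# Hypothetical register counts for six (NI, M) synthesis runs
ni = np.array([1, 1, 3, 3, 6, 6], dtype=float)
m  = np.array([3, 9, 3, 9, 3, 9], dtype=float)
nr = np.array([1500, 3400, 2600, 4500, 4100, 6100], dtype=float)
coef, r2 = fit_plane(ni, m, nr)
print(coef, r2)   # plane coefficients [a, b, c] and R^2
```

An R² close to 1 indicates that resource usage grows essentially linearly in NI and M, which is the behavior the synthesis tables exhibit.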
The NLUT plane, f_NLUT(NI, M), has R² = 0.8863. Finally, the plane f_Rs(NI, M), which models the throughput in Msps, has R² = 0.8372. According to the t_s results presented in Tables 2-4 and Figure 18, a significant reduction in throughput is noticeable as M increases: adding circuits to the cascading sum (CS) submodule lengthens the critical path and thus the sampling time (t_s). However, the throughput increases proportionally to NI for a fixed value of M.
It is observable that the number of resources used increases linearly as M and NI grow. As presented in Table 4, for NI = 6 and M = 9, 46% of the LUTs are occupied. On the other hand, for smaller values such as M = 3 and NI = 6, the LUT occupation is 21.53%. It is therefore possible to increase the NI using the remaining resources; however, there is no guarantee that this would not incur large throughput losses.
Therefore, it is relevant to mention that the parallel FPGA implementations of the linear regression can achieve high throughput, as required in the TI scenario. On the other hand, these implementations result in high hardware area occupation. Considering that TI is still under development, high processing speed and intelligent use of resources are crucial.
Nonlinear Prediction Techniques
Commonly, MLP-based implementations use the hyperbolic tangent activation function. However, using this function resulted in a 28% occupation of the FPGA memory primitives for an MLP with four inputs, four neurons in the hidden layer, and one neuron in the output layer (with NI = 1). For NI = 6, it would occupy ≈68% of the memory primitives, making the tanh function unfeasible due to its high hardware implementation cost. The activation function used in this work is the ReLU, since its hardware implementation does not require memory primitives. As previously described, Equation (7) describes the ReLU function. Tables 5 and 6 show the hardware area occupation and throughput results for the MLP-BP and RMLP-BP nonlinear prediction techniques. The analyses for both techniques use a Virtex-6 FPGA. As presented in the first column (T.W), they are implemented for different signed fixed-point bit widths. The results displayed in Tables 5 and 6 make it possible to plot surfaces demonstrating the hardware behavior concerning area occupation and throughput. Figures 19 and 20 present the relationship between the NI and the number of bits in the fractional part (W) with the number of registers (NR) for the MLP and RMLP, respectively.
The f_NR(NI, W) planes can be expressed with R² = 1 for the MLP and R² = 0.9835 for the RMLP.
The corresponding NLUT planes have R² = 0.9935 and R² = 0.9899, and the throughput plane in Equation (18) has R² = 1. Regarding the throughput (R_s) presented in Tables 5 and 6, it is observable that R_s does not vary significantly for a fixed NI and a varying bit width (T.W). For a fixed bit width (T.W) and a varying NI, the throughput increases linearly with the NI value. It is also worth mentioning that the t_s value has low variance because the MLP and BP structures adapt well to parallelism. Hence, the circuit provides good scalability without considerable performance losses. Compared to the linear regression discussed in Section 5.1, the MLP shows better flexibility.
The area occupation decreases as the bit width (T.W) and NI parameters decrease, since reducing these parameters shrinks the modules' circuits that store or process data. The multipliers (NMULT) are the most used resource, reaching up to ≈42% occupation when NI = 6. In addition, the MLP and RMLP result in a similar hardware area occupation, using less than 43%, 27%, and 2% of the multipliers, LUTs, and registers, respectively. Given that, for the current design and chosen FPGA, the maximum feasible value of NI would be 9 or 10, with the throughput remaining close to the current range. Nevertheless, this analysis used only the Virtex-6 DSPs. It is important to emphasize that the available LUTs can also implement multipliers, permitting an increase in the parallelization degree and throughput.
We also performed the synthesis for the MLP and BP algorithms separately to verify the hardware impact of each of them. Table 7 presents an MLP-only implementation, while Table 8 presents a BP-only implementation. Given that most works in the literature do not implement the BP or any training algorithm in hardware, we provide a complete analysis of the modules implemented separately. The MLP, for NI = 6, occupies only 3.82% of the LUTs and 19.53% of the multipliers (PNMULT), while achieving a throughput of ≈188 Msps. Hence, the low resource usage shows that our approach provides good scalability and high performance for applications that do not require online training and only use the MLP module. The synthesis results show that the hardware proposal occupies a small area: the MLP uses less than 20% of the multipliers and 4% of the LUTs, while the BP occupies less than 4% of the multipliers and LUTs and reaches more than 39 Msps. Thus, it is possible to increase the architecture's parallelization degree using the unused resources, consequently enabling the acceleration of several applications that rely on massive data processing [58]. In addition, the unused resources can also serve robotic manipulators with more degrees of freedom and other tools [59]. The low hardware area occupation also shows that smaller, low-cost, and low-consumption FPGAs can fit our approach for IoT and M2M applications [60].
Therefore, for the linear regression and the nonlinear MLP-BP implementations, the throughput results reached values up to ≈98 Msps. These values make it possible to use these solutions in problems with critical requirements, such as TI applications [9,10,[29][30][31]]. Figures 19-24 show that the MLP and RMLP techniques have similar results for NR, NLUT, and R_s. The similarity is expected because the RMLP architecture differs from the MLP only in the input υ̂[sT.W](n), which is delayed by one sampling time t_s. Therefore, the following sections will focus only on the MLP and MLP-BP results, as they provide better scalability when increasing the NI.
Validation Results
This work uses bit-precision simulation tests to validate the proposed hardware designs for the prediction techniques described in the previous section. The bit-precision simulation uses a dynamic nonlinear system: a robotic manipulator with 6 degrees of freedom (DOF), i.e., rotational joints, called the Phantom Omni [61][62][63][64]. Nonetheless, only the first three joints are active [64]. Therefore, the Phantom Omni can be modeled as a three-DOF robotic manipulator with two segments (L_1 and L_2) interconnected by three rotary joints (θ_1, θ_2, and θ_3), as shown in Figure 25. Based on the description provided by [63], the Phantom Omni parameters in the simulations carried out were defined as follows: L_1 = 0.135 mm; L_2 = L_1; L_3 = 0.025 mm; and L_4 = L_1 + A for A = 0.035 mm. In addition, the dynamics of the Phantom Omni can be described by nonlinear, second-order, ordinary differential equations of the form

M(θ(t)) θ̈(t) + C(θ(t), θ̇(t)) θ̇(t) + g(θ(t)) + f(θ̇(t)) = τ,

where θ(t) = [θ_1(t), θ_2(t), θ_3(t)]ᵀ is the vector of joints, τ is the vector of acting torques, M(θ(t)) ∈ R^{3×3} is the inertia matrix, C(θ(t), θ̇(t)) ∈ R^{3×3} is the Coriolis and centrifugal forces matrix, g(θ(t)) ∈ R^{3×1} represents the gravity force acting on the joints, and f(θ̇(t)) is the friction force on the joints [61][62][63][64]. Figure 26 shows the angular position of each joint of the three-DOF Phantom Omni robotic manipulator, that is, θ_1, θ_2, and θ_3. It is possible to observe the trajectory of each joint concerning its angular position as a function of the number of samples received. The mean square error (MSE) between the actual and predicted data is used to assess the reliability of the results generated by the proposal and is defined as

E_qm(X) = (1/N_s) Σ_{i=1}^{N_s} (X̂(i) − X(i))²,

where E_qm(X) is the mean square error, N_s is the number of samples, X̂(i) is the i-th estimated sample value, and X(i) is the i-th actual sample value.
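The MSE metric E_qm above can be computed directly; the short sample vectors below are invented stand-ins for the software and hardware output streams:

```python
import numpy as np

def mse(x_hat, x):
    """Mean square error E_qm between estimated and actual sample vectors."""
    x_hat, x = np.asarray(x_hat, dtype=float), np.asarray(x, dtype=float)
    return np.mean((x_hat - x) ** 2)

sw = np.array([0.10, 0.20, 0.30])   # e.g., 64-bit software reference
hw = np.array([0.10, 0.21, 0.29])   # e.g., fixed-point hardware output
print(mse(hw, sw))                  # ≈ 6.67e-05
```

In the validation below, this is applied frame by frame (80 frames of 50 samples) to quantify the distance between the reduced-precision hardware and the double-precision software.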
The following subsections present the validation results for the implemented linear and nonlinear prediction techniques.
Linear Prediction Techniques
We compared the θ_1(n) signal generated by our proposed FPGA architecture with one from a Matlab implementation of the linear prediction techniques. Figures 27-30 show the results. The Matlab version uses double-precision floating-point, whereas our hardware design uses single-precision floating-point. As can be observed, the results for the hardware implementation are similar to the Matlab version, despite halving the bit-width. Table 9 and Figure 31 present the MSE between the software (64-bit floating-point, IEEE 754) and hardware (32-bit floating-point) implementations of the LR prediction techniques, using N_s = 4000 data samples, 80 frames, and 50 samples per frame. As can be observed, the two implementations are equivalent, i.e., the MSE is significantly small. Table 9. Mean square error (MSE) between the software implementation and the proposed hardware implementation for the LR technique. Afterwards, we performed an MSE analysis varying the hardware bit-width from 18.14 to 16.12 and 14.10, also for N_s = 4000 data samples, 80 frames, and 50 samples per frame. Figure 35 and Table 10 show the resulting MSE. As can be observed, similarly to the linear prediction techniques, the MSE between the software and hardware versions is also small for the nonlinear techniques. The proposed hardware implementations have a response similar to the double-precision (64-bit) software implementation, even using fixed-point formats with few bits, such as 14.10. Furthermore, fewer bits may allow the implementation of the proposed method on hardware with limited resources. Thus, the number of resources available could define the number of bits used to implement a technique. After analyzing the MSE, it is possible to see that both linear and nonlinear techniques perform well in the current test scenario.
However, as previously mentioned, linear-regression-based techniques may not be the most suitable for the TI landscape due to the scalability issues seen in Section 5.1. Hence, the following section will focus on the results of the MLP-BP.
Comparison with State-of-the-Art Works
In this section, a comparison with state-of-the-art works is carried out for the following hardware key metrics: throughput, area occupation, and energy consumption. The implementations presented were developed on the Virtex-6 FPGA with T.W = 14.10 bits. Table 11 shows the MLP processing speed and throughput for our work and other works in the literature. As can be seen, the columns present the number of implementations (NI), the fixed-point data precision (T.W), the MLP and MLP-BP processing speed, and the throughput in Msps.
Throughput Comparison
The work proposed in [50] is an MLP with a 12-12-2 topology (twelve inputs, twelve neurons in the hidden layer, and two neurons in the output layer) deployed with a 24-bit fixed-point format. The MLP training is offline, and it reaches a throughput of 113.135 Msps and 115.875 Msps for the Virtex 6 XC6VLX240T and the Artix-7 XC7A100T FPGAs, respectively. The high performance achieved is due to the pipeline used in their hardware design, which reduces the system's critical path and increases the maximum frequency. Unlike [50], our proposal uses online training, for which a pipeline-based architecture is not feasible: the chain of delays intrinsic to pipelining can reduce the samples' accuracy during online training. Nevertheless, the throughput of our architecture improves as the number of implementations grows, increasing the number of samples processed per second without impacting the maximum clock.
The design proposed in [51] implements a 7-6-5 MLP with offline training on the Artix-7 35T FPGA. It achieved a throughput of 3.7 Msps, but the number of clock cycles required to obtain a valid output reduces the throughput compared to other works. Meanwhile, the work presented in [52] proposes a 12-3-1 MLP on a Zynq-7000, also with offline training, capable of reaching a maximum throughput of 1.85 Msps. The small throughput (compared to other works) may be related to the use of high-level synthesis (HLS), which usually results in a non-optimized implementation. The architecture presented in [53] is a 12-7-3 MLP with a 24-bit fixed-point data format and offline training. The maximum throughput achieved was 27.89 Msps and 25.24 Msps for the Virtex 6 XC6VLX240T and Artix-7 XC7A100T FPGAs implementations, respectively. Table 12 presents a speedup analysis performed for all works presented in Table 11. The first column presents the NI in our architecture, while the second to seventh columns are the literature works compared with ours.
The speedup is defined as Speedup = Throughput_work / Throughput_ref, where Throughput_work represents the throughput of our proposal and Throughput_ref represents the throughput of the literature reference.
The results were obtained only for the MLP-BP implementation. As shown in Table 12, the implementation proposed in [50] achieves a higher throughput than ours. However, our proposal offers good scalability that allows increasing the NI to reach higher throughput, reducing this difference even with online training embedded in the platform. Moreover, our approach reached a higher throughput than the other works, with speedup rates of up to 52×.
In addition, it is vital to mention that a higher clock frequency (in MHz) does not imply a higher throughput; throughput is more closely related to the parallelism degree. For example, the MLPs in [51,52] have the lowest throughput despite their high clock frequencies (Table 11). In these cases, our speedup was up to 26× and 52× over [51] and [52], respectively.
In [53], the throughput value is 27.89 and 25.24 Msps, for an MLP with offline training and NI = 1. Meanwhile, even implementing the training algorithm in hardware, our work achieves speedup rates of up to 3×.
In [50], a pipeline scheme reduces the system's critical path and increases the throughput. However, it does not provide online training, which could reduce its performance in changing scenarios. Meanwhile, our proposed architecture provides online training, adapting to different scenarios. In addition, a pipelined scheme would not be feasible in our case, since the samples have a temporal dependence.
Hardware Area Occupation
The area occupation comparison was based on a hardware occupation ratio defined as R_occupation = N_hardware^work / N_hardware^ref.
The superscripts work and ref represent the resource information regarding our work and the compared work, respectively. Meanwhile, N_hardware represents the primitives, such as the number of LUTs, registers, multipliers, or block random access memories (BRAMs). Table 13 shows the area occupation for our work and works in the literature. The second and third columns present the NI and fixed-point data precision (T.W). From the fourth to seventh columns, we present the number of LUTs (NLUT), the number of registers (NR), the number of multipliers (NMULT), and the number of BRAMs (NBRAM). The work presented in [51] uses an Artix-7 35T FPGA for the implementation, occupying 3466 LUTs, 569 registers, and 81 multipliers. The proposal shown in [52] uses 4032 LUTs, 2863 registers, 28 multipliers, and 2 BRAMs. The architecture proposed in [53] was implemented on two FPGAs using the sigmoid activation function, occupying 21,322 LUTs, 13,546 registers, 219 multipliers, and 2 BRAMs on the Virtex 6 XC6VLX240T FPGA, and 21,658 LUTs, 13,330 registers, 219 multipliers, and 2 BRAMs on the Artix-7 XC7A100T. Tables 14-17 present the hardware ratio, R_occupation, for our proposed architecture. As shown in Tables 14-17, our proposal uses online training and implements up to six replicas of the same technique in parallel. For most cases, it requires fewer resources, evidencing efficient use of hardware. For NI = 1, except against the works presented in [51,52], which have low throughput (see Table 11), our proposal maintains a good advantage over the other proposals. For NI = 6, the present work has a high consumption of hardware resources compared to the other works; however, this is a deliberate strategy to increase the throughput of the proposal.
Furthermore, unlike other proposals, our design does not occupy any BRAMs, as we use the ReLU function. This improves the design's scalability for flexible implementation in different scenarios, such as TI systems with more DOFs (e.g., six or nine).
Dynamic Power Consumption
Dynamic power is the primary factor in a digital circuit's energy consumption. It can be expressed as

P_d ∝ N_g V_DD² F_clk,

where N_g is the number of elements (or gates), F_clk is the maximum clock frequency, and V_DD is the supply voltage. Given that the operating frequency of CMOS circuits is proportional to the voltage [65], the dynamic power can also be described as

P_d ∝ N_g F_clk³.

The number of elements, N_g, can be estimated from the FPGA primitives used to deploy the architecture, i.e., N_g = NLUT + NR + NMULT. Tables 18 and 19 present the operating frequency and dynamic power analysis results regarding N_g. Concerning the dynamic power, we present the reduction rate, S_d, achieved by our proposal according to

S_d = (N_g^ref (F_clk^ref)³) / (N_g^work (F_clk^work)³),

where N_g^ref and F_clk^ref are the number of elements and the maximum clock frequency of the work we are comparing against, while N_g^work and F_clk^work are those of our work. Unlike the works in the literature, our hardware proposal uses a fully parallel layout, requiring a single clock cycle per sample; therefore, the maximum clock frequency is equivalent to the throughput, F_clk^work ≡ R_s. We assume that all proposals operate at the maximum frequency that the platform can reach. Thus, for NI = 1, our design reduced power consumption by more than 1200× compared to the one proposed by [50]. Overall, our proposal reduced the power consumption compared to the other works in most case scenarios. Therefore, IoT projects that require low power consumption can use our method without affecting their performance.
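Under the proportionality above (P_d ∝ N_g F_clk³), the reduction rate S_d collapses to a one-line ratio. The numbers in the example are hypothetical, not taken from Tables 18 and 19:

```python
def power_reduction(n_ref, f_ref, n_work, f_work):
    """Dynamic-power reduction S_d, assuming P_d is proportional to N_g * F_clk^3
    (V_DD proportional to F_clk). Frequencies in any consistent unit."""
    return (n_ref * f_ref**3) / (n_work * f_work**3)

# Hypothetical comparison: reference with 2000 elements at 100 MHz vs.
# our design with 1000 elements at 50 MHz
print(power_reduction(2000, 100.0, 1000, 50.0))  # -> 16.0
```

The cubic dependence on frequency is why a fully parallel, single-cycle design at a modest clock can undercut a heavily pipelined, high-frequency design in dynamic power even when element counts are comparable.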
For NI = 6, we observe a power consumption similar to that of [53], due to the small clock frequency of their proposal and the fact that it does not provide online training.
Lowering the use of BRAMs to zero is a highlight of this work. This reduction is possible due to the implementation of the ReLU function. Unlike other proposals that make use of functions, such as sigmoid, this strategy provides an advantage in terms of scalability of the proposal, which can be scaled to various scenarios without compromising the use of BRAMs. The fully parallel computing strategy proposed in the present work does not spend clock time accessing the RAM block, and this can increase throughput and decrease power consumption.
Conclusions
This work introduced a method for implementing prediction techniques in parallel to reduce the latency of TI systems using FPGA, thus enabling local devices to be used in conjunction with haptic devices. The hardware-based method minimized the data processing time of linear and nonlinear prediction techniques, showing that reconfigurable computing is feasible for solving complex TI problems.
We presented all the implementation details and the synthesis results for different bit-width resolutions and three different numbers of implementations in parallel (one, three, and six). In addition, the proposal was validated with a three-DOF Phantom Omni robotic manipulator and evaluated with respect to hardware area occupation, throughput, and dynamic power consumption. We also presented comparisons with state-of-the-art works.
Comparisons demonstrate that the fully parallel approach adopted for the linear regression and nonlinear prediction techniques can achieve high processing speed. However, the linear regression techniques have low scalability and may not be a good path for the TI area. The nonlinear prediction techniques achieve a throughput improvement of up to ≈52× while also reducing power consumption by ≈1300×. Furthermore, despite the high degree of parallelism, the proposed approach offers good scalability, indicating that the present work can be used in TI systems, especially with the nonlinear prediction techniques.
An R package for data mining chili pepper fruit transcriptomes
Background: Open data sharing is instrumental for the advance of the biological sciences. Gene expression is the primary molecular phenotype, usually estimated through RNA-Seq experiments. Large-scale interpretation of RNA-Seq results is complicated by the wide range of gene expression, as well as by the diversity of biological sources and experimental treatments. Here we present "Salsa", a self-contained R package for extracting useful knowledge about gene expression during the development of chili pepper fruit. Methods and Results: Data from 168 RNA-Seq libraries, comprising more than 3.4 billion reads, were analyzed and curated to represent standardized expression profiles (SEPs) for all genes expressed during fruit development in 12 chili pepper accessions. The accessions include representatives of domesticated varieties, wild ancestors and crosses, covering a broad spectrum of genotypes. Data are organized in a relational way, and functions allow data mining from the level of single genes up to whole genomes, grouping profiles by different criteria. These include any combination of expression model, accession, protein description and gene ontology (GO) term, among others. GO enrichment analysis can also be performed over any set of genes. Conclusions: "Salsa" opens endless possibilities for mining the transcriptome of chili pepper during fruit development.
Background
Measurements of gene expression constitute the primary molecular phenotype. RNA-Seq experiments [1] allow genome-wide estimation of the relative level of gene expression in a particular species, organ, tissue or even single cells [2]. Gene expression profiles through time map the transcriptome landscape of organ developmental programs. Phenomena such as seed development [3], senescence [4] and aging [5] have been shown to be conserved in plants. In particular, the development of fleshy fruits, an indispensable part of the human diet, is probably conserved throughout the angiosperms [6].
There are various databases for querying gene expression profiles, such as the NCBI Gene Expression Omnibus [7] or TiGER [8]; however, mining gene expression databases [9] remains a challenge [10], mainly because of the heterogeneity of the organisms, experimental conditions and methods employed to obtain those profiles. Comparisons of expression profiles between genes within a single experiment are also complicated by the fact that transcript abundance varies by orders of magnitude [11].
Here we present "Salsa", a self-contained R [12] package that allows genome-wide mining of a large collection of more than 313,000 standardized expression profiles (SEPs), representing expression profile changes during fruit development in chili pepper (Capsicum annuum L.). With this application one can look for sets of genes having a particular description or annotation, expressed in one or more accessions, following a specific expression pattern, etc. It also includes functions for the statistical analysis and visualization of SEPs, Gene Ontology (GO) enrichment analyses and web-browsing facilities for genes and GO terms.
Although "Salsa" is of primary interest for Capsicum research, it will also be useful for researchers interested in fruit development in other Solanaceae, or even for comparative analyses of fruit development with taxonomically distant species.
At first sight it might appear excessive to devote an R package to the data mining of a single data set. However, the data can be mined with very different emphases, looking for specific phenomena in the multidimensional space formed by the time profiles of almost 30 thousand genes expressed in 12 genotypes of different origin. Considering and analyzing the data from different angles could provide novel insights for a better understanding of the transcriptome's complexity during fruit development.
Implementation
The main factors that hinder comparisons between gene expression profiles are the heterogeneity of the data sources (different species, organs, tissues, and treatments: environmental conditions, chemicals, mutants, time of development, etc.) as well as differences in data curation and statistical analyses. To alleviate these factors and achieve an equilibrium between generality and specificity, we focus on a set of 12 accessions of a single species (Capsicum annuum L.), growing the plants under uniform conditions and sampling the fruits at fixed times, from flower to maturity [13].
To be able to compare expression profiles from different genes and accessions, we improved the method published in [14] to include a False Discovery Rate (FDR) [15] of approximately 1% when comparing any pair of expression profiles. The method contrasts only adjacent time intervals, determining whether gene expression increases (I), decreases (D) or remains steady (S) by applying the method described in [16]. Given that 7 time points are taken into consideration (six time intervals), this categorization results in a total of 3^6 = 729 discrete expression models. Finally, gene expression is standardized over time to produce "Standardized Expression Profiles" (SEPs). Full details of the method are presented in [13] and commented on in Additional file 1.
Data in "Salsa" follow the relational database paradigm [17]; gene identifier ("id") and three other variables link the seven data.frames, providing an efficient framework for data querying (for details see section 2.1 in Additional file 1).
At the core of the package is the function "get.SEP", which allows the selection of sets of expression profiles using general and flexible criteria. These criteria include the selection of genes by description, accession, expression model, time of maximum expression and expression level (for details see section 2.2 in Additional file 1). Having selected one or more sets of SEPs, the user can summarize, plot and analyze the SEP groups. Summarization and plotting include the calculation of confidence intervals for each of the expression times, which allow evaluation of differences in relative expression levels among SEPs at each of the seven time points sampled.
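The style of multi-criteria selection described for "get.SEP" can be mimicked in a few lines. This is a hypothetical illustration: the field names (`accession`, `model`, `description`) and the record layout are invented for the sketch and do not reflect the package's actual data.frames.

```python
# Hypothetical illustration of "get.SEP"-style selection: filter a record table
# by accession, expression model and a description keyword (None = no filter).
# Field names are invented for the sketch.

def get_sep(records, accession=None, model=None, keyword=None):
    """Return the records matching all of the given criteria."""
    out = []
    for r in records:
        if accession and r["accession"] != accession:
            continue
        if model and r["model"] != model:
            continue
        if keyword and keyword.lower() not in r["description"].lower():
            continue
        out.append(r)
    return out

records = [
    {"id": 1, "accession": "AS", "model": "IISDDD", "description": "transcription factor"},
    {"id": 2, "accession": "SR", "model": "SSSSSI", "description": "transporter"},
]
hits = get_sep(records, accession="AS", keyword="factor")
```

Combining several optional filters this way, with `None` meaning "match everything", is the usual pattern behind such query functions.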
The problem of testing global differences between two SEPs is solved in the function "analyze.2.SEPs" by estimating the Euclidean distances between and within the two sets of SEPs and then comparing, via a t-test, the mean of the distances between the two SEPs with the mean of the distances within each SEP. This approach reduces the problem of simultaneously testing seven time points to the one-dimensional comparison of two means, offering a powerful way to decide whether two SEPs can be considered equal.
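The between/within distance idea can be sketched directly. A minimal sketch, assuming Welch-style pooling of the two distance samples; the actual "analyze.2.SEPs" presumably relies on R's t.test, and the statistic below omits the p-value computation.

```python
# Sketch of the between/within distance comparison behind "analyze.2.SEPs":
# Euclidean distances between profiles of the two groups are compared, via a
# t statistic, with distances within each group.
import math
from itertools import combinations
from statistics import mean, stdev

def dist(p, q):
    """Euclidean distance between two equal-length profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def between_within_t(group1, group2):
    between = [dist(p, q) for p in group1 for q in group2]
    within = [dist(p, q) for g in (group1, group2) for p, q in combinations(g, 2)]
    # Welch-style t statistic comparing the two distance means
    se = math.sqrt(stdev(between) ** 2 / len(between)
                   + stdev(within) ** 2 / len(within))
    return (mean(between) - mean(within)) / se
```

A large positive statistic indicates that profiles are much farther apart across groups than inside them, i.e. the two SEP sets differ globally.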
Additional data mining facilities in "Salsa" include the summarization of the expression profile of each gene, web browsing per gene or GO term, and GO enrichment analysis. GO enrichment is implemented for arbitrary sets of genes in a single GO term (function "analyze.GO") or for the whole set of GO terms in a single aspect (function "analyze.all.GO"). GO enrichment is performed using Fisher's exact test, and the FDR is calculated when all the terms in a GO aspect are analyzed.
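The one-sided Fisher's exact test used for enrichment reduces to a hypergeometric upper tail, which can be sketched with the standard library. This is an assumed reading of what "analyze.GO" computes; the function name and argument order below are invented for the sketch.

```python
# Hedged sketch of a GO-enrichment test: one-sided Fisher's exact test
# (hypergeometric upper tail) asking whether a gene set contains more genes
# annotated with a GO term than expected by chance.
from math import comb

def go_enrichment_p(k, n, K, N):
    """P(X >= k) for X ~ Hypergeom(N, K, n): observing k annotated genes in a
    set of n, drawn from N genes of which K carry the term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)
```

Small p-values indicate over-representation of the term; running this over every term of a GO aspect and then filtering by FDR mirrors the "analyze.all.GO" workflow described above.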
"Salsa" is implemented in R (≥ 3.4.4) and thus it is platform-independent and does not have any restrictions to be used. The binary file to install the package is presented as Additional file 2.
Results and Discussion
"Salsa" offers many opportunities to find interesting insights in the chili pepper transcriptome during fruit development. A priori, it is difficult to delimitate such universe of possibilities, but it involves as a first step to find an 'interesting' set of genes to study. Further steps will involve a more detailed analysis of that gene set, guided by the biological intuition of the researcher.
Two examples of interesting sets of genes that could be analyzed are: 1) genes with a highly concordant and specific expression profile within one or more accessions, but with different and also concordant expression profiles in other accessions; 2) genes associated with a particular GO biological process with contrasting expression patterns between accessions or groups of accessions. In fact, the possibilities of analyses are limitless, and depend on the researcher's interest.
We decided to illustrate, as an example of data mining, the comparison of expression profiles between two accessions. R code and detailed explanations are presented in section 3 of Additional file 1, while here we present and explain the significance of the results through the figures produced by the package.
Given that the main point of the example is to show the package possibilities, we are not going to discuss in depth the biological implications of findings; in [13] we have described the way to discover interesting facts by employing a set of R data and functions that culminated in the "Salsa" package after careful generalization and testing.
Comparing gene expression profiles between accessions with contrasting fruit size. We begin our analysis by isolating, as separate SEP data.frames, the genes expressed in two accessions: "AS" (Ancho San Luis), from the domesticated accession set, which produces very large and moderately pungent fruits, and "SR" (Sonora Red), a wild accession with very small and highly pungent fruits.
The majority (> 89%) of the genes were consistently expressed in both accessions, while small percentages (< 3% and < 7% of the total number of genes) were exclusively expressed in "AS" and "SR", respectively. Figure 1 presents the plot of the average SEPs for each accession, as well as for the set formed by both of them. Figure 1 shows that the mean SEPs in "AS" and "SR" differ significantly at some points of fruit development. For this plot we employed a very stringent threshold for the estimation of the confidence intervals (CIs) for the means: a Type I error of α = 1 × 10^−4, which implies 99.99% confidence. CIs for the mean of each group at each time are shown as thin vertical lines over the circles that mark the means per time and accession group. Looking at the CIs, we see that the mean SEPs of "AS" and "SR" differ strongly at time points 10, 20, 40 and 50 DAA. It is important to note that a plot of average SEPs, such as the one presented in Figure 1 for "AS" (red line) and "SR" (blue line), does not indicate uniformity of expression profiles for individual genes; in fact, the mean of the SEPs hides the large diversity of individual expression profiles among genes (see Figure 3 in Additional file 1).
Finding sets of genes with divergent expression between the two accessions. The divergence of mean SEP expression between "AS" and "SR" observed in Figure 1 implies large differences between the transcriptomes of the two accessions; in particular, the peak of mean expression is found at 10 DAA for "AS", while for "SR" it happens ten days later, at 20 DAA. The peak of mean expression signals maximum transcriptional activity and is a temporal hallmark for each individual gene.
To dissect transcriptome differences between "AS" and "SR", we selected the sets of genes with simultaneous peak expression at each one of the seven sampled time points. This produces a total of 7 × 7 = 49 gene sets (see Box 3 in Additional file 1). Figure 2 shows the matrix of percentages of genes with peak expression at each one of the times.
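The cross-tabulation of peak times behind Figure 2 can be sketched as follows. The function name, record layout and example profiles are invented for illustration; only the 7 sampled times and the AS-on-X / SR-on-Y layout come from the text.

```python
# Sketch of the 7 x 7 peak-time cross-tabulation behind Figure 2: each gene
# contributes one count at (peak time in "AS", peak time in "SR").

TIMES = [0, 10, 20, 30, 40, 50, 60]  # DAA, as sampled in the study

def peak_matrix(profiles_as, profiles_sr):
    """profiles_*: dict gene_id -> 7-point profile; returns 7x7 count matrix."""
    counts = [[0] * len(TIMES) for _ in TIMES]
    for gid in profiles_as:
        i = max(range(len(TIMES)), key=lambda t: profiles_as[gid][t])
        j = max(range(len(TIMES)), key=lambda t: profiles_sr[gid][t])
        counts[j][i] += 1  # SR on the Y axis, AS on the X axis, as in Figure 2
    return counts
```

Dividing each count by the total number of genes recovers the percentages plotted in Figure 2; entries off the diagonal are the out-of-phase gene sets discussed below.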
The total number of genes expressed in both accessions was 24720. Of these, 3672, representing a proportion of 3672/24720 ≈ 0.1485, or 14.85%, have their peak expression at 0 DAA in both accessions. That figure is shown in the bottom left-hand side of the matrix in Figure 2. The green dashed line in Figure 2 runs over the cases where the peak expression coincides in time in both accessions, and we can see that, except for 0, 10 and 60 DAA, the corresponding percentages are small (less than 2%), which partially explains the differences between the mean SEPs observed in Figure 1.
Gene sets that lie off the dashed green diagonal of Figure 2 are "interesting", because they present a pattern where the peak expression is out of phase. One of the two gene sets presenting the highest possible phase difference is the one formed by the 758 genes (≈ 3.07% of the total; top left-hand corner in Figure 2) which peak at 0 DAA in "AS" (X-axis) while having their maximum at 60 DAA in "SR" (Y-axis).
Analysis of the "ASm0SRm60" gene set. To further show "Salsa" capabilities, we performed an in-depth analysis of the set of 758 genes (≈ 3.07% of the total), which presents its maximum mean expression at the mature flower (0 DAA) in accession "AS", while having such peak at the mature fruit (60 DAA) in "SR". We denote that gene set as "ASm0SRm60". Figure 3 presents the mean expression SEPs in the ASm0SRm60 set.
In Figure 3 we can see the large difference in phase and the contrasting mean standardized expression in the set ASm0SRm60. The mean SEP in "AS" (red line) presents a high peak at 0 DAA, decreases suddenly from 0 to 10 DAA and then remains relatively steady from 10 up to 60 DAA, forming an 'L'-shaped expression profile. On the other hand, the mean SEP for "SR" (blue line) presents an almost mirrored 'L' shape, with a local maximum at 0 DAA followed by a decrease from 0 to 10 DAA, where the global minimum of mean expression is reached. From 20 DAA up to 50 DAA the mean expression in "SR" stays relatively steady, then rises suddenly to reach its peak at 60 DAA. 53 of the 758 genes in ASm0SRm60 (≈ 6%) are transcription factors (TFs), and Figure 8 in Additional file 1 shows that these genes display a highly significant difference between "AS" and "SR" only at 0 and 60 DAA, with a low and non-significant steady state between 10 and 50 DAA.
The expression pattern of the ASm0SRm60 genes is intriguing because it reverses the peak expression from the first stage of fruit development (the mature flower at 0 DAA) in the domesticated accession "AS" to the last stage (the fully mature fruit at 60 DAA) in the wild accession "SR". To understand the biological relevance of this set of genes, we performed GO enrichment analyses by running the function "analyze.all.GO" with an FDR threshold of 10% for the categories Biological Process (BP), Cellular Component (CC) and Molecular Function (MF).
In the first row of Table 1 we can see that a total of 106 genes from the set of 758 in "ASm0SRm60", i.e., ≈ 14%, are annotated in the BP 'Transport' (GO:0006810), while the expected number of such genes under the independence hypothesis is only 70. This implies that small molecule transport is higher in the mature flower (0 DAA) in the domesticated accession "AS", while conversely it is higher in the mature fruit (60 DAA) in the wild accession "SR" (see Figure 3).
We will not extend the discussion of the biological relevance of the results in Table 1 here; nonetheless, it should be clear that "Salsa" enables detailed and deep mining of the chili pepper transcriptome during fruit development (see Additional file 1 for more details).
For example, the function "gene.summary" gives a plot as well as a numeric summary of mean SEPs for any one of the 29946 genes present in the data. The gene with identifier 3018 was one of the transcription factors found within the "ASm0SRm60" set, and Figure 4 shows the plot resulting from running "gene.summary(id = 3018)".
In Figure 4 we observe that the mean expression per set of SEPs for that gene resembles the mean expression of the genes in the "ASm0SRm60" set, shown in Figure 3. The correspondence is apparent in that the TF with identifier 3018 presents higher expression at 0 DAA in the domesticated set "D" (which includes accession "AS"), while presenting higher expression at 60 DAA in the wild set "W" (which includes accession "SR"); i.e., the expression of the gene with identifier 3018 shows, in general (and not only for accessions "AS" and "SR"), a reversed expression pattern between the "D" and "W" accessions.
For brevity we have explored in depth only one of the 49 − 7 = 42 gene sets that diverge in peak expression between the two accessions (see Figure 2). Certainly, deeper and more varied searches can be attempted with innumerable initial selections of sets of accessions, expression times, expression levels or gene categories. One possibility is to widen the analysis by using the function "browse.gene()" (see Figure 10 in Additional file 1). With that function the user can look for orthologs of the Capsicum genes and link to other genomic or transcriptomic databases. In "Salsa" it is also easy to obtain the structure of networks of correlated expression, which can then be studied with graph theory [18] or plotted with specialized software [19], for example the "igraph" R package [20].
Conclusions
We have shown that "Salsa" opens endless options for mining the transcriptome of chili pepper during fruit development. In silico analyses can suggest interesting hypotheses, which could then be experimentally tested; for example, one possibility is to use SEP similarity to predict a small set of transcription factor candidates for regulating a given gene.

Figure 1 Plot of mean Standardized Expression Profiles (SEPs) in the groups formed by accessions "AS" (in red), "SR" (in blue) and the SEPs including all genes from both accessions (in grey). Thin vertical lines over the circles marking each mean are the 99.99% (α = 1 × 10^−4) confidence intervals (CIs) for the means. Plot obtained with the function "SEPs.plot()".
Figure 2
Matrix of percentages of genes peaking at each of the 49 possible combinations of seven times in accession "AS" (X-axis) and seven times in accession "SR" (Y-axis). The percentage of genes simultaneously presenting both peaks is shown at each intersection, and the size of the circle at each intersection corresponds to the proportion of genes. The green dashed line along the diagonal marks the cases where the peak coincides in time in both accessions.
From $\mathcal{N}{=}\,4$ Galilean superparticle to three-dimensional non-relativistic $\mathcal{N}{=}\,4$ superfields
We consider the general $\mathcal{N}{=}\,4,$ $d{=}\,3$ Galilean superalgebra with arbitrary central charges and study its dynamical realizations. Using the nonlinear realization techniques, we introduce a class of actions for $\mathcal{N}{=}\,4$ three-dimensional non-relativistic superparticle, such that they are linear in the central charge Maurer-Cartan one-forms. As a prerequisite to the quantization, we analyze the phase space constraints structure of our model for various choices of the central charges. The first class constraints generate gauge transformations, involving fermionic $\kappa$-gauge transformations. The quantization of the model gives rise to the collection of free $\mathcal{N}{=}\,4$, $d{=}\,3$ Galilean superfields, which can be further employed, e.g., for description of three-dimensional non-relativistic $\mathcal{N}{=}\,4$ supersymmetric theories.
Introduction
In recent years, one can observe a growth of interest in non-relativistic (NR) field-theoretic models, in particular those describing NR gravity and NR supergravity, e.g., in the framework of the so-called Newton-Cartan geometry [1,2,3,4,5]. Until now, the NR supersymmetric framework [3,4,5] has mostly been developed for the D = 2+1-dimensional case, 1 which corresponds to the exotic version of Galilean symmetry with two central charges [6,7,8,9,10]. In this paper we address the next physically interesting case, the $\mathcal{N}{=}\,4$, D = 3+1 supersymmetric extension of Galilean symmetries. Due to the distinguished role of N = 4 supersymmetric Yang-Mills theory (see, e.g., [11]), this kind of extended supersymmetry merits attention in the NR case as well.
Similarly to the relativistic case, one can study the NR N = 4, D = 3+1 supersymmetric theories by following several paths: i) One can start with the NR N = 4, d = 3 Galilean superalgebras (for arbitrary N see [12]) and then construct their superspace and superfield realizations. In this way one obtains a universal tool for constructing NR supersymmetric field theories.
ii) The NR field theories can be reproduced by performing the non-relativistic contraction c → ∞ (c is the speed of light) of known relativistic non-supersymmetric, as well as supersymmetric, field theory models (see, e.g., [4,13,14]). One of the advantages of this method is the possibility to derive the proper NR contractions of relativistic action integrals.
iii) For a definite type of (super)symmetric framework one can consider the dynamics associated with particles, fields, strings, p-branes, etc. An important role in this list is played by the free classical and first-quantized (super)particle models, with the property that their first quantization leads to classical (super)field realizations (see, e.g., [15,16]).
In our case, we will look for free superparticle models invariant under N = 4, d = 3 Galilean supersymmetry. One can mention that in the relativistic case this way of deriving free superfields from classical and first-quantized superparticles with extended N = 4, D = 4 Poincaré supersymmetry was already proposed in [17]. 2 In this paper we will follow path iii). We will consider the most general NR N = 4, d = 3 Galilean superalgebra, introduce the corresponding N = 4 Galilean supergroup and its cosets, construct the relevant nonlinear realizations and use the associated Maurer-Cartan (MC) one-forms to build NR N = 4 superparticle models. They will subsequently be quantized to obtain the NR superfields providing realizations of N = 4 Galilean supersymmetries. Note that in this setting the original coset parameters are treated as D = 1 world-line fields. However, the whole formalism could equally be applied along the lines of path i), with the coset parameters treated as independent NR superspace coordinates.
As a prelude to our considerations, we will describe the Galilean symmetries and their supersymmetrizations in a short historical survey.
The Galilean theories describe low energy, non-relativistic dynamical systems 3, which can also be obtained as the non-relativistic limit (c → ∞) of the corresponding relativistic theories (see, e.g., [26]-[30]). Such a contraction limit, applied to the D = 4 Poincaré algebra ($P_\mu$, $M_{\mu\nu}$; $P_\mu = (P_0, P_i)$, $M_{\mu\nu} = (M_i = \frac{1}{2}\epsilon_{ijk} M_{jk}, N_i = M_{i0})$), after the shifting and rescaling (1.1), where H stands for the non-relativistic Hamiltonian and $B_i$ for the Galilean boosts, yields the "quantum" d = 3 Galilean algebra [31] 4, whose rotation sector reads $[J_i, J_j] = i\epsilon_{ijk} J_k$ (1.2). The central charge $M = m_0$ describes a non-relativistic mass which can be identified with the relativistic rest mass.
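For orientation, the remaining non-trivial brackets of the centrally extended ("Bargmann") d = 3 Galilean algebra produced by such a contraction have a standard form; the sketch below is reconstructed from the generic literature conventions and may differ from the paper's (1.2) by signs and factors:

```latex
\begin{aligned}
  [J_i, J_j] &= i\,\epsilon_{ijk} J_k , &
  [J_i, P_j] &= i\,\epsilon_{ijk} P_k , &
  [J_i, B_j] &= i\,\epsilon_{ijk} B_k , \\
  [B_i, H]   &= i\,P_i , &
  [B_i, P_j] &= i\,\delta_{ij}\, M , &
  [H, P_i]   &= 0 .
\end{aligned}
```

Here $M$ is the central extension (the Bargmann mass), commuting with all other generators.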
Because bosons and fermions occur in both relativistic and non-relativistic settings, one can consider non-relativistic supersymmetry as well. The first proposal for the supersymmetrization of the Galilei algebra (1.2) was given in [32], where the N = 1 and N = 2, d = 3 Galilean superalgebras were presented. The N = 1, d = 3 Galilean superalgebra is an extension of the relations (1.2) by complex NR USp(2) ≃ SU(2) supercharges $S_\alpha$, $\bar S_\alpha := (S_\alpha)^\dagger$ (A → A† denotes Hermitian conjugation), which satisfy the relations (1.3). 5 Passing to the N = 2, d = 3 Galilean superalgebra [32] is accomplished by adding to the N = 1 Galilean superalgebra generators ($J_i$, $P_i$, $B_i$, H; $S_\alpha$, $\bar S_\alpha$, M) a second pair of complex SU(2) supercharges $Q_\alpha$, $\bar Q_\alpha := (Q_\alpha)^\dagger$, subject to the relations (1.4). In the relations (1.4), (1.3), besides the central charge M, there appears a new central charge Y. The N = 2, d = 3 Galilean superalgebra can be derived from the N = 2, D = 4 Poincaré superalgebra (1.5) (a = 1, 2) (plus the commutation relations with the Poincaré and internal R-symmetry U(2) generators) by taking the c → ∞ contraction limit with $M = m_0$. In general, the N = 2, D = 4 Poincaré superalgebra is endowed with one complex central charge Z or, equivalently, two real central charges, Z = X + iY. 6 Before taking the NR limit c → ∞, the Galilean supercharges $Q_\alpha$ and $S_\alpha$ (see (1.3), (1.4)) should be identified with suitable linear combinations of the two N = 2 Weyl supercharges in (1.5), with $\bar Q^\pm_\alpha = (Q^\pm_\alpha)^\dagger$. One should also postulate a definite c-dependence of the central charges in (1.5) (see (1.8)). If X is finite in the contraction limit, it merely generates the shift H → H + X in the relations of the N = 2, d = 3 Galilean superalgebra (see the first relation in (1.4)).
In the present paper we consider the N = 4, d = 3 Galilean superalgebra with all possible central charges. It will be obtained via the c → ∞ contraction procedure from the general N = 4, D = 4 relativistic Poincaré superalgebra [37], which involves 6 complex central charges $Z_{AB} = -Z_{BA}$ (A, B = 1, 2, 3, 4). Correspondingly, the N = 4, d = 3 Galilean superalgebra obtained in the c → ∞ limit involves 12 real central charges. 7 If these central charges are numerical then, using a suitable redefinition of the supercharges by a unitary 4 × 4 matrix, one can cast the antisymmetric 4 × 4 complex matrix of six central charges $Z_{AB} = -Z_{BA}$ (A, B = 1, 2, 3, 4) into a quasi-diagonal Jordanian form [38,39].

6 Since the N = 2, D = 4 Poincaré superalgebra is covariant under phase transformations of the Weyl supercharges, one could think that one real central charge is enough in the N = 2 case. However, as was found by studying concrete dynamical models [33,34], it is the complex N = 2 central charge $Z = X_1 + iX_2$ that actually matters. It amounts to two physical real central charges: the topological magnetic charge $X_1$ and the non-topological electric charge $X_2$. Only if these charges take constant eigenvalues, i.e. are numerical, can they be rotated to a single central charge by the phase transformations just mentioned.

7 In fact, the NR N = 4 Galilean superalgebra involves 13 central charges if we take into account the Bargmann central charge $M = m_0$ obtained from the leading terms in the asymptotic expansion of $P_0$ and X in c (see (1.1) and (1.8)).
Such a structure of the internal sectors survives in the non-relativistic limit; one can therefore consider N = 4, d = 3 Galilean supersymmetric theories with the internal sectors USp(2) ⊗ USp(2) (four real Galilean central charges) or USp(4) (a pair of real Galilean central charges). 10 In the most general N = 4 case, when we deal with six complex central charges, the central charge 4 × 4 matrix can be written in the general form (1.10). The central charges, besides bringing in the mass parameters, are also capable of simplifying the formulation of N ⩾ 2 supersymmetric gauge theories. In particular, recall that the N = 4, D = 4 Yang-Mills theory with one central charge and internal symmetry broken to O(5), contrary to the N = 4, D = 4 supersymmetric Yang-Mills theory with SU(4) R-symmetry and without central charges, admits an off-shell superspace formulation which does not require harmonic variables [40,41].
The plan of the paper is as follows. In Sect. 2, following [12], we derive the general N = 4, d = 3 Galilean superalgebra, which contains 12 independent real central charges plus an additional, thirteenth, Bargmann central charge describing the rest mass. As in [26,27,28,29], in this derivation we employ the NR contraction c → ∞ of the relativistic N = 4, D = 4 Poincaré superalgebra. In Sect. 3 we calculate the MC one-forms on the coset G/H, where G = SG(3; 4 12) (see footnote 9) and the stability subgroup H is generated by the SU(2) ≃ O(3) and USp(4) generators. In Sect. 4 we study the G-invariant actions linear in the MC one-forms associated with the central charges. For different choices of the central charges these actions describe various models of N = 4, d = 3 Galilean superparticles. We consider the phase superspace formulation of these superparticle models and present the complete set of first and second class constraints. The first class fermionic constraints generate the non-relativistic N = 4 κ-gauge transformations, which act on the non-physical part of the Grassmann coordinate sector. In Sect. 5 we quantize the model. Using the super Schrödinger realization of the quantum phase superspace algebra, we obtain, as the quantum solutions of the model, a set of free N = 4, d = 3 Galilean superfields. In Sect. 6 we present an outlook; in particular, we briefly describe alternative ways of constructing the N = 4 Galilean superparticle models. Concluding, we hope that our paper will contribute to the issue of the superfield description of interacting non-relativistic N = 4, d = 3 supersymmetric field theories. 11

2 General Galilean N = 4, d = 3 superalgebra with central charges

The N = 4, D = 4 Poincaré superalgebra is spanned by several sets of generators, among them: iv) the set of 6 complex central charges $Z_{AB} = -Z_{BA}$, $\bar Z^{AB} = (Z_{AB})^\dagger$, or equivalently the set of 12 real central charges $X_{AB} = -X_{BA}$, $Y_{AB} = -Y_{BA}$.

11 For examples of supersymmetric extensions of QED and Yang-Mills Galilean theories see [42]-[44].
12 We define the D = 4 sigma-matrices in the standard way; throughout this paper we use weight-one coefficients in (anti)symmetrization. The remaining non-zero commutation relations involve a real parameter α; choosing α = 1 fixes the chirality of the supercharges (see (2.8)) and thus identifies A as the generator of axial symmetry. 13 In order to perform the non-relativistic contraction of the N = 4, D = 4 Poincaré superalgebra to the limit describing the N = 4, d = 3 Galilean superalgebra, one should rewrite the superalgebra (2.3)-(2.8) in a new fermionic Weyl basis (2.10), 14 defined with the help of a real 4 × 4 matrix Ω, for which we choose an explicit form (2.11). The relations (2.10) break the manifest Lorentz symmetry O(3,1) down to O(3) (the spinorial scalar product $a^\alpha b_\alpha$ is U(2)-invariant), and the internal symmetry U(4) is broken to subgroups which depend on the choice of central charges [39].
The supercharges (2.10) by definition satisfy the subsidiary symplectic-Majorana conditions [47] (2.12). The full set of supercharges Q ±a α , Q± αa ; Q ±ã α , Q± αã can be split into the holomorphic sector (Q ±a α , Q ±ã α ) and the antiholomorphic one (Q± αa , Q± αã ); both sectors are related by the subsidiary conditions (2.12), thus revealing the quaternionic structure of the pairs of complex supercharges related by Hermitian conjugation (see [48,45]). Due to the constraints (2.12) one can choose as unconstrained sets of linearly independent supercharges the generators from either the holomorphic or the antiholomorphic sector. The N = 4 superalgebra spanned by the generators from the holomorphic sector is, however, not self-conjugate. In order to define the complete self-conjugate Hermitian basis one should choose the full set of pairs of supercharges which are related by Hermitian conjugation. For the choice (2.11) of the matrix Ω these self-conjugate pairs are In this paper we will use the supercharges belonging to the holomorphic sector, i.e. Q ±A α (A = (a, ã) = 1, ..., 4). They transform linearly under the USp(4) ≃ O(5) R-symmetry, which defines the compact R-symmetry sector of N = 4, d= 3 supersymmetry with one central charge Z corresponding to the following choice of the 4×4 central charge matrix (1.10). In the holomorphic basis the non-vanishing relations (2.3), (2.4) can be represented as where The relations inverse to (2.17) are where It can be pointed out that X a b and Y a b are "real" with respect to the symplectic pseudoreality conditions similar to (2.12) and following from (2.18). The commutators (2.5) for the generators (2.10) can be rewritten as follows Further, let us decompose the generators of the internal symmetry SU(4) as The projections (2.24) of T A B satisfy the relations The constraint (2.26) amounts to the conditions for the generators T ±A B , So, the set of generators T + contains 10 independent generators which are symmetric in their indices. The set T
− involves 5 independent operators forming a traceless antisymmetric matrix. Using (2.1), we find that the operators (2.28), (2.29) satisfy the following algebra Thus the original internal SU(4) symmetry generators T A B , decomposed according to the relations (2.24), do split into the ones generating USp(4) and the coset SU(4)/USp(4): This decomposition of the su(4) algebra provides an example of a symmetric Riemannian pair (h (3) , k (3) ): [h (3) , h (3) ] ⊂ h (3) , [h (3) , k (3) ] ⊂ k (3) , [k (3) , k (3) ] ⊂ h (3) . (2.33) The commutators (2.7) are rewritten in the new basis as where the 4×4 matrix defines the fundamental 4 × 4 representations of the USp(4) algebra given by the supercharges; they enlarge the matrices (U A B ) to the fundamental representations of the SU(4) algebra, which interchange the + and − projections.
Let us make a comment on the case of α ≠ 0 in (2.8), (2.9). Choosing α = 1, one finds (2.37) Now we are prepared to define the N = 4, d= 3 Galilean superalgebra by making use of the NR contraction procedure. One rescales the relativistic supercharges as The physical rescaling of the bosonic generators of the algebra o(1, 3)⊕u(4), where (P µ , M µν ) ∈ o(1, 3) and (T ±A B , A) ∈ u(4), is performed as follows (2.39) where m 0 is the relativistic rest mass. The rescaling of the central charges is given by the formulas (see also (1.8)) where X AB , Y AB are defined in (2.17) and the operators X AB = −X BA , Y AB = −Y BA satisfy the symplectic pseudoreality conditions. We will first perform the c → ∞ contraction for a simple choice of the central charge matrix.
i) Jordanian quasi-diagonal form of the central charge matrix
Let us consider the special case with the central charge matrix in the reduced form (1.9) where the central charge matrix (2.14) is recovered at The rescaling (2.40) takes a more explicit form for this choice Substituting these expressions, as well as (2.38) and (2.39), into the superalgebra relations (2.20), (2.21) with Z a b = Z ãb = 0, and making there the c → ∞ contraction, we obtain (2.46) , and the indices are chosen so that a = 1, 2 correspond to A = 1, 2 and ã = 1, 2 to A = 3, 4.
ii) General central charge matrix
In the general case with non-zero off-diagonal central charges X a b = −X ba and Y a b = −Y ba , the last lines in (2.44) and (2.46) are replaced, respectively, by the relations It is easy to check that the rescaling (2.38) preserves the symplectic-Majorana conditions (2.10), and in the limit c → ∞ one obtains the following Galilean form of the N = 4 symplectic-Majorana conditions. 15 Due to (2.18), the off-diagonal central charges satisfy the following pseudo-reality conditions We point out that the constant m 0 can be considered as an additional, thirteenth, central charge, i.e. in fact the superalgebra (2.44)-(2.46) contains thirteen central charges.
iii) Internal symmetry sectors
After the c → ∞ contraction (2.23), the covariance relations of the supercharges with respect to the NR O(3) rotations J ij and the Galilean boosts B i are written as Using the substitutions (2.38), (2.39), the contraction of the relations (2.34) leads also to the covariance relations of the supercharges with respect to the internal symmetry generators h For what follows, it will be useful to have the generators T +A B ∈ usp(4) in the splitting usp(2)⊕usp(2) basis. This notation corresponds to the following coset decomposition such that From (2.39) it follows that the coset generators k (3) are rescaled (h ) and in the limit c → ∞ we get the inhomogeneous extension of the usp(4) ≅ o(5) internal algebra [h (3) , h (3) ] ⊂ h (3) , The five commuting generators T −A B of k (3) describe a kind of curved internal momenta. Thus in the contraction limit c → ∞ one gets the following N = 4 Galilean internal inhomogeneous symmetry algebra (2.59) We will denote the corresponding inhomogeneous group by IUSp(4). The abelian generator A can be added to the ideal k (3) , thus extending it to a six-dimensional one.
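The mechanism behind this contraction is the standard İnönü–Wigner one for a symmetric pair; schematically (a generic rescaling, not the specific powers of c appearing in (2.39)):

```latex
% Symmetric Riemannian pair before contraction:
[h,h]\subset h, \qquad [h,k]\subset k, \qquad [k,k]\subset h .
% Rescale the coset generators (schematically):  K := c^{-1}\,k .  Then
[h,K] = c^{-1}[h,k] \subset K, \qquad
[K,K] = c^{-2}[k,k] \subset c^{-2}\,h \ \xrightarrow{\ c\to\infty\ }\ 0 ,
% so in the limit K becomes an abelian ideal and one obtains the
% inhomogeneous algebra  iusp(4) = usp(4) \niplus K  with  [K,K] = 0 .
```

This is why the five generators T −A B commute among themselves after the contraction: their brackets, which closed on usp(4) before the limit, are suppressed by inverse powers of c.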
16 The action of the IUSp(4) generators in the USp(2) ⊗ USp(2) splitting basis, as well as on the central charges, can be easily found from the relations (2.52). For instance, the commutation relations between T +a b and the central charges are given by and by similar formulas for Y a b , Y 1 , Y 2 . It follows from these relations that the full set of central charges splits into two USp(4 The first relation in (2.53) amounts to the following set of relations in the splitting basis (2.61) The commutation relations between T −A B and the central charges or, in the splitting basis, The commutation relations between the U(1) axial generator A and the central charges have a similar structure. Our last remark concerns the N = 4 Galilean algebra with the diagonal choice (1.9), (2.41) for the central charge matrix. Recalling (2.44)-(2.46), we observe that in this case the N = 4 Galilean algebra (with a suitable restriction of the R-symmetry algebra taken into account) reduces to the sum of two N = 2 Galilean superalgebras spanned by the supercharge pairs (Q a α , S a α ), (Q ã α , S ã α ), with common generators H and P i . The only way to avoid such a splitting is to switch on the off-diagonal central charges as in (2.47). If we consider an extended N = 4 Galilean algebra, with the R-symmetry generators T +A B included, the N = 2 subsectors in (2.44)-(2.46) will be intertwined by the generators T +a b , e.g., In this case, the splitting into two N = 2 algebras arises only when we eliminate the generators T +a b from the R-symmetry algebra.
To avoid a possible confusion, note that the R-symmetry is described by the group of outer automorphisms of superalgebras and its generators do not appear in the r.h.s. of the (anti)commutators (distinctly from central charges). Therefore, when constructing specific models, we can restrict the R-symmetry group to one of its subgroups. The maximal R-symmetry group USp(4) ∼ O(5) can be ensured in two distinct cases: either for the choice (2.14) with Z being USp(4) ∼ O(5) invariant (the same whether Z is an operator or a number), or for the generic choice (1.10), with the central charges forming O(5) vectors (see (2.60)) and with two O(5) singlets (X 1 + X 2 ), (Y 1 + Y 2 ) accommodating the remaining two central charges.
In the second case one has an additional freedom to eliminate, without breaking O(5) covariance, either all Y central charges or all X central charges, and to further choose, e.g., X 2 + X 1 = 0 or Y 2 + Y 1 = 0. As was already mentioned, with the general option (1.10) the choice of numerical central charges necessarily breaks the O(5) R-symmetry down to O(3).
iv) Hermitian basis
One can alternatively formulate the NR N = 4, d = 3 superalgebra by using the NR contraction of Hermitian pairs of supercharges which are self-conjugate with respect to Hermitian conjugation (see (2.13)). We define the set of unconstrained independent supercharges spanning the N = 4 Galilean Hermitian superalgebra as follows (2.69) The Hermitian form of the N = 4, d= 3 Galilean superalgebra (2.67)-(2.69) permits one to obtain generalized positivity conditions for the Hamiltonian H. From the first two formulas in (2.67) one derives that for any normalized state |Ψ⟩ belonging to the Hilbert space of physical states of the models the following conditions hold (2.70) In dynamical models (like those of Sect. 4) the central charges X 1 , X 2 are represented on the normalized states |Ψ⟩ by the mass-like parameters m 1 , m 2 , so from (2.70) one gets the lower bound on the energy values.

3 Nonlinear realizations of N = 4, d= 3 Galilean supersymmetries

In the nonlinear realization of N = 4, d= 3 Galilean supersymmetries we will assume that the linearization subgroup H involves the 3-dimensional space rotation generators J ik , the internal symmetry USp(4) generators T +A B and the abelian generator A 0 . All other generators are placed in the coset G/H, with G = SG(3; 4|12). Some of the generators belonging to the coset can be relocated into the linearization subgroup H just by nullifying the respective coset parameters. The coset element G can be written explicitly as where The factors The odd generators satisfy the symplectic-Majorana conditions (see also (2.12)) (3.3) The Grassmann coordinates dual to these odd generators satisfy similar pseudo-reality conditions Being dual to the relations (2.49), the reality conditions for the tensorial central charge coordinates read The full set of left-covariant MC one-forms is given by where A straightforward calculation yields The remaining part of (3.7) is as follows We can write the formula (3.7) in the following way where T (K) stand for all coset G generators, and ω(K)
denote the corresponding MC one-forms. We obtain where k 2 ∶= k i k i . The MC one-forms describing the whole coset G are defined as follows and can be calculated by the formulas (3.2) and (3.6). We observe that and, further, We see, in particular, that Let us find the supersymmetry transformations of the coset coordinates. For this purpose we will use the well-known formula iG −1 (ε ⋅ T )G = G −1 δG + δh, where T denotes the collection of coset generators and δh defines the induced transformations of the stability subgroup h ind = 1 + δh (see, e.g., [49]).
Supersymmetry transformations generated by the left action of the generators where ε α a , ε α ã are odd constant parameters, lead to the following transformations of the coset coordinates: The second half of the odd left shifts, those generated by S a α , S ã α , leads to the transformations where η α a , η α ã are the appropriate odd parameters. It follows from (3.18) and (3.20) that the three-vector k i and the "harmonic variables" u are inert under all supersymmetry transformations.
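The way such coordinate transformations follow from the left-action formula can be illustrated in a stripped-down toy coset with a single supercharge (our simplified conventions, not the full SG(3; 4|12) structure):

```latex
% Toy coset: G = e^{i t H}\, e^{\theta Q} with \{Q,Q\} = 2H (schematically).
% Acting from the left with g = e^{\epsilon Q} and re-factorizing,
e^{\epsilon Q}\; e^{i t H}\, e^{\theta Q} \;=\; e^{i t' H}\, e^{\theta' Q},
% one uses  e^{\epsilon Q} e^{\theta Q}
%   = e^{(\epsilon+\theta)Q + \frac{1}{2}[\epsilon Q, \theta Q]}
%   = e^{-\epsilon\theta H}\, e^{(\theta+\epsilon) Q},
% since  [\epsilon Q, \theta Q] = -\epsilon\theta\,\{Q,Q\} = -2\,\epsilon\theta\, H .
% Hence the coset coordinates transform as
\delta\theta = \epsilon, \qquad \delta t = i\,\epsilon\,\theta ,
% i.e. the Grassmann coordinate is shifted and time picks up the familiar
% bilinear term — the pattern realized by the transformations (3.18).
```

The full calculation in the text proceeds identically, only with the larger set of coset factors G 2 , G 3 and the induced H-rotation δh kept track of.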
From the form of the supersymmetry transformations (3.18) it follows that the set of coordinates (t; ξ α a , ξ α ã ; h 1 , h 2 , h a b) is closed under the action of Q-supersymmetry, while this supersymmetry does not act on the remaining coordinates. Alternatively, S-supersymmetry (3.20) leaves inert the subset (t; ξ α a , ξ α ã ; h 1 , h 2 , h a b) and transforms the remaining coordinates. This split of the full set of coset parameters into two subsets, each closed under the action of one half of the supersymmetries and inert under the other half, is due to the choice of the coset parametrization (3.1), (3.2) with the particular order of the factors G 2 and G 3 , G = . . . G 2 G 3 . . . . In [36] another parametrization, G = . . . G 3 G 2 . . . , was used, and the separation of the Q- and S-transformations into two sectors could not be seen.
The closure of the transformations (3.18) and (3.20) generates all the bosonic transformations of G which do not belong to the stability subgroup H. The transformations of the subgroup H are realized as linear homogeneous maps of the coset fields and the MC one-forms. The abelian generators T −A B do not appear in the closure of the fermionic generators, so the left shifts by these generators should be considered separately. The corresponding transformations of the coset parameters can be found explicitly using the formulas (3.15). The coset parameters u A B are changed only by pure shifts. Actually, in this paper we will not make use of these T −A B transformations.
We also observe that all MC forms ω(K) (see (3.12)) transform linearly under H transformations and are inert with respect to the odd transformations (3.18) and (3.20).

4 The phase-space formulation of the N = 4, d = 3 Galilean superparticle model and κ-gauge freedom

Let us describe the mechanical system on the coset G with evolution parameter τ and with all parameters of G promoted to d= 1 fields: t = t(τ ), We shall deal with the simplified situation with all internal coordinates u A B suppressed, which means that we transfer the generators T −A B into the stability subgroup and use the "truncated" MC one-forms ω(K). We will not employ the strict invariance of the superparticle actions under these abelian outer automorphisms, as well as under the full compact R-symmetry USp(4). Only the symmetries under some particular subgroups of the latter, as well as the O(3) space symmetry generated by J ij , will be respected.
Simplest bosonic case: Schrödinger NR particle
As an instructive first step we consider the standard bosonic Schrödinger particle. We recall how to derive the action of a non-relativistic massive particle which, after quantization, leads to the non-relativistic Schrödinger equation.
Such an action is obtained from the MC one-form (4.5), which in the pure bosonic case is given by Selecting the rest mass as the normalization factor and omitting a total τ derivative, we obtain It leads to the NR particle model studied in [26,36].
The action (4.7) provides the canonical momenta and the vanishing canonical Hamiltonian: The expressions (4.8) imply the first class constraint defining the free NR energy-momentum dispersion relation, called the free Schrödinger constraint, which, after quantization in the Schrödinger realization, gives the non-relativistic Schrödinger equation for a free NR particle of mass m 0 .
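The steps above can be sketched in a reparametrization-invariant form (a minimal reconstruction with our normalization; the actual (4.7)-(4.10) may differ by conventions):

```latex
% NR particle action linear in the MC one-form, with fields t(\tau), x_i(\tau):
S \;=\; \frac{m_0}{2}\int d\tau\; \frac{\dot{x}_i \dot{x}_i}{\dot{t}} \,.
% Canonical momenta:
p_i = m_0\,\frac{\dot{x}_i}{\dot{t}}\,, \qquad
p_t = -\,\frac{m_0\, \dot{x}_i \dot{x}_i}{2\,\dot{t}^{2}}
    = -\,\frac{p_i p_i}{2 m_0}\,,
% so the canonical Hamiltonian  p_t\dot{t} + p_i\dot{x}_i - L  vanishes and
% one is left with the first class (free Schrödinger) constraint
T \;=\; p_t + \frac{p_i p_i}{2 m_0} \;\approx\; 0 \,.
% Quantizing with  p_t \to -i\,\partial_t,\; p_i \to -i\,\partial_i  yields
i\,\partial_t\, \psi \;=\; -\,\frac{1}{2 m_0}\,\partial_i \partial_i\, \psi \,.
```

The vanishing Hamiltonian is the hallmark of τ-reparametrization invariance; all of the dynamics resides in the constraint T ≈ 0.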
The superparticle model with vanishing off-diagonal central charges
As the next step, we consider the action with the Lagrangian density taken as a linear combination of the MC one-forms associated with the central charges described by the Jordanian quasi-diagonal form of the central charge matrix where a, m 1 , m 2 , µ 1 , µ 2 are real constants. The choice of these parameters specifies the explicit form of the odd constraints, including the first class ones generating local κ-symmetries.
Using the expressions of the MC forms (3.12), (4.5) and omitting total derivative terms, we get from (4.12) the Lagrangian where α were defined in (4.3). Without loss of generality, the terms proportional to a in (4.13) can be omitted because they can be re-absorbed by a redefinition of m 1 and m 2 . Therefore we will put a = 0 (see the same condition in [36], assumed, however, for another reason). Then the Lagrangian (4.13) produces the following bosonic momenta and the fermionic ones where p x αβ ∶= p xi (σ i ) αβ . Using the canonical Poisson brackets, {t, we find the non-vanishing Poisson brackets of the classical NR supersymmetry generators (4.17): The definitions (4.15) of the fermionic momenta lead to the constraints: Using (4.18), we obtain the non-vanishing Poisson brackets for the system of constraints (4.21) and (4.22) We will be interested in the NR superparticle models possessing local fermionic κ-symmetry [51,52], after imposing suitable relations between the parameters of the model (see, e.g., (4.26), (4.27) below). In the N = 2, d = 2 case this kind of NR superparticle was considered in [36]. We recall that in the phase space formulation κ-symmetry is generated by the first class odd constraints.
Let us determine the values of the central charges in the model which imply first class odd constraints. For that purpose we should calculate the determinant of the 16-dimensional matrix of the Poisson brackets of the fermionic constraints (4.23), in the presence of the bosonic constraint (4.21), and demand that this determinant vanishes. Defining we find that the determinant of the matrix P of the fermionic constraints (4.23) is given, modulo a multiplicative constant, by the expression Thus, first class odd constraints are present under at least one of the following two conditions If the condition (4.26) is valid, half of the odd constraints linear in D ξ a α , D θ a α are first class. Explicitly, these constraints are The Poisson brackets involving the constraints (4.28) form the following set We see that the full set of the original constraints (D ξ a α , D θ a α ) is equivalent to the set (F ξ a α , D θ a α ), where F ξ a α are first class and D θ a α are second class. The analysis of the second half of the odd constraints (with tilded indices) is performed quite analogously. If the condition (4.27) holds, the constraints are first class, with the following Poisson brackets with all odd constraints The set of constraints (D ξ ã α , D θ ã α ) is therefore equivalent to the set (F ξ ã α , D θ ã α ), with F ξ ã α being first class and D θ ã α second class. In Sect. 5 we will study the quantization of the superparticle model defined by the action (4.12), (4.13), possessing κ-symmetries due to the conditions (4.26) and (4.27). Using (4.22), we obtain the following explicit form of the first class constraints (4.28) and (4.30) generating κ-symmetries We notice that, if we specialize our discussion of the constraints to one sector only, with either index a or index ã, we recover a model with the smaller N = 2 Galilean supersymmetry. Such models in the d = 2 case have been studied in [36] for N = 2, d = 2 Galilean supersymmetry with only one central
charge. The d = 2 models of [36] can be obtained as a special case of our model with only one of the N = 2 sectors retained.
The constraints (4.32) generate local κ-transformations of an arbitrary phase space function X by the following Poisson bracket where κ α a (τ ) and κ α ã (τ ) are local Grassmann parameters. The κ-transformations (4.33) of the variables in the Lagrangian (4.13) with a = 0 are as follows Under these transformations, the variation of the Lagrangian (4.13) (with a = 0) is Using the local transformations (4.34) one can choose the gauge ξ a α = 0, ξ ã α = 0 . In such a gauge, the rigid Q-transformations (3.18) should be accompanied by appropriate compensating gauge transformations (see, e.g., [36]). In this case, as well as in the other cases considered below, we will not impose such gauges, reserving this for more complicated N = 4 NR superparticle models still to be constructed, e.g., those formulated on an external electromagnetic background.
The superparticle model with all central charges incorporated
We can add to the action (4.12) at a = 0 the additional terms associated with the off-diagonal central charges where n a b and ν a b are constants with the reality conditions as for USp(2) ⊗ USp(2) ≃ O(4) bispinors (see (2.49), (3.5)) These bispinorial constants can be represented as internal four-vectors (isovectors) where The question of O(4) invariance in the sector described by the off-diagonal central charges will be discussed in Sect. 6.
The Lagrangian in (4.39) is given by the following expression In comparison with (4.13), the Lagrangian (4.40) contains two additional terms, which involve only derivatives of the ξ's. Therefore, the momenta p t , p xi , p θ a α , p θ ã α are the same as in (4.14), (4.15), whereas the momenta p ξ a α , p ξ ã α acquire additional terms as compared with (4.15). It then follows that the bosonic constraint T ≈ 0 (see (4.21)) and the fermionic constraints D θ a α ≈ 0, D θ ã α ≈ 0, defined by (4.22), remain the same, whereas the fermionic constraints D ξ a α ≈ 0, D ξ ã α ≈ 0 acquire additional terms in comparison with (4.22): For the model with the Lagrangian (4.40) the Noether charges generating the NR supersymmetry transformations (3.18) and (3.20) contain, in comparison with the expressions (4.17), some additional terms and take the form
Analysis of the constraints
The determinant of the Poisson brackets matrix P of the fermionic constraints (4.41), (4.22), defined in (4.24), looks more complicated in the case under consideration than (4.25). It is equal, up to a multiplicative constant, to the following expression: where ν is defined in (4.46), T is the first class constraint (4.21), and the 4-vector w a b = i(σ M ) a bw M is defined by It should be added that n = 1 2 n a bν a b = n M n M and ν, ŵ are the length squares of the O(4) internal symmetry vectors, i.e. one gets n ≥ 0, ν ≥ 0 and ŵ ≥ 0. Moreover, n = 0 implies n a b = 0, and ν = 0 implies ν a b = 0; similarly, ŵ = 0 implies w a b = 0 .
The odd first class constraints generating κ-symmetry are present provided that The constants n a b, ν a b enter the expression (4.45) and the equation (4.48) only through the two quantities ν and ŵ defined by (4.46). Since (4.45) does not factorize, in contrast to (4.25), resolving eq. (4.48) is a more complicated task.
The condition (4.48) is necessary for the presence of (any number of) odd first class constraints. The full number of such constraints is found by solving the characteristic equation which determines the eigenvalues λ of the matrix P (see an analogous consideration, e.g., in [53,54]). In (4.49), λ is the spectral parameter and I is the unit matrix. The number of first class constraints is equal to the multiplicity of the root λ = 0 of the characteristic equation (4.49).
In the presence of k odd first class constraints among the sixteen constraints D A , the equation (4.49) has the form In the model considered here the characteristic equation (4.49) coincides with the equation (4.48) in which the substitutions p t → (p t − λ) and m 0 → (m 0 − λ) are performed. Using the expression (4.45) (see footnote 18), the characteristic equation (4.49) can be written in the following form where As we see from (4.51), the condition (4.48) implies the presence of at least four odd first class constraints in the total set of sixteen fermionic constraints (4.41), (4.22).
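The counting rule used here can be stated compactly (a generic property of the constraint matrix, in our schematic notation):

```latex
% Characteristic equation of the 16x16 Poisson-bracket matrix
% P_{AB} = \{D_A, D_B\} of the odd constraints:
\det\,( P - \lambda\, I ) \;=\; \lambda^{k}\, Q(\lambda),
\qquad Q(0) \neq 0 ,
% where k, the multiplicity of the root \lambda = 0, equals the number of
% odd first class constraints among the sixteen D_A; the remaining
% 16 - k constraints, on which P is non-degenerate, are second class.
```

Each factor of λ that can be extracted from the determinant thus signals one null direction of P, i.e. one κ-symmetry generator.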
The condition A = 0, together with (4.48), leads to the presence of eight odd first class constraints. The condition A = 0 requires the vanishing of those terms in (4.52) which are proportional to p t : An additional condition which stems from A = 0 is the vanishing of the remaining constant term: For m 1 = m 2 it leads to the condition n a bw a b = 2n M ν M = 0. Further, one can show that B ≠ 0 and C ≠ 0 in (4.51), due to the non-vanishing constant coefficients of (p t ) 2 and p t in (4.53) and (4.54).
Thus, by definite choices of the central charges, we can recover the cases when the number of odd first class constraints is a quarter or a half of the total number of odd constraints. Note that, up to a sign, the algebra of the fermionic constraints (4.23), (4.42) coincides with the NR superalgebra (4.19), (4.44), and the number of first class constraints equals the number of preserved supersymmetries in BPS configurations. Therefore, the respective models describe BPS configurations preserving 1/4 or 1/2 of the NR supersymmetry.
In the last part of this Section, we will consider in detail two special cases.
The case when half of the odd constraints are first class
This particular example is specified by the following condition on the isotensorial central charges: or, equivalently, In this case the vanishing of the quantity (4.45) (with T ≈ 0) requires that The relation (4.59) is obeyed provided at least one of the two conditions, or both of them, is fulfilled. The conditions (4.60), (4.61) are obvious generalizations of (4.26) and (4.27). Now we present the full set of constraints which occur when the conditions (4.60), (4.61) and (4.58) are valid.
The first class bosonic constraint (4.21) represents the Schrödinger equation as in the previous cases.
Fermionic constraints , where (F ξ a α , F ξ ã α ) are defined by the following expressions The complete set of non-vanishing Poisson brackets for the constraints (4.62), (4.63) reads We see that if the conditions (4.60), (4.61) and (4.58) are valid, the constraints (D θ a α , D θ ã α ) are second class, while the constraints (F ξ a α , F ξ ã α ) are first class and generate κ-symmetries. Substituting the expressions (4.41) into the κ-symmetry generators (4.62), (4.63), we obtain them in the following explicit form As opposed to the constraints (4.32) in Sect. 4.2, in the considered case the constraints (4.65) mix the two USp(2) sectors characterized by untilded and tilded USp(2) indices. It turns out, however, that this model for m 1 ≠ m 2 and µ 1 ≠ µ 2 is just the model of Sect. 4.2 in disguise. To show this, recall that the full Lagrangian (4.40) is formally invariant under a simultaneous O(5) rotation of the d = 1 fields and of the set of coupling constants m 1 , m 2 , µ 1 , µ 2 , n a b, ν a b, which gives an opportunity to pass to an O(5) frame where these constants are reduced to some minimal set.
19 It is important that the coupling constants are divided into the O(5) singlets m 1 + m 2 , µ 1 + µ 2 and the O(5) vectors (n a b, m 1 − m 2 ) and (ν a b, µ 1 − µ 2 ); then the condition (4.58) means the vanishing of a particular linear combination of the O(4) vector components of these two O(5) vectors. The fifth component of the O(5) vector containing the O(4) vector (4.58) is This quantity is zero just as the difference of the conditions (4.60) and (4.61)! The rest of these conditions, their sum, expresses the particular O(5) invariant (m 1 + m 2 ) through the other one, To construct a system with eight first class constraints which would be non-equivalent to the system of Sect. 4.2 and would involve constants n a b, ν a b that cannot be removed, one needs to explicitly break the USp(4) ≃ O(5) covariance in the space of coupling constants. The simplest option is to assume The USp(4) ∼ O(5) covariance is also explicitly broken in a system with four first class constraints corresponding to 1/4 BPS states. We will consider it as the second example.
The case when a quarter of the odd constraints are first class
Our second example is characterized by non-vanishing off-diagonal central charges, with all quasi-diagonal ones vanishing:

5 Quantization of the model and N = 4, d = 3 Galilean superfields

In this section we present the canonical operator quantization of our model. We will introduce the (super)Schrödinger realization of the quantum phase coordinates and obtain the superfield description of N = 4, d = 3 Galilean states. For this purpose we will quantize the second class constraints by the Gupta-Bleuler (GB) procedure [55,56], without introducing Dirac brackets for them.
We will consider two versions of our model: the first one with eight first class constraints (introducing 1/2 BPS states, or the fraction 1/2 of unbroken supersymmetry) and the second one with four first class constraints (introducing 1/4 BPS states, or the fraction 1/4 of unbroken supersymmetry). We will use, instead of the symplectic-Majorana real quantities, the complex Hermitian conjugate Grassmann coordinates, which are defined by (3.4) in the following way The corresponding complex momenta, which are explicit solutions of the conditions (4.16), are The Poisson brackets of these phase superspace variables are given by {ξ α , p In the quantization procedure we will use the graded coordinate representation with the following super-Schrödinger realization for the momenta: (5.4)
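Such a graded coordinate (super-Schrödinger) representation has the standard schematic form (our sign conventions; the precise (5.4) may differ by factors):

```latex
% Super-Schrödinger realization on superwave functions
% \Psi(t, x_i, \theta_\alpha, \bar\theta^\alpha, \xi_\alpha, \bar\xi^\alpha):
p_t = -\,i\,\frac{\partial}{\partial t}\,, \qquad
p_{x_i} = -\,i\,\frac{\partial}{\partial x_i}\,, \qquad
p_{\theta}{}^{\alpha} = -\,i\,\frac{\partial}{\partial \theta_\alpha}\,, \qquad
p_{\xi}{}^{\alpha} = -\,i\,\frac{\partial}{\partial \xi_\alpha}\,,
% with graded Poisson brackets replaced by graded (anti)commutators,
% \{\,\cdot\,,\,\cdot\,\}_{PB} \;\to\; -\,i\,[\,\cdot\,,\,\cdot\,\} \,.
```

The odd momenta act as left Grassmann derivatives, so the (anti)commutation relations of the quantum phase superspace algebra are realized on component fields of the superwave function.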
The even first class constraint (5.9) yields the Schrödinger equations for all component fields. The odd first class constraints (5.10) provide the following four superwave equations for the superfield (5.13) Here we have introduced the operators ∆ θα , ∆θ α , ∆θα , ∆θ α , which do not depend on the ξ-variables, and the covariant derivatives of N = 8 extended one-dimensional supersymmetry, which form two mutually anticommuting sets with the constants m 1 , m 2 playing the role of central charges. It is straightforward to check that the integrability condition for the equations (5.14) is just the Schrödinger equations (5.9) (we mention that the conditions (4.26), (4.27) are valid).
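The structure of centrally extended d = 1 covariant derivatives of this type can be illustrated for a single pair (our conventions, not necessarily those of (5.15)):

```latex
% One pair of d = 1 covariant derivatives with central charge m:
D = \frac{\partial}{\partial\theta} + i\,\bar\theta\,\partial_t + m\,\bar\theta \,,
\qquad
\bar D = \frac{\partial}{\partial\bar\theta} + i\,\theta\,\partial_t + m\,\theta \,,
% which obey
\{D, D\} = \{\bar D, \bar D\} = 0 \,, \qquad
\{D, \bar D\} = 2\,i\,\partial_t + 2\,m \,,
% i.e. the central charge m shifts the time-translation generator. Two such
% pairs, built on (\theta,\bar\theta) with m_1 and on
% (\tilde\theta,\bar{\tilde\theta}) with m_2, anticommute with each other.
```

The shift 2m in {D, D̄} is precisely how the mass-like parameters enter the superwave equations, making the free Schrödinger equation their integrability condition.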
To summarize, the physical states of the considered model are described by the two-chiral superfield Φ 0 (t, x i , θ α , θα ) with the following component expansion In (5.18) all component fields are complex functions of t and x i which satisfy the Schrödinger equation (5.9). Thus the presented model results in five complex scalar fields A(t, x i ), B(t, x i ), B(t, x i ), B [αβ] (t, x i ), C(t, x i ), which describe spin 0 states; one complex vectorial field B (αβ) (t, x i ), which accommodates spin 1 states; and four spinorial fields Ω α (t, x i ), Ωα (t, x i ), Λ α (t, x i ), Λα (t, x i ) corresponding to spin 1/2. It is easy to see that in this way we obtain an equal number of 8 bosonic and 8 fermionic component fields.
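The 8 + 8 counting can be checked directly from the Grassmann expansion; schematically (our labeling of the components, which need not match (5.18) exactly):

```latex
% \Phi_0 depends on 4 Grassmann coordinates (\theta_\alpha, \bar\theta^\alpha),
% \alpha = 1, 2, hence it has 2^4 = 16 complex component fields:
% order 0:  A                                                  (1 boson)
% order 1:  \theta_\alpha \Omega^\alpha,\ \bar\theta^\alpha \bar\Lambda_\alpha
%                                                              (2+2 fermions)
% order 2:  \theta\theta\,B,\ \bar\theta\bar\theta\,\bar B,\
%           \theta_\alpha \bar\theta^\beta\, B_\beta{}^{\alpha} (1+1+4 bosons)
% order 3:  \theta\theta\,\bar\theta^\alpha \Lambda_\alpha,\
%           \bar\theta\bar\theta\,\theta_\alpha \Omega'^{\alpha} (2+2 fermions)
% order 4:  \theta\theta\,\bar\theta\bar\theta\, C               (1 boson)
% Total: 8 bosonic + 8 fermionic components. The bilinear B_\beta{}^{\alpha}
% splits into a trace part (spin 0) and a symmetric traceless part
% B^{(\alpha\beta)} (spin 1), matching the field content listed above.
```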
It is worth emphasizing that the description of the physical states by the two-chiral superfield (5.18) is consistent with the possibility of imposing the gauges ξ a α = 0, ξ ã α = 0 on the local transformations (4.34), as was mentioned in Sect. 4.2.
The models with non-vanishing off-diagonal central charges
As in the previous subsection, we will use the complex variables (5.1), (5.2).
The models with eight odd first class constraints
We first consider the case with the condition (4.58). The wave function has the form (5.8), and the wave equations are derived from the even first class constraint (5.9) as well as from the odd first and second class constraints, the latter quantized by the Gupta-Bleuler method.
Second class constraints have the same form as in (5.7), while the solution of the constraints (5.11) leads to the reduced superwave function (5.12), (5.13).
For simplicity we will restrict our study to the case of parallel four-vectors n where (see (4.58)) Other choices of the constants n a b and ν a b satisfying (4.58) lead to the same set of physical states.
Using the complex variables (5.1), (5.2), the odd first class constraints (4.65) can be brought to the form These expressions generalize the ones given in (5.6). Imposing the constraints (5.10) on the superwave function (5.12), we obtain that the superfield (5.13) satisfies the following generalization of the four superwave equations (5.14) where and D ξ α , Dξ α , D ξ α , Dξ α are defined in (5.15). If the conditions (4.60), (4.61) and (4.58) are valid, once again the integrability condition for the system of equations (5.23), (5.24) is the free Schrödinger equations (5.9). The unconstrained superfield can be found as in the previous subsection: we apply the expansion (5.17) and again find that all superfield components in this expansion are expressed through derivatives of the single two-chiral superfield Φ 0 (t, x i , θ α , θα ) (see (5.18)). Therefore, the considered model with off-diagonal central charges has the same physical field content as the previously studied model with the diagonal central charges only.
Finally, we would like to note that both types of chiral superfields describing superwave functions in this Section are essentially on-shell, since they require the Schrödinger equation as the integrability condition of the relevant odd first class constraints.
Conclusions
In this paper we considered the N = 4, d= 3 NR superparticle models with twelve constant central charges transforming in certain representations of USp(4) ∼ O(5) and USp(2) ⊗ USp(2). It should be added that the rest mass m 0 , describing the Bargmann central charge in the Galilean sector, can be treated as the thirteenth central charge, which does not break any R-symmetry. The superparticle action, constructed in Sect. 3 and Sect. 4 of the present paper, is linear in the MC one-forms associated with the central charges. The numerical coefficients in front of the central charge MC one-forms provide the numerical values of the central charges. In our further work we plan to consider also alternative action densities, nonlinear in the MC forms, which would permit, e.g., a generalization of (4.36) to an action manifestly invariant under the O(4) internal symmetries. In particular, following the construction of the model for the free relativistic massive particle, 20 one can replace the action (4.36) by the USp(2)⊗USp(2)-invariant action depending on all eight off-diagonal central charges where k 1 , k 2 , k 3 are constants and ω a b (X) , ω a b (Y ) are defined in (3.12). The system described by the action S 1 + S ′ 2 produces the same fermionic constraints (4.41) where, however, n a b and ν a b are not constant anymore: they become the canonical momenta for the tensorial central charge coordinates h a b and f a b (see Sect.
3, eqs. (3.2) and (3.12)). So, although the fermionic constraints have basically the same form in both models (see (6.1) and (4.36)), in the case of the action (6.1) the group parameters h_a^b and f_a^b are introduced as dynamical tensorial central charge coordinates. In such a way we deal with an extension of the bosonic target space sector (t, x_i), describing NR space-time, to an extended target space with auxiliary central charge coordinates (t, x_i; h_a^b, f_a^b). The additional coordinates h_a^b, f_a^b enter three new bosonic constraints which fix n = n_M n_M, ν = ν_M ν_M and n_M ν_M (see (4.46)) to (k_1)², (k_2)², (k_3)². In such a way we obtain a sort of Kaluza-Klein (KK) extension of the superparticle model, with auxiliary KK bosonic dimensions represented by the central charge coordinates. Analysis of this modified N = 4 NR superparticle model will be given elsewhere (for an early attempt in this direction see [57]).
In the future we plan also to examine another way of preserving the internal symmetry by using the harmonic type variables u_a^b and u_ã^b̃ which occur in the coset space parametrization (3.2), as well as the "genuine" harmonic variables defined for the R-symmetry group USp(4). A further direction for future study is to couple the NR superparticle presented here to electromagnetic, YM and supergravity backgrounds. This can be important for the following reason. The energy-momentum dispersion relations in our model for arbitrary spin states are described by the free Schrödinger equation depending on the same mass parameter m_0. One can argue that, after switching on the background fields, other central charges will also become dynamically active and will contribute to a modification of the Schrödinger equation.
) where a, b = 1, 2 (ã, b̃ = 1, 2) are the left (right) USp(2) ≃ SU(2) spinor indices. The four complex central charges Z_a^b̃ constitute a complex O(4, C) isovector Z_M = (1/2i)(σ_M)_ã^b Z_b^ã, where (σ_M)_ã^b are D = 4 Euclidean Pauli matrices σ_M = (σ_i, i·1_2). If Z_M = 0 (i.e., the central charge matrix is reduced to (1.10)) we deal with the decomposition of the N = 4 Galilean superalgebra into the direct sum of two N = 2 Galilean superalgebras, each possessing a USp(2) automorphism; if Z_M ≠ 0 the decomposition of N = 4 Galilean supersymmetry into such a sum of two N = 2 superalgebras is not possible. As we will see, in the absence of central charges the full compact internal R-symmetry in the NR case is U(1)⊗USp(4), as opposed to U(4) of the relativistic N = 4, D = 4 superalgebra. If the central charges take numerical values, the presence of the off-diagonal supercharges (1.10) provides the breaking of the USp(2)⊗USp(2) ≃ O(4) ⊂ USp(4) internal symmetry (still preserved by the diagonal central charges) down to the exact O(3) or O(2) internal symmetries which form diagonal subgroups in the product O(3)⊗O(3) = O(4).
generators and 10 symmetric ones G_A^(s)B = ½(G_A^B + G_B^A) describe the coset U(4)/O(4). The axial U(1) generator A = G_A^(s)A can be separated out, i.e. U(4) = SU(4) ⊗ U(1), where the SU(4) generators T_A^B = G_A^B − ¼ δ_A^B A are traceless, T_A^A = 0, and satisfy the relation
(4.43)
The set of non-vanishing Poisson brackets between the classical supersymmetry generators (4.43) involves the relations (4.19) and, in addition, the following Poisson brackets {Q_α^a, Q_β^b} = 2i ǫ_αβ n^ab, {Q_α^a, S_β^b} = {Q_β^b, S_α^a} = −2 ǫ_αβ ν^ab (4.44). The Poisson brackets (4.44) are classical counterparts of the anticommutators (2.47). We see that the constants n^ab and ν^ab of the general model (4.40) reappear at the level of Poisson brackets in place of the central charges X^ab and Y^ab.
(4.69) In this case it follows from (4.47) that w_a^b = n_a^b. Further, vanishing of the quantity (4.45), required for the presence of odd first class constraints, leads to the condition (ν)² = 4(m_0)² n (4.70). If we wish to have eight odd first class constraints with the conditions (4.69), the relations (4.55) and (4.70) (as consequences of the condition A = 0) are valid, and they imply that n = 0, ν = 0, whence ŵ = 0 and, further, n_a^b = ν_a^b = 0. Thus, if non-vanishing central charges n_a^b, ν_a^b are present and m_0 ≠ 0, we can only obtain four odd first class constraints. Let us now separate the odd first and second class constraints. The initial fermionic constraints (D_ξα^a, D_ξα^ã; D_θα^a, D_θα^ã) are equivalent to the set (G_α^a, F_α^ã; D_θα^a, D_θα^ã), where (G_α^a, F_α^ã) are defined by the following expressions
(5.7)
The second class constraints (5.7) form the Hermitian conjugate pairs D_θα, D̄_θα and D_θ̃α, D̄_θ̃α, which permits us to apply Gupta-Bleuler quantization. In accord with this quantization technique we impose on the wave function half of the second class constraints, i.e., D̄_θα and D̄_θ̃α.
F̄_α Φ = (F̄_ξα − (2m_0 n/ν) F̄_ξ̃α) Φ = F̄_ξ̃α Φ = 0 (5.31). Using (5.23), (5.24), we obtain that the equations (5.30), (5.31) amount to the following pair of equations for the superfield (5.29): Ω = 0 (5.32), (D_ξα − n ξ_α − 2 θ^β p_{xβα} − 2i ν θ̄_α) Ω = 0 (5.33). One can consider its general expansion with respect to the Grassmann coordinates ξ̄_α, ξ̃̄_α: Ω = Ω_0 + ξ̄_α Ω^α + ξ̃̄_α Ω̃^α + ξ̄_α ξ̄^α Ω_1 + ξ̃̄_α ξ̃̄^α Ω̃_1 + ξ̄_α ξ̃̄_β Ω^αβ + ⋯ (5.34) )⊗USp(2) ≃ O(4) ⊂ O(5) internal R-symmetry groups. The maximal U(4) R-symmetry group of the relativistic N = 4, D = 4 superalgebra in the NR contraction limit is reduced to a semi-direct product of the compact R-symmetry group USp(4) ≃ O(5) and a six-dimensional commutative (abelian) ideal. In the dynamical framework of our superparticle model, after quantization the central charges are identified with constant parameters of the underlying world-line Lagrangian. Depending on the specific non-vanishing values of these central charges we are left, before any Hamiltonian analysis, with different fractions of unbroken internal symmetry, G_int ⊂ USp(4) ≃ O(5), namely: a) If only one central charge Z (see (2.14)) is present, the maximal R-symmetry O(5) of the NR N = 4 superalgebra remains in the model; the central charge is an O(5) singlet. b) If we have two quasi-diagonal central charges (see (1.9)), the internal symmetry is broken to G_int = USp(2)⊗USp(2) ≃ SU(2)⊗SU(2) ≃ O(4). The central charges are presented by four USp(2) singlets m_1, m_2, µ_1 and µ_2. c) Adding the off-diagonal central charges described by two arbitrary constant O(4) isovectors n_a^b and ν_a^b (see (4.38)) radically changes the situation. At m_1 ≠ m_2 and µ_1 ≠ µ_2 the set of twelve constant central charges determines two O(5) vectors (n_a^b, m_1 − m_2) and (ν_a^b, µ_1 − µ_2) and two O(5) singlets (m_1 + m_2) and (µ_1 + µ_2). The O(5) frame can be fixed so that one of these O(5) vectors carries only one non-zero component, say m_1 − m_2. There still remains an O(4) covariance which can be further restricted in
such a way that another O(4) vector ν_a^b (see (4.38)) will have only one non-zero component, ν_a^b = ǫ_a^b ν. Thus in such an R-symmetry frame we end up with five independent constant central charges and O(3) as the residual R-symmetry group. d) If m_1 = m_2 and/or µ_1 = µ_2, the O(5) covariance is reduced to O(4). Hence we can choose the frame where the O(4) isovector n_a^b contains only one non-zero component, n_a^b = ǫ_a^b n, and the residual R-symmetry group is O(3). The second, non-parallel O(4) vector ν_a^b can be split into O(3) singlet and vector parts, with only one non-zero vector component, ν_a^b → (ǫ_a^b ν, δ_a^b ν_2), which reduces the R-symmetry O(3) to the minimal possible one, given by O(2). Thus in this particular frame we end up with six (or fewer) independent constant central charges and O(2) as the minimal exact internal symmetry in our model.
(4.19) The Poisson brackets (4.19) are the classical counterparts of the anticommutators (2.44)-(2.46). We see that in the model (4.13) the parameters m_1, m_2 and µ_1, µ_2 generate the constant central charges X_1, X_2 and Y_1, Y_2. The canonical Hamiltonian of the model (4.13) vanishes, as in the bosonic case: H_1 = p_t ṫ + p_{x_i} ẋ_i + p_ξ ⋯ (4.67) Thus it follows that the conditions (4.58) and (4.60), (4.61) lead to the vanishing of one of the two independent O(5) vectors in the space of coupling constants, and also relate with each other some O(5) invariant combinations of these constants. Thus these conditions preserve O(5) covariance, and one can still use the O(5) rotations in order to choose the frame where n_a^b (or ν_a^b) are zero. In such a frame, due to (4.58), both O(4) vectors are zero and we are left with the constraints (4.26), (4.27) as the only remaining ones. So for the off-diagonal central charges satisfying the conditions (4.58) our model becomes identical to the one considered in Sect. 4.2.
(4.73) Thus, if the condition (4.70) is valid, the four constraints F_α^ã defined in (4.72) are first class. The constraints D_θα^a, D_θα^ã and G_α^a, defined in (4.71), are second class.
as in the previous cases. The remaining odd constraints are the second class constraints G_α^a, defined in (4.71), and the first class constraints F_α^ã, defined in (4.72). The second class constraints (4.71) coincide with the constraints (5.21), (5.23) taken at m_1
"Physics"
] |
Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System
Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations arise in the power system, the impact of electric vehicles, distributed generation and renewable energy has placed extra demands on distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of the electric loads whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron (ADALINE) network. The experiments show that the proposed method is time-efficient and achieves better accuracy on simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
In CPES, power quality analysis is not restricted to the classic problems such as the definition and classification of disturbances; measurement protocols and standards [7,8] have also been well established. Therefore, building tools for accurately extracting the fine-grained information inside power quality data is of great interest to power quality researchers. In this paper, we take harmonic distortion, the most common type of power quality disturbance, as an example. Low-order harmonics have been extensively discussed in the literature [9-11] because they contribute the most to related problems, such as overheating of rotating machines and degraded performance of electronic equipment, in the traditional power system. However, as more renewable energy sources are put into use and microgrids become widespread, excessive interharmonic pollution has been injected into the system [12], which may reveal many characteristics of the power equipment within the system [13], and problems caused by frequency deviation and noise are also getting severe [4]. Therefore, compared to methods aiming only at low-order harmonics and disregarding frequency deviation and noise, resolution-enhanced harmonic and interharmonic measurement can provide a more comprehensive understanding of power quality.
Frequency estimation is an important factor in harmonic and interharmonic measurement [14,15]. The standard IEC (International Electrotechnical Commission) 61000-4-7 [8] recommends the Fast Fourier Transform (FFT) and the subgroup method as the tools for measuring harmonics and interharmonics, with 5 Hz as the frequency resolution. However, the frequency of an interharmonic is not an integer multiple of the fundamental frequency, and fundamental frequency deviation is unavoidable in general; therefore, harmonic and interharmonic measurement needs higher frequency resolution. Additionally, noise and the time-varying nature of power signals should also be considered in the design of the measuring devices and algorithms. Distributed sensors equipped with advanced algorithms bring us a deeper and more accurate comprehension of the whole system [15-20]; however, they also confront us with a tradeoff between accuracy and computational complexity. High-resolution methods usually incur considerable computational cost and are thus mostly applied in off-line processing, whereas low-resolution methods such as FFT-based methods are more likely to be implemented in real-time hardware. In recent years, much attention has been devoted to harmonic and interharmonic estimation, and many methods have been proposed to achieve a better tradeoff [21-23].
Conventional methods for harmonic and interharmonic analysis can be divided into two categories: parametric and non-parametric methods. In contrast to non-parametric methods, parametric methods such as multiple signal classification (MUSIC), the Prony method and estimation of signal parameters via rotational invariance techniques (ESPRIT) [24-27] achieve higher frequency resolution and are better suited for the measurement of interharmonics. However, the accuracy of these algorithms depends highly on prior information about the model order, and the algorithms are in general time-consuming and not robust to noise and outliers. The discrete Fourier transform (DFT), as the most commonly applied non-parametric method, is time-efficient and robust; however, its frequency resolution is restricted by the observation interval, and its result may be deteriorated by spectral leakage caused by interharmonics and fundamental frequency deviation [28].
Modern methods based on artificial neural networks (ANN) [9,10], the adaptive linear neuron (ADALINE) network [11,21], independent component analysis (ICA) [29,30] and empirical mode decomposition (EMD) [31,32] have emerged in this field recently. ANN and ADALINE can be applied in real time to time-varying power signals; however, traditional ADALINE and ANN methods are only capable of calculating a harmonic component's amplitude and phase angle with the frequency known a priori, and noise and interharmonics have to be pre-filtered for accuracy, which limits the measurement of interharmonics. ICA-based methods formulate harmonic and interharmonic estimation as a single-channel independent component extraction problem and extract pure sinusoidal signals for subsequent processing, in order to reduce the computational burden. The work in [30] proposes to leave the computation to the design stage to obtain the best separation row offline, and achieves accurate estimation results at the sacrifice of adaptability to noise. EMD-based methods such as improved EMD with masking signals (IM-EMD) [32] also aim at extracting single-frequency harmonics from distorted time-varying power signals, but the masking parameters for IM-EMD are not consistent across conditions and lack self-adaptation.
Considering the tradeoff between accuracy and computational complexity, numerous improvements to the above methods have been offered. Any single method has its intrinsic weaknesses, and thus hybrid methods such as two-stage ADALINE [21], exact model order ESPRIT (EMO-ESPRIT) [33] and the ESPRIT-assisted adaptive wavelet neural network (EA-AWNN) [22] have been proposed to compensate for each other's weaknesses, achieving accuracy and time-efficiency simultaneously. Two-stage ADALINE integrates the Prony method to help ADALINE locate every frequency component accurately. EMO-ESPRIT utilizes a model order estimation algorithm to provide prior information to ESPRIT for accurate estimation. To handle time-varying signals with higher accuracy, EA-AWNN leverages ESPRIT's accurate estimation results to train the adaptive wavelet neural network in real time. Furthermore, principles of compressive sensing have also been applied to DFT-based waveform analysis (CS-DFT) for higher frequency resolution [23], though the estimation accuracy is restricted by the discrete-valued frequency estimates. The main idea of these methods is to first estimate the frequencies using high-resolution methods, and then to estimate the amplitudes and phase angles with fast and adaptive techniques.
This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. It first uses the single-channel version of RobustICA [34,35] (SC-RICA) to extract harmonic and interharmonic components efficiently, and then leverages the results of that stage: the high-resolution frequency is estimated from three DFT samples [36,37] with little additional computation, and, finally, the amplitudes and phases are calculated using the ADALINE network [21]. The experiments show that the proposed method is time-efficient and achieves better accuracy on simulated and experimental signals, thus providing deeper insight into the (inter)harmonic sources and even the whole system. The proposed method is compared with previously studied methods to demonstrate its superiority, and the interference caused by fundamental frequency deviation and the presence of noise is also considered.
The remainder of this paper is structured as follows. Section 2 introduces the power quality analysis framework in a networked environment. Section 3 formulates the problem and presents the basic principles of single-channel ICA (SCICA) and ADALINE. Section 4 describes each stage of the proposed method and how the single-channel version of RobustICA assists ADALINE in extracting harmonic and interharmonic components from distorted power signals. Simulated and laboratory experiments are presented in Section 5 to evaluate the performance of the proposed method; the distorted power signals are acquired using a prototype system, and high-resolution methods such as EMO-ESPRIT, MUSIC and CS-DFT are used for comparison. Finally, the conclusion is given in Section 6.
Analysis Framework
As discussed in Section 1, existing smart meters are mainly used for billing purposes and demand monitoring, and, due to the lack of computing resources and waveform measurement devices, they still cannot fulfill demands such as appliance-level power quality analysis and the interaction between computerized instrumentation and physical facilities. Therefore, metering devices should be modified to make them truly smart and should be dispersed throughout the system. Inspired by the work on nonintrusive load monitoring (NILM) initiated by George W. Hart [38], a nonintrusive, high-performance monitor can be connected to the total load using the standard revenue meter socket interface. This extension of the meter allows very easy installation, removal and maintenance, and there is no need to build dedicated lines for power quality analysis.
Wireless sensor networks (WSNs) are the most promising technology for connecting the "last mile", which refers to the connection that provides substations and consumers with access to the high-speed and wide-bandwidth core network [39]. Wireless communication technologies help achieve remote
Theoretical Background
It is assumed that the power signal can be transformed to the frequency domain with a line spectrum, and that its frequency components are finite and sparse. In this section, the high-frequency-resolution harmonic and interharmonic estimation will be decomposed into three stages: harmonic and interharmonic extraction, frequency estimation, and amplitude and phase angle estimation. The harmonic and interharmonic extraction stage will be formulated as a single-channel blind source separation problem, and the amplitude and phase angle estimation will be transformed into a single-hidden-layer neural network training problem. There will be a preliminary study of the carefully selected method for each stage, and the principles of related methods will also be introduced.
Problem Statement
The nth sample of a power signal with sampling rate f_s can be expressed as a multisine waveform: y(n) = ∑_{m=1}^{M} A_m sin(2π f_m n / f_s + ϕ_m) + ε(n) (1), where f_m is the frequency of the mth harmonic/interharmonic component, A_m and ϕ_m are its corresponding amplitude and phase angle, and ε(n) is white Gaussian noise with zero mean. The goal of harmonic and interharmonic estimation is to calculate the frequency, amplitude and phase angle of every component in an accurate and fast way. As the sampled power signal sequence y(n) = [y(n), y(n − 1), ..., y(n − N + 1)]^T with dimension N is formed by a linear combination of the fundamental frequency component and its harmonics/interharmonics, and the harmonics/interharmonics are orthogonal and thus statistically independent of each other, extracting harmonic/interharmonic components from a single measurement channel can be treated as an extreme case of an underdetermined ICA problem, also known as single-channel independent component analysis. The work in [34,44] explains when and how standard ICA can perform source separation from a single sensor and how SCICA can be applied to the analysis of electroencephalogram (EEG) and electrocardiogram (ECG) signals. The work in [29,30] applies SCICA to harmonic component extraction from power system signals. Building on the above works, an improved SCICA is proposed in this paper to speed up the algorithm, and it is shown to be robust to Gaussian noise.
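As a concrete illustration of the signal model in Equation (1), the sketch below synthesizes a noisy multisine record; the frequencies, amplitudes and phases used here are illustrative values, not taken from the paper's experiments:

```python
import numpy as np

def multisine(freqs, amps, phases, fs, N, noise_std=0.0, seed=0):
    """y(n) = sum_m A_m * sin(2*pi*f_m*n/fs + phi_m) + eps(n)."""
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    y = sum(A * np.sin(2 * np.pi * f * n / fs + p)
            for f, A, p in zip(freqs, amps, phases))
    return y + rng.normal(0.0, noise_std, N)

# Illustrative mixture: a deviated fundamental (49.8 Hz), its 3rd
# harmonic, and one interharmonic at 182 Hz.
fs, N = 3200, 1024
y = multisine([49.8, 149.4, 182.0], [1.0, 0.2, 0.1],
              [0.0, 0.5, 1.0], fs, N, noise_std=0.01)
```

Such synthetic records are a convenient ground truth for checking the later extraction and estimation stages.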
In order to achieve resolution-enhanced frequency analysis at the expense of little computation, DFT-based high-resolution methods are preferred. In this paper, once single-frequency waveforms are obtained, the resolution-enhanced frequency can be directly calculated from only three DFT samples; the related signal processing techniques are presented in detail in [36,37]. Notably, it can be proved that the algorithm needs little additional computation and is robust to noise for single-frequency signals. With the high-resolution frequencies and the original signal, the final problem is how to determine the amplitude and phase angle of each frequency component accurately in real time. ADALINE has been widely applied in harmonic estimation due to its self-adaptation to noise and rapid convergence. In addition, the traditional ADALINE's shortcoming can be overcome by the integration of proper frequency estimation methods, as in the work in [21]. Therefore, ADALINE is a good choice for amplitude and phase angle estimation.
Principle of Single-Channel Independent Component Analysis
Independent component analysis (ICA) is a widely used blind source separation method that can estimate independent sources based on the mixture model without any further prior knowledge. The process of applying single-channel independent component analysis (SCICA) to single-sensor signals can be regarded as applying a harmonic/interharmonic extraction filter to a time-delayed model of the signal.
In order to construct a time-delayed model for the ICA problem, the observed time series y(n) with dimension N can be separated into a sequence of contiguous blocks: x(n) = [y(n), y(n − 1), ..., y(n − D + 1)]^T (2) where D is the block length (embedding dimension) and the superscript T denotes transposition. The mixed signal matrix x can be decomposed into the product of a mixing matrix and independent sources: x(n) = E · s(n) (3) where the D × D matrix E denotes the mixing matrix that linearly combines the sources to form a mixture of harmonics and interharmonics, and s is a D × N matrix comprising the statistically independent sources, such as pure sine waveforms with disjoint frequency spectra. To estimate s from x, the inverse equation can be expressed as ŝ(n) = H · x(n) (4) where H = E^{−1} is the separation matrix and ŝ is the estimate of the original sources. Thus, a standard ICA algorithm can be applied to x(n) to learn the sparse features of the measured signal, and the most common method to obtain the separation matrix is the FastICA algorithm. FastICA uses non-Gaussianity as the statistical property of the signals for source separation, and kurtosis is one of the classic measures of non-Gaussianity for zero-mean random variables. For better estimation, x should be preprocessed using principal component analysis or whitening so that all the vectors of x have unit variance and are uncorrelated.
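The delayed-block construction of Equation (2) can be sketched in a few lines; the toy record below is purely illustrative:

```python
import numpy as np

def delay_embed(y, D):
    """Stack D delayed copies of a single-channel signal y into the
    multichannel matrix used by SCICA: column n holds
    [y(n), y(n-1), ..., y(n-D+1)]^T."""
    N = len(y)
    # only the N - D + 1 complete columns are kept (no zero padding)
    return np.stack([y[D - 1 - d : N - d] for d in range(D)], axis=0)

y = np.arange(10.0)          # toy single-channel record
X = delay_embed(y, D=4)      # shape (4, 7)
```

A standard ICA routine can then be run on the rows of X as if they were D separate sensors.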
To optimize the ICA algorithm, fixed-point and gradient algorithms based on kurtosis have been proposed in the literature. The kurtosis is the fourth standardized moment and is also one of the classic measures of outliers in a data set; here it serves as the estimate of non-Gaussianity for ICA. However, the presence of sub-Gaussian or super-Gaussian sources brings increased estimation error and computational complexity to these algorithms, and saddle points and spurious local extrema of the contrast functions are also not taken into account. Therefore, following the RobustICA algorithm [35], in this work the kurtosis contrast function is used as the objective function, and the optimal step-size technique is integrated to improve cost efficiency and robustness. In addition, preprocessing techniques are unnecessary, which simplifies the process.
Principles of ADALINE
ADALINE is a single-hidden-layer neural network with a linear transfer function and has been extensively used in signal processing, control systems and error cancellation. Recently, this technique has also emerged in harmonic and interharmonic analysis [11,21,45]. Neglecting the zero-mean noise, the estimated signal ŷ(n) with known fundamental frequency and unknown amplitudes and phase angles can be represented as ŷ(n) = ∑_{m=1}^{M} Â_m sin(2π n m f_0/f_s + ϕ̂_m). Then, substituting Ŵ_{2m−1} = Â_m cos ϕ̂_m, Ŵ_{2m} = Â_m sin ϕ̂_m and θ̂_m = 2π n m f_0/f_s, we obtain ŷ(n) = ∑_{m=1}^{M} [Ŵ_{2m−1} sin θ̂_m + Ŵ_{2m} cos θ̂_m], where Â_m and ϕ̂_m denote the estimated amplitude and phase angle of the mth component. Based on each given input and target vector, the adaptive linear neuron network adjusts the weights and biases at each time step to minimize the sum-squared error of recent input and target vectors. The amplitude and phase of the mth frequency component are recovered as Â_m = √(Ŵ²_{2m−1} + Ŵ²_{2m}) and ϕ̂_m = arctan(Ŵ_{2m}/Ŵ_{2m−1}). Traditional ADALINE is only capable of computing the amplitudes and phase angles with the frequencies known, as above. However, fundamental frequency deviation and interharmonics are usually not negligible in practice, and they cause serious interference with the convergence speed and accuracy of traditional ADALINE. Therefore, in this paper, frequency deviation and interharmonics are taken into account when the power signal is modeled, and gradient- and Jacobian-based methods have been tested for training the adaptive linear neuron network.
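A minimal LMS (Widrow-Hoff) sketch of this traditional ADALINE scheme, assuming the fundamental frequency f0 is known exactly; the learning rate and epoch count are illustrative choices, not tuned values from the paper:

```python
import numpy as np

def adaline_harmonics(y, f0, fs, M, lr=0.05, epochs=20):
    """LMS estimate of A_m, phi_m for harmonics m*f0, m = 1..M.
    Model: y(n) ~ sum_m W[2m-2]*sin(theta_m) + W[2m-1]*cos(theta_m),
    theta_m = 2*pi*n*m*f0/fs, so A_m = hypot(W_sin, W_cos) and
    phi_m = atan2(W_cos, W_sin)."""
    N = len(y)
    n = np.arange(N)
    theta = 2 * np.pi * np.outer(np.arange(1, M + 1), n) * f0 / fs  # (M, N)
    X = np.empty((2 * M, N))
    X[0::2], X[1::2] = np.sin(theta), np.cos(theta)
    W = np.zeros(2 * M)
    for _ in range(epochs):
        for k in range(N):                 # Widrow-Hoff weight update
            e = y[k] - W @ X[:, k]
            W += lr * e * X[:, k]
    return np.hypot(W[0::2], W[1::2]), np.arctan2(W[1::2], W[0::2])
```

With noiseless in-model data the weights converge to the true amplitude/phase decomposition; interharmonics and frequency deviation would break this, which is exactly the limitation the proposed method addresses.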
Proposed Method for Harmonics and Interharmonics Measurement
In this section, a detailed description and improvement of the proposed method will be presented based on the preliminaries. It will also be discussed how the proposed method achieves a better fusion of harmonic and interharmonic extraction, frequency estimation, and amplitude and phase angle estimation. At the end of this section, the overall process for the measurement of harmonics and interharmonics will be sketched.
Harmonic and Interharmonic Extraction
In this paper, unlike methods that use the FastICA algorithm to obtain the separation matrix, the algorithm named RobustICA is used to achieve better convergence speed, and sphering and whitening are not required as preprocessing, as opposed to most ICA algorithms. The RobustICA technique is based on optimization of the kurtosis contrast function, and it uses the optimal step-size technique to optimize the kurtosis along the search direction. In particular, the kurtosis contrast can be used for source extraction with super-Gaussian sources included. Kurtosis is one of the classic measures of non-Gaussianity for ICA. Denoting the kurtosis of a zero-mean variable α by Kurt(α), it is defined as Kurt(α) = E[α⁴]/E[α²]² − 3, where E[·] denotes mathematical expectation. Since coefficients such as the step size (learning rate) in the iteration process directly affect the convergence speed, the balance between convergence speed and accuracy is highly dependent on the step size. The optimal step size that maximizes the absolute kurtosis contrast in an exact line search can be represented as µ_opt = arg max_µ |Kurt((h + µg)ᵀ x)|, where h is a separating vector and the search direction g is typically the gradient g = ∇_h Kurt(h). One iteration of RobustICA performs an optimal step-size optimization as follows: step 1: Compute the coefficients of the optimal step-size polynomial, which can be obtained from the measured signal and the values of g and h. step 2: Extract the optimal step-size polynomial roots {µ_i}. step 3: Select the root µ_opt leading to the absolute maximum of the contrast along the search direction. step 4: Update h = h + µ_opt g.
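For real-valued data, the exact line search of steps 1-3 can be carried out with ordinary polynomial arithmetic: along h + µg, E[y⁴] is a quartic in µ and E[y²] a quadratic, so the critical points of the normalized contrast are roots of a degree-4 polynomial. The sketch below re-derives those coefficients from that observation rather than quoting the closed-form listing of [35], and the extraction loop is a simplified illustration, not the authors' full implementation:

```python
import numpy as np

def kurt(y):
    """Normalized kurtosis contrast for zero-mean real data."""
    return np.mean(y**4) / np.mean(y**2)**2 - 3.0

def optimal_step(h, g, X):
    """Exact line search of |Kurt((h + mu*g)^T X)| over mu.
    E[y^4] is the quartic p(mu), E[y^2] the quadratic q(mu); the
    critical points solve p'q - 2pq' = 0 (degree <= 4)."""
    a, b = h @ X, g @ X
    p = np.poly1d([np.mean(b**4), 4*np.mean(a*b**3), 6*np.mean(a**2*b**2),
                   4*np.mean(a**3*b), np.mean(a**4)])
    q = np.poly1d([np.mean(b**2), 2*np.mean(a*b), np.mean(a**2)])
    roots = (p.deriv() * q - 2 * p * q.deriv()).roots
    mus = np.real(roots[np.abs(roots.imag) < 1e-8])
    mus = mus[np.abs(mus) < 1e6]          # drop numerical stragglers
    cand = np.concatenate([mus, [0.0]])   # mu = 0 as a safe fallback
    return cand[int(np.argmax([abs(kurt(a + mu * b)) for mu in cand]))]

def extract_one(X, iters=20, seed=0):
    """Extract one independent component from X (rows = channels)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=X.shape[0]); h /= np.linalg.norm(h)
    for _ in range(iters):
        y = h @ X
        # gradient of the normalized kurtosis contrast w.r.t. h
        g = (4*np.mean(y**3 * X, axis=1)*np.mean(y**2)
             - 4*np.mean(y**4)*np.mean(y * X, axis=1)) / np.mean(y**2)**3
        k = kurt(y)
        g *= np.sign(k) if k != 0 else 1.0
        h = h + optimal_step(h, g, X) * g
        h /= np.linalg.norm(h)
    return h
```

Because the step is solved globally along each search direction, convergence typically takes only a handful of iterations, which is the main practical advantage over fixed-step gradient ICA.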
The stopping criterion for extracting one component is similar to that of FastICA. However, since dimension-reduction tricks like PCA are not applied, the iterations carried out after every harmonic/interharmonic-related component has been extracted would be a waste of time. In order to speed up this process, and also for the convenience of frequency estimation in the next stage, an additional stopping criterion is set according to the spectral energy ratio of each extracted component.
The proposed method SC-RICA is summarized as follows: step 1: Construct the SCICA model. step 2: Update the separating vector h iteratively using the optimal step-size method described above. step 3: Extract a harmonic or interharmonic component ŝ = h · x with the optimized separating vector. step 4: Calculate the DFT coefficients S of the extracted ŝ and estimate the spectral energy ratio around the grid point k_p with the peak magnitude. step 5: If the spectral energy ratio exceeds the empirical threshold, loop from step 2. If not, break the loop and output ŝ, S, k_p of every extracted component.
Frequency Estimation
The mth extracted harmonic/interharmonic component is close to a single-tone waveform observed under white Gaussian noise, where the superscript (m) indicates the index of the extracted component. The parameters of a sinusoidal waveform observed in white Gaussian noise are typically estimated with a coarse-to-fine strategy. First, DFT coefficients without interpolation are calculated for a coarse frequency estimate, and their peak value is determined. Then, a resolution-enhanced search around the peak is performed. The frequency of the extracted component is assumed to be f_m = (k_p + δ) f_s / N (12), where k_p is the index of the maximum DFT magnitude coefficient, |δ| < 1/2 represents the offset between the true frequency and the nearest discrete grid point, and N is the sample size of the signal. The target at this stage is to estimate δ from samples around the peak in the DFT spectrum. As mentioned above, after calculating the DFT coefficients of a single-tone waveform observed in white Gaussian noise, most of the signal energy focuses around the grid point k_p in the Fourier domain due to spectral leakage. Additionally, f_m generally does not lie exactly on the grid point k_p unless f_m is a multiple of the frequency resolution f_s/N. However, by leveraging the information around the grid point k_p, such as its neighbors k_p − 1 and k_p + 1, a more accurate frequency estimate can be obtained. The method proposed by Candan [37] is summarized as follows: step 1: Calculate the windowed DFT coefficients of the extracted harmonic/interharmonic component. step 2: Find the index of the maximum DFT magnitude coefficient. step 3: Select a window and calculate the window-dependent function, where w(n) is the real-valued window and f′_w(α) is the first derivative of f_w(α). step 4: Calculate the bias correction factor of the selected window function. step 5: Estimate δ. step 6: Estimate f_m of the extracted component with Equation (12).
The results of the first two steps have already been obtained in the harmonic and interharmonic extraction stage. Further proof and derivation can be found in [36,37]. The method at this stage obtains a resolution-enhanced frequency estimate without much additional computation, and it has been proven robust in the presence of noise. Because some interfering components may not be filtered out completely, the application of the window function alleviates this problem to some extent.
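As a concrete illustration, the coarse-to-fine procedure above can be sketched for the rectangular-window special case, in which the window-dependent correction of steps 3 and 4 reduces to the constant tan(π/N)/(π/N); the signal parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def candan_delta(X, kp, N):
    """Fine frequency offset |delta| < 1/2 from the three DFT samples around
    the peak (rectangular-window case of Candan's three-point estimator)."""
    num = X[kp - 1] - X[kp + 1]
    den = 2 * X[kp] - X[kp - 1] - X[kp + 1]
    c = np.tan(np.pi / N) / (np.pi / N)   # bias correction factor
    return c * np.real(num / den)

fs, N = 5000.0, 1000                       # sampling frequency (Hz), sample size
f_true = 150.3                             # off-grid tone; grid spacing fs/N = 5 Hz
n = np.arange(N)
x = np.sin(2 * np.pi * f_true * n / fs)

X = np.fft.fft(x)                          # step 1 (no window here)
kp = int(np.argmax(np.abs(X[:N // 2])))    # step 2: coarse peak index
delta = candan_delta(X, kp, N)             # steps 3-5
f_hat = (kp + delta) * fs / N              # step 6: fine frequency estimate
```

With these parameters, `kp` is 30 (a 150 Hz coarse estimate) and `f_hat` lands within a small fraction of a bin of the true 150.3 Hz; the windowed variant of steps 3 and 4 additionally suppresses interference from incompletely filtered components.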
Amplitude and Phase Angle Estimation
Once all the high-resolution frequency components are obtained, the estimation of amplitudes and phase angles can be considered as learning a linear or nonlinear mapping between appropriate inputs and outputs. Unlike traditional ADALINE, the fundamental frequency deviation and interharmonics are considered at this stage. Disregarding the zero-mean noise, the estimated signal can be represented as ŷ(n) = Σ_{m=1}^{M} Â_m sin(2πn f̂_m/f_s + φ̂_m). Then, substituting Â_m cos φ̂_m, Â_m sin φ̂_m, and 2πn f̂_m/f_s with Ŵ_{2m−1}, Ŵ_{2m}, and θ̂_m, respectively, yields ŷ(n) = Ŵ^T X̂(n), where f̂_m, Â_m, φ̂_m denote the estimated frequencies, amplitudes, and phase angles of the signal, Ŵ = [Ŵ_1 Ŵ_2 ⋯ Ŵ_{2M−1} Ŵ_{2M}]^T, and X̂(n) = [sin θ̂_1 cos θ̂_1 ⋯ sin θ̂_M cos θ̂_M]^T. In this situation, the neural network architecture is constructed with X̂(n) as the input and the original time-domain signal y as the target output. In addition, there is no bias connected to the network layers, and the transfer function is linear rather than hard-limiting, which allows the outputs to take on any value. The mean squared normalized error with regularization can be taken as the cost function J(Ŵ) = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² + λ Σ_{j=1}^{2M} Ŵ_j², where the subscript i indicates the index of a vector element, N denotes the sample size of the power signal, M denotes the number of frequency components, and λ is the weight decay parameter that controls the relative importance of the two terms in the definition of J(Ŵ).
Our goal is to minimize J(Ŵ) as a function of Ŵ. To train the neural network, we initialize each Ŵ_i to a small random value near zero, then update the weight values using a network training function, such as gradient- or Jacobian-based methods. As the network's weight, input, and transfer functions have derivative functions, scaled conjugate gradient backpropagation is used to calculate derivatives of the cost function with respect to the weight variables.
One iteration of gradient descent updates Ŵ as Ŵ ← Ŵ − β ∂J(Ŵ)/∂Ŵ, where β denotes the learning rate. By repeatedly taking gradient descent steps to reduce the cost function J(Ŵ) and setting an appropriate stopping criterion, the neural network is trained, and Ŵ comes to contain the information of the amplitudes and phases. The amplitude and phase of the mth frequency component are then given by Â_m = (Ŵ_{2m−1}² + Ŵ_{2m}²)^{1/2} and φ̂_m = arctan(Ŵ_{2m}/Ŵ_{2m−1}). This hybrid method consists of three stages: harmonic and interharmonic extraction, frequency estimation, and amplitude and phase angle estimation. Extracting pure sinusoids is the prerequisite for resolution-enhanced frequency estimation, and frequency estimation is in turn an important factor in accurate harmonic and interharmonic measurement. In summary, the overall process for the measurement of harmonics and interharmonics can be sketched as a sequence of steps ending with: train the neural network until the stopping criterion is met, and (step 6) calculate the amplitudes and phase angles of the extracted components of the measured power signal.
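Given already-estimated frequencies, this stage reduces to a regularized linear least-squares fit; a minimal sketch follows, with plain gradient descent standing in for scaled conjugate gradient (the two-component signal and all parameter values are illustrative assumptions):

```python
import numpy as np

fs, N = 5000.0, 1000
n = np.arange(N)
freqs = np.array([50.0, 150.0])        # assumed already estimated (Hz)
A_true = np.array([1.0, 0.3])          # illustrative amplitudes
phi_true = np.array([0.5, -1.0])       # illustrative phase angles (rad)
y = sum(A * np.sin(2 * np.pi * f * n / fs + p)
        for A, f, p in zip(A_true, freqs, phi_true))

# Input matrix: columns [sin theta_1, cos theta_1, sin theta_2, cos theta_2]
theta = 2 * np.pi * np.outer(n, freqs) / fs
X = np.empty((N, 2 * len(freqs)))
X[:, 0::2] = np.sin(theta)
X[:, 1::2] = np.cos(theta)

# Gradient descent on J(W) = mean((y - X W)^2) + lam * ||W||^2
W = 0.01 * np.random.default_rng(0).standard_normal(2 * len(freqs))
lam, beta = 1e-6, 0.5                  # weight decay, learning rate
for _ in range(2000):
    residual = y - X @ W
    grad = -2.0 * (X.T @ residual) / N + 2.0 * lam * W
    W -= beta * grad

# Recover amplitude and phase of each component from the trained weights
A_hat = np.hypot(W[0::2], W[1::2])     # (W_{2m-1}^2 + W_{2m}^2)^(1/2)
phi_hat = np.arctan2(W[1::2], W[0::2])
```

Because the sine/cosine columns are near-orthogonal for well-separated tones, the iteration converges quickly, and `A_hat`, `phi_hat` recover the assumed amplitudes and phases.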
Performance Evaluation
The experiments are organized as follows. Section 5.1: synthesized power signals including harmonics and interharmonics are analyzed; fundamental frequency deviation and noise are also considered. Section 5.2: PWM (pulse width modulation) VSI (voltage source inverter) induction motor drives as loads in an IEEE 14 bus system are simulated by SimPowerSystems TM (MathWorks, Natick, MA, USA), and the emitted harmonic and interharmonic disturbances are measured and then analyzed. Section 5.3: laboratory experiments are conducted on a prototype system designed according to the proposed power quality analysis framework in a networked environment. At the moment the microgrid switches from grid-connected mode to isolated mode, the estimation results of the fundamental frequency variation are analyzed and compared in the presence of harmonics. In addition, field experimental signals with harmonic and interharmonic currents injected by regularly fluctuating loads, such as laser printers, are acquired and analyzed. Section 5.4: the results of the simulations and field experiments are analyzed, and the characteristics of the proposed method are discussed.
Measurement of Synthesized Harmonics and Interharmonics with Frequency Deviation and Noise
In order to illustrate the performance of the proposed method, synthesized power signals including harmonics and interharmonics are generated, and the methods are implemented in MATLAB on a desktop personal computer with an Intel Core i5-3470 processor (Intel Corporation, Santa Clara, CA, USA) and 8 GB of random access memory. The parameters of the signal are listed in Table 1. There are eight components, numbered from one to eight, and they are spectrally disjoint. As Table 1 shows, fundamental frequency deviation has been considered within a 50 Hz power system, and 0.1 Hz is an acceptable assumption for the fundamental frequency deviation. The synthesized power signals mainly consist of the fundamental, 3rd, and 5th harmonics and five interharmonic components around them. To ensure that the synthesized signals are close to real power signals, the amplitudes of the harmonics and interharmonics range from around 3% to 60% of the fundamental. For comparison, white Gaussian noise with zero mean is randomly generated and added to the pure synthesized signals, and the signal-to-noise ratio (SNR) is set to 40 dB. K synthesized signals have been sampled with size N = 1000 and sampling frequency 5 kHz. Three methods are chosen for comparison: EMO-ESPRIT [33], MUSIC [25], and CS-DFT [23]. The computation times of all four methods correlate positively with the number of harmonic and interharmonic components in the synthesized signals, and all the synthesized signals are composed of eight harmonic/interharmonic components. For a fair and reasonable comparison, the model orders of MUSIC and EMO-ESPRIT are therefore set strictly to 16, and the dimensions of the autocorrelation matrix are tuned to be as small as possible without sacrificing estimation accuracy. Similarly, CS-DFT terminates when the iteration number of the support recovery exceeds 16, and the interpolation factor of CS-DFT is set to 10 as suggested in [23].
In terms of the proposed method, it stops the harmonic extraction when 16 components have been extracted and terminates the adaptive linear neural network training when the gradient falls below 1.00 × 10⁻⁶.
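The noise-addition step of the test setup can be sketched as follows; the two-tone signal here is a simplified stand-in for the eight-component Table 1 mixture:

```python
import numpy as np

def add_awgn(x, snr_db, rng):
    """Add zero-mean white Gaussian noise so the result has the target SNR (dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(p_noise), x.shape)

fs, N = 5000.0, 1000
n = np.arange(N)
# Illustrative components: deviated fundamental plus one small interharmonic
x = np.sin(2 * np.pi * 50.1 * n / fs) + 0.1 * np.sin(2 * np.pi * 182.0 * n / fs)
y = add_awgn(x, 40.0, np.random.default_rng(42))

# Empirical SNR of the noisy signal, for verification
snr_est = 10 * np.log10(np.mean(x ** 2) / np.mean((y - x) ** 2))
```

For N = 1000 samples, the empirical SNR of `y` lands within a fraction of a decibel of the 40 dB target.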
The tested signals in Section 5.1 are synthesized with the parameters in Table 1, so the actual values of parameters such as amplitudes and phase angles are known. Therefore, we can use the difference between the actual and estimated values to evaluate the algorithms. The total relative error (TRE) of amplitude estimation is used for performance evaluation and is defined, for each frequency component, as TRE = (100%/K) Σ_{k=1}^{K} |Â(k) − A_actual(k)|/A_actual(k), where A_actual(k) denotes the component's actual amplitude in the kth sampled signal and Â(k) its estimate. The total relative errors of amplitude estimation using the different methods are depicted in Figure 2. It can be observed that the proposed method achieves better amplitude estimation than the others, with a TRE varying from 0.04% to 1.08% in this case. The value of the TRE is closely related to the amplitude ratio of harmonic to fundamental: the TRE of harmonics and interharmonics with a relatively high ratio is small, and vice versa. However, in the presence of interharmonics and frequency deviation, the performance of CS-DFT is less stable than that of the others, as the actual frequency components do not lie on the fine grid in the frequency domain. The average computation times of EMO-ESPRIT, MUSIC, CS-DFT and the proposed method are also shown in Table 2.
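Assuming the TRE averages the per-run relative amplitude error over the K sampled signals (a plausible reading of the definition, not verbatim from the paper), the metric can be sketched as:

```python
import numpy as np

def total_relative_error(a_est, a_actual):
    """TRE (%) of one frequency component over K sampled signals."""
    a_est = np.asarray(a_est, dtype=float)
    a_actual = np.asarray(a_actual, dtype=float)
    return 100.0 * np.mean(np.abs(a_est - a_actual) / a_actual)

# Hypothetical estimates of a unit-amplitude component over K = 4 signals
tre = total_relative_error([0.99, 1.01, 1.002, 0.997], [1.0, 1.0, 1.0, 1.0])
```

For these hypothetical estimates, the TRE evaluates to 0.625%, on the order of the per-component values reported in Figure 2.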
To evaluate the amplitude estimation accuracy under different signal-to-noise ratios (SNRs), the algorithms have also been implemented on a system constructed as follows. The signals are synthesized using the parameters given in Table 1 on the personal computer. Then, an arbitrary function generator AFG3102 (Tektronix, Beaverton, OR, USA) receives the instructions through Ethernet and outputs the signals with additive zero-mean Gaussian white noise. The noisy signals are collected by an oscilloscope DSO7032B (Agilent, Santa Clara, CA, USA) via a BNC (Bayonet Neill-Concelman) connector and sent to another PC for subsequent processing.
The signals at each signal-to-noise ratio have been acquired 100 times, and the total relative amplitude error in this case is defined as the total relative error of the average amplitude over all the frequency components. The total relative amplitude errors calculated using EMO-ESPRIT, MUSIC, CS-DFT and the proposed method under different signal-to-noise ratios are depicted in Figure 3. According to IEC standard 61000-4-7 [8], the signal-to-noise ratio is limited to not below 20 dB in the power system, so the SNR in this case is set from 20 dB to 60 dB. It can be noticed that the total relative amplitude errors of EMO-ESPRIT, MUSIC and the proposed method decrease significantly as the SNR increases, although the proposed method does not obviously outperform the other two methods when the SNR is beyond 60 dB and the noise is negligible. The proposed method is much more robust when the SNR is between 20 dB and 60 dB, a range that is very common in ordinary power systems. On the contrary, due to the relatively low frequency resolution of CS-DFT, which is restricted by its discretized and uniformly spaced frequency grid, the presence of fundamental frequency deviation and interharmonics causes inevitable amplitude estimation error in the CS-DFT results. Although CS-DFT has the potential to deal with low-SNR power signals, the frequency obtained by support recovery will, with high probability, not fall on the nearest frequency grid point if the SNR is below 20 dB, and thus the performance of frequency estimation will be significantly weakened.
Simulation Results from the PWM VSI Induction Motor Drive
Adjustable speed drives are widely used in various industries and can be regarded as main loads. They usually utilize interlinked frequency converters, which are sources of interharmonics. These interharmonics vary with the load frequency due to the propagation of the load current into the DC (direct current) link. To evaluate the proposed method on experimental signals closer to operating conditions, a PWM (pulse width modulation) VSI (voltage source inverter) induction 3 HP motor drive as a load in an IEEE 14 bus system is built using SimPowerSystems TM , and the interharmonics produced by the PWM adjustable speed drive are considered non-negligible. The induction motor drive mainly consists of a three-phase AC (alternating current) supply, a PWM inverter built using a universal bridge, an induction motor driving a mechanical load, and a three-phase diode rectifier converting AC to DC. The simulation of this system is discretized with a 2 µs time step and lasts for 1 s, and the motor speed set point is 1000 rpm. Specific simulation parameters are shown in Table 3. The distorted signal depicted in Figure 4 is captured during the steady operation period; after reducing the number of sampling points through downsampling, the data sampled at 5 kHz are sent to the algorithms. Resolution-enhanced harmonic and interharmonic measurement techniques such as MUSIC, EMO-ESPRIT, CS-DFT and the proposed method generally assume that the distorted signals are sparse in the frequency domain. According to the theoretical interharmonic frequencies of the source current of this motor drive [13], the current signals captured during the steady operation period can be considered sparse with a line spectrum. Because the frequency spectrum of the waveform during the acceleration period is not a line spectrum and is of no concern in this paper, only the waveform from the steady operation period is used as input to the algorithms.
For clarity, Figure 5 plots the frequency spectrum below 500 Hz; there are many more harmonic and interharmonic components not shown in the figure, and, for simple comparison, a zero-padded DFT is also plotted in the frequency spectrum. As the actual harmonic and interharmonic components in the signals of Section 5.2 are unknown, simply calculating parameters such as amplitudes and phase angles for comparison is meaningless. Therefore, instead of the TRE, the reconstruction error [9,22] of the measured signals is used as a more suitable evaluation criterion, RE = rms(y_ms − y_re)/y_rms, where y_re is the signal reconstructed using the result produced by a given method, y_ms is the measured signal, and y_rms is the root mean square of y_ms. The statistical characteristics of the reconstruction error computed from 100 random samples of the measured signals are shown in Table 4, and the high-resolution methods CS-DFT, EMO-ESPRIT, and MUSIC are used for comparison. Because the model order of the measured signal in this case is large and unknown, the model order for MUSIC is assumed to be 100. As CS-DFT and the proposed method are iteration-based, only an appropriate stopping criterion is set. The results show that the proposed method outperforms the other methods in terms of the reconstruction error. CS-DFT also performs well because the actual frequency components happen to lie near the interpolated frequency grid, although with relatively low frequency resolution. However, EMO-ESPRIT and MUSIC achieve unsatisfactory performance because the order of the covariance matrix is hard to tune and, for MUSIC, the estimated model order differs from that of the actual signals. To fulfill demands such as power quality analysis at the appliance level and the interaction between computerized instrumentation and physical facilities, a deeper insight into the (inter)harmonic sources can also be obtained.
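One common reading of this criterion is the RMS of the residual normalized by the RMS of the measured signal; a sketch with an illustrative reconstruction that misses one harmonic:

```python
import numpy as np

def reconstruction_error(y_re, y_ms):
    """Relative reconstruction error: rms(y_ms - y_re) / rms(y_ms)."""
    y_re, y_ms = np.asarray(y_re), np.asarray(y_ms)
    y_rms = np.sqrt(np.mean(y_ms ** 2))
    return np.sqrt(np.mean((y_ms - y_re) ** 2)) / y_rms

fs, N = 5000.0, 1000
n = np.arange(N)
# Measured signal: fundamental plus a small 5th harmonic (illustrative)
y_ms = np.sin(2 * np.pi * 50 * n / fs) + 0.2 * np.sin(2 * np.pi * 250 * n / fs)
y_re = np.sin(2 * np.pi * 50 * n / fs)   # reconstruction missing the 250 Hz term
err = reconstruction_error(y_re, y_ms)
```

Here the missed 0.2-amplitude harmonic yields an error of about 0.196, so the criterion directly penalizes unrecovered components.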
The theoretical interharmonic frequencies f_ih of the source current of this motor drive are given by the expression f_ih = |(p_1 m ± 1) f_1 ± p_2 n f_2|, where m = 0, 1, 2, ..., n = 1, 2, 3, ..., and p_1 = 6, p_2 = 2 are known a priori and denote the numbers of pulses of the rectifier and inverter, respectively; f_1 denotes the fundamental frequency of the supply, and f_2 is the load frequency. Therefore, the load frequency can be estimated from the frequency spectrum. From the spectrum presented in Figure 5, the fundamental frequency and the 5th and 7th harmonics are the main components of this distorted signal, and there are interharmonics at frequencies of 120.5 Hz, 179.5 Hz, 279.5 Hz, 320.5 Hz, and 420.5 Hz. With the fundamental frequency f_1 = 50.0 Hz, f_2 estimated from the above equation is about 35.3 Hz, very close to the load frequency calculated from the stator current waveform. It can be seen that accurate measurement of harmonics and interharmonics with the proposed method reveals intrinsic information about the electric equipment; it provides useful knowledge for subsequent analyses such as harmonic and interharmonic source location or fault location, and assists the control center in achieving more sophisticated control for (inter)harmonic mitigation or even power quality enhancement. Conceivably, this kind of processing and analysis technique will be significantly beneficial for digital modernization.
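The back-calculation of the load frequency can be sketched as a brute-force grid search over f_2, matching each measured line to the nearest member of the theoretical family (the search range, step, and (m, n) limits are assumptions):

```python
import numpy as np

p1, p2, f1 = 6, 2, 50.0                  # rectifier/inverter pulse numbers, supply frequency
measured = np.array([120.5, 179.5, 279.5, 320.5, 420.5])   # interharmonic lines (Hz)

def family(f2, m_max=3, n_max=3):
    """Theoretical lines f_ih = |(p1*m +/- 1)*f1 +/- p2*n*f2| for small m, n."""
    lines = set()
    for m in range(m_max + 1):
        for n in range(1, n_max + 1):
            for s1 in (1, -1):
                for s2 in (1, -1):
                    lines.add(abs((p1 * m + s1) * f1 + s2 * p2 * n * f2))
    return np.array(sorted(lines))

def fit_error(f2):
    th = family(f2)
    # total distance of each measured line to its nearest theoretical line
    return sum(np.min(np.abs(th - f)) for f in measured)

grid = np.arange(20.0, 50.0, 0.05)       # candidate load frequencies (Hz)
f2_hat = grid[np.argmin([fit_error(f2) for f2 in grid])]
```

At f_2 ≈ 35.25 Hz every measured line is reproduced exactly (e.g., 120.5 = 50 + 2·35.25 and 179.5 = 250 − 2·35.25), consistent with the ≈35.3 Hz load frequency quoted above.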
Laboratory Experiments on the Prototype System
The prototype system for power quality analysis in networked environment has been built under the laboratory environment. The nonlinear loads are distributed according to the application scenarios. For instance, fluorescent lighting and uninterrupted power supply (UPS) systems form the main loads for commercial users, laser printers and personal computers are chosen as the loads in offices, and the loads of AC/DC rotors emerge for industrial users. Voltage and current signals of all the consumers are simultaneously acquired by a National Instruments (Austin, TX, USA) Ethernet RIO Expansion Chassis NI 9148 equipped with voltage modules (National Instruments NI 9225, 50 kS/s sample rate, 300 Vrms measurement range, 24-bit resolution) and current modules (National Instruments NI 9227, 50 kS/s sample rate, 5 Arms measurement range). The signals are further processed by intelligent information processing techniques, and then the harmonic and interharmonic estimation results are transmitted through ZigBee network, and CC2430 from Texas Instruments (Dallas, TX, USA) is chosen as the core chip of the ZigBee end device. Given the possible power failure, the ZigBee end device can be powered by either an external power supply or an internal battery. All of the power quality data in the laboratory will be aggregated to ZigBee coordinators and further processed in the control center deployed on a personal computer. Furthermore, the control center can also be considered to be an information publishing platform that is capable of announcing power-quality-related alerts, electricity usage, price and so on. In this prototype, web services for obtaining the harmonic and interharmonic estimation results are deployed on the Internet, and for demonstration purposes, visualized results are depicted on a remote tablet computer (Nexus 10, Google, Mountain View, CA, USA) in real-time. 
All of the above components constitute the prototype for power quality analysis in a networked environment, and the following laboratory experiments are conducted on this prototype system.
The first result discussed in this section concerns the estimation of the fundamental frequency variation in the presence of harmonics. This experiment is conducted in the prototype system mentioned above, with a SANTAK (Shenzhen, China) C3K UPS serving as a standby supply. Initially, the microgrid, in which a DC motor is running, is connected to the power grid. Then, the external supply is interrupted at around 4.0 s. Finally, the UPS serves as the main supply until the end.
To capture the slight variation of the fundamental frequency under harmonic distortion, only current signal acquisition devices are deployed. To evaluate the estimation result in the presence of harmonics, the fundamental frequency of the distorted current signal is computed with the proposed method, EMO-ESPRIT, and MUSIC. The result of CS-DFT is not depicted because the frequencies obtained by CS-DFT are discrete on a fine frequency grid, and the interpolation factor of CS-DFT is advised in [23] to be not much larger than the order of 10; otherwise, the numerical conditioning tends to worsen. In this paper, the interpolation factor of CS-DFT is set to 10, which means its frequency resolution is 0.5 Hz. This resolution is inadequate for tracking the slight variation of the fundamental frequency. Therefore, for the estimation of the fundamental frequency variation, EMO-ESPRIT and MUSIC are chosen for comparison. Because all the methods are batch processing techniques, the analysis window length and moving step are set to 200 and 40 ms, respectively. The result is depicted in Figure 6. From the computed values of the fundamental frequency, it can be seen that the variations estimated by all methods are much more stable when the UPS takes over, because the microgrid system is then less complicated, and there is a sudden change at around 4.0 s just as the external supply is interrupted. Although the computed frequency varies only slightly, from about 49.99 Hz to 50.03 Hz, EMO-ESPRIT, MUSIC, and the proposed method all track the actual frequency well with little deviation, and it can be inferred that the result of the proposed method is most in accordance with the ground truth.
The second result of this section concerns parameter estimation for another important type of load. Experimental signals comprising harmonics and interharmonics are acquired with the laboratory setup of the prototype mentioned above. Apart from PWM VSI induction motor drives, regularly fluctuating loads such as welding machines and laser printers can also be harmonic and interharmonic sources. Here, laser printers are used as the fluctuating loads in the system; the load is fed by a 220 V, 50 Hz single-phase AC supply through a miniature circuit breaker. For current waveform measurement, an NI 9227 current input module is placed in series with the AC supply and the loads and gives a multi-channel measurement of the waveforms. Then, an Ethernet RIO with the NI 9227 inserted sends the multi-channel waveform data to the power quality analysis units for subsequent processing.
The current waveform depicted in Figure 7 is a measured time waveform of the regularly fluctuating load from the chosen channel. It can be observed from the spectrum in Figure 8 that, firstly, the amplitudes calculated by the proposed method are not equal to the peak values of the zero-padded DFT; they are usually larger than the peak values due to spectral leakage. Secondly, the frequencies calculated by the proposed method do not lie exactly on the discrete frequency grid of the zero-padded DFT. In addition, there are mainly fundamental, 3rd, and 5th harmonic components, each with two interharmonics distributed around them, and the frequency distribution is similar to the mathematical model of the regularly fluctuating load [13]. Thus, the modulation frequency can be inferred to be around 34.0 Hz. Apart from the applications presented in Section 5.2, non-intrusive appliance load monitoring is highly dependent on the information extracted from this waveform-level data, which is also addressed for power quality analysis in a networked environment, and the monitoring and control can be zoomed to the appliance level. It can be inferred that the success of this research will help create truly smart PQAs or smart meters.
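The inference of the modulation frequency can be sketched as the mean offset of each sideband from its nearest harmonic; the sideband values below are illustrative, not read from Figure 8:

```python
import numpy as np

harmonics = np.array([50.0, 150.0, 250.0])    # fundamental, 3rd, 5th (Hz)
# Hypothetical sideband pairs around each harmonic (Hz)
sidebands = np.array([16.1, 84.2, 115.9, 183.8, 216.0, 284.1])

# Offset of each sideband from its nearest harmonic
offsets = np.array([np.min(np.abs(harmonics - f)) for f in sidebands])
f_mod = offsets.mean()                        # inferred modulation frequency
```

With these hypothetical lines, the offsets cluster near 34 Hz, reproducing the kind of inference made above for the regularly fluctuating load.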
Discussion
Simulation and laboratory experiments have been conducted in this work. Apart from the enhanced resolution and accuracy shown in the experiments, some other technical characteristics should be noticed.
Compared to classic and state-of-the-art methods with high frequency resolution, the proposed method is remarkably accurate in parameter estimation. This proves that extracting harmonic and interharmonic components is beneficial to subsequent processing, and that resolution-enhanced frequencies can be acquired with only three DFT coefficients. Furthermore, the proposed method is relatively robust to noise and fundamental frequency deviation. It achieves the best performance among the four methods for SNRs between 20 dB and 60 dB, as observed in Figure 3, and noise in this range is quite common in electric power systems [8]. Although MUSIC and EMO-ESPRIT achieve better performance than the proposed method when the SNR exceeds 60 dB, and CS-DFT prevails when the SNR is below 20 dB, those regimes are of limited engineering relevance. This robustness is mainly due to the careful selection and improvement of the method in each stage. In the harmonic and interharmonic extraction stage, unlike traditional ICA, both sub-Gaussian and super-Gaussian sources are considered in SC-RICA. In the parameter estimation stage, DFT and ADALINE are inherently robust to noise. Furthermore, the frequency estimate of the proposed method lies in the continuous domain, which provides a more precise result than CS-DFT in the presence of fundamental frequency deviation and interharmonics.
From the viewpoint of computational complexity, the proposed method is less demanding than other existing high-resolution methods and uses the fewest computing resources among these four methods without sacrificing accuracy. By comparison, MUSIC, EMO-ESPRIT and CS-DFT remain suitable mainly for offline analysis, limited by their high computational burden [14,23]. As for the proposed method, although it is a hybrid, cascading method, it has been verified that one separation row can be achieved in only one iteration with the help of optimal step-size optimization in the harmonic and interharmonic extraction stage, and the separation results are fully acceptable for subsequent processing. In addition, fine frequencies can be estimated with little additional computation using the method described in Section 4.2, and ADALINE converges in no more than 10 iterations. Specifically, the most computation-intensive part is the harmonic and interharmonic extraction. To speed up the convergence of ICA, the optimal step-size optimization is performed using an exact line search technique; although the computational complexity per iteration of RobustICA (5D · N + 12N) is more than twice that of FastICA (2D · N + 2N), the convergence of RobustICA is remarkably faster, and, generally, an acceptable separation vector can be achieved in no more than one iteration. In addition, ADALINE and FFT are highly parallelizable and can both be implemented efficiently on a field-programmable gate array (FPGA). In contrast, the iterative greedy support recovery of CS-DFT adds considerable additional computation, and the computation times of subspace methods such as MUSIC and EMO-ESPRIT depend strongly on the dimension of the covariance matrix.
However, some restrictions have been found during the experiments. The individual components are extracted iteratively, and the ADALINE network's size and convergence speed depend on the number of components. Therefore, the proposed method still needs considerable time when there are excessive harmonic and interharmonic components. Furthermore, the proposed method is a batch processing technique and mainly focuses on power signals with a line spectrum; more precisely, short-term steady-state measurement of harmonics and interharmonics is the concern of this paper. From the experimental results, it can be inferred that the proposed method is appropriate for implementation on power quality analyzers for high-resolution analysis of harmonics and interharmonics, and once fine-grained information is accurately extracted, many power-quality-related applications will profit in this networked environment for power quality analysis.
Conclusions
In this paper, power quality issues in a cyber-physical energy system have been addressed, and a resolution-enhanced approach to harmonic and interharmonic estimation has been proposed for power quality analysis in a networked environment. Considering that microgrids are widespread and will experience higher harmonic and interharmonic distortion in the future, a power quality analysis framework is designed that deploys power quality analyzers closer to the consumers for lower-level sensing and control, and the wireless sensor network is also emphasized for more effective data transmission. Within this framework, the proposed method utilizes the single-channel version of RobustICA to extract harmonic and interharmonic components in a time-efficient way. Then, high-resolution frequencies are obtained from three DFT samples with little additional computation. Finally, amplitudes and phases are calculated with the ADALINE network to improve robustness to noise. The proposed method has been tested on synthetic and experimental harmonic and interharmonic signals, and accurate amplitude estimation and resolution-enhanced frequency estimation can be achieved time-efficiently in the presence of noise and fundamental frequency deviation. In terms of accuracy and computational complexity, the proposed method achieves a better tradeoff than the existing methods, and it is more suitable for hardware implementation for high-resolution measurement of harmonics and interharmonics. Although the proposed method reduces the computational burden considerably, its computation time is still much higher than that of most DFT-based techniques. Therefore, future work will focus on further reduction of the computation time.
Assessment of Genetic Stability in Human Induced Pluripotent Stem Cell-Derived Cardiomyocytes by Using Droplet Digital PCR
Unintended genetic modifications that occur during the differentiation and proliferation of human induced pluripotent stem cells (hiPSCs) can lead to tumorigenicity. This is a crucial concern in the development of stem cell-based therapies, where the safety and efficacy of the final product must be ensured. Moreover, conventional genetic stability testing methods are limited by low sensitivity, an issue that remains unsolved. In this study, we assessed the genetic stability of hiPSCs and hiPSC-derived cardiomyocytes using various testing methods, including karyotyping, CytoScanHD chip analysis, whole-exome sequencing, and targeted sequencing. Two specific genetic mutations, in KMT2C and BCOR, were selected from the 17 gene variants identified by whole-exome and targeted sequencing and were validated using droplet digital PCR. The applicability of this approach to stem cell-based therapeutic products was further demonstrated with validation according to the International Council for Harmonisation (ICH) guidelines, covering specificity, precision, robustness, and limit of detection. Our droplet digital PCR results showed high sensitivity and accuracy for quantitatively detecting gene mutations, whereas conventional qPCR could not avoid false positives. In conclusion, droplet digital PCR is a highly sensitive and precise method for detecting mutations with tumorigenic potential in the development of stem cell-based therapeutics.
Introduction
Stem cell-based therapies show promise for regenerative medicine and the treatment of rare diseases [1]. In particular, human induced pluripotent stem cells (hiPSCs) have garnered attention as starting materials for cell therapy owing to their pluripotency and capacity for large-scale manufacturing [2]. However, the complex and diverse processing steps involved, such as expansion and differentiation into target cells, often lead to the occurrence of various genetic mutations and pose challenges in terms of genetic instability [3]. From a regulatory perspective, it is essential to assess the genetic stability of not only the source cells but also the final differentiated target cells during the manufacturing process.
To ensure the genetic stability of stem cell-based products, regulatory guidelines from the United States Food and Drug Administration (FDA), the European Medicines Agency (EMA), the Korea Ministry of Food and Drug Safety (MFDS), and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) strongly recommend the use of appropriate testing methods [4][5][6][7]. Currently, the most commonly used methods for assessing genetic stability include karyotyping, fluorescence in situ hybridization (FISH), and comparative genomic hybridization (CGH) arrays. However, these conventional testing methods have limitations, including difficulties in handling large-scale cell differentiation, extended processing times, and low resolution, which make it difficult to detect small structural changes or subtle abnormalities at the chromosomal level and restrict analyses to certain genes or regions of interest. In addition, it is becoming increasingly important to understand the mechanisms that control DNA integrity, as DNA damage/repair processes [8] or chromatinolysis [9] can occur and eventually lead to cancer. These limitations render the detection of new variations or genetic abnormalities challenging [10][11][12]. To address them and achieve a more sophisticated assessment of genetic stability, the introduction of high-resolution and sensitive testing methods is essential. Recently, optical genome mapping (OGM) and next-generation sequencing (NGS) have been increasingly used to overcome the limitations of conventional methods [13]. OGM is a suitable alternative for detecting genomic structural variations; it can also monitor tumor-related copy number changes, which can potentially serve as biomarkers [14]. NGS methods offer higher resolution than conventional genetic stability assessment methods, enabling the efficient analysis of large genomes in a shorter timeframe than, for example, Sanger sequencing. Thus, NGS has been widely utilized in the fields of genomics research, diagnostics, and drug development [15][16][17][18][19].
In this study, genetic variations arising during the expansion of hiPSCs and their differentiation into cardiomyocytes (CMs) were systematically examined using established techniques, such as karyotyping and chromosomal microarray analysis (CMA), along with contemporary analytical methods, including whole-exome sequencing (WES) and targeted sequencing. After the identification of genetic mutations, the detected variants were validated and scrutinized for false positives by droplet digital PCR (ddPCR). This study aimed to evaluate genetic stability by cultivating and differentiating cells across three batches, considering potential genetic mutations during manufacturing and anticipating variations across passages [20].
The results of this study are expected to enhance the accuracy and reliability of safety evaluations by comparing traditional tests with new technologies. Additionally, this study aims to contribute to the improvement and standardization of genetic stability assessments in stem cell therapy, providing a solid foundation for its clinical application. Furthermore, the validation of ddPCR effectiveness and the proposal of applicable criteria for assessing the stability of stem cell therapies are expected to contribute to enhanced standardization and regulatory compliance in genetic stability assessments.
Generation and Cardiac Gene Expression of hiPSC-Derived CMs (hiPSC-CMs)
To compare the genetic stability of hiPSCs and hiPSC-CMs depending on the passage of hiPSC expansion during biomanufacturing, we differentiated hiPSCs into CMs at three different passages. Furthermore, to compare genetic stability across batches, we differentiated three batches of hiPSCs for each passage (early, intermediate, and late) (Figure 1A). The hiPSC-CMs were successfully generated from both the early-batch (EB) and intermediate-batch (IB) hiPSCs. Contractile beating of hiPSC-CMs with multiple contractile points was observed from 2 weeks after differentiation in both the EB and IB passage groups, followed by synchronized beating at 4 weeks. After 4 weeks of differentiation, the hiPSC-CMs aggregated and detached from the bottom of the plate, gradually forming a reticulated cardiac sheet (Figure 1B). Conversely, in the late-batch (LB) hiPSCs, cells detached from the bottom of the plate before reaching 2 weeks of differentiation, preventing successful differentiation. Consequently, the LB group could not be analyzed in subsequent experiments. The successful differentiation of hiPSCs into CMs was confirmed by mRNA expression patterns using reverse transcription quantitative real-time PCR (RT-qPCR). First, the expression level of the pluripotency gene POU5F1 was significantly decreased in hiPSC-CMs after 2-4 weeks of differentiation compared to that in hiPSCs (Figure 1C). For the cardiogenic mesoderm marker ISL1, the expression level increased in hiPSC-CMs at 2 weeks but subsequently decreased at 4 weeks of differentiation.
Cytogenetic Analysis
In the cytogenetic analysis of hiPSCs, G-banding was used to detect chromosomal variations and large structural abnormalities, and it revealed no apparent cytogenetic abnormalities upon visual inspection (Figure 2A). These findings indicate a normal karyotype. However, CytoScanHD chip analysis, utilized for identifying subtle structural abnormalities, uncovered a 1.7 Mbp gain in genomic copy number at chromosome 20q11.21 (Figure 2B). The observed B-allele frequency changes at 20q11.21 suggested an unequal replication of the two alleles on chromosome 20. This finding was corroborated by alterations in the Log2Ratio values; positive Log2Ratio values indicated an increase in DNA replication on the respective chromosomes. Thus, these changes in B-allele frequency and Log2Ratio indicate an increase in DNA quantity resulting from alterations in replication within this chromosomal region. The copy number variant (CNV) on chromosome 20q11.21 identified in hiPSCs encompassed the cancer-related gene ASXL1, as per the ClinVar database. Variants of this gene were detected across all groups (EB1, EB2, EB3, IB1, IB2, and IB3), regardless of cell passage and differentiation into CMs (Figure S1). This finding suggests a pathogenic impact on the genetic or functional aspects of this region and highlights the persistence of this CNV in hiPSCs under different conditions.
Whole-Exome Sequencing
Using WES, we identified various single base-pair variants in the exons of hiPSCs and hiPSC-CMs. The average depth of coverage of the analyzed WES data was approximately 100×. To identify putative high-impact mutations, we utilized the Ensembl Variant Effect Predictor (VEP) software. Subsequently, upon verification with the Catalogue of Somatic Mutations in Cancer (COSMIC) database, two mutations, MUC4 c.8032_8033insA p.(Pro2678fs) and KMT2C c.2263C>T p.(Gln755*), were identified as tier 1 variants (Figure 3A, Table 1). Notably, these mutations were consistently detected in both early and intermediate passages of hiPSCs and persisted throughout the differentiation process for up to 4 weeks.
Targeted Sequencing
We detected genetic variants in the exons of 344 solid tumor-related genes in hiPSCs and hiPSC-CMs using targeted sequencing (Figure 3B). The average depth of the targeted sequencing was 300×. A total of 15 variants were identified in the 344 genes, including 9 missense, 3 nonsense, and 3 frame-shift mutations. Among them, BRD4 c.3818G>A, CEBPA c.566_568delinsACC, KRT32 c.1205_1206inv, MADCAM1 c.800_801delinsCC, MYC c.857A>T, PRIM2 c.857_860delinsCTTG, RAD54L c.1093_1169+15dup, RREB1 c.2942T>G, and TSC1 c.3106G>T consistently appeared, even when hiPSCs were passaged or differentiated. In contrast, the frequencies of genetic variants in NOTCH4 c.17_18insTCTGCTG, ZNF141 c.973_975delinsTCA, ARID1A c.1137+2T>A, BCOR c.1487_1500del, FRG2B c.31_33delinsTAG, and SDHA c.1415A>C varied depending on cell passage and differentiation. We particularly focused on the BCOR variant, which was absent in the early passages of hiPSCs but was detected in intermediate passages and persisted throughout the differentiation processes (Table 1). The identified variant caused a frame shift, leading to the premature termination of the BCL-6 corepressor protein. These pathogenic variants persisted in hiPSC-CMs, although they were not detected by karyotyping or CytoScanHD analysis.
Real-Time PCR and ddPCR
PCR analyses were conducted to quantify the precise expression levels of the KMT2C and BCOR variants identified through WES and targeted sequencing. The KMT2C and BCOR variants in hiPSC-CMs were each detected using a TaqMan MGB probe (Figure 4A). In contrast to the ddPCR results (Figure 4B), false-positive signals were observed for both KMT2C and BCOR mutations in the no-template control (NTC) and wild-type (WT) 10 fg groups using real-time PCR (Figure 4C), in which the mutations should have been undetectable. Despite employing identical primer/probe sets in real-time PCR and ddPCR, the expression levels of the KMT2C and BCOR variants in real-time PCR did not exhibit a mutant-type (MT) concentration-dependent increase across the WT 10 fg, WT 5 fg + MT 5 fg, and MT 10 fg groups. In the real-time PCR analysis, the detected levels of KMT2C variants were similar in all sample groups. In contrast, ddPCR analysis demonstrated an increase in KMT2C variant expression in the EB and IB groups that was proportional to the differentiation period. The expression levels of the BCOR variants were significantly different between the EB and IB groups, according to both real-time PCR and ddPCR. In contrast to the targeted sequencing and ddPCR results, real-time PCR detected BCOR variants in the EB group, indicating false positives. BCOR variants were detected at higher levels in the IB group than in the EB group, regardless of PCR type. These results suggest that real-time PCR yields false positives and unclear intergroup comparisons when applied to the detection of low-copy variants, whereas ddPCR facilitates a more accurate detection of small variants, such as those in KMT2C and BCOR.
Validation
Precision
Precision was evaluated for the detection of genetic variants using four groups: NTC, WT 10 fg, WT 5 fg + MT 5 fg, and MT 10 fg. Inter-person precision involved two analysts (A and B) who independently measured the samples three times each, for a total of six repetitions. The relative standard deviation (%RSD) was calculated from the mean of the variant copies and the population standard deviation. The %RSD values were 3.13-5.67% for KMT2C and 8.01-8.75% for BCOR, indicating acceptable precision between the analysts because the coefficient of variation (%RSD) was within 20% (Table 2).
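The inter-analyst precision criterion above can be sketched numerically. The helper function and the replicate copy numbers below are illustrative assumptions, not the study's actual measurements; only the 20% acceptance threshold comes from the text.

```python
import statistics

def percent_rsd(copies):
    """%RSD (coefficient of variation) of replicate copy measurements:
    population standard deviation divided by the mean, times 100."""
    return statistics.pstdev(copies) / statistics.fmean(copies) * 100.0

# Hypothetical copies/uL from two analysts, three repeats each
# (six values total); the study accepted %RSD within 20%.
kmt2c_replicates = [102.1, 98.4, 100.7, 97.9, 103.2, 99.6]
acceptable = percent_rsd(kmt2c_replicates) < 20.0
```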
Robustness
Robustness of the test method was assessed under eight different PCR annealing temperatures, ranging from 53 to 57 °C. Three groups were measured: WT 10 fg, WT 5 fg + MT 5 fg, and MT 10 fg. The R² value for the slope of copies/µL was used to assess robustness at each temperature. The results were reliable regardless of annealing temperature. Robustness testing of KMT2C and BCOR indicated that the results were consistent across the annealing temperature range (53-57 °C), with all coefficients of determination (R² values) being 0.99 or higher.
Limit of Detection (LOD)
To evaluate the LOD, we prepared 12 analytical samples by performing a two-fold serial dilution of the MT control. The LOD estimation test was performed in triplicate at least three times. The %RSD values of the results from the three repeated experiments were examined, and the lowest concentration with %RSD < 5% was determined to be the LOD. Based on this criterion, the LOD for the KMT2C variant was determined to be 6.78 copies/µL, while the LOD for BCOR was 3.82 copies/µL (Figure 5B).
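The dilution-series LOD rule described above (the lowest concentration whose replicates keep %RSD below 5%) can be sketched as follows. The dilution series values are hypothetical and do not reproduce the reported 6.78 or 3.82 copies/µL figures.

```python
import statistics

def determine_lod(series):
    """series: (expected copies/uL, replicate measurements) pairs.
    Walk from the highest to the lowest concentration and keep the
    last one whose replicates satisfy %RSD < 5%, mirroring the
    criterion stated in the text."""
    lod = None
    for conc, reps in sorted(series, key=lambda t: t[0], reverse=True):
        rsd = statistics.pstdev(reps) / statistics.fmean(reps) * 100.0
        if rsd < 5.0:
            lod = conc
        else:
            break
    return lod

# Hypothetical two-fold dilution series of the MT control
series = [
    (100.0, [100.0, 101.0, 99.0]),
    (50.0, [50.0, 51.0, 49.0]),
    (25.0, [25.0, 26.0, 24.0]),
    (12.5, [12.0, 15.0, 10.0]),  # %RSD exceeds 5% here
]
```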
Specificity
We prepared five test groups with expected WT-to-MT ratios by mixing the WT and MT controls: 1:0, 0.75:0.25, 0.5:0.5, 0.25:0.75, and 0:1. In each group, the mutant signal was detected in proportion to the MT ratio. Both the KMT2C and BCOR assays showed coefficients of determination exceeding 0.99 for the linear regression line, confirming the ability of the assays to selectively detect the genetic mutations (Figure 5C).
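The specificity check above, a linear regression of measured mutant fraction against the expected mixing ratio with R² above 0.99, can be sketched with a small least-squares helper. The measured values below are hypothetical readouts, not the study's data.

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares fit y ~ slope*x + intercept and the
    coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - slope * xi - intercept) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

expected = [0.0, 0.25, 0.5, 0.75, 1.0]     # MT fraction in each mix
measured = [0.01, 0.26, 0.49, 0.74, 0.99]  # hypothetical assay readout
slope, intercept, r2 = linear_fit_r2(expected, measured)
```

An R² at or above 0.99 on such a dilution-linearity line is the acceptance level the text reports for both the KMT2C and BCOR assays.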
In summary, precision, robustness, LOD, and specificity testing demonstrated the reliable and consistent performance of the genetic analysis method, indicating its suitability for detecting genetic variants under different conditions and between different experimenters.
Discussion
The selection of starting materials for stem cell-derived therapy is a crucial step in the application of regenerative medicine. In particular, the passage number of the starting cells is an important factor. Numerous studies have underscored the impact of the passage number of source cells on both the differentiation efficiency and functionality of the final differentiated product [21][22][23]. However, researchers have established passage numbers that vary widely based on the final differentiated cell type and donor information, and there are currently no standardized criteria. In this study, hiPSCs at three passages were used to differentiate the cells into CMs. Among them, late-passage hiPSCs failed to differentiate, as they detached from the plate surface before mesoderm formation. This failure to differentiate was attributed to cellular senescence. Accordingly, subsequent genetic stability assessments were conducted in the early and intermediate passage groups.
The reprogramming of hiPSCs has been reported to lead to the deletion of tumor suppressor genes, posing a hurdle for the clinical application of cell therapies [24,25]. Moreover, it has been reported that both major and minor variations can occur during passaging and the differentiation process [26]. Therefore, we evaluated the genetic stability of identical hiPSC and hiPSC-CM samples using four different methods. The karyotyping results indicated the chromosomal stability of the hiPSCs. In contrast, CytoScanHD analysis (Affymetrix, Santa Clara, CA, USA) indicated a 1.7 Mbp copy number gain on chromosome 20, specifically encompassing the ASXL1 gene, which was not observed in the karyotyping analysis. The ASXL1 gene is involved in histone modification and chromatin remodeling, and heterozygous mutations in this gene can lead to premature truncations, potentially resulting in myeloid leukemia and Bohring-Opitz syndrome [27,28]. The utilization of an hiPSC line with this mutation as a source cell for regenerative medicine raises concerns about its potential for cancer induction.
Although the CytoScanHD chip method has the advantage of a higher resolution than karyotyping, it is limited in detecting small and rare single-nucleotide variations (<25 bp) that are not targeted by the probes embedded in the chip [10]. To compensate for this, NGS methods have recently been used to determine the genetic stability of stem cell therapies [13]. In this study, we investigated small variations, such as structural variations and single-nucleotide variations, that were not identified by cytogenetic analysis, using targeted sequencing and WES. BCOR encodes an epigenetic regulator involved in cell differentiation that condenses chromatin during chromatin remodeling [29,30]. Mutations in the chromatin regulator BCOR have been reported as causative factors in the development of neural and hematological tumors, non-small cell lung cancer, and endometrial carcinoma [29,31,32]. Additionally, genetic variants of KMT2C identified through WES have been reported to interfere with the process of opening chromatin in the DNA repair system, potentially leading to tumor development [31,32]. Based on these findings, it is evident that even 1 bp genetic variants can pose a risk of tumorigenicity. Thus, high-resolution genetic screening is essential when utilizing hiPSCs as starting cells for cell-based therapies.
Nevertheless, employing NGS for genetic variation detection in clinical contexts has several limitations, including the sensitivity of mutation detection [33], complexities in sequencing certain genomic regions [34,35], limitations of databases for interpreting novel or rare mutations [36], restrictions in identifying structural genetic variations and copy number variations owing to coverage limitations [37], and the potential occurrence of false positives [38]. Thus, secondary validation using PCR is imperative for variants discovered by NGS.
In this study, the presence and expression levels of the variants identified by WES and targeted sequencing were confirmed by real-time PCR and ddPCR. In contrast to the real-time PCR results, the ddPCR results showed no false positives in the control group. Moreover, in the hiPSC-CM group (at 0, 2, and 4 weeks), the ddPCR results clearly demonstrated a significant increase in variant expression levels during the differentiation period. These results were achieved through the execution of independent PCRs within numerous droplets, ranging from thousands to tens of thousands. Furthermore, the automated procedures involved in droplet formation and result analysis mitigate experimental inaccuracies, thereby ensuring the consistency and reproducibility of the results [39,40]. Consequently, ddPCR has extensive applications in diverse domains, including environmental surveillance [41,42], pharmaceutical candidate evaluation [43], and quality and safety evaluation within the food industry [44]. However, previous studies have shown that ddPCR follows the principles of a Poisson distribution, which limits its upper quantification range compared to real-time PCR [45]. Positive droplets may become saturated at high template concentrations. To improve accuracy, the sample can be diluted for analysis, or real-time PCR can be considered.
This study underscores the inadequacy of relying solely on NGS outcomes to detect minute genetic variations during hiPSC cultivation and CM differentiation. Instead, we advocate the additional use of ddPCR. However, a significant challenge in assessing genetic stability is the absence of established criteria for ddPCR, which is a highly sensitive and precise method. To address this gap, we conducted a validation study to establish the reliability and applicability of ddPCR in scrutinizing genetic stability, particularly in hiPSCs for cell therapy. Our results demonstrated precision among experimenters, robustness under various annealing temperatures, and high reliability based on the LOD value. Moreover, specificity testing confirmed the ability of ddPCR to selectively detect specific genetic variations.
Although this study focused primarily on the expression of BCOR and KMT2C, the additional genetic variations identified through NGS analysis should be substantiated in future studies. Furthermore, elucidating the clinical implications of the identified mutations requires an analysis of their actual influence on mRNA and protein expression, along with their impact on metabolism. Hence, further studies should systematically explore the functional ramifications of the detected mutations and address the potential considerations arising from their therapeutic use.
Furthermore, it is crucial to evaluate the pathogenicity of the various mutations identified through NGS. To assess the correlation between pathogenicity and an identified mutation, several risk assessments are currently conducted using the COSMIC database and various in silico tools, such as PolyPhen-2 (Polymorphism Phenotyping v2), MutPred2, SIFT (Sorting Intolerant From Tolerant), and MutationTaster. However, there is still a lack of established criteria for risk assessment, emphasizing the need for diverse assessment approaches and internationally harmonized standards.
In summary, to evaluate the stability of cell therapies, it is important to employ diverse analytical methodologies in a complementary manner to screen the entire DNA region. Additionally, to validate these results, we propose the utilization of ddPCR as a precise and reliable methodology. In this study, the genetic variations detected between the two passages of hiPSC-CMs diverged, whereas inter-batch disparities in genetic variants were not significantly different. Hence, we propose the continuous utilization of ddPCR for the quality control of differentiated cells following the screening of genetic mutations in the starting cells.
hiPSC Culture
Parental hiPSCs (FSiPS1, passage number #32, obtained from the National Stem Cell Bank, KNIH, Cheongju, Republic of Korea), which were reprogrammed from adult human fibroblasts, were cultured in Essential 8 (E8) medium (Gibco, Carlsbad, CA, USA) at 37 °C and 5% CO₂. For xeno-free conditions, vitronectin recombinant human protein (Gibco) was used to coat the six-well plates. When initially thawing the cryopreserved hiPSC stock, a ROCK inhibitor (5 µM Y-27632) was used to enhance the recovery of the cells. Subsequently, the ROCK inhibitor was removed, and passaging was conducted. The culture medium was changed daily, and the cells were subcultured (1:7 ratio) every 5 days using a gentle non-enzymatic cell dissociation method with Versene solution (Gibco). For the subsequent differentiation process, we divided the hiPSCs into three groups based on passage: (i) EB (Passage 1), from the initial thawing of hiPSCs immediately after obtaining the parental hiPSCs (FSiPS1) from KNIH; (ii) IB (Passage 11), from 10 passages after the initial thawing of FSiPS1; and (iii) LB (Passage 21), from 20 passages after the initial thawing of FSiPS1. Three independent cell differentiation experiments were performed for each group.
Differentiation of hiPSCs into CMs
The hiPSC-CMs were generated by differentiating hiPSCs from three different passages into cardiac lineage cells using the Gibco™ PSC Cardiomyocyte Differentiation Kit (Gibco), according to our previous report [46]. Briefly, hiPSCs were detached from the six-well plate using Accutase™ (Innovative Cell Technology, San Diego, CA, USA) and resuspended for singularization. Then, 1 × 10⁵ cells/well were seeded in a vitronectin-coated 12-well plate in E8 medium. The E8 medium was replaced every day for 4 days to allow for sufficient proliferation of the hiPSCs, and then the following media were sequentially applied: CM differentiation media A and B and CM maintenance medium. After 10 days of differentiation, the beating of CMs was observed.
Karyotyping
hiPSCs were cultured in a vitronectin-coated T-25 flask, and KaryoMAX™ Colcemid™ Solution (Gibco) was added when the culture reached 80% confluence. After incubation at 37 °C and 5% CO₂ for 1 h, the cells were detached with Accutase™ (Innovative Cell Technology). After centrifugation, the cell pellets were gently resuspended in 5 mL of 0.075 M potassium chloride solution (Sigma, St. Louis, MO, USA) and incubated at 37 °C for 25 min. After cell fixation with Carnoy's fixative solution (methanol to acetic acid ratio of 3:1), the supernatant was removed by centrifugation. The cell pellet was placed on a slide and treated with 50% H₂O₂ (Sigma) at 24 °C for 3 min, followed by incubation at 60 °C for 30 min. After incubation, the slides were stained with Giemsa solution (Sigma). Karyotyping analysis of the hiPSCs was performed using Gendix software (Seoul, Republic of Korea).
CytoscanHD Chip Analysis
To detect CNVs, genomic DNA (gDNA) was extracted from hiPSC-CMs using a QIAamp DNA Mini Kit (QIAGEN, Hilden, Germany). Then, 250 ng of gDNA was digested with NspI for 2 h at 37 °C. The digested DNA was purified and ligated with primers/adaptors at 16 °C for 3 h. Amplicons were generated by PCR on the ligation products using the primers provided by the manufacturer (Affymetrix, Santa Clara, CA, USA). PCR was conducted according to the following protocol: 94 °C for 3 min; 30 cycles of 94 °C for 30 s, 60 °C for 45 s, and 65 °C for 15 s; followed by extension at 68 °C for 7 min. The PCR products were then purified and digested for 35 min at 37 °C to fragment the amplified DNA. The fragmented DNA was then labeled with biotinylated nucleotides using terminal deoxynucleotidyl transferase for 4 h at 37 °C. The DNA was hybridized to a pre-equilibrated CytoScanHD chip (Affymetrix) at 50 °C for 16-18 h. After washing and scanning the CytoScanHD chips, data analysis was performed using AGCC software 4.0 (Affymetrix), followed by a filtration of cancer markers based on the pathogenic regions from ClinVar.
Whole-Exome Sequencing
One microgram of the input gDNA used in the targeted sequencing analysis was fragmented using an LE220 focused-ultrasonicator (Covaris, Woburn, MA, USA) to produce fragments of 150-200 bp. The fragmented gDNA samples were enriched using SureSelect XT Human All Exon V6 (Agilent Technologies, Santa Clara, CA, USA), according to the manufacturer's recommendations. At each step of library enrichment, the gDNA was purified using AMPure XP beads (Beckman Coulter, Krefeld, Germany). The quantity and quality of the library were measured with the Quant-iT™ PicoGreen™ dsDNA reagent and kit (Thermo Fisher Scientific, Waltham, MA, USA) and 1% agarose gel electrophoresis. Each DNA library was hybridized with SureSelect XT Human All Exon Capture Baits (Agilent Technologies) and eluted using Dynabeads™ MyOne™ Streptavidin T1 (Invitrogen, Waltham, MA, USA). The captured library was amplified on a Veriti™ 96-well Thermal Cycler (Applied Biosystems, Waltham, MA, USA) and qualified using TapeStation DNA ScreenTape D1000 (Agilent Technologies). The amplified products were pooled in equimolar amounts and diluted to a final loading concentration of 10-15 nM according to the SureSelect XT target enrichment system protocol. The final libraries were sequenced on a NovaSeq (Illumina, San Diego, CA, USA) platform, and the paired-end sequence data were mapped to the human reference genome using the BWA mapping program.
Targeted Sequencing
Total gDNA was extracted from hiPSCs and hiPSC-CMs at 0, 2, and 4 weeks after differentiation using a QIAamp DNA Mini Kit (QIAGEN). After quantification of the gDNA concentration, 200 ng of each gDNA sample was digested using a SureSelect Enzymatic Fragmentation Kit (Agilent Technologies) to create a library of gDNA restriction fragments. The enzymatically fragmented gDNA samples were enriched using the ONCO AccuPanel (NGeneBio, Seoul, Republic of Korea) according to the manufacturer's recommendations. At each step of library enrichment, the gDNA was purified using AMPure XP beads (Beckman Coulter) to selectively bind nucleic acids based on their size. Prior to sample pooling, the quantity and quality of the library were measured using a Qubit dsDNA BR Assay Kit (Invitrogen) and 1% agarose gel electrophoresis. The DNA libraries were pooled, hybridized with biotin-labeled RNA probes, and eluted using Dynabeads™ MyOne™ Streptavidin T1 (Invitrogen). The captured library was amplified on a Veriti™ 96-well Thermal Cycler (Applied Biosystems) and diluted to a final loading concentration of 1.5 pM. The final libraries with a 1% PhiX control were sequenced at a paired-end 150 bp (2 × 150 bp) read length on the MiniSeq platform (Illumina, San Diego, CA, USA). The FASTQ files containing the sequence data were analyzed using NGeneAnalySys v1.6.4 software (NGeneBio).
Control Template Generation
Control DNA sequences containing the target loci were generated by cloning them into a pBHA vector (Bioneer, Daejeon, Republic of Korea). DH5α competent cells (Thermo Fisher Scientific) were transformed with the resulting plasmids using a heat-shock procedure at 42 °C for 90 s, cultured on a selective agar plate with ampicillin (50 µg/mL), and incubated at 37 °C overnight. Plasmid DNA (pDNA) was extracted using an Exprep Plasmid SV Mini Kit (GeneAll, Seoul, Republic of Korea). The isolated pDNA was digested with the BsaI-HFv2 restriction enzyme (NEB, Ipswich, MA, USA) and purified using the Expin CleanUp SV Mini Kit (GeneAll). The final DNA fragments were verified by 1% agarose gel electrophoresis to confirm the successful generation of the control template DNA.
ddPCR Analysis
Total gDNA was extracted from hiPSCs and hiPSC-CMs at 0, 2, and 4 weeks after differentiation using a QIAamp DNA Mini Kit (QIAGEN), according to the manufacturer's recommendations. After quantification of the gDNA concentration, the ddPCR assay was conducted in a 20 µL reaction volume comprising 10 µL of ddPCR supermix for probes (no dUTP; Bio-Rad Laboratories, Hercules, CA, USA), 1 µL of the template DNA sample, 900 nM of each primer (forward/reverse; Thermo Fisher Scientific), and 250 nM of each MGB probe (VIC/FAM; Thermo Fisher Scientific). The primer and probe sequences for the target genes are shown in Figure S1. The ddPCR mixture was loaded onto ddPCR 96-well semi-skirted plates (Bio-Rad Laboratories). The plate was placed in an Automated Droplet Generator (Bio-Rad Laboratories) to partition each sample into droplets using Automated Droplet Generation Oil for Probes (Bio-Rad Laboratories). Next, the PCR was run on a thermal cycler (T100, Bio-Rad Laboratories) with the following cycling conditions: 95 °C for 10 min; 40 cycles of 94 °C for 30 s and 55 °C for 1 min; and 98 °C for 10 min. The number of droplets with or without the mutant target was read by a Droplet Reader (QX200, Bio-Rad Laboratories). The absolute number of copies was calculated using QX Manager Software 2.1 Standard Edition (Bio-Rad Laboratories) according to the Poisson correction. The quantification measurements of the target variants are presented as copies/µL of sample.
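The Poisson correction applied by the droplet reader software can be sketched as follows: the mean number of copies per droplet is λ = −ln(1 − p), where p is the fraction of positive droplets, and dividing λ by the droplet volume gives copies/µL. The ~0.85 nL nominal droplet volume and the droplet counts below are assumptions for illustration, not values from this study.

```python
import math

def ddpcr_copies_per_ul(positives, total, droplet_volume_nl=0.85):
    """Poisson-corrected target concentration from droplet counts.

    A droplet is negative with probability exp(-lambda), so the mean
    copies per droplet is lambda = -ln(1 - positives/total); dividing
    by the droplet volume converts this to copies/uL.
    """
    p = positives / total
    lam = -math.log(1.0 - p)                 # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # nL -> uL

# e.g. 2,000 positive droplets out of 15,000 accepted droplets
concentration = ddpcr_copies_per_ul(2000, 15000)
```

The logarithm is what accounts for droplets that contain more than one target copy, which is why positive-droplet counts saturate at high template concentrations, as noted in the Discussion.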
Evaluation of Differentiation into Cardiomyocyte Using RT-qPCR
Total RNA from hiPSCs and hiPSC-CMs at 0, 2, and 4 weeks after differentiation was extracted using an RNeasy Plus Mini Kit (QIAGEN) according to the manufacturer's recommendations. After dissolving the final product in DEPC-treated water, the RNA concentration was quantified. Complementary DNA (cDNA) was synthesized using 1 µg of RNA and an iScript™ cDNA Synthesis Kit (Bio-Rad Laboratories) according to the manufacturer's instructions. RT-qPCR was performed using the cDNA template and a QuantiTect SYBR Green PCR Kit (QIAGEN). The PCR was run on a real-time PCR system (7900HT Fast Real-Time PCR System, Applied Biosystems) with the following cycling conditions: 50 °C for 2 min and 95 °C for 15 min, followed by 40 cycles of 94 °C for 15 s, 60 °C for 30 s, and 72 °C for 30 s. The primer sequences for the target genes are listed in Table S1. The relative expression of the genes was calculated and expressed as 2^−ΔΔCt using ExpressionSuite Software Version 1.2 (Thermo Fisher Scientific). Expression values were normalized to the expression of 18S rRNA as a housekeeping gene.
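The 2^−ΔΔCt (Livak) calculation performed by the analysis software can be sketched directly; the Ct values below are hypothetical and only illustrate the arithmetic.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt relative quantification: normalize the target Ct to the
    reference gene (here, 18S rRNA), then to the calibrator sample
    (e.g. the undifferentiated hiPSC group), whose relative
    expression is 1 by construction."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# A 2-cycle drop in target Ct relative to the reference corresponds
# to a 4-fold increase in relative expression.
fold_change = relative_expression(24.0, 10.0, 26.0, 10.0)
```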
Detection of Tumorigenic Variants Using Real-Time PCR
The quantification of variant copies obtained from ddPCR was compared with that obtained using the same custom MGB primers/probes in real-time PCR with the same amount of gDNA. The 25 µL real-time PCR mixture consisted of 12.5 µL of TaqPath ProAmp master mix (Thermo Fisher Scientific), 1 µL of DNA template, 900 nM of each primer (forward/reverse; Thermo Fisher Scientific), and 200 nM of each MGB probe (VIC/FAM; Thermo Fisher Scientific). The PCR was run on a real-time PCR system (7900HT Fast Real-Time PCR System, Applied Biosystems) with the following cycling conditions: 60 °C for 30 s for the pre-read; 95 °C for 5 min; 40 cycles of 94 °C for 15 s and 60 °C for 1 min for amplification; followed by 60 °C for 30 s for the post-read. The quantification of the target variants was performed using SDS 2.4 software (Thermo Fisher Scientific).
Validation of ddPCR for hiPSC-CMs
In accordance with the ICH guideline 'Validation of Analytical Procedures Q2', the validation parameters necessary for the detection of genetic variants using ddPCR were determined and validated. ddPCR was determined to be suitable for the quantitative analysis of each genetic variant. The following parameters were validated: specificity, precision, robustness, and LOD. The primer and probe sequences for the target genes are shown in Figure S1.
Conclusions
Our study highlights the crucial role of high-resolution genetic analysis, including WES and targeted sequencing, followed by the validation of identified genetic mutations with ddPCR, in stem cell-based cell therapy. During the differentiation of hiPSCs into cardiomyocytes, genetic variants were observed through various genetic analyses. The identification of potentially tumorigenic mutations underscores the need for robust genetic safety evaluations of hiPSC-based cell products. Validation through ddPCR not only confirms these mutations but also establishes a reliable method for assessing the genetic stability of hiPSC-derived cardiomyocytes. We propose the integration of high-resolution assays into standard safety evaluation protocols for hiPSC-based therapies, emphasizing the use of orthogonal validation methods to verify the potential tumorigenicity associated with genetic variants. This comprehensive approach is pivotal for advancing the translational potential of hiPSC-based cell therapeutics, ensuring safety and supporting their potential clinical applications.
Figure 2. Cytogenetic analysis of hiPSCs. (A) G-banding karyotyping results display the overall structure of the chromosomes. (B) CytoScanHD results illustrate chromosome structure and changes in copy numbers. In the smooth signal of the entire chromosome, the orange line indicates a 'Gain'. The magnified view of this region (highlighted by the red box) precisely reveals the location of the variant within chromosome 20. B-allele frequency, weighted Log2Ratio, and allele difference data elucidate the intricate variations in copy number in the identified mutation region. The table provides genetic details and the clinical significance associated with the copy number variant according to ClinVar. LOH, loss of heterozygosity; CN, copy number.
Figure 3 .
Figure 3. Next-generation sequencing in hiPSCs and hiPSC-CMs. (A) Mutations identified by WES. (B) Mutations identified by targeted sequencing. Genomic alterations are annotated according to the color panel on the right of the image: missense mutation (brown), nonsense mutation (sky blue), in-frame insertion mutation (green), splice site mutation (gray), and frame-shift mutation (yellow). The time frame for the analysis of samples is shown at the top of the image, from early batch (red) to intermediate batch (blue), and the gene names of each identified mutation are listed on the left side of the image.
Figure 4 .
Figure 4. KMT2C and BCOR variants expression using MGB TaqMan probes. (A) Illustration and sequences of primers and TaqMan MGB probes designed to determine the expression rates of the KMT2C and BCOR genes in wild-type (WT) and mutant-type (MT) conditions. The primer sequences are identical for both WT and MT. For the probes, WT is labeled with the VIC fluorescent dye (orange circle), while MT is labeled with the FAM fluorescent dye (blue circle). To enhance specificity, fluorescently labeled probes are quenched using the MGB (yellow circle)-eclipse quencher (gray circle). These probes are denoted PrWT and PrMT. Detailed sequences are provided in the table below. The variants were measured by two different molecular techniques: (B) ddPCR and (C) real-time PCR. The data represent the means ± SD of triplicates; * indicates p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001 compared to the WT 10 fg group or 0 week of each hiPSC-CM group. VIC, VIC fluorophore; FAM, FAM fluorophore; Q, nonfluorescent quencher; MGB, minor groove binding; F, forward; R, reverse; PrWT, probe for wild type; PrMT, probe for mutant type; ddPCR, digital droplet PCR; KMT2C, histone-lysine N-methyltransferase 2C; BCOR, B-cell lymphoma 6 protein corepressor; NTC, no template control; Conc, concentration; Rn, normalized reporter.
Figure 5 .
Figure 5. Validation of KMT2C and BCOR variants using ddPCR. (A) Robustness. The control sample was run by ddPCR using several temperatures for the annealing/extension steps (53-57 °C). The positive droplets are represented as blue dots, whereas negative droplets are black dots. Red lines indicate the separation between the different temperatures, and the pink line represents the set fluorescence threshold to distinguish positive and negative droplets. (B) LOD. Two-fold dilutions of the MT control were run in three replicates. The LOD was defined as the lowest concentration of target genetic variation that can be detected with a %RSD < 5%. (C) Specificity. The R² results from the experiments with gradient mixed control were > 0.99. Data are presented as means ± SD. LOD, limit of detection; Conc, concentration.
Table 1 .
Detection of KMT2C and BCOR variants using WES and targeted sequencing.
An Analytical Model for Optimizing the Optical Absorption of Graphene-Based Two-Dimensional Multilayer Structure
Two-dimensional (2D) materials are promising for optoelectronic devices but remain to be further investigated. Because of their atomic-scale thickness, these materials have far less than ideal absorption, limiting their deployment in practical optoelectronic applications. Graphene is a 2D material with a honeycomb structure; its unique mechanical, physical, electrical, and optical properties make it an important industrial and economic material. In this work, a simple analysis is performed for the reflectance, transmittance, and absorption properties of multilayer thin-film structures with graphene sandwiched between dielectric layers. Based on Maxwell's electromagnetic wave theory and coupled Fresnel equations, we investigate how to obtain maximum absorption for a proper choice of media and graphene layers. Numerical results show that this absorption is controlled by matching the thicknesses of the layers, the number of graphene layers, and the wavelength and angle of the incident electromagnetic wave.
Introduction
Graphene, a one-atom-thick sheet of carbon atoms, is one of the most promising nanomaterials and has interesting features and applications in optics, mechanics, and electronics [1][2][3][4][5]. Single-layer graphene, with an optical absorption of approximately 2.3% for incident light, is adequate for some photoelectric devices [6][7][8][9][10][11]. Of course, further applications need stronger absorption with enhanced interactions between graphene and light. Thus, there is an urgent need for research and investigations to improve and develop absorption mechanisms. The most common way to control absorption through intense resonances is surface plasmon polaritons (SPPs) [12][13][14][15]. Crucial for the optical performance of small nanoparticles and ultra-thin structures is frequently the availability of relevant surface plasmon excitations. Graphene can form strong SPPs, with close to 100% absorption in the mid-infrared to terahertz regions, using micro-nano structures or by controlling the chemical potential via an external gate or doping [12, 16-19]. An optical switching mechanism based on a gated graphene layer coupling to external radiation through SPPs has been described in Ref. [20]. Furthermore, enhanced absorption of the graphene layer can be realized by guided modes [21][22][23][24][25] and meta-surfaces [26][27][28][29]. Other ways to enhance the absorption of a graphene sheet in the visible spectrum have been reported. In Ref. [30], a maximum absorption of 60% was reported for a transverse magnetic (TM) wave with a monolayer resonant grating in the visible region. A broadband absorption enhancement (> 75%) was observed in a multilayer structure [31], and nearly 100% visible absorption for single-layer graphene was demonstrated in a multilayer, film-based, attenuated total reflectance configuration [32]. Perfect absorption (100%) in the visible and infrared regions was exhibited in an absorption cavity of graphene sandwiched between dielectric layers.
High-sensitivity sensing is realized when the symmetrical structure is slightly broken [33].
Graphene-based sensors are another important area, where it has been suggested [34] that graphene ribbons can convert molecular signatures to electrical signals based on the sensitivity of graphene plasmons to molecular analytes. We have the possibility of controlling the "optical" properties of graphene with a proper gate voltage [35] and/or doping; thus, a multitude of possible mechanisms are available for sensing and tunable optics over a broad frequency range [36].
If we were able to freely control the optical properties of a thin-film structure with the thickness of a monolayer of graphene, the maximum attainable light absorption would be dictated by the contrast of the surrounding media [37][38][39].
By tuning the geometrical properties and the effective dielectric function of the nanocomposite structure, the impedance of the system is matched to maximize the absorption [40][41][42][43].
Here we study another line of approach, not invoking surface plasmons and collective excitations, to determine optimal conditions for light absorption, i.e., the possibility to tune the optical properties of graphene layers by an appropriate choice of the dielectric environment. The proposed structure is simple and thus has the advantage of being non-polluting and reusable for experimental investigations. The simulated results have shown that the absorption is a function of incident angle and wavelength, dielectric thicknesses, and the number of graphene layers. We hope these results will provide useful suggestions for potential applications of graphene layers in the visible spectrum for optoelectronic devices.
Theoretical Model
This section summarizes our formalism for modeling multilayer structures. This calculation provides us with an effective approach for studying the transport of electromagnetic waves in an anisotropic layered device.
To calculate the reflectance, we apply Maxwell's equations, where E is the electric field with amplitude E0, c is the speed of light, and ε0 is the vacuum permittivity. We first consider a plane wave propagating through a non-absorbing medium with refractive index N2, which is incident on a nano-medium with refractive index N1. Plane waves normally incident on a plane boundary between two semi-infinite material regions are reflected and transmitted independently of their state of polarization. The amplitudes of the incident, transmitted, and reflected electric fields are, respectively, Ei, Et, and Er (Fig. 1). The tangential electric and magnetic fields must be continuous across the boundary. The Fresnel formulas for the reflection and transmission of light obliquely incident on a plane boundary are given in [44], where E∥ and E⊥ are the electric vectors parallel and perpendicular to the plane of incidence, θi and θt are the incidence and transmittance angles, and m is the ratio of the refractive index of medium 1 to that of medium 2 (m = N1/N2). Now, we consider our multilayer nanostructure (Fig. 2), made of FTO glass, silicon dioxide (SiO2), graphene (Gr), silver (Ag), and gold (Au) layers. Based on Maxwell's electromagnetic wave theory and coupled Fresnel equations [45], we can express the total reflection coefficient of a TM wave as in [33,44], and the transmission coefficient as in [46], where r01 is the reflection coefficient at the FTO/SiO2 interface, r12 at the SiO2/Gr interface, r23 at the Gr/Ag interface, r34 at the Ag/Au interface, r1234 is the total reflection coefficient above the SiO2 layer, and k1 is the longitudinal wave vector.
Using Snell's law invariance (Ni sin θi = Ni+1 sin θt) and Eq. (2), the coefficients r_{i,i+1} can be obtained from [46], where ki is the wave-vector component perpendicular to the interface for p-polarized light in medium i. For the different layers, the parameters εi, ni, and di are the relative dielectric constant, the refractive index, and the thickness, respectively; λ is the incident wavelength and θ is the angle between the incident electromagnetic wave and the z direction.
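The referenced equations do not survive in the extracted text. As a sketch, the standard forms used in such recursive Fresnel treatments of layered media (consistent with the notation r01, r1234, k1, d1 used in the text, with an e^{+ikz} phase convention assumed) are:

```latex
% TM (p-polarized) Fresnel coefficient at the interface between media i and i+1,
% with n_0 the refractive index of the incidence medium:
r_{i,i+1} = \frac{n_{i+1}^{2} k_i - n_i^{2} k_{i+1}}{n_{i+1}^{2} k_i + n_i^{2} k_{i+1}},
\qquad
k_i = \frac{2\pi}{\lambda}\sqrt{n_i^{2} - n_0^{2}\sin^{2}\theta}

% Interfaces are folded recursively; e.g. the stack below the SiO2 layer
% (thickness d_1, longitudinal wave vector k_1) combines with r_{01} as
r_{01234} = \frac{r_{01} + r_{1234}\, e^{2 i k_1 d_1}}{1 + r_{01}\, r_{1234}\, e^{2 i k_1 d_1}}
```

The same recursion, applied from the deepest interface upward, yields the total reflection coefficient of any finite stack.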
Results and Discussion
Using a theoretical approach, we study optical transport through multilayered structures. Three types of samples were studied using different combinations: (a) FTO/SiO2/Gr/Ag/Air, (b) FTO/SiO2/Gr/Au/Air, and (c) FTO/SiO2/Gr/Ag/Au. The graphene thickness is determined by the number of graphene layers L as d2 = 0.34 nm × L [4]. Here, we assume the refractive index of the glass is nFTO = 1.518, with nSiO2 = 1.46, nair = 1, nAu = 0.17 − 4.86i, and nAg = 0.43 + 2.455i. The refractive index of graphene is not a simple constant; it varies with conditions such as the number of graphene layers, incident wavelength, substrate, and doping density. Here, we employ a refractive index of 2.6 + 1.3i in our calculations and simulations, which is a basic value for undoped and unpatterned graphite cited by researchers exploring graphene properties [33,[48][49][50][51][52].
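The recursive Fresnel scheme the text describes can be sketched numerically. The following minimal Python implementation is not the authors' code; it uses the refractive indices quoted above, takes the metal indices with positive imaginary parts (an assumed sign convention), and evaluates structure (a) at normal incidence:

```python
import cmath
from math import pi, sin, radians

def reflectance_tm(n_list, d_list, wavelength_nm, theta_deg):
    """Reflectance |r|^2 of a layer stack for TM (p-polarized) light,
    computed with the standard recursive Fresnel (Airy) formula.
    n_list: refractive indices; first and last media are semi-infinite.
    d_list: thicknesses (nm) of the interior layers only."""
    sin_t = sin(radians(theta_deg))
    k0 = 2 * pi / wavelength_nm
    # longitudinal wave-vector component k_i in every medium
    kz = [k0 * cmath.sqrt(n ** 2 - (n_list[0] * sin_t) ** 2) for n in n_list]
    # TM Fresnel coefficient at each interface i / i+1
    r = [(n_list[i + 1] ** 2 * kz[i] - n_list[i] ** 2 * kz[i + 1])
         / (n_list[i + 1] ** 2 * kz[i] + n_list[i] ** 2 * kz[i + 1])
         for i in range(len(n_list) - 1)]
    # fold the stack from the bottom interface upward
    R = r[-1]
    for i in range(len(r) - 2, -1, -1):
        phase = cmath.exp(2j * kz[i + 1] * d_list[i])
        R = (r[i] + R * phase) / (1 + r[i] * R * phase)
    return abs(R) ** 2

# Structure (a): FTO / SiO2 / graphene (L = 10 layers) / Ag / air,
# with the index values quoted in the text (positive Im parts assumed).
n = [1.518, 1.46, 2.6 + 1.3j, 0.43 + 2.455j, 1.0]
d = [200.0, 0.34 * 10, 50.0]  # nm: SiO2, graphene, Ag
print(reflectance_tm(n, d, 550.0, 0.0))
```

Sweeping `theta_deg` from 0 to 90 produces reflectivity-versus-angle curves of the kind discussed below; the absorbance follows as A = 1 − R − T once the transmission coefficient is computed with the matching flux factor.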
To show how optimal conditions can maximize the absorptance in ultrathin films, we summarize our main results below.
FTO/SiO 2 /Gr/Ag/Air
Calculations and simulations of the reflectivity as a function of incidence angle are shown in Fig. 3. The calculated reflectivity decreases with increasing SiO2 thickness and number of graphene layers. As the incidence angle increases from 0° to 90°, the reflectivity curve first decreases, reaching its minimum at θ ≈ 80°, then increases (θi → 90°, cos θi → 0, r → 1); this holds as the thickness of SiO2 is decreased from 1000 to 200 nm and the number of graphene layers from 50 to 10. At higher incident angles, the electromagnetic wave is reflected back, so the reflectivity increases.
The basic principle for enhancing absorption in nano-films is to reduce the reflection and transmission of incident light from the thin absorbing layer [53]. The simulated transmission, reflection, and absorption curves are shown in Fig. 4, which exhibits how the values of T, R, and A evolve as θ increases from zero to π/2. A strong absorption peak occurs at θ = 73°. For θ < 70°, absorption increases due to reduced reflection; at higher angles, reflectivity increases and absorption weakens.
In the same way, we plot the reflectivity versus the thickness of the SiO2 and Ag layers (Fig. 5). The result shows a general trend of increasing reflectivity as the thickness of the d1 layer is increased up to 300 nm and the thickness of the d3 layer up to 70 nm. As shown in Fig. 3, the calculated reflectance decreases as the thicknesses of the SiO2 (d1) and graphene (d2) layers increase. Therefore, while R depends on the thicknesses of all layers, d3 (= dAg) plays the most important role and increases the reflection of incident light.
FTO/SiO 2 /Gr/Au/Air
To enhance the light absorption and emission of 2D materials, a variety of optical structures have been designed, for example, distributed Bragg reflector microcavities, metallic reflectors, dielectric super-absorbers, photonic crystal nanocavities, and plasmonic nanostructures [47,[54][55][56][57][58]. Investigating and designing multilayer nanostructured materials is necessary to confine light within 2D materials, increase light absorption, and improve the performance of optoelectronic devices. We now replace the silver layer in structure (a) with a gold layer [57][58][59][60]; that is, we consider the multilayered structure FTO/SiO2/Gr/Au/Air and carry out calculations on this new structure. In the near-infrared to visible range, broadband absorption enhancement is a necessity.
As indicated in Eqs. 2-5, the interface reflection coefficients r_{i,i+1} do not depend on wavelength, but both the numerator and the denominator of the total reflection and transmission coefficients r01234 and t01234 depend exponentially on the inverse of the wavelength. Therefore, the behavior of the reflectivity and transmittance, and consequently of the absorbance, depends critically on the wavelength range. Figure 6 shows that the absorbance decreases approximately linearly with increasing incident wavelength and decreasing incident angle. Instead of utilizing critical coupling as the absorption enhancement mechanism [47,61], we propose the use of suitable structural parameters, such as the incident angle, to obtain broadband absorption enhancement. The experimental total absorption in graphene structures shows a large dependence on the incident angle and wavelength [47]. To compare the absorbance of these structures, we plot A as a function of the structural parameters in the following. As shown in Fig. 7, the absorption conditions are examined in terms of incident angle and incident wavelength for the two structures, Au-coated (Gr/Au) and Ag-coated (Gr/Ag). Firstly, the comparison exhibits the role of the layers in the absorbance of these structures: the Au/air structure creates an enhanced and somewhat broader absorption than the Ag/air structure. Secondly, the absorbance decreases as the wavelength increases, while the absorbance peak appears at θ = 73° for both structures.
FTO/SiO 2 /Gr/Ag/Au
We continue our discussion of the theoretical absorption data for the FTO/SiO2/Gr/Ag/Au structure with an investigation of the effects of the structural parameters: the thicknesses of the layers, the number of graphene layers, and the incident angle. In Fig. 8, we observe that absorbance peaks appear and vary with increasing SiO2 thickness and number of graphene layers. As shown, enhanced broadband absorption is demonstrated at larger thicknesses and more graphene layers, for d1 = 500 nm and L = 58.
As shown in Fig. 4, the transmittance decreases as the incident angle increases. In the same way, we plot the transmittance as a function of the incident angle, the Au layer thickness, and the Ag layer thickness. The transmittance decreases with increasing incident angle up to 90° and increasing Au layer thickness up to 80 nm, although the incident angle plays a lesser role than dAu (Fig. 9a). While T depends on the thicknesses of all layers, d4 (= dAu) plays the most important role (Fig. 9b) and can effectively manage the transmittance values. Thus, the Au layer plays a more significant role than the incident angle and the Ag layer, and controls the fields outside the structure.
Finally, our calculations show that the wavelength and angle of the incident electromagnetic wave, the refractive indices n1, n2, n3, the thickness d1, and the number of graphene layers can effectively control the absorption of graphene (Fig. 10).
It should be mentioned that the Van der Waals interaction between neighboring layers depends strongly on the number of layers. Understanding the interlayer coupling and its correlation effects can be paramount for designing novel graphene-based heterostructures with interesting physical properties [56]. In addition, research has shown that design parameters such as graphene plasmons [61], antimonene/graphene structures [55], and the stacking period number of dielectric materials [53] can be effective strategies for manipulating light and enhancing absorption in multilayer heterostructures with various applications.
Conclusion
In summary, a multilayer graphene-based structure is proposed to achieve absorption enhancement. Large enhancement and tunability of light absorption in 2D materials is promising for ultra-thin optoelectronic devices that interact with light. We have shown the absorption A as a function of the relevant physical parameters. Through Maxwell's electromagnetic wave theory and coupled Fresnel equations, these relationships are demonstrated, and three types of samples were studied with different combinations. To achieve continuous control of transmission, reflection, and consequently absorption, the effects of the wavelength and angle of the incident light, the number of graphene layers, and the thicknesses of the different layers are investigated. We found that absorption increases with decreasing SiO2 layer thickness and increasing Ag layer thickness. As expected, absorption is enhanced for a large number of graphene layers. In addition, the Au-coated structure creates an enhanced and more broadband absorption than the Ag-coated structure. Meanwhile, the wavelength and angle of the incident electromagnetic wave can effectively control the light absorption; thus, with an appropriate choice of media, one can approach higher absorption.
Getting Rid of the Usability/Security Trade-Off: A Behavioral Approach
The usability/security trade-off indicates the inversely proportional relationship that seems to exist between usability and security. The more secure the systems, the less usable they will be. On the contrary, more usable systems will be less secure. So far, attempts to reduce the gap between usability and security have been unsuccessful. In this paper, we offer a theoretical perspective to exploit this trade-off rather than fight it, as well as a practical approach to the use of contextual improvements in system usability to reward secure behavior. The theoretical perspective, based on the concept of reinforcement, has been successfully applied to several domains, and there is no reason to believe that the cybersecurity domain will represent an exception. Although the purpose of this article is to devise a research agenda, we also provide an example based on a single-case study where we apply the rationale underlying our proposal in a laboratory experiment.
Introduction
A leitmotif of the cybersecurity literature is the inversely proportional relationship between usability and security: more secure systems will necessarily be less usable (or, simply, less easy to use), and usable systems will be more vulnerable to threats [1,2]. While organizations lean toward more secure systems and practices, users prefer systems and practices that provide more usability, giving up on adopting a secure approach and exposing themselves to more risk, even when they are aware of existing threats [3].
The first concept to clarify is what is meant by usability. An official definition of usability is provided by ISO 9241:2018, which defines it as "the extent to which a product can be used by specific users to achieve specific goals with effectiveness, efficiency, and satisfaction in a given context of use" [4]. Many efforts have been made in the last half-century to improve the relationship between user and technology, to make the use of a device effortless and reduce the learning curve. With the spread of procedures that can be performed online, the need to develop user-centered technologies has increased exponentially. Today, it is common to use the Internet to perform tasks that were previously done in specific contexts, mainly outside cyberspace. People are now purchasing services and products, obtaining information, and communicating through applications on personal devices. More than half of the world's population now uses the internet daily for numerous business and leisure activities [5], and the opportunities for cybercrime are naturally proportional to the prevalence of online activities. Cybercrime is steadily increasing, with a global annual cost of $600 billion [6]. An immediate example of an intervention to improve individual-technology interaction involving cybersecurity is saving login credentials, so that users do not have to remember them and can enter the system without typing them. In the face of increased usability, this practice exposes the user to the likelihood that a malicious actor gains access to the system and the library of saved passwords. Conversely, procedures such as creating complicated passwords or changing them constantly increase security at the expense of usability.
• Malware: more commonly known as a "computer virus", malware (short for malicious software) is any computer program used to disrupt the operations performed by the user of a computer.
• Ransomware: a type of malware that restricts access to the device it infects, demanding a ransom to remove the restriction. For example, some forms of ransomware lock the system and require the user to pay to unlock it, while others encrypt the user's files and require payment to return the encrypted files to plaintext.
• Crypto-jacking: a computer crime that involves the unauthorized use of users' devices (computers, tablets, servers, smartphones) to produce cryptocurrency. Like many forms of cybercrime, the main motive is profit. Unlike other threats, it is designed to remain completely hidden from the victim.
• Email-related threats: in this category of attacks, we find spoofing, spam, spear phishing, Business Email Compromise (BEC), whaling, smishing, and vishing. All these attacks share the same characteristic: they exploit the weaknesses of human behavior, human habits, and the vulnerability of computer systems to push individuals into becoming victims of an attack.
• Threats against data: this category includes attacks where a data breach or loss occurs, and sensitive and confidential data end up in an unprotected environment. Taking over other people's data is certainly one of the main goals of hackers, for many reasons, such as ransomware, defamation, extortion, etc. This type of breach can present in several ways: it can occur due to a deliberate cyber-attack, or personal and sensitive data can be spread incidentally.
• Threats against availability and integrity: these attacks aim to make information, services, or other relevant resources inaccessible by interrupting the service or overloading the network infrastructure.
• Disinformation and misinformation campaigns: the main difference between these two types is that the first refers to the diffusion of false information to intentionally deceive people, while the second concerns the dissemination of misleading, inaccurate, or false information without the explicit intention to deceive the reader. These campaigns reduce the general perception of trust and lead people to doubt the veracity of information.
• Non-malicious threats: a malicious user uses authorized software, applications, and protocols to perform malicious activities. This refers to the kind of threat in which the malicious intent is not evident, and control of the infected device takes place without the need to download malicious files.
• Supply-chain attacks: these involve compromising the weakest elements of the supply chain. The goal is to access source code or build/update mechanisms, infecting apps to spread malware.
Usable Security
When employing the term "usability", it is difficult to avoid falling back on expressions such as "ease of use", "simplicity", or "intuitiveness". However, the concept fails to be captured exclusively by these terms. Although we are often content to define usability as ease of learning and using an artifact, there are many ways in which the quality of user-technology interaction can be described and measured.
The ISO 9241 standard, introduced by the International Organization for Standardization in 1998 (and revised in 2018), describes "usability" as the degree to which specific users can use an artifact to achieve certain goals with effectiveness, efficiency, and satisfaction in a specific context of use [4].
Typically, effectiveness coincides with the achievement of goals, efficiency with the time it takes to perform a task, and satisfaction with users' subjective ratings. These are, of course, simplifications of a more complex concept. However, although this "standard" definition represents a compromise between different theoretical instances, it does not provide clear operational indications. The standard highlights an important aspect that is often overlooked by different theoretical and procedural approaches: usability is not a characteristic of the product itself but depends on the characteristics of the users of the system, on the goal that they intend to achieve, and on the context in which the product is used. Implicitly, the standard underlines how usability cannot be traced back to the presence/absence of characteristics but must be assessed considering the individual's subjective experience. Users, with their abilities and limitations, intend to achieve an objective using technology and wish to do so with the fewest possible resources, without, paradoxically, having to invest more because of the technology they use. Design, therefore, cannot disregard knowledge of the users' needs, limits, and potentialities, nor the careful task analysis that will have to be carried out for the use of a device in each context.
Information technology security also needs to be usable; the expression "usable security" indicates managing security information in the user interface design. In this context, usability is fundamental at different levels: (1) from the user's point of view, because it allows completing a task effectively, efficiently, and satisfactorily, avoiding errors that may cause security problems; (2) from the developer's point of view, as it is crucial for the success of a system; (3) from the management's point of view, considering that weak security mechanisms could represent a limitation to the usability of the system [22].
Security, then, is not a functionality divorced from usability but is related to it, and the designer's goal must be to ensure both security and usability while preventing one from compromising the other [22]. Furnell [23] (p.278) pointed out that "the presentation and usability of security features, in some cases, are less than optimal", requiring effort from the user to use them correctly. This effort is necessary but, in some cases, reduces the usability of a procedure and, therefore, discourages its use. Gunson and colleagues [24] reached the same conclusion when comparing the usability of two authentication methods in automated telephone banking. They conducted empirical research involving customers of a bank. The objective of the study was to compare the level of usability perceived by users using one-factor and two-factor authentication methods. Two-factor authentication is a security procedure in which users provide two different authentication factors to verify their identity.
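Two-factor authentication, as just described, pairs something the user knows with a second, time-varying factor. As an illustration of a common second factor (not part of the cited study), here is a minimal RFC 6238 time-based one-time password (TOTP) sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1).
    secret_b32: shared secret, base32-encoded (as in authenticator apps)."""
    key = base64.b32decode(secret_b32)
    # number of time steps since the Unix epoch
    counter = int((t if t is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → 287082
```

The server and the user's device share the secret and compute the same code independently, so the second factor changes every 30 seconds without any message exchange.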
The results confirmed how participants considered the two-factor authentication less easy to use but, at the same time, they were aware that it was the most secure method [24]. This example precisely describes the problem of managing usability and security, with the two components often being at odds. The consequences of mismanagement of this trade-off could result in procedures that are either at high risk of violation or, conversely, too complex. In the latter case, the excessive effort to engage in secure behaviors could lead to the emergence of less secure alternative behaviors [25].
Within companies and organizations, all the processes that ensure information security are designed from a top-down perspective, generating security policies that are not centered on the end-users. This has the paradoxical effect that the policies are ignored by users, who, if they adhered to them, would perform their tasks with difficulty. In contrast, a desirable approach to ensuring usable security should be to understand users' behaviors in order to generate viable security practices [26,27]. For example, Bravo-Lillo and colleagues studied the effect of habituation on security dialogue boxes, which leads users to ignore important messages due to repeated exposure to these dialogues.
The user reports reading them (by automatically pressing a button) but without paying attention to the content [28,29]. It seems clear that a "fulfillment" mode to the design of security features (i.e., security alerts must be presented, and the user must report having read them) does not work because it does not consider processes that involve the interaction between the system and the individual with their human characteristics, such as, in this case, the process of habituation to repeated stimulation.
One of the most studied topics in usable security is the problem of passwords, which sit in constant tension between usability and security. Password creation and entry interfaces are the most widely implemented authentication method in modern systems. Hundreds of thousands of websites and applications require users to enter a password to access a system [30]. The goal of passwords is twofold: the first is the security goal, which requires passwords to be complex, unique, and difficult enough that hackers cannot identify them; the second is the usability goal, which requires passwords to be easy enough for the user to remember [31,32]. Nowadays, many online services require passwords with a certain degree of complexity (e.g., at least one capital letter, a number, and a special character, and at least eight characters long), and security policies in organizations often demand password replacement after a certain time interval.
Moreover, policies often require the user to use different passwords for different systems. They suggest not using the same password to access multiple services and storing the login credentials in a place inaccessible to others. Once again, this is an example of a top-down policy that is not based on a user-centered design rationale.
When users feel overwhelmed by the system demands, they could dismiss the system itself [25]. People seem to carry out a cost-benefit analysis associated with safe behaviors [33]. Users will avoid spending too many cognitive and temporal resources on security if the perceived benefits are too low [34]. Consequently, system usability becomes a critical factor when analyzing why people behave unsafely in cyberspace [35]. Indeed, while ease of use seems to be correlated with security weakness, more secure systems are difficult to use. The challenge is to bring together security and usability, which are usually perceived as mutually exclusive [22].
In contrast to other contexts, where usability can be addressed as an independent goal, when security solutions are developed it is paramount that usability is evaluated in relation to security [36]. Mechanisms designed to ensure security should never prevent the user from performing the main task; rather, they should be designed to recognize human limitations and spare users from dealing with unusable systems [37]. However, attempts to combine usability and security are often limited to improving the transparency of security processes; they do not make the system usable, but make the communication of information and procedures usable [38]. While this is an entirely reasonable and worthwhile goal, there remains the insurmountable problem of users' tendency to override security practices that are perceived as obstacles to the effectiveness, efficiency, and satisfaction of their interaction with the system. In this paper, we propose an alternative approach to that adopted thus far in the literature: rather than trying to reduce the gap between usability and security, we suggest accepting the existence of the usability/security trade-off and ensuring adherence to security procedures by compensating with increased usability. To this end, it is appropriate to briefly describe the theoretical assumptions of behavior analysis that will clarify the proposed intervention model.
Behavior Analysis: A Primer
Among the different perspectives adopted in psychology to explain individuals' behavior and make predictions, Behavior Analysis has a role of primary importance. The behavioral view (see [39] for an extended discussion) is based on the idea that consequences are crucial in producing repeated behaviors. A person who plays slot machines will play again if he or she has won on other occasions. When behavior is selected by its reinforcing consequences, its frequency of emission increases. Conversely, behavior that is not followed by a reinforcing consequence decreases in frequency, up to extinction. This process is called "operant conditioning", and it is the primary modality by which organisms modify their behavior throughout their life experience. The term "operant" indicates that a behavior operates on the environment to produce effects. We dial a phone number to communicate with the person we want to contact. Communicating with the person, in this case, is a reinforcer, and the consequence of the behavior is the fundamental element in defining the operant. For a behavior to be defined as an operant, it does not matter whether we dial the phone number by typing the keys on the keyboard, by asking Siri, or even by asking another person to do it for us, as long as all these actions allow us to obtain the same effect. All these actions pertain to the same class of operants, even if they vary in their topography (i.e., in the form that the responses take). For this reason, we speak of response classes, which are united by the function they fulfill.
Taken as a whole, the events that signal the opportunity to enact a behavior (alternatively called discriminative stimuli), the class of operants, and the consequences that follow the operant behavior all constitute what is called the "reinforcement contingency" or "three-term contingency".
Discriminative stimuli have a paramount role in regulating operant responses. Still, they are not the cause of behavior; they are events that regulate behavior, because, in their presence, the latter has been previously reinforced. If the phone rings and we answer, our behavior does not depend on the fact that the phone has rung; rather, it happens because, on other occasions when the phone has rung, answering has put us in communication with the speaker. In this case, the consequence increases the likelihood of the behavior in the presence of the discriminative stimulus; therefore, we call this consequence a reinforcer.
Moreover, how reinforcement is delivered determines typical response patterns that are independent of the organism, the type of behavior, and the type of reinforcement used. Reinforcement schedules can be fixed-ratio (reinforcement is delivered after the production of a specific number of responses), variable-ratio (reinforcement is delivered after the production of a variable number of responses), fixed-interval (reinforcement is delivered after the production of a response at the end of a specific time interval), or variable-interval (reinforcement is delivered after the production of a response at the end of a variable time interval).
The simplest example of a reinforcement schedule is the Fixed Ratio 1 (or continuous reinforcement) schedule: each response is followed by a reinforcer. When we flip the switch to turn on the light, the light bulb turns on. Every time. In nature, however, reinforcement is often not continuous. Organisms engage in behaviors that are only occasionally reinforced, as reinforcement is intermittent; yet this very intermittence makes the production of the behavior particularly robust. The criterion that organisms adopt is "sooner or later it will work, as it has worked in the past".
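The four schedule types described above can be sketched as a small predicate that decides whether a given response earns reinforcement. The following is a minimal illustration; the function, its parameters, and the state dictionary are our own assumptions, not notation from the behavior-analysis literature:

```python
import random

def reinforced(schedule, responses, elapsed_s, state):
    """Return True if the current response earns a reinforcer.

    schedule: ("FR", n)  fixed-ratio      - every n-th response
              ("VR", n)  variable-ratio   - on average every n responses
              ("FI", t)  fixed-interval   - first response after each t seconds
              ("VI", t)  variable-interval- first response after a variable delay
    `state` holds the moving requirement for the interval/variable schedules.
    """
    kind, param = schedule
    if kind == "FR":
        return responses % param == 0
    if kind == "VR":
        if responses >= state["next_n"]:
            # draw the next requirement so that the mean ratio stays at param
            state["next_n"] += random.randint(1, 2 * param - 1)
            return True
        return False
    if kind == "FI":
        if elapsed_s >= state["next_t"]:
            state["next_t"] += param
            return True
        return False
    if kind == "VI":
        if elapsed_s >= state["next_t"]:
            state["next_t"] += random.uniform(0, 2 * param)
            return True
        return False
    raise ValueError(kind)
```

Under an FR 1 schedule (continuous reinforcement), every call returns True; under the variable schedules, the intermittence that makes behavior robust appears as a requirement that moves unpredictably.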
How can behavior analysis be usefully employed to help individuals cope with cyber threats? First, we need to reframe the role of usability in determining the use of security tools and procedures.
We know that safety tools are necessary but not sufficient for creating a safe system. Indeed, as reported by Furnell, Bryant, and Phippen [40], even if a tool is available, it is not always implemented. For example, low levels of software updating (weekly rates from 37% to 63%) are observed despite high rates of software installation (93% for antivirus software, 87% for firewalls, 77% for anti-spyware, and 60% for anti-spam), suggesting that human behavior is a crucial factor in prevention [41]. In this regard, in 2007, the Global Security Survey reported "human error" as the most frequently cited cause of information system failures [42]. The human component has also been considered a relevant factor in more recent years. The IBM Security Service [43], for example, observed that 95% of security breaches were due to human error. El-Bably [44] investigated the behavior of company employees in the UK and the US, reporting that the percentage of employees who had committed errors concerning security processes was slightly lower than 50%.
Recently, a study by Kennison and Chan-Tin [45] found a relationship between psychological features and safety behaviors, confirming the relevance of the human component in the implementation of safety behaviors.
Furnell [23] suggested that the correct use of a tool relies on awareness of its usefulness: if users do not understand or are not aware of the security risks, they are more vulnerable to incorrect behaviors. Moreover, users may be aware of the risk but may not know the correct behavior. Indeed, the more complex the tool, based on concepts such as cryptography, access keys, and digital signatures, the more of an obstacle it becomes, and the more people try to get around it. Password management is a clear example of this problem: a strong authentication method is highly demanding and increases user workload [46]; therefore, incorrect, low-demand behaviors are engaged in instead. When users feel overwhelmed by the system demands, they can dismiss the system itself [25]. As reported above, people seem to carry out a cost-benefit analysis associated with safe behaviors [33]. Users will avoid spending too many cognitive and temporal resources on security if the perceived benefits are too low [34]. Consequently, system usability becomes a critical factor in explaining why people behave unsafely [46].
As we reported at the beginning of this paper, the security/usability trade-off cannot be avoided, but we suggest that it can be exploited to devise design strategies for improving security. The behavioral perspective becomes crucial in this attempt. Recently, we have witnessed a renewed interest in the behaviorist model for understanding certain phenomena related to the use of digital technologies and for designing interfaces and procedures able to encourage the acquisition of new habits [47]. However, the marketing perspective with which these issues are approached has distanced itself from the conceptual and methodological rigor of behavior analysis, often jeopardizing fundamental concepts by using different terms for concepts that have a specific meaning (e.g., the use of the term "reward" to indicate reinforcement). This is not the place to address these issues. Still, it is important to emphasize that the complexity of the model underlying behavior analysis (or functional analysis) requires conceptual, terminological, and methodological rigor.
For example, there is much literature on the topic of "gamification" [48]: the use of game-like modalities to achieve learning objectives. The principle behind gamification is to use the dynamics and mechanics of gaming, such as accumulating points, achieving levels, obtaining rewards, and exhibiting badges. However, long before the term gamification made its way into the literature, these dynamics were well known (and still are) under the name "token economy". The token economy is a rather complex reinforcement system based on the accumulation of objects, namely tokens, that can eventually be exchanged for goods, services, or privileges (the "back-up" reinforcers). The operating principle of the token economy is similar to the monetary system; it is a set of rules that determines the value of an object without an intrinsic value (just like a coin or a banknote). As tokens become exchangeable, the value of the terminal reinforcers and the schedules employed to obtain both tokens and back-up reinforcers constitute the variables the experimenter can manipulate.
Ayllon and Azrin [49] were the first to implement the token economy as a behavior modification strategy. The first study was conducted on psychiatric patients, and the token economy was used as a reinforcement system to modify a range of behaviors. Subsequent studies [50] have supported and accumulated data on the efficacy of the token economy by applying it to settings other than mental health [51,52]. Today, the token economy is the strategy chosen for the treatment of autism spectrum disorders [53].
The token economy consists of six primary elements [54,55]:
• Target behavior: the behavior required for receiving tokens. This behavior must be objective and measurable.
• Token conditioning: the procedure through which the token is conditioned as a reinforcer.
• Back-up reinforcement selection: the method by which the activities that can be acquired through the token exchange are identified.
• Token production schedule: the schedule of reinforcement through which tokens are released.
• Exchange production schedule: a schedule that defines when tokens can be exchanged for back-up reinforcement.
• Token exchange schedule: a schedule that determines the cost of back-up reinforcement in terms of tokens.
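As an illustration only, the six elements can be read as parameters of a simple bookkeeping routine. All names, schedules, and values in the sketch below are hypothetical, not taken from the studies cited:

```python
from dataclasses import dataclass

@dataclass
class TokenEconomy:
    # token production schedule: FR 1 - one token per target behavior
    tokens_per_behavior: int = 1
    # exchange production schedule: exchange opens once 10 tokens are held
    exchange_threshold: int = 10
    # token exchange schedule: fixed cost of one back-up reinforcer
    backup_cost: int = 10
    tokens: int = 0
    backups_delivered: int = 0

    def target_behavior(self):
        """Called once per objectively measured target behavior."""
        self.tokens += self.tokens_per_behavior
        if self.tokens >= self.exchange_threshold:
            self.exchange()

    def exchange(self):
        """Trade accumulated tokens for back-up reinforcers."""
        while self.tokens >= self.backup_cost:
            self.tokens -= self.backup_cost
            self.backups_delivered += 1

te = TokenEconomy()
for _ in range(25):
    te.target_behavior()
# after 25 target behaviors: 2 back-up reinforcers delivered, 5 tokens left
```

Varying `tokens_per_behavior`, `exchange_threshold`, and `backup_cost` corresponds to manipulating the token production, exchange production, and token exchange schedules, respectively.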
Moreover, interventions based on the token economy allow assessing the effectiveness of the treatment at the level of the single individual. The effects that are reported in the psychological literature are almost always based on studies that examine average scores from measurements obtained on a high number of subjects and analyzed using techniques based on inferential statistics. Of course, there are different circumstances in which this is desirable, but single-subject designs [56] allow the examination of the time course of a phenomenon. Effects can be studied in more detail by comparing the baseline (i.e., how the subject behaved before treatment) with what happens after the introduction of the treatment and, subsequently, its removal. We believe that this research strategy may be more useful to better understand the dynamics of the interaction with security systems.
As an example, consider the following plot (Figure 1) showing the performance of a 23-year-old female subject in discriminating between suspicious and non-suspicious emails in an experimental study designed to be realistic but not real. The d-prime metric was used as the dependent variable. In Signal Detection Theory, the discriminability index (d-prime) is the separation between the means of the signal and the noise distributions. The task was to read 90 emails (30% of them were suspicious) and correctly categorize them. The "system" required the user to enter a password for each categorization. This was necessary to create discomfort for the subject. The subject performed the task for three days (Baseline), then entered the Treatment condition (three days), in which she received a bonus point for each correct identification (token). Whenever she correctly categorized 10 emails, she received access to the benefit (backup reinforcement) of not entering the password for the following 10 emails (and so on, until the end of the task). After three days of Treatment, the subject entered the Baseline condition again (no tokens collected, no back-up reinforcement for three more days). The plot shows how Treatment (providing a more usable system not requiring entering a password continuously) was effective for changing the quality of performance (improved d-prime). Reinforcement withdrawal led to a performance decrement, meaning that the behavior was not immediately generalized. This is only a first attempt to understand whether a token economy can be applied to the cybersecurity setting and should not be considered conclusive. Of course, a research program in this field requires many steps to assess the effect of as many variables as possible.
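The d-prime metric used above can be computed from the hit and false-alarm counts as z(hit rate) − z(false-alarm rate). The sketch below is illustrative only: the log-linear correction and the example counts are our own assumptions, not the actual data behind Figure 1.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to counts, 1 to totals) keeps
    rates of exactly 0 or 1 from producing infinite z-scores.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# hypothetical session: 90 emails, 27 suspicious (30%);
# 24 hits, 3 misses, 6 false alarms, 57 correct rejections
print(round(d_prime(24, 3, 6, 57), 2))
```

Higher d' means better separation between suspicious and non-suspicious emails; d' = 0 corresponds to chance-level discrimination.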
Figure 1. Performance of a subject discriminating between suspicious and non-suspicious email with (days 4 to 6) and without (days 1 to 3 and 7 to 9) reinforcement.
Discussion and Conclusions
The central point of our reflection is that security systems based on the token economy could be implemented by reinforcing secure behaviors, that is, offering usability as a backup reinforcement for secure behaviors. In practice, this would be a behavior-based security model that exploits the usability/security trade-off instead of reducing the gap between these two needs, an attempt that thus far has not been satisfactorily achieved.
The users would not be merely encouraged to adopt secure behaviors but reinforced for having put them in place. In this regard, it is important to remember that a system that needs to verify the user's credentials too often during an interaction cannot be defined as very usable. On the other hand, the redundancy of security measures is considered necessary to prevent unaware users from making mistakes. However, a user who provides continued evidence of awareness may "score points" and achieve a "pro" user status, making it possible to eliminate some restrictions to ensure greater usability. Would having access to greater usability improve safety behaviors? No activity can be defined as reinforcing a priori; only the observation of its effects on behaviors can tell us if it is a reinforcer. Therefore, it is essential to test this hypothesis by imagining a research agenda that includes specific experimental activities to answer the following questions:
• Will a reduction in the complexity of the interaction represent a reinforcer for the emission of secure behaviors? The answer to this question is not obvious; a reinforcing stimulus is defined not by its intrinsic properties but by the modification of the future probability of emitting the behavior. Therefore, it can be identified only post hoc, based on the effect that the consequence has on the behavior.
• Will the implementation of a token economy system be effective in achieving an increase in secure behavior in the context of cybersecurity, where the individual's task is to detect suspicious activity during the normal use of technology? The token economy has been used successfully in several fields. There is no reason to rule out that it could show its beneficial effects in the cybersecurity context as well. Of course, this remains to be proven.
• Will the possible beneficial effects of such a program be limited to obtaining tokens, or will they persist after a reinforcement program is completed? In educational contexts in which the token economy has been largely employed, the goal is the generalization of learned behaviors. It is critical to assess whether exposure to a reinforcement program needs follow-up activities to generalize safe behaviors.
• What is the most effective reinforcement schedule to achieve immediate and long-lasting effects? Reinforcement schedules can be based on the amount of behavior produced or on the interval required to achieve reinforcement. In addition, they can be fixed or variable. It would be unnecessarily complicated to deepen the discussion of reinforcement schedules here, but it is useful to point out that each type of schedule produces specific effects independently of the organism, the behavior, and the type of reinforcement.
• Will response cost (i.e., punishing insecure behavior) add anything? Reinforcement is a very powerful mechanism, much more so than punishment, but the combination of these two strategies is plausible for several practical reasons; encouraging safe driving does not detract from the need to impose fines on those who violate traffic laws.
These questions are just a first set that it is essential to answer, but many more could be formulated as knowledge in this area advances. Once the regularities in user behavior have been defined, the next step should be to move from laboratory research to implementing intervention strategies on real platforms. This paper focuses on password management, the authentication method used in almost every web application. Perhaps systems that do not require passwords may become widespread in the future and allow the problem of attacks like phishing to be overcome [57]. However, the research agenda devised here is not aimed at finding technical solutions for a specific type of system. On the contrary, our aim is to provide a general perspective that could be called "behavior-based cybersecurity" (as in behavior-based safety [58]). In this paper, we wanted to emphasize that it is necessary to start an experimental research program that goes beyond the role to which the human factor is often relegated in the field of cybersecurity: that of identifying differences between types of users, almost exclusively through self-report questionnaires.
Lebek et al. [59], who reviewed psychological approaches to cybersecurity, criticized the use of self-reports and suggested the use of "additional research methodologies such as experiments or case studies" (p. 1063).
The example based on the single case provided above is a starting point. The answer to all our questions is still a long way off.
Author Contributions: All authors reviewed the literature, drafted the manuscript, and critically revised and approved the final version before submission. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Influence of the Addition of Spruce Fibers to Industrial-Type High-Density Fiberboards Produced with Recycled Fibers
The growing production of wood-based panels and the linked consumption result in a need to substitute standard wooden raw materials. The shortage of wood availability, as well as increasing prices and a trend towards more environmentally friendly materials and processes, have encouraged producers of wood-based products to consider extending the life cycle of wood composites. In the present work, the influence of substituting pine with spruce in industrial high-density fiberboards containing 5% recovered fibers was studied. Samples containing 0%, 25%, 50%, and 100% spruce fibers were tested for their mechanical resistance and their interaction with water. Boards from all samples met the relevant standard requirements; however, the addition of spruce caused a decrease in mechanical properties, with homogeneity having the most significant influence. The modulus of rupture dropped by up to 6% and the internal bond by 47% for samples containing 50% spruce. The most significant drop (50%) was observed in surface soundness for samples made with 100% spruce. Regarding physical properties, thickness swelling increased by up to 19% with 50% spruce; on the other hand, water absorption decreased by up to 12%. The addition of spruce to industrial high-density fiberboards also negatively influenced the formaldehyde content, with an increase of up to 21% at 50% spruce.
Introduction
Wood, a raw material of natural origin that is wholly ecological and renewable, is becoming more popular not only in structural applications, furniture production, and interior furnishing, but also as a fuel. Because of this, the prices of round wood on the market rise rapidly and its availability tightens. Instead of solid wood, there is nowadays a significant demand for wood-based panels, such as plywood, particleboards, or fiberboards. These materials can replace solid wood in several applications, and their production is therefore increasing to meet market demand. Medium-density fiberboard (MDF) is one of the most widely used wood-based materials in the furniture industry [1]. Figure 1 shows the production of medium- and high-density fiberboards (MDF/HDF) over the last 18 years. From 2000 to 2018, the fabrication of fiberboards in Europe more than doubled, from 8.4 to nearly 18 Mm³. Together with wood-based panel production, the need for wooden raw material has also been increasing.
However, substitutes for round wood in the production of fiberboards have been studied; nowadays, materials such as recovered wood, newsprint, plantation wood species, straws, and recycled wood-based composites may be used as well [2]. In Poland, the main round wood species used for fiberboard production are pine (Pinus sylvestris L.) and spruce (Picea abies (L.) H. Karst), while alder, birch, or beech are less popular [3]. Moreover, the available forest area has increased from 6.5 million ha in 1946 to 9.2 million ha in 2018 (www.lasy.gov.pl), with the main species being pine (70%) and spruce (6%), the remaining species representing 24% (www.nadlesnictwo.pl). The availability of these species has played a primary role in their selection for MDF production; another factor influencing this decision is that the anatomical structure of pine and spruce wood provides good fiber quality during defibration [4].
While forest areas in Poland are continuously increasing, the above-mentioned increase in the consumption of round wood has affected its price. Based on data from the Polish State National Forest Holding, the cost of pine wood increased by 15%, from 47 $ m−3 in 2013 to 61 $ m−3 in 2017 (www.e-drewno.pl). Based on a report by the State Forests, the difference in the price of one m³ of pine and spruce wood varied from −1.9 $ m−3 in January and August 2019 to 2.4 $ m−3 in October 2019. This increase in the price of the raw wooden materials used for the production of MDF has motivated the inclusion of post-use boards in production lines. In general, the addition of recovered fibers negatively influences the physical and mechanical properties of MDF [5]. However, a previous study explored the introduction of recovered MDF at up to 20% of the final composition of the composite panels, with the final products meeting the relevant standard requirements [6].
The lower popularity of spruce fibers for the production of fiberboards is mainly due to their shorter length, but also to the more substantial amount of unfavorable dust produced during defibration, which is caused by the anatomy of spruce wood [7]. However, these properties also allow spruce fibers to be added to fiber mixes to produce boards with a better-filled structure, which also provides a smoother surface. Previous works have explored the possibility of producing MDF with black spruce tops [8] and black spruce bark [9], with the main drawback being the need to adapt the conditions of thermomechanical refining. In another work, particleboards and MDF were made of Norway spruce bonded with melamine-urea-formaldehyde (MUF) resin, showing a high internal bond in the case of MDF. Moreover, it is even possible to consider black spruce bark as a potentially suitable raw material for MDF production, provided that the thermomechanical refining conditions are properly adapted [9]. In work published by Salem, a 16 mm thick MDF board showed more than twice the internal bond (IB) strength and nearly twice the modulus of rupture (MOR) of a pine particleboard (PB) produced in the same thickness [10].
In the present study, 2.5 mm thick high-density fiberboards (HDF) with a 5% share of recovered HDF, bonded with MUF resin, were produced under industrial conditions. The main mechanical properties of HDF, namely the moduli of elasticity and rupture, the internal bond, and the surface soundness, were evaluated. Moreover, surface water absorption, thickness swelling, free formaldehyde content, and surface roughness were studied. The goal of this investigation was to determine the influence of different amounts of spruce wood, added as a raw material for high-density fiberboards made under industrial conditions, on several fundamental mechanical and physical properties of the panels, as required by the appropriate standards.
Materials
Pine (Pinus sylvestris L.) and spruce (Picea abies (L.) H. Karst) wood were obtained from Polish forests and kept at a wood yard for three months to acclimate. A commercially available melamine-urea-formaldehyde (MUF) resin was used, with a melamine content of 5.2%, a molar ratio of 0.89, and a solid content of 66.5%. Fibers were produced on an industrial Metso Defibrator EVO56 from debarked round wood, as well as from recycled HDF (5%) from offcuts and leftovers added to the feeding conveyor.
Adjustment of Wooden Material Amount Dosage
An exact wood mix was prepared in advance to dose the proper amount of spruce wood into the production of HDF. Depending on the sample, pine and spruce wood chips were mixed at the wood yard with a Liebherr front loader with a capacity of 30 t per hour on a dry wood basis. Four different shares of spruce were used: 0%, 25%, 50%, and 100%, named P1, P2, P3, and P4, respectively. All samples contained 5% of recovered HDF from cutouts.
Production of Fiberboards
High-density fiberboards were produced under industrial conditions. Thickness was kept at 2.5 mm, nominal density at 860 kg m−3, and formaldehyde emission was CARB 2 compliant. The hydrothermal parameters of the defibrator had a constant setup: a preheating pressure of 0.94 MPa, a preheating temperature of 180 °C, and a preheating time of 192 s. Paraffin emulsion was added into the defibrator milling chamber in the amount of 0.5%, calculated on the weight of the oven-dry fibers. Fibers were glued with the MUF resin in a blow-line high-steam-pressure system; the amount of dry resin calculated on dry wood was 11.0%. Urea content was 21.0% and ammonium nitrate (hardener) content was 3.0%, both calculated on the dry weight of the resin, considering a fiber mat moisture content of 10.7% ± 0.7%. Pressing was done with an industrial Dieffenbacher continuous press system with a press factor of 5.3 s mm−1, a press temperature of 220 °C, and a maximum unit pressure of 2.5 MPa; these parameters were constant for all produced boards. The average energy consumption was 145 kWh per ton of dry wood. These parameters resulted in an average fiber bulk density of all samples at the level of 23.94 kg m−3.
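The dosing percentages above can be converted into absolute amounts per tonne of oven-dry fiber. The following sketch simply applies the stated parameters (11.0% dry resin on dry wood, 21.0% urea and 3.0% hardener on dry resin, 0.5% paraffin on dry wood, 66.5% resin solid content); the function and its output format are ours, not part of the production recipe documentation:

```python
def dosing_per_tonne(dry_fiber_kg=1000.0,
                     resin_on_wood=0.110,      # dry MUF resin / dry wood
                     urea_on_resin=0.210,      # urea / dry resin
                     hardener_on_resin=0.030,  # ammonium nitrate / dry resin
                     paraffin_on_wood=0.005,   # paraffin emulsion / dry wood
                     resin_solids=0.665):      # solid content of liquid resin
    """Component masses (kg) per tonne of oven-dry fiber."""
    dry_resin = dry_fiber_kg * resin_on_wood
    return {
        "dry resin (kg)": dry_resin,
        "liquid resin (kg)": dry_resin / resin_solids,  # as-delivered resin
        "urea (kg)": dry_resin * urea_on_resin,
        "hardener (kg)": dry_resin * hardener_on_resin,
        "paraffin (kg)": dry_fiber_kg * paraffin_on_wood,
    }

for component, mass in dosing_per_tonne().items():
    print(f"{component}: {mass:.1f}")
```

For one tonne of dry fiber this gives 110 kg of dry resin (about 165 kg of liquid resin at 66.5% solids), 23.1 kg of urea, 3.3 kg of hardener, and 5 kg of paraffin.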
Raw Material Fraction
The fractions of pine and spruce wood chips were examined with an IMAL vibrating laboratory sorter with nine sieves of sizes 40 > 20 > 10 > 8 > 5 > 3.15 > 1.0 > 0.315 > 0 mm. For each fraction, 100 g of raw material was used. The sieving time was set to 5 min; results correspond to an average of three examinations.
Chips Moisture Content
The moisture content of the chips was examined according to the oven-drying method. The amount of material for each examination was ≈ 50 g, the oven temperature was 103 °C, and the heating time was a minimum of 4 h, continued until constant weight was achieved. The results shown correspond to an average of eight examinations.
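The oven-dry method reduces to a simple dry-basis formula, MC(%) = (m_wet − m_dry)/m_dry × 100. A minimal sketch of that calculation (the specimen masses below are illustrative, not measured values from this study):

```python
def moisture_content(wet_mass_g, dry_mass_g):
    """Dry-basis moisture content in percent from wet and oven-dry masses."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# e.g. a ~50 g chip sample that dries down to 47.7 g:
mc = moisture_content(50.0, 47.7)   # about 4.8%
```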
Fibers Fraction
The fractions of fibers produced with different spruce wood shares were examined on an ALPINE Air Jet Sieve e200LS according to DIN 66165. Briefly, 5 g of completely dry fibers was fractionated, with a sieving time of 120 s. The selected sieves were 125, 315, 630, 1000, 1600, and 2500 µm. The gathered results are shown as an average of three examinations.
Density
Density was determined according to EN 323 [11]; vertical density profiles of the produced HDF boards were analyzed on a GreCon DAX 5000 device following a procedure described previously [12].
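Density determination in the EN 323 style is specimen mass divided by volume; a small sketch converting bench units (g, mm) to kg m⁻³ (the example specimen is hypothetical):

```python
def board_density(mass_g, length_mm, width_mm, thickness_mm):
    """Panel density in kg/m^3 from specimen mass (g) and dimensions (mm)."""
    volume_m3 = (length_mm / 1000.0) * (width_mm / 1000.0) * (thickness_mm / 1000.0)
    return (mass_g / 1000.0) / volume_m3

# a 50 mm x 50 mm x 2.5 mm specimen weighing 5.375 g hits the nominal density:
rho = board_density(5.375, 50.0, 50.0, 2.5)   # 860 kg/m^3
```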
Mechanical Properties
HDF panels were conditioned under normal conditions (20 °C and 65% relative humidity) until they reached constant weight. Samples were then cut according to European standards [13]. The modulus of rupture (MOR) and modulus of elasticity (MOE) were determined according to the EN 310 standard [14], internal bond (IB) was established according to the EN 319 standard [15], and surface soundness (SS) was determined according to EN 311 [16]. All of the mentioned mechanical properties were examined on the IMAL laboratory testing machine, using 12 test specimens of each panel type for each test.
Surface Properties
Moisture content was determined according to EN 322 [17], thickness swelling (TS) according to EN 317 [18], and surface water absorption according to EN 382-1 [19] (each test was completed on 12 test specimens of each sample). Surface roughness (Ra) was characterized with a Surtronic 25 (Taylor Hobson) profilometer; the presented surface roughness results correspond to an average of 10 measurements for each examined surface. Free formaldehyde content was tested three times for each panel type according to EN 12460-5 [20] using a Hach Lange DR 3900 spectrophotometer.
Statistical Analysis
One-way analysis of variance (ANOVA) was conducted to study the effect of the above-mentioned parameters on the properties of the tested panels at the 0.05 significance level (α = 0.05). All statistical analyses were performed using IBM SPSS Statistics 22. The results of the statistical analyses are presented in Table 1.
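One-way ANOVA reduces to comparing between-group variance against within-group variance via the F statistic; a self-contained sketch of that computation (the two sample groups are illustrative, not data from this study; in practice the resulting F value is compared against the critical value at α = 0.05):

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova_F([[1, 2, 3], [2, 3, 4]])   # 1.5
```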
Results and Discussion
Results of the sieving of wood chips and rHDF are shown in Table 1. The retained spruce fractions were, in general, slightly higher than the reference values provided by the sieve manufacturer. The most significant difference between the pine and spruce fractions was noticed on the biggest sieve (40 mm), where the retained share was nearly 60% higher for spruce chips than for pine chips. Moreover, the fraction of spruce chips on the 10 mm sieve was more than 15% higher than the fraction of pine chips, while on the fifth, sixth, and seventh sieves (5 mm, 3.15 mm, and < 0.1 mm) it was ≈ 5% and ≈ 10% higher, respectively. The fractions of pine chips on the 20 mm and 8 mm sieves were ≈ 13% and ≈ 6% higher, respectively, than those of spruce chips. There were no significant differences between the average values of the sieving analyses for pine and spruce chips. The moisture content of pine and spruce chips was similar, with a difference of about 3%, the moisture content of spruce chips being slightly higher.
In Table 2, the fiber bulk density for each sample is presented. As can be seen, the fiber bulk density differs depending on the sample; this should be taken into account for selected properties of MDF, which might be influenced [21]. Bulk density depends on diverse characteristics of the wood fibers, such as fiber length and its distribution. Fibers produced from pure pine chips had a fiber bulk density of 23.41 kg m⁻³, and the addition of 25% spruce chips did not influence the fiber bulk density (23.38 kg m⁻³) considerably. The fiber bulk density of P3 was 22.49 kg m⁻³, which was 4% less than that of P1. P4, made with 100% spruce fibers, had a 12% higher bulk density than P1, which is made of pine fibers only (the only statistically significant difference between the tested samples). Although spruce and pine wood are similar [22], having fibers of comparable dimensions in length and width [4], the influence of spruce wood on fiber bulk density is noticeable and in line with previously published work [23]. For a better understanding of the influence of the fiber share, the mass fractions of different fiber sizes are shown in Table 3. The percentage of fibers on the 125 µm sieve was comparable for P1 and P2, at 62.2% and 63.1%, respectively; the largest portion was found in P3 (64.8%) and the smallest in P4 (60.2%). There is a correlation between the fiber bulk density and the mass fraction shares: P1 and P2 had sums of 84% and 83.7%, respectively, for the 125 and 315 µm sieves, while this sum was 87.1% for P3 and 80.4% for P4, showing an inversely proportional relationship between bulk density and the share of fine fibers. Moreover, P4 had the highest sum of fibers on the last four sieves (630, 1000, 1600, and 2500 µm), at 19.6%, while for P1 it was 16% and for P3 12.9%. The distribution of fiber fractions influences the performance of the HDF, as it is known that mechanical properties increase with increasing fiber size, whereas physical properties decrease [24].
The density profile distribution was evaluated, and the results are shown in Fig. 2; moreover, the average maximum surface layer density (SLD) and average minimum core density (CD) are presented in Fig. 3. In Fig. 2, the left side shows the top surface, while the right side shows the bottom surface of the produced HDF. The differences between the top and bottom surface densities were relatively small, on the level of 2-3% with respect to the bottom side. Based on the density profiles, all examined HDF boards had a similar shape that is characteristic for HDF panels [25], and there was no delamination in the middle. The highest surface density, 1130 kg m⁻³, was obtained for P4, while the lowest, 1102 kg m⁻³, corresponded to P3. The surface and core densities have different impacts on the final mechanical and physical performance [26], and differences may occur due to the processing parameters [27]; to avoid that, the processing parameters were kept constant in the present investigation. Although the difference between the minimum and maximum SLD was low (2.5%), it might be caused by the difference in fiber bulk density, as the values concur, being the highest for P4 (26.46 kg m⁻³) and the lowest for P3 (22.49 kg m⁻³). The CD of P1 was 850 kg m⁻³, which was the lowest, while P4 had the highest CD (868 kg m⁻³); this means that spruce HDF was, in general, denser than pine HDF. Another factor influencing the final properties is the difference between SLD and CD [28]; the highest difference was that of P2 (25%), while for P4 it was ≈ 23%. The lowest difference was that of P3, slightly above 22%, meaning that the panels made with 50% spruce wood had the most homogeneous density profiles. Figure 4 shows the modulus of rupture (MOR, bending strength) and modulus of elasticity (MOE). It can be highlighted that all the samples met the minimum MDF requirements according to EN 622 (> 23.00 N mm⁻²) [29].
The highest MOR was obtained for P1. While density has a major role in influencing mechanical properties, moisture content also has an impact, with higher moisture content affecting the properties of MDF negatively [31]. In this sense, although HDF boards produced with spruce had about 10% lower moisture content (4.76%) compared to P1 (5.23%), its influence was not observed. The most significant impact on the decrease in MOR came from the addition of spruce fibers rather than from the moisture content [32]. It should also be noted that the lowest MOR was that of P3, where the density of the surface layers (Fig. 3) was the lowest of all the tested panels. The lower MOR occurs because the stress-strain distribution during bending depends mostly on the strength of the surface layers.
Similar to MOR, the highest MOE was obtained for P1 (5880 N mm⁻²), and the negative influence of spruce fibers could be observed. The MOE of P1 was nearly 5% higher than that of P4 (5620 N mm⁻²). Unlike MOR, the MOE of P3 was not the lowest, being 4935 N mm⁻², 15% less than P1. The lowest MOE was that of P2, at 4635 N mm⁻², about 21% less than P1. This change in tendency might be explained by the 5% lower equilibrium moisture content (see Table 4) found in P3 compared to P2 [30]. Moreover, the smallest amount of fines in P3 (as reported in Table 3) decreases the surface area of the fibers, increasing the available resin per unit surface area [5], and the higher gluing per unit area positively influences mechanical properties of MDF such as MOE [33]. There were statistically significant differences between the average MOE values of P1 and those of P2 and P3, but not P4. Similarly, there were statistically significant differences between P2 and P4, while there was no statistically significant difference in MOE between P2 and P3.
The internal bond (IB) requirement specified in EN 622-5 (> 0.65 N mm⁻²) was met by all the samples, as shown in Fig. 5. The internal bond is also related to the CD, having a directly proportional relationship. The highest IB was obtained for P1 (1.33 N mm⁻²), which was nearly twice the IB of P3 (0.70 N mm⁻²), where the wood mix was the least homogeneous. The internal bonds of P2 and P4 were comparable (0.95 N mm⁻² and 1.00 N mm⁻², respectively), being ≈ 30% and ≈ 25% lower than P1. It can be stated that neither the panel core density nor the fiber bulk density had the most significant impact on the decrease in IB. Rather, the homogeneity of the wood mix might have the most significant impact on the final IB, with P1 samples having the highest IB [1,34]. Although adding spruce fibers caused a decrease in IB, higher strength was observed in samples with higher homogeneity of the wood mix (P1, P2, and P4). There was a statistically significant difference between the average IB values of P1 and those of the rest of the panels, while there were no statistically significant differences in average IB among the P2, P3, and P4 samples. Although minimum surface soundness (SS) requirements for HDF boards are not specified by European standards, customers demand this parameter to be > 0.80 N mm⁻² [35]. Considering this, SS was examined to evaluate the influence of spruce fibers. The behavior of SS was similar to that of IB. The highest SS was obtained for P1, at 1.72 N mm⁻². Moreover, the highest SS among boards containing spruce fibers was obtained for P4 (1.35 N mm⁻²), which is ≈ 22% lower than that of P1. The lowest SS was observed for P3 (1.20 N mm⁻²), being 30% less than P1 and 11% less than P4. P2 had a surface soundness of 1.29 N mm⁻².
From the density profile perspective [26], the highest SS should be obtained for the highest SLD; similarly to IB, this dependence was observed only for HDF boards produced with spruce fibers (P2, P3, and P4). Although the SLD peak for P1 was one of the smallest among the samples, boards produced only from pine wood achieved the best SS. These results could mean that the most significant influences on the surface soundness were the addition of spruce fibers and the homogeneity of the wood mix. The variations in SS might also be explained by differences in the anatomical structure of spruce and pine fibers, as spruce fibers have a ≈ 26% smaller diameter than pine fibers. Additionally, spruce fiber length is 16% shorter than pine fiber length. Moreover, since spruce fiber walls are much thinner than those of pine (2-3 µm compared to 3-11 µm) [36,37], spruce pulp is about 15% stronger than pine pulp [38]. Apart from the difference between panels P1 and P4, the statistical differences in SS among the samples were not significant. Figure 6 presents the surface roughness (Ra) of both faces, along with the formaldehyde content. Increased roughness of the HDF surface is one of the factors necessitating sealing materials during lacquering [39]. The surface roughness of P4 had almost the same values on both sides, while for the remaining samples, Ra was about 6% higher on the bottom side. This difference implies that the bottom side was more "open" compared to the top side, which might lead to higher water absorption [40]. On the other hand, the most "closed" surface was achieved for P4, with a top roughness of 2.90 µm and a bottom roughness of 2.89 µm; these values also concur with a high surface density, P4 having an SLD of 1130 kg m⁻³.
Regarding formaldehyde content, all the panels complied with the CARB 2 (California Air Resources Board) formaldehyde emission standard, which requires the formaldehyde content (FC), as examined by the perforator method, to be below 5.0 mg/100 g. However, an increase in FC was observed for mixtures of pine and spruce fibers. P1 had the lowest formaldehyde content (3.27 mg/100 g), while the highest FC was obtained for P3 (4.15 mg/100 g), 21% more than P1. However, increasing the spruce wood share to 100% (P4) did not cause a further rise in FC, but rather a decrease to 3.42 mg/100 g, which is ≈ 18% less than P3 and ≈ 5% more than P1. P2 had a formaldehyde content of 3.37 mg/100 g, ≈ 19% less than P3 and about 3% more than P1. The increase in FC could be caused by the addition of spruce wood itself because, depending on the wood age, spruce can have about 18% higher natural formaldehyde content than pine [41]. This is because, in general, spruce is harvested at an older age than pine in Poland (www.gios.gov.pl).
Thickness swelling results are presented in Table 4. Based on EN 622-5, the maximum allowed swelling after 24 h for boards < 2.5 mm is 45%; in this sense, all examined HDF samples met the requirement. However, furniture companies require swelling limited to < 35% [35], and not all board samples could satisfy this requirement. As can be observed in Table 5, the relationship between swelling (TS24) and board moisture content is inversely proportional [42]: as the moisture content of wood-based panels increases, their swelling decreases [39].
P1 had the lowest swelling (28.79%). The lowest MC was found for P3 (4.68%, ≈ 8% less than P1), whose swelling was the highest (35.68%), nearly 20% more than P1. TS24 depended on the moisture content itself, but the addition of spruce also increased swelling. The lowest swelling among boards produced with spruce was that of P4 (32.41%), although its equilibrium moisture content was not the highest among boards with spruce. P4 had a TS24 11% higher than P1 and 9% lower than P3, while its MC was 4.70%, comparable to the MC of P3. However, the MC of P2 was ≈ 4% higher compared to P3 or P4, and its TS24 (32.51%) was very similar to, though slightly higher than, that of P4. Moreover, wood mix homogeneity might have a positive influence on these final physical properties of HDF. The statistical analysis showed a statistically significant difference between the average TS24 values of P1 and those of the rest of the panels. Surface water absorption results (Table 4) for the bottom side of the board were mainly 2-3% higher compared to the top side (except for P4, where it was about 8% lower). Opposite to the TS24 behavior, the minimum surface water absorption (WA) was achieved for P3 (147 g m⁻² for the top and 150 g m⁻² for the bottom surface), even though its surface density was the smallest (1102 kg m⁻³) and its MC was also the lowest (4.68%). In general, WA differs from TS24. The WA of P4 was the highest among the examined HDF boards for the top and bottom surfaces, respectively 198 g m⁻² and 183 g m⁻², which was 36% and 18% higher compared with P3; its highest fiber bulk density (26.46 kg m⁻³) might have influenced this result. Even though the moisture content of P1 was the highest (5.03%), its WA was about 12% higher compared to P3, while P2 had around 16% higher WA, although the surface density and moisture content of P3 and P2 did not differ much. These differences could mean that the spruce wood addition might have affected the final surface water absorption results.
Conclusions
Industrial fibers produced from 100% spruce wood had a 12% higher fiber bulk density than pine wood fibers and around 19% more fine fiber fractions. Changes in wood mix homogeneity caused a decrease in MOR of up to 6% in P3. Both P1 and P4 had comparable MOR, which was 2.5 times higher than required by EN 622-5. Spruce wood addition caused a decrease in IB; the most significant drop (47%) occurred for the least homogeneous wood mix (P3), while P4 had a 25% lower IB compared to P1. Spruce wood addition caused a decrease in SS; the most significant drop (30%) again occurred for the least homogeneous wood mix (P3), while P4 had a 22% lower SS compared to P1. Spruce wood addition caused an increase in TS24; the most significant rise (19%) occurred for P3, and P4 had an 11% higher TS24 compared to P1. Spruce wood addition caused an increase in WA; however, in P3, WA was decreased by 12%. P4 had, on average, an 11% higher WA compared to P1. Spruce wood addition caused an increase in FC; the most significant rise (21%) was observed for P3, and P4 had a 5% higher FC compared to P1. It can be concluded that spruce wood is a suitable raw material substitute for pine wood for industrial HDF production, meeting the EN 622-5 standard requirements. However, the results show that there is no straightforward, predictable influence of the spruce wood fiber content on the properties of HDF panels produced with recycled fibers.
Multilabel Image Classification with Deep Transfer Learning for Decision Support on Wildfire Response
Abstract: Given the explosive growth of information technology and the development of computer vision with convolutional neural networks, wildfire field data information systems are adopting automation and intelligence. However, some limitations remain in acquiring insights from data, such as the risk of overfitting caused by insufficient datasets. Moreover, most previous studies have only focused on detecting fires or smoke, whereas detecting persons and other objects of interest is equally crucial for wildfire response strategies. Therefore, this study developed a multilabel classification (MLC) model, which applies transfer learning and data augmentation and outputs multiple pieces of information on the same object or image. VGG-16, ResNet-50, and DenseNet-121 were used as pretrained models for transfer learning. The models were trained using the dataset constructed in this study and were compared based on various performance metrics. Moreover, the use of control variable methods revealed that transfer learning and data augmentation can perform better when used in the proposed MLC model. The resulting visualization is a heatmap processed from gradient-weighted class activation mapping that shows the reliability of predictions and the position of each class. The MLC model can address the limitations of existing forest fire identification algorithms, which mostly focus on binary classification. This study can guide future research on implementing deep learning-based field image analysis and decision support systems in wildfire response work.
Introduction
Wildfires have become increasingly intense and frequent worldwide in recent years [1]. A wildfire not only destroys infrastructure in fire-hit areas and causes casualties to firefighters and civilians but also causes fatal damage to the environment, releasing large amounts of carbon dioxide [2]. To minimize such damage, decision makers from the responsible agencies aim to detect fires as quickly as possible and to extinguish them quickly and safely [3]. Wildfire response is a continuous decision-making process based on a variety of information that is constantly shared in a spatiotemporal range, from the moment a disaster occurs to when the situation is resolved [4]. Efficient and rapid decision making in urgent disaster situations requires the analysis of decision-support information based on data from various sources [5].
Video and image data are key factors for early detection and real-time monitoring to prevent fires from spreading to uncontrollable levels [6]. Over the past few decades, the use of convolutional neural networks (CNNs) in image analysis and intelligent video surveillance has proven to be faster and more effective than other sensing technologies in minimizing forest fire damage [7]. Nevertheless, several problems must be addressed in forest fire detection and response when using CNNs.
The first problem is that most of the research on wildfires using deep-learning-based computer vision is mainly limited to binary classifications, such as the classification of wildfire and non-wildfire images [8]. In other words, these models are focused on detecting forest fires but ignore other meaningful information, such as information on the surrounding site. Even though a wide range of regions can be filmed via unmanned aerial vehicles (UAVs) or surveillance cameras, information provisions for decision makers are limited in this single-label classification model environment because only one type of result can be obtained from one instance. Unlike single-label classification or multiclass classification, where the classification scheme is mutually exclusive, multilabel classification (MLC) does not have a specified number of labels per image (instance), and the classes are non-exclusive. Therefore, the model can be trained by embedding more information in a single instance [9].
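The non-exclusive labeling described above can be sketched with independent per-class sigmoid scores and a threshold, so one image may receive zero, one, or several labels (the class names and logit values below are hypothetical illustrations, not the study's actual label set):

```python
from math import exp

def sigmoid(z):
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + exp(-z))

def predict_labels(logits, classes, threshold=0.5):
    """Multilabel prediction: every class whose independent sigmoid score
    reaches the threshold is returned; the classes are not mutually exclusive."""
    return [c for c, z in zip(classes, logits) if sigmoid(z) >= threshold]

classes = ["flame", "smoke", "person", "vehicle"]
labels = predict_labels([2.0, 1.0, -3.0, 0.1], classes)   # ['flame', 'smoke', 'vehicle']
```

Contrast this with softmax-based multiclass classification, where exactly one label would be forced per image.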
In the context of wildfire response, information is shared to establish a common understanding of wildfire responders regarding the disaster situations they encounter [10]. Information on the disaster site is a vital data source that must be shared to enable timely and appropriate responses. Therefore, information concerning human lives and property at the site of occurrence must be considered to ensure effective and optimized response decisions by decision makers [11].
Another important problem is that the performance of the learning model can be degraded by overfitting owing to insufficient data [12]. The lack of large-scale image data benchmarks remains a common obstacle in training deep neural networks [13]. However, transfer learning and data augmentation can significantly enhance the predictive performance of binary classification models to overcome image data limitations. In particular, transfer learning (with fine-tuning of pretrained models) improves accuracy compared to scenarios where the parameters of the model are initialized from scratch (i.e., without applying transfer learning) [14].
The main purpose of this study was to develop a decision support system for wildfire responders through the early detection and on-site monitoring of wildfire events. A decision support system should perform two functions: (1) process incoming data and (2) provide relevant information [15]. The received data are limited to image data from an optical camera, and the results of deep learning can be analyzed. Therefore, we propose a transfer learning approach for the MLC model to address the following challenges:
1. Does the proposed CNN-based multilabel image classification model for wildfire response decision support show a convincing performance?
2. Are transfer learning and data augmentation methods, which are used to overcome data scarcity, effective in increasing the performance of the proposed MLC model?
3. Images taken from drones are usually collected at a high resolution, whereas the CNN-based result is output as a low-resolution image (224 × 224). How can the gap between these two resolutions be addressed?
4. How can the models be used to support forest fire response decision making?
In this study, it is significant that MLC was used to provide multiple pieces of information within the image frame, moving away from the binary or multiclass classifications mainly covered in previous studies. The reason for using this multi-information framework is to share various pieces of information from disaster sites with disaster responders in near-real time. In the model configuration, we tried to lower the error rate as much as possible by using data augmentation, transfer learning, the addition of similar data, and cross-validation. To minimize the resolution gap between the CNN input and the actual captured image, a method of dividing the image into patches and evaluating them was attempted.
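One way to bridge the resolution gap is to enumerate 224 × 224 patch origins over the high-resolution frame and classify each patch separately. The sketch below assumes non-overlapping crops with any edge remainder dropped; it is a generic tiling scheme, not necessarily the exact splitting used in the study:

```python
def tile_coords(width, height, tile=224):
    """Top-left corners of the non-overlapping tile x tile crops that fit
    inside a width x height image (any remainder at the edges is dropped)."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]

# a 448 x 448 frame splits into four 224 x 224 patches:
coords = tile_coords(448, 448)   # [(0, 0), (224, 0), (0, 224), (224, 224)]
```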
The backbone network of the MLC was constructed using VGG16 [16], ResNet50 [17], and DenseNet121 [18], which are mainly used in CNN-based binary classification. These models were retrained on a dataset built by researchers and were validated using 10-fold cross-validation. The size of the dataset used in the training model was increased by data augmentation to overcome the limitations caused by a lack of data. Finally, the model with the best performance among the three models was selected using the evaluation metric, and the result was visualized as a class activation map (CAM).
The remainder of this paper is organized as follows: Section 2 briefly summarizes previous studies on wildfire detection and response using image data and decision support systems. Section 3 presents the multilabel image classification, transfer learning model, and evaluation methods. The results of relevant experiments are analyzed and discussed in Section 4. Finally, the conclusions are presented in Section 5.
Related Work
Effective disaster management relies on the participation and communication of people from geographically dispersed organizations; therefore, information management is critical to disaster response tasks [19]. Because forest fires can cause widespread damage depending on the direction and speed of the fire, strategic plans are required to ensure prioritization and resource allocation to protect nearby homes and to evacuate people. In the past, limitations in data collection techniques constrained these decision-making processes, making them dependent on the subjective experience of the decision maker [20]. Recent advances in information technology have led to a sharp increase in the amount of information available for decision making. Nevertheless, human capability in information processing is limited, and it is difficult to process information acquired at the scene of a forest fire in a timely and reliable manner. To solve this problem, a forest fire decision-support checklist for the information system was developed [21], and machine-learning-based research has steadily increased in the field of forest fire response and management since the 2000s [11]. Analyzing wildfire sites with artificial intelligence can substantially reduce the response time, decrease firefighting costs, and help minimize potential damage and loss of life [5].
Traditionally, wildfires have mainly been detected by human observation from fire towers or detection cameras, which is unreliable owing to observer errors and time-space limitations [21]. Research on image-based automated detection that can monitor wildfires in real time or near-real time, depending on the data acquisition environment, using satellites and ground detection cameras has steadily increased over the past decade [22]. Satellites have different characteristics depending on their orbit, which can be either a sun-synchronous orbit or a geostationary orbit. Data from sun-synchronous satellites have a high spatial resolution but a low temporal resolution, which limits their applicability to forest fires. Conversely, geostationary satellites have a high temporal resolution but a low spatial resolution. According to previous studies, geostationary satellites can continuously provide a wide and constant field of view over the same surface area; however, many countries do not operate such satellites owing to budget constraints, and the data are affected by atmospheric interference and low spatial resolution [23]. Therefore, satellites are not suitable for the early detection of small-scale wildfires [24]. On the other hand, small UAVs and surveillance cameras incur much lower operating costs than other technologies [25], offer high maneuverability, flexible perspectives, and resolutions, and have been recognized for their high potential in detecting wildfires early and providing field information [26].
Previous studies combined image data and artificial intelligence methods to improve the accuracy of forest fire detection or to minimize the factors that cause errors. Damage detection studies often face the problem of data imbalance [27] and have previously relied only on images downloaded from the Web and social media platforms [28,29]. Online image databases, such as the Corsican Fire Database, have been used in binary classification as useful test sets for comparing computer vision algorithms [30] but are still not available for MLC. Recent studies have demonstrated the effectiveness of data augmentation and transfer learning for generalizing the performance of CNN models [31] and have shown their potential in object detection and MLC.
Because neural networks cannot generalize to untrained situations, the importance of the dataset has been steadily emphasized for improving model performance. During model verification, smoke is too similar in color and texture to other natural phenomena such as fog, clouds, and water vapor, and smoke is difficult to detect at night; algorithms relying on smoke detection therefore generally suffer from problems such as high false alarm rates [31,32]. The current study addressed this by including such hard-to-differentiate objects in the dataset.
Data Augmentation
Data augmentation is the task of artificially enlarging the training dataset with modified or synthesized data before training the CNN model; it lowers the test error rate and significantly improves the robustness of the model against overfitting. The most popular and proven practices for data augmentation are affine transformations, including rotation and reflection of the original image, and color modifications, including brightness transformation [33]. In this study, the image dataset was pre-processed with reflection, rotation, and brightness changes, which are data augmentation techniques commonly used in previous studies, to increase the richness of the training dataset.
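The three augmentation operations used (reflection, rotation, brightness change) can be sketched on an image represented as a grid of pixel intensities; real pipelines would use an image library, but the transforms themselves are this simple:

```python
def hflip(img):
    """Horizontal reflection: reverse each row of pixels."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def brighten(img, delta):
    """Shift brightness by delta, clipping to the 0-255 range."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

img = [[1, 2], [3, 4]]
augmented = [hflip(img), rotate90(img), brighten(img, 10)]
```

Each transform yields a new, label-preserving training example, which is what enlarges the dataset without new field imagery.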
Transfer Learning
Transfer learning is another approach to prevent overfitting [34]. It is a machine learning method that uses the weights of a pretrained model as the weights of the initial or intermediate layers of a new objective model. In computer vision, transfer learning mainly refers to the use of pretrained models. This method is widely used for tasks that lack data availability [35]. There are two representative approaches to applying a pretrained model: the fixed feature extractor and fine-tuning. The fixed feature extractor trains only the fully connected layer of the pretrained model and freezes the weights of the remaining layers. It is mainly applied when the amount of data is small but the data used for pretraining are similar to the training data of the target model. This approach is uncommon in damage detection domains such as wildfire monitoring images because of the dissimilarity between ImageNet and the given wildfire images.
On the other hand, fine-tuning not only replaces the fully connected layers of the pretrained model with a new one that outputs the desired number of classes to re-train from the given dataset but also fine-tunes all or part of the parameters in the pretrained convolutional layers and pooling layers by backpropagation. It is used when the amount of data is sufficient, even if the training data are not similar. This is shown in Figure 1.
The pretrained CNN models from ImageNet [36], which contains 1.4 million images in 1000 classes, were used for transfer learning. However, as the ImageNet label set contains no labels similar to flame, smoke, or the other on-site objects needed to assist disaster response, fine-tuning was introduced.
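In PyTorch, the difference between the two approaches comes down to which parameters keep `requires_grad`. The sketch below uses a tiny stand-in backbone rather than a real pretrained network, so the layer names and sizes are illustrative only:

```python
import torch.nn as nn

# Toy stand-in for a pretrained backbone (the paper uses ImageNet-pretrained models).
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 6)  # new fully connected layer for the six target classes

def as_fixed_feature_extractor(backbone):
    """Freeze every backbone weight; only the new head is trained."""
    for p in backbone.parameters():
        p.requires_grad = False

def as_fine_tuned(backbone):
    """Unfreeze the backbone so backpropagation also updates the conv layers."""
    for p in backbone.parameters():
        p.requires_grad = True

as_fixed_feature_extractor(backbone)
```

Fine-tuning, as used in this paper, would instead call `as_fine_tuned` so that all (or part) of the pretrained convolutional weights are updated by backpropagation.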
Multilabel Classification Loss
Cross-entropy measures the difference between two probability distributions p and q and is used as a loss function in machine learning. Our framework uses binary cross-entropy (BCE), the loss function commonly used for multilabel classification. The CNN model is trained by adjusting its parameters so that the probabilistic predictions are as similar to the ground-truth probabilities as possible under the BCE; in other words, the model parameters are adjusted so that the output probabilities resemble the targets. The BCE loss is defined by BCE = −(1/N) Σ_{i=1..N} [ p(y_i) log q(y_i) + (1 − p(y_i)) log(1 − q(y_i)) ], where N denotes the total count of images, p(y_i) denotes the probability of class y_i in the target, and q(y_i) denotes the predicted probability of class y_i.
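The loss can be written directly from this definition; the sketch below is a plain NumPy rendering of the averaged binary cross-entropy (the clipping constant is an implementation detail added here to avoid log(0)):

```python
import numpy as np

def bce_loss(p, q, eps=1e-12):
    """Binary cross-entropy averaged over all targets.
    p: ground-truth probabilities, q: predicted probabilities."""
    q = np.clip(q, eps, 1 - eps)  # keep log() finite
    return float(-np.mean(p * np.log(q) + (1 - p) * np.log(1 - q)))
```

In practice a framework implementation (e.g. PyTorch's built-in BCE loss) would be used during training; this version only illustrates the formula.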
Proposed Network
The MLC model used in this study consists of a backbone network pretrained on ImageNet and fully connected layers. In multilabel classification, the training set consists of instances associated with label sets, and the model analyzes training instances with known label sets to predict the label sets of unknown instances. Figure 2 shows an example of the framework of the proposed MLC-based model with DenseNet-121 as the backbone. The fully connected layers include dropout [37] and batch normalization [34]; the order of dropout, batch normalization, and rectified linear units (ReLU) was arranged based on the methodology of Ioffe [34] and Li [38].
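A minimal PyTorch rendering of this structure, with a toy convolutional stack standing in for the DenseNet-121 backbone (layer widths and dropout rate are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class MLCModel(nn.Module):
    """Backbone features -> fully connected head -> six independent
    per-label probabilities via a sigmoid (multilabel, not softmax)."""
    def __init__(self, n_classes=6):
        super().__init__()
        # Tiny stand-in backbone; the paper uses a pretrained DenseNet-121 here.
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Head with batch normalization, ReLU, and dropout as in Figure 2.
        self.head = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32),
                                  nn.ReLU(), nn.Dropout(0.5),
                                  nn.Linear(32, n_classes))
    def forward(self, x):
        return torch.sigmoid(self.head(self.backbone(x)))
```

Each of the six outputs is an independent probability, which is what allows one image to carry several labels (e.g. "Wildfire", "Smoke", and "Pedestrian") at once.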
Six classes were output, and the model was configured to achieve the following goals required for disaster response in the event of a forest fire: (a) check whether a forest fire has occurred, (b) detect smoke for the early detection of fires, (c) detect the burning area for extinguishing, and (d) detect areas where human or property damage may occur.
In this study, we selected each of the three pretrained models mentioned above as a backbone network. An MLC model was constructed to provide information that can be supported for wildfire response from CCTV or UAV images. Finally, we compared the performance of each model.
Performance Metrics
Instances in single-label classification can only be classified correctly or incorrectly, and these outcomes are mutually exclusive. However, the classification schemes in multilabel classification are mutually non-exclusive: in some cases, the results predicted by the classification model may only partially match the elements of the real label set assigned to the instance. Thus, evaluating multilabel models requires evaluation metrics specific to multilabel learning [39]. Generally, there are two main groups of evaluation metrics in the recent literature: example-based metrics and label-based metrics [40]. Label-based measurements calculate the performance of the trained system for each label individually and then return macro/micro averages across all labels, whereas example-based measurements return mean values over the test set based on the differences between the actual and predicted label sets of each instance. To evaluate the performance of each model and to verify the effectiveness of transfer learning and data augmentation, this study used the macro/micro average precision (PC/PO), macro/micro average recall (RC/RO), and macro/micro average F1-score (F1C/F1O). The abbreviations for the evaluation metrics follow the notation of Zhu [41] and Yan [9]. With TPi, FPi, and FNi denoting the true positives, false positives, and false negatives for label i as evaluated by the classifier, the metrics take their standard form: the macro averages PC and RC are the means over labels of TPi/(TPi + FPi) and TPi/(TPi + FNi), the micro averages PO and RO are the same ratios computed from counts pooled over all labels, and the F1-scores combine the corresponding precision and recall as harmonic means (per label for the macro variant, pooled for the micro variant).
Macro averages are used to evaluate the classification model on the average of all of the labels. In contrast, the micro average is weighted by the number of instances of each label, which makes it a more effective evaluation metric on datasets with class imbalance problems. The F1 score is a harmonic average that considers both precision and recall. Therefore, the F1 score is generally considered a more important metric for comparing the models. In addition, the datasets for MLC generally suffer from data imbalance, and thus, micro-average-based metrics are considered important.
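Assuming the standard definitions (macro = mean of per-label scores, micro = scores recomputed from counts pooled over labels), the six label-based metrics can be computed as follows; denominators are assumed nonzero for brevity:

```python
import numpy as np

def label_metrics(tp, fp, fn):
    """Per-label TP/FP/FN counts in, the six label-based scores out.
    Macro (PC, RC, F1C): average the per-label precision, recall, F1.
    Micro (PO, RO, F1O): pool the counts over all labels first."""
    tp, fp, fn = (np.asarray(a, dtype=float) for a in (tp, fp, fn))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    PC, RC = prec.mean(), rec.mean()
    F1C = np.mean(2 * prec * rec / (prec + rec))  # mean of per-label F1
    PO = tp.sum() / (tp.sum() + fp.sum())
    RO = tp.sum() / (tp.sum() + fn.sum())
    F1O = 2 * PO * RO / (PO + RO)
    return PC, RC, F1C, PO, RO, F1O
```

Because the micro variants weight every instance equally, a frequent label dominates them, which is exactly why they are preferred under class imbalance.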
In addition, this study used the Hamming loss (HL) and mean average precision (mAP), which are example-based metrics. In the definitions, |D| is the number of samples, |L| is the number of labels, and APi is the average precision of label i. The Hamming loss is the fraction of misclassified label slots over all |D| × |L| sample-label pairs (counting both incorrectly predicted labels and associated labels that were not predicted) and is one of the best-known multilabel evaluation measures [42]. The mean average precision is the mean of the average precision APi over all classes.
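A sketch of both example-based metrics under the usual definitions (binary indicator matrices for HL; per-label ranking scores for AP):

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    """Fraction of wrongly predicted label slots over |D| samples x |L| labels."""
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

def average_precision(y_true, scores):
    """AP for a single label: precision averaged over the ranks at which
    true positives occur when instances are sorted by predicted score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    precision_at_rank = np.cumsum(y) / np.arange(1, len(y) + 1)
    return float(np.sum(precision_at_rank * y) / max(y.sum(), 1))
```

The mAP reported in the paper is then simply the mean of `average_precision` taken over the six classes.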
Class Activation Mapping
In the CNN model, the convolutional units of the various layers act as object detectors. However, the use of fully connected layers causes a loss of these localizing features. Class activation mapping (CAM) [43] is a CNN interpretation method and a popular tool for generating attention heatmaps. A feature of CAM is that the network can retain approximate location information about an object even though it was trained to solve a classification task [8]. To calculate the CAM, the fully connected layer is replaced with a global average pooling (GAP) layer; subsequently, a fully connected layer connected to each class is attached and fine-tuned. However, CAM has limitations: it requires a GAP layer, replacing the fully connected layer with GAP requires the rear part of the network to be fine-tuned again, and the map can only be extracted from the last convolutional layer.
Gradient-weighted class activation mapping (Grad-CAM) [44] solves this problem using a gradient. Specifically, it uses the gradient information coming into the last convolutional layer to take into account the importance of each neuron to the target label. In this study, Grad-CAM was used to emphasize the prediction values determined by the classification model and to visualize the location of the prediction target.
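The core of Grad-CAM reduces to a gradient-weighted sum of feature maps followed by a ReLU. A NumPy sketch, assuming the activations and gradients of the last convolutional layer have already been captured with shape (channels, H, W):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM sketch. Each channel's importance weight is the spatial
    mean of its gradient; the map is the ReLU of the weighted channel sum."""
    weights = gradients.mean(axis=(1, 2))             # neuron importance per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0)                         # ReLU keeps positive influence only
```

In a real pipeline, the activations and gradients would be captured with framework hooks on the last convolutional layer, and the resulting map would be upsampled to the input resolution before visualization.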
Results
This section presents the learning process and test results of the MLC model to support wildfire responses. The experiments were conducted in a CentOS (Community Enterprise Operating System) Linux release 8.2.2004 environment with an Nvidia Tesla V100 GPU and 32 GB of memory, and the models were built and trained using PyTorch [45], an open-source deep learning framework.
Dataset
The dataset used to train and test the deep learning model contained daytime and nighttime wildfire images captured by surveillance cameras or drone cameras and downloaded from the Web, as well as cropped images of a controlled fire in the forest captured by a drone operated by the researchers. This study also included the day-night image matching (DNIM) dataset [46], which was used to reduce the effects of day-night lighting changes, and the Korean tourist spot (KTS) dataset [47], generated for deep learning research, whose forest-labeled images include important wooden cultural properties located in forests. Additionally, wildfire-like images (154 cloud images and 100 sun images) were included because their color and shape resemble those of early wildfire smoke and flames and they are often detected erroneously. They were added to the training dataset to prevent predictable errors in the verification stage and to train a model that is robust against wildfire-like images.
The collected images were resized or cropped to 224 × 224 pixels to consider whether the model is applicable to high-definition images. The datasets included 3,800 images. Figure 3 shows samples of the images. All instances were annotated according to the following classes: "Wildfire", "Non-Fire", "Flame", "Smoke", "Building", and "Pedestrian" (each class was abbreviated as "W", "N", "F", "S", "B", and "P", respectively). Table 1 lists the number of images for each designated label set before data augmentation. It consists of 2165 images downloaded from the Web, 1000 images from the KTS dataset, 101 images from the DNIM dataset, 254 images for error protection purposes, and 280 cropped images captured by the researchers. To ensure the annotation quality and accuracy, all of the annotated images were checked twice by different authors.
Data Partition
The dataset used for the experiment was divided into training, validation, and test sets. The test dataset included 2280 images from the entire dataset. The remaining 1520 images were pre-processed with data augmentation techniques such as rotation, horizontal flip, and brightness change, which are typically used in CNN image classification studies, to secure sufficient data for learning, as shown in Table 2. Table 1 also lists the number of images for each designated label set after augmentation. Overall, the non-fire label group was the most common, and as the number of labels per group increased, the size of the label group decreased. In particular, the label groups in which pedestrians and houses appear at the wildfire site, which are difficult to obtain, were the smallest. Since drones are generally not perpendicular to the horizon and are not inverted when photographing wildfires, rotation was not set to extreme values such as 90° or 180° but instead to between 10° and 350°, considering the lateral tilt of the drone. In addition, if the brightness of an image is too high or too low, the boundary of the target becomes unclear and the object becomes ambiguous; therefore, brightness augmentation was performed between a maximum of l = 1.2 and a minimum of l = 0.8. After data augmentation, the training and test datasets were divided at a ratio of 4:1. In the model learning phase, 912 randomly sampled instances from the training dataset were evenly divided into 10 groups for evaluation using a cross-validation strategy.
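The fold assignment for the cross-validation can be sketched in a few lines of plain Python (the shuffling seed is an illustrative assumption):

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Shuffle n sample indices and deal them into k roughly equal folds,
    as in the paper's 10-fold cross-validation on the training instances."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]  # fold j holds every k-th shuffled index
```

Each of the 10 training runs then uses one fold as the validation set and the remaining nine as the training set.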
The total number of classes in the prepared data was checked, and the distribution is shown in Figure 4. Because of the nature of wildfire response, early detection is performed mostly through smoke, so the number of smoke instances was higher than the number of flame instances. In addition, the wildfire and non-fire classes were imbalanced, and the building and pedestrian classes had relatively few instances. Because of the imbalance in both the label sets of Table 1 and the overall class distribution of Figure 4, the micro-average-based metrics should be examined.
Performance Analysis
This study compared the models with different backbones and verified the efficiency of transfer learning and data augmentation. The model was constructed using training and validation sets partitioned by a 10-fold cross-validation strategy, and the final performance was measured according to each performance metric on the test dataset. The initial training settings (hyperparameters) for the CNN-based MLC architectures are listed in Table 3. The models were trained using binary cross-entropy as the loss function with the selected parameters. Each model was trained using a 10-fold cross-validation strategy, and the results were calculated 10 times. The training process of each model under the validation scheme with the selected hyperparameter combination is illustrated in Figure 5. In the case of VGG-16, the training loss fell gently and started from a very high validation loss value, while the training loss of ResNet-50 and DenseNet-121 fell sharply until about epoch 10 and then remained close to zero. However, DenseNet-121 remained lower in terms of the validation learning curve. At the final epoch, epoch 100, both the training loss and the validation loss were lowest for DenseNet-121. The models were evaluated using the label-based performance metrics, which are shown in Figure 6 as a box plot. All of the proposed models showed good multilabel classification ability on images of forest and wildfire sites for disaster response, with high scores (above 0.9) for most of the evaluation metrics. Among the proposed models, DenseNet-121 not only showed a significantly higher score for all of the evaluation metrics (the highest box and median values) but also typically smaller interquartile ranges for each metric (i.e., less widely distributed results) than the other models. Thus, the model maintained consistently high performance over several tests.
Table 4 presents the results of the evaluation measurements with the mean and standard deviation.
However, an evaluation that uses only label-based measurements cannot highlight the dependencies between classes. Therefore, Table 4 also presents example-based scores that consider all of the classes simultaneously and are thus considered more suitable for multilabel problems. The mAP score for the best-performing model (DenseNet-121) was 0.9629, and its HL was 0.009. In addition, the per-class area under the receiver operating characteristic curve (ROC-AUC) values of the proposed models were calculated to determine the performance for each class in the image dataset. The ROC curve is a graph showing the performance of the classification model at all possible classification thresholds, unlike recall and precision values, which change as the threshold is adjusted. The AUC is the area under the ROC curve and represents a measure of separability; the ROC-AUC is therefore a more robust performance metric than the other indicators. AUC values range from 0 to 1, where AUC = 0.5 indicates that the model performed a random guess and the prediction is entirely unacceptable, and the best performance is AUC = 1, indicating that all of the instances are properly classified. Table 5 presents the results with mean and standard deviation values. The dataset for the MLC model includes pictures of either fires or non-fires (the outcomes are mutually exclusive), so the results for the two classes "Wildfire" and "Non-fire" are calculated in almost the same way. The results for the classes "Wildfire" and "Smoke" were also similar, as flames are inevitably accompanied by smoke, although the smoke may be invisible when a fire is small or obscured by forest. The accuracy of the pedestrian and building labels was low in all of the models, which can be attributed to the relatively small number of instances assigned these labels.
It was confirmed that the ROC-AUC scores in all of the classes were generally high in the transfer learning algorithm using DenseNet-121 as a network.
Finally, to confirm the effect of transfer learning and data augmentation on the trained model, we removed one data-limitation strategy at a time using the control-variable method and obtained the F1-score and the HL value. This experiment was performed on DenseNet-121, which showed the highest performance. The training learning curve over the epochs is illustrated in Figure 7. In the training stage, there was a significant difference in the slope of the learning curve depending on whether the strategies were used: a steep learning curve was obtained with one or more strategies and a shallow one without them, meaning that models without the strategies require more training before reaching the same performance level. When all of the strategies were used, the curve descended the most rapidly and reached the lowest final loss. Judging from the gradient of the curves and the final loss, there was no significant difference between the individual effects of data augmentation and transfer learning, but the roughness of the curve was further reduced when data augmentation was used; in other words, learning was more stable. Additionally, the test results determined by the evaluation metrics are listed in Table 6. The results of this experiment show that transfer learning can significantly improve multilabel classification performance. Excluding the transfer learning strategy, the macro average F1-score decreased by 0.0745, the micro average F1-score decreased by 0.0466, and the HL increased by 0.0286. In the case where only augmentation was used, the macro average F1-score decreased by 0.1159, the micro average F1-score decreased by 0.0701, and the HL increased by 0.0412.
Transfer learning performance was further reduced when the model was trained only on datasets without data augmentation. Hence, the quantity of data available for learning in MLC has a significant impact on model performance.
Visualization
To perform localization, a bounding box is drawn by thresholding, retaining the region above 20% of the Grad-CAM result. A confidence score is also provided, indicating the extent to which the model's predictions are true, with a detection threshold of 0.5. Figures 8 and 9 show examples of the results obtained with DenseNet-121 as the backbone. As shown in Figure 8, the sum of the confidence scores of the wildfire and non-fire classes is almost 100% because the two classes are mutually exclusive. Among the test datasets, the Case 1 image was selected as a sample labeled wildfire with pedestrians, and the Case 2 image was selected as a sample containing confusing objects, such as the sun or fog, that can be mistaken for wildfire objects. Finally, for Case 3, a sample image of a night fire was selected to evaluate the model under nighttime conditions.
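A sketch of this thresholding step, assuming "over 20% of the Grad-CAM result" means pixels above 20% of the map's maximum value (the text does not spell out the exact rule):

```python
import numpy as np

def cam_to_bbox(cam, frac=0.2):
    """Bounding box (x1, y1, x2, y2) covering all CAM pixels whose value
    exceeds `frac` of the map's maximum."""
    mask = cam >= frac * cam.max()
    ys, xs = np.where(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The resulting box coordinates are in the CAM's own resolution and would be scaled up to the input image before drawing.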
In Case 1, the model predicted smoke, flames, non-fire, and a pedestrian with confidence scores of 0.9464, 0.7642, 0.0539, and 0.8623, respectively. The heatmap and bounding box were displayed separately for each class to express its location. Conversely, for the non-fire class, which was not assigned to the instance, a heatmap without fire or smoke was displayed over the bush area.
Case 2 used an image taken at sunrise in a foggy mountainous area. All of the classes except the non-fire class showed a score of 0.000, confirming that the model correctly classified the sun and fog, which frequently cause false alarms in wildfire detection.
Case 3 was an image of a wildfire that occurred near a downtown area at night, with fireworks being set off nearby, which may have had some effect on detection. For the wildfire class, a heatmap also formed in an area unrelated to the wildfire (lower left), which was judged to be a detection of the smoke generated by the firecrackers in the image. However, although nighttime lighting conditions were considered during model training, the heatmap covered the residential area while the confidence for the building class remained very low (0.0030).
These results were similar for the other test datasets, indicating that classification was accurate only for targets observable with the naked eye, because only clear targets were labeled during the dataset preparation process.
Using an example, Figure 9 shows that the classification model is robust to a small object or noise in a photograph. As shown in Figure 9a, firefighters were dispatched to extinguish the fire in the forest, and nearby hikers were caught on camera. Although the human shape in Figure 9a looks very small, it is detectable with a 0.9952 confidence score, and an approximate location of the object was determined. Figure 9b shows a house with lights on and a nearby forest fire. Despite the similarity of the lamp to the fire image, the heatmap result did not recognize this part as a fire. Thus, the model looked at the appropriate part when identifying each class.
Finally, Figure 10 shows the influence of transfer learning and data augmentation. As discussed in the previous subsection, four cases were classified using the control-variable method. For each case, the heatmap and bounding box of a specific class (smoke and person in Figure 10) were visualized, and the probability values were calculated. When neither data-shortage strategy was used, the heatmap highlighted the wrong place, and the class that was difficult to distinguish could not be detected at all (a value below 0.5 is not treated as a detection). Therefore, the accuracy difference is significant when data supplementation strategies are not used for wildfire images, where data are inevitably lacking, as shown in Figures 9 and 10. When one data supplementation method was used, the heatmap distribution was somewhat reasonable, and the confidence value for the class to be detected increased significantly. When both strategies were used, the heatmap distribution was the cleanest, and the positive predictive probability was the highest.
Application
The proposed model was applied to images captured by the researchers using a DJI Phantom 4 Pro RTK drone at a high-definition (HD) image size of 1280 × 720 pixels. The filming site was a virtual wildfire environment, similar to a real fire, created by lighting a drum near the forest.
Although the captured HD image can be resized to the input size of the proposed model, downscaling a high-resolution image may result in the loss of information that is useful for classification, and the model may not operate smoothly [48]. Thus, the images were divided into 28 equal parts of 224 × 224, and the model was evaluated for the divided pictures. When the pictures are divided without overlapping parts, there is a possibility of a blind spot where the object to be found is cut off. Thus, the images are divided such that there are overlapping parts. The predicted classification values of each part of the picture were merged into the entire image and were visualized. The results of applying the proposed model to the drone shooting screen are shown in Figure 11. The confidence value was over 50%, and the label corresponding to the photograph part was predicted. In Figure 11, a forest fire was detected based on smoke in the central part of the whole picture. However, there was also an error (9.02%) in the building area, which was important for preserving residential and cultural assets. This error can be explained as follows: the large object was still cut off in the cropped image despite the application of the overlapping method. The white dotted circle was drawn to highlight the area with people, and the model correctly predicted that there were people in the area at 99.34% and 77.03%.
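The overlapping tiling can be sketched as follows. The overlap value below is an assumption chosen so that a 1280 × 720 frame yields exactly the 28 tiles mentioned in the text; the actual overlap used is not stated:

```python
def tile_positions(width, height, tile=224, overlap=32):
    """Top-left corners of overlapping `tile` x `tile` crops covering the frame.
    The last tile along each axis is snapped to the border so that no strip
    of the image is left uncovered."""
    def axis(size):
        step = tile - overlap
        pos = list(range(0, size - tile + 1, step))
        if pos[-1] != size - tile:
            pos.append(size - tile)  # snap final tile to the image border
        return pos
    return [(x, y) for y in axis(height) for x in axis(width)]
```

Each crop is classified independently, and the per-tile predictions are then merged back onto the full frame for visualization, as in Figure 11.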
Discussion
In this study, transfer learning and data augmentation were combined to improve the capabilities of the model. Three different pretrained models were used to handle the data limitations, data augmentation was performed, and each model was evaluated using label-based and example-based evaluation metrics. In conclusion, DenseNet-121 surpassed VGG-16 and ResNet-50 in the proposed MLC model, as confirmed by the results of the evaluation metrics. With advances in camera technology, image resolution increases, but training a CNN on large images is particularly difficult; the problems are the cost and learning time caused by the excessive computational load in the initial layers. Because of the discrepancy between the input size of these models and the image size captured by the imaging device, we split the high-resolution image into smaller parts and processed them separately. The method proposed in this study loses less data and is expected to classify small objects better than scenarios in which the original image is reduced in size and processed as a single image. The proposed framework can be converted into other image-based decision-making applications for disaster response that extract multiple pieces of information from one scene.
Some previous studies used public data for binary classification problems (fire and non-fire). However, a dataset with multiple labels or classes changes according to the requirements of the system, and it is difficult to use the datasets from previous studies. The classified labels were defined to solve the need for a response from the image sources collected at the site. Fire is often accompanied by smoke, which is released faster than flames. The flame of a forest fire is barely visible from a distance. However, the smoke columns caused by fires are usually visible on camera. Therefore, early smoke detection is an effective way to prevent potential fire disasters [49]. Based on the detection of flames, field responders can be informed as to where the flames need to be extinguished. Decision makers for wildfire response, who receive information on the life and property at the fire site, use this basic information to decide on the evacuation route by considering the spot of fire occurrence to preferentially protect the area where there is a possibility of severe damage and to establish a line of defense. Instructions for prioritizing such tasks and for efficiently allocating limited support resources must be provided. Therefore, label categories can be defined for wildfire response and building large image benchmarks for disaster response. This is a basic study that provides multilabel information of target areas from cameras by applying CNN to wildfire response. Considering that multilabeling was performed manually by the researchers, the distinction between instances was vague in some of the images collected from external data sources. Instances that were too small to distinguish were not labeled to avoid overfitting. In addition, in the case of an instance that cannot be easily distinguished with the naked eye, it was not possible to easily add a class to be classified because of a label classification error.
Therefore, future research should aim to construct a formal annotated data benchmark for wildfire response in deep learning systems to enable the use of field information for supporting disaster decision-makers from the perspective of the wildfire detection algorithm. For example, the state of the wildfire may be understood from the fire shape. Crown fires are the most intense and dangerous wildfires, and surface fires cause relatively little damage. It is also important to identify forest species in disaster areas using videos. If the forests of the target area are coniferous, fires may spread to a large area. To provide this additional information, it is important to ensure communication between photographers, labeling workers, and deep learning model developers. From the perspective of wildfire response, future studies should also aim to develop an integrated wildfire-response decision-support system that can provide decision makers with various insights. Location can be retrieved from the global positioning system (GPS) of drones filming in disaster areas, and this can be combined with data on weather conditions that greatly affect wildfire disasters, such as wind direction, wind speed, and drying rate at the target site. In addition, when combined with a geographic information system (GIS), it is possible to determine the slope of the target area because a steep slope is difficult to control during a wildfire.
Conclusions
To the best of our knowledge, previous computer vision-based frameworks for managing fires have only used binary classification. However, in disaster response scenarios, decision makers must prioritize extinguishing operations by considering the range of flames, major surrounding structures such as residential facilities or cultural assets, and residents at the site. Various types of information on the scene of a wildfire can be obtained and analyzed using the photographs from an imaging device. However, annotation work is limited because of a lack of training datasets and the fact that previous wildfire detection research has only focused on binary classification. To solve these problems, we proposed a basic MLC-based framework to support wildfire responses.
The proposed model was verified through well-known evaluation indicators on the dataset selected by the researchers, and DenseNet-121, the most effective of the three representative models, was selected as the final model. We then visualized the results through Grad-CAM and proposed a method to divide and evaluate each image to prevent data omission when the model is applied to FHD or higher-resolution photographs from recently developed camera technology.
Dityrosine formation outcompetes tyrosine nitration at low steady-state concentrations of peroxynitrite. Implications for tyrosine modification by nitric oxide/superoxide in vivo.
Formation of peroxynitrite from NO and O2•− is considered an important trigger for cellular tyrosine nitration under pathophysiological conditions. However, this view has been questioned by a recent report indicating that NO and O2•− generated simultaneously from (Z)-1-(N-[3-aminopropyl]-N-[4-(3-aminopropylammonio)butyl]-amino)diazen-1-ium-1,2-diolate (SPER/NO) and hypoxanthine/xanthine oxidase, respectively, exhibit much lower nitrating efficiency than authentic peroxynitrite (Pfeiffer, S. and Mayer, B. (1998) J. Biol. Chem. 273, 27280-27285). The present study extends those earlier findings to several alternative NO/O2•−-generating systems and provides evidence that the apparent lack of tyrosine nitration by NO/O2•− is due to a pronounced decrease of nitration efficiency at low steady-state concentrations of authentic peroxynitrite. The decrease in the yields of 3-nitrotyrosine was accompanied by an increase in the recovery of dityrosine, showing that dimerization of tyrosine radicals outcompetes the nitration reaction at low peroxynitrite concentrations. The observed inverse dependence on peroxynitrite concentration of dityrosine formation and tyrosine nitration is predicted by a kinetic model assuming that radical formation by peroxynitrous acid homolysis results in the generation of tyrosyl radicals that either dimerize to yield dityrosine or combine with the •NO2 radical to form 3-nitrotyrosine. The present results demonstrate that very high fluxes (>2 μM/s) of NO/O2•− are required to render peroxynitrite an efficient trigger of tyrosine nitration and that dityrosine is a major product of tyrosine modification caused by low steady-state concentrations of peroxynitrite.
Tyrosine nitration is a well established protein modification occurring in vivo in a number of inflammatory diseases associated with oxidative stress and increased activity of NO synthases (1,2). Nitration of specific tyrosine residues has been reported to affect protein structure and function (3), suggesting that 3-nitrotyrosine formation may not only be a disease marker but could be causally involved in the pathogenesis of certain disease states.
Peroxynitrite, formed in a nearly diffusion-controlled reaction from NO and O2•−, is considered a potent pathophysiologically relevant cytotoxin. Besides oxidation reactions resulting in dysfunction of various biomolecules, nitration of free and protein-bound tyrosine to yield 3-nitrotyrosine is a well established reaction of peroxynitrite that may contribute to NO cytotoxicity (1). The nitration reaction has been extensively studied in vitro by bolus addition of synthetic peroxynitrite to tyrosine-containing samples including purified proteins, cells, and tissues (3-6). In situ, 3-nitrotyrosine was most frequently visualized with monoclonal or polyclonal antibodies (2), but the identity of the product has been confirmed by several laboratories using sophisticated gas chromatography/mass spectroscopy and HPLC methods (7, 8).
Thus, there is general agreement that (i) authentic peroxynitrite is a potent nitrating agent that converts free and protein-bound tyrosine to the corresponding 3-nitro derivative, and that (ii) 3-nitrotyrosine does occur in vivo. The conclusion that peroxynitrite is the main cause for in vivo nitration may thus seem obvious, but is not supported by experimental data. In fact, several recent studies have identified alternative pathways of tyrosine nitration (9), and we found that nitration by simultaneously generated NO and O2•− is much less efficient than the reaction triggered by authentic peroxynitrite (10). The interpretation of the latter results has been disputed, and a number of points have been raised questioning their validity. One point was related to the possibility that urate formed in the XO reaction might have scavenged peroxynitrite and thus prevented tyrosine nitration in long-term incubations.

Solutions-All solutions were prepared freshly each day. Water was from a Milli-Q reagent water system from Millipore (Vienna, Austria; resistance ≥18 megaohms × cm⁻¹). SPER/NO and DEA/NO were prepared as 10-fold stock solutions in 10 mM NaOH. DHR was dissolved to 10 mM in acetonitrile and kept in the dark until use. Alkaline solutions of peroxynitrite were prepared from acidified NO2− and H2O2 as described (14). The solutions were diluted with H2O to 10 mM (pH ≈12.8) and further diluted in 10 mM NaOH.

Oxidation of DHR-Oxidation of DHR was monitored at 501 nm as described (10, 18). The amount of oxidized DHR was calculated using an extinction coefficient of 78.78 mM⁻¹ cm⁻¹. For measurements, 200-µl aliquots were taken every 10-20 min from 3-ml samples. Total incubation time was 3 h. SPER/NO (1 mM) or hypoxanthine/XO (28 milliunits/ml) alone led to DHR oxidation rates of <0.06 and <0.3 µM × min⁻¹, respectively.
Peroxynitrite Infusion-The infusion experiments were performed with a Merck-Hitachi HPLC pump (655A-11) provided with PEEK capillaries (internal diameter, 0.25 mm) under constant stirring of the tyrosine-containing solutions at ambient temperature. Peroxynitrite (2 ml of a 0.1 mM stock solution) was infused at increasing rates (0.1, 0.2, 0.4, 0.5, and 0.8 ml/min) into 18 ml of 0.1 M K2HPO4/KH2PO4 buffer (pH 7.4) containing 1 mM tyrosine, followed by the determination of 3-nitrotyrosine as described below.
Determination of 3-Nitrotyrosine and Dityrosine-HPLC analysis of 3-nitrotyrosine was performed on a C18 reversed phase column with 0.1 M KH2PO4/H3PO4 buffer (pH 3) containing 6% (v/v) methanol at 0.7 ml/min and detection at 274 nm, as described (19). In some experiments 3-nitrotyrosine was detected with a dual-channel electrochemical detector (ESA, Coulochem II, Chelmsford, MA) set to 600 mV and 850 mV (20). Oxidation of 3-nitrotyrosine was followed at 850 mV. A guard cell placed between the solvent delivery system and injector was set to 1000 mV. Calibration curves were recorded daily with authentic 3-nitrotyrosine (2 nM-0.5 µM and 60 nM-5 µM for electrochemical and UV-visible detection, respectively). HPLC analysis of dityrosine was performed on a C18 reversed phase column with 50 mM KH2PO4/H3PO4 buffer (pH 3) containing 1% (v/v) methanol at 0.7 ml/min and fluorescence detection (Hitachi fluorescence spectrophotometer F 1050; excitation 285 nm, emission 410 nm) as described (4). Calibration curves were recorded with authentic dityrosine (50 nM-5 µM).
Kinetic Experiments-The rate of peroxynitrite decay was determined by stopped-flow absorbance spectroscopy at 302 nm (Bio-Sequential SX-17MV stopped-flow spectrofluorimeter, Applied Photophysics, Leatherhead, UK) at 22 °C. Reservoir 1 contained peroxynitrite (0.2 mM) in 0.01 M NaOH, and reservoir 2 the buffer solution (0.2 M K2HPO4/KH2PO4 buffer (pH 7.4), containing 2 mM tyrosine). A kav value of 0.27 ± 0.05 s⁻¹ (mean ± S.D.; n = 9) was calculated from the initial rates of first order peroxynitrite decay. The peroxynitrite steady-state concentrations obtained in the infusion experiments were calculated by dividing infusion rates (nM s⁻¹) by 0.27 s⁻¹.
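The steady-state arithmetic above is easy to verify. The short script below is an illustration added here, not the authors' analysis code; it reproduces the values quoted in the Results (e.g. 66.67 nM s⁻¹ / 0.27 s⁻¹ ≈ 247 nM) from the infusion rates used in the infusion series:

```python
# Arithmetic check, not original analysis code: the steady-state
# peroxynitrite concentration equals the infusion rate divided by the
# measured first-order decay constant.

K_AV = 0.27  # s^-1, mean first-order decay constant (stopped-flow)

def steady_state_nM(infusion_rate_nM_per_s, k_av=K_AV):
    """Steady-state peroxynitrite concentration (nM) during infusion."""
    return infusion_rate_nM_per_s / k_av

# infusion rates used in the Results (nM/s)
for rate in (8.33, 16.67, 33.33, 41.67, 66.67):
    print(f"{rate:6.2f} nM/s -> {steady_state_nM(rate):6.1f} nM steady state")
```

The lowest rate gives ≈30.9 nM, close to the 30.7 nM quoted in the Results; the small difference reflects rounding of kav.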
RESULTS
The formation of 3-nitrotyrosine was measured in the presence of four different NO/O2•−-generating systems. As shown in (28). Allantoin (0.1 mM) had no effect on tyrosine nitration mediated by authentic peroxynitrite (data not shown). In the presence of uricase, the amount of detectable peroxynitrite was approximately doubled, accompanied by a 6-fold increase in 3-nitrotyrosine formation. The corresponding nitrating efficiency was 0.54%. These results suggested that accumulation of urate does indeed contribute to the low nitrating efficiency of the applied NO/O2•−-generating system, but the nitration yield was still about 10-fold lower than that obtained with 70 µM authentic peroxynitrite (3.41 ± 0.63 µM, corresponding to 4.9%) and about 5-fold lower than the nitration triggered with SPER/NO alone. It was conceivable that this difference was caused by residual urate, because uricase did not completely consume the accumulated urate under our experimental conditions (data not shown). Therefore, two urate-free NO/O2•−-generating systems were additionally tested.
Acetaldehyde is known to function as an alternative substrate of XO, albeit at much lower turnover numbers (29). Incubation of DEA/NO (0.1 mM) with 1 mM tyrosine in the absence and presence of 0.1 mM FMN led to formation of 86.1 ± 29.5 nM and 678 ± 90.8 nM 3-nitrotyrosine, respectively. The effect of FMN was concentration-dependent; maximal effects were obtained with ≥0.1 mM, and the apparent EC50 was 57.5 ± 7.0 µM (Fig. 1B). FMN did not significantly increase DEA/NO-mediated nitration in dark conditions (data not shown). Due to a strong interference of FMN with the DHR assay, it was not possible to measure apparent peroxynitrite formation by this system, but the DEA/NO-FMN system allowed us for the first time to demonstrate a stimulation of NO-mediated nitration by co-generation of O2•−, indicating that the in situ generation of peroxynitrite does lead to tyrosine nitration under certain experimental conditions. Intriguing data were obtained when the nitrating efficiencies of NO/O2•−-generating systems were studied in the presence of bicarbonate (CO2). CO2 is known to react rapidly with peroxynitrite to yield the potent nitrating adduct nitrosoperoxycarbonate (ONO2CO2−) (33). Therefore, depending on the buffer concentrations of CO2 (34), tyrosine nitration by authentic peroxynitrite is increased 2- to 4-fold upon the addition of 0.25-50 mM bicarbonate (5, 35-37). The data obtained with four NO/O2•−-generating systems tested for tyrosine nitration with and without 25 mM bicarbonate (Fig. 2) clearly demonstrated that CO2 had no effect whatsoever on nitration by NO/O2•−.

We considered several possibilities to explain the poor nitrating efficiency of NO/O2•−. Unfortunately, however, most hypotheses, including the proposal of a distinct chemical species that is formed from NO/O2•− in situ (10), are in conflict with the known theoretical background of NO/O2•− and/or peroxynitrite chemistry.
One remaining possibility was that tyrosine nitration required a certain threshold steady-state level of peroxynitrite to become significant. This would explain the observed differences between bolus addition and continuous generation of peroxynitrite. We have addressed this issue using two experimental approaches. First, we studied the nitrating efficiency of increasing peroxynitrite concentrations (5-1,000 µM) added as a bolus to buffer solutions containing 1 mM tyrosine. As expected, the total amount of 3-nitrotyrosine gradually increased with increasing concentrations of added peroxynitrite (46.23 ± 1.59 µM at 1 mM; data not shown). It was surprising, however, to find that the nitrating efficiency of peroxynitrite increased from 1.4 ± 0.3 to 5.4 ± 0.4% when the peroxynitrite concentration was increased from 5 µM to 100 µM and leveled off at higher concentrations (Fig. 3A). In another set of experiments, 2 ml of a 0.1 mM stock solution of peroxynitrite was infused at increasing rates (8.33 nM s⁻¹, 16.67 nM s⁻¹, 33.33 nM s⁻¹, 41.67 nM s⁻¹, and 66.67 nM s⁻¹) into tyrosine-containing buffer solutions (10 µM peroxynitrite final in each case). The respective steady-state concentrations of peroxynitrite were calculated from the rate of first order decomposition measured by stopped-flow absorbance spectroscopy under identical conditions (kav = 0.27 ± 0.05 s⁻¹; data not shown). Fig. 3B shows that the nitrating efficiency of infused peroxynitrite increased about 3-fold (from 0.22 ± 0.05 to 0.64 ± 0.08%) when the steady-state concentrations were increased from 30.7 to 247 nM.
Since dityrosine is another product of the reaction between tyrosine and peroxynitrite (4, 35, 36), we speculated that the tyrosine dimerization reaction may be predominant at low peroxynitrite concentrations. To test this hypothesis, we measured dityrosine formation from 1 mM tyrosine treated with increasing concentrations of peroxynitrite. As shown in Fig. 4, a maximal yield of 17.0 ± 3.8% dityrosine was obtained with the lowest peroxynitrite concentration that was tested (1 µM) and decreased down to less than 1% at ≥1 mM peroxynitrite. The replot of the 3-nitrotyrosine data (open symbols in Fig. 4) demonstrates that dityrosine is indeed the major product of tyrosine reacting with low concentrations of peroxynitrite.

Although the nitrating efficiency of the combined system was still significantly lower than that of authentic peroxynitrite, these results suggested that peroxynitrite formed from NO/O2•− may indeed be capable of triggering nitration under certain conditions. It is conceivable that tyrosine nitration was quenched by the XO that was used for O2•− generation in the other experimental set-ups. Accordingly, the protein-free DEA/NO-FMN system apparently allowed the detection of the minor nitration reaction triggered by peroxynitrite at low steady-state concentrations.
The most interesting finding of this study was the observation that dityrosine formation almost completely outcompeted nitration at low concentrations of peroxynitrite. As a mechanistic explanation of these surprising results, we propose the scheme depicted in Fig. 5. Accordingly, the key event of both reactions, nitration and dityrosine formation, would be the generation of tyrosyl radicals by •NO2 formed in the course of homolytic cleavage of ONOOH (Equations 1 and 2, path a in Fig. 5). The tyrosyl radicals could either react with •NO2 to yield 3-nitrotyrosine (Equation 3, path b) or dimerize to give dityrosine (Equation 4, path c). A major competing reaction would be the dimerization of •NO2 yielding N2O4 (Equation 5, path d).
Homolysis of ONOOH (Equation 1) has been questioned based on thermodynamical calculations (39), but recent evidence suggests that about 30% of ONOOH does indeed yield free •NO2 and •OH, whereas the residual 70% undergoes rearrangement to nitric acid without escape of free radicals (22, 23, 40). Tyrosyl radical formation by •NO2 and subsequent combination of •Tyr and •NO2 have been reported to occur with second order rate constants of 3.2 × 10⁵ and 3 × 10⁹ M⁻¹ s⁻¹, respectively (24). Rate constants of 9 × 10⁸ and 2.25 × 10⁸ M⁻¹ s⁻¹, respectively, were reported for the two major competing reactions, i.e. the dimerization of •NO2 (24) and the combination of two •Tyr radicals to yield dityrosine (25).
Together with the rate of peroxynitrite decomposition determined by stopped-flow spectroscopy under our experimental conditions (0.27 s⁻¹), the published rate constants were used for the kinetic simulation of peroxynitrite reacting with excess free tyrosine, assuming 30% homolysis of ONOOH. Fig. 6 shows that the yields of 3-nitrotyrosine and dityrosine predicted by the model for tyrosine reacting with peroxynitrite at concentrations ranging from 1 µM to 2 mM are similar in shape to the measured data illustrated in Fig. 4. In agreement with our observations, the model predicts an inverse dependence on peroxynitrite concentration of tyrosine nitration and dimerization. At low peroxynitrite concentrations, dimerization of •Tyr radicals (filled symbols) is the predominant pathway, whereas nitration (open symbols) and •NO2 dimerization (not shown), which both follow second order kinetics with respect to •NO2, become the predominant reactions at high peroxynitrite (and thus •NO2) concentrations. The measured yields of dityrosine agreed well with the predictions of the model, but the measured 3-nitrotyrosine levels were ≥2-fold below the theoretical expectation over the complete range of peroxynitrite concentrations. This quantitative mismatch suggests that reactions not considered in the kinetic simulation compete with tyrosine nitration. These reactions may involve •OH radicals, as it was shown previously that •OH radical scavengers significantly enhance peroxynitrite-triggered tyrosine nitration (4). Therefore, it is likely that the reactions of •OH radicals with •NO2 to yield HNO3 and with •Tyr radical, resulting in the formation of 3-hydroxytyrosine (dopa) (36), compete with the nitration reaction. Since the rate constants of the reactions triggered by •OH are not known, it was not possible to account for them in the kinetic model.
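The competition scheme described above can be sketched numerically. The following is a minimal re-implementation, not the authors' original simulation code: it uses the rate constants cited in the text and the measured decay rate, assumes 30% homolysis, integrates by explicit Euler, and (like the paper's own model) omits •OH chemistry:

```python
# Hedged sketch, not the authors' code: competition between nitration and
# dimerization after a peroxynitrite bolus, using the cited rate constants.

K_DECAY = 0.27       # s^-1, overall ONOOH decay (stopped-flow value)
F_HOM = 0.30         # fraction of ONOOH undergoing homolysis to radicals
K_NO2_TYR = 3.2e5    # M^-1 s^-1, *NO2 + Tyr -> *Tyr
K_TYR_NO2 = 3.0e9    # M^-1 s^-1, *Tyr + *NO2 -> 3-nitrotyrosine
K_NO2_DIM = 9.0e8    # M^-1 s^-1, 2 *NO2 -> N2O4
K_TYR_DIM = 2.25e8   # M^-1 s^-1, 2 *Tyr -> dityrosine

def simulate(p0, tyr=1e-3, t_end=20.0, dt=2e-4):
    """Explicit-Euler integration of the scheme for a bolus p0 (M).
    Returns (3-nitrotyrosine, dityrosine) yields as fractions of p0."""
    p, n, y, nt, dy = p0, 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r_hom = F_HOM * K_DECAY * p        # radical generation flux
        r_ny = K_NO2_TYR * n * tyr         # tyrosyl radical formation
        r_nit = K_TYR_NO2 * y * n          # nitration (path b)
        r_nd = K_NO2_DIM * n * n           # *NO2 dimerization (path d)
        r_yd = K_TYR_DIM * y * y           # dityrosine formation (path c)
        p += -K_DECAY * p * dt
        n += (r_hom - r_ny - r_nit - 2.0 * r_nd) * dt
        y += (r_ny - r_nit - 2.0 * r_yd) * dt
        nt += r_nit * dt
        dy += r_yd * dt
    return nt / p0, dy / p0

lo_nt, lo_dy = simulate(1e-6)   # 1 uM bolus: dimerization should dominate
hi_nt, hi_dy = simulate(1e-3)   # 1 mM bolus: nitration should dominate
```

The crossover this reproduces is qualitative only; as noted above, the paper's own model also overpredicts 3-nitrotyrosine by about 2-fold, so only the shape of the concentration dependence should be read from such a sketch.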
Nonetheless, we think that, despite some quantitative uncertainties, the proposed model provides a simple and reliable mechanistic explanation for the insignificant nitration efficiency of peroxynitrite generated in situ.
What are the implications of the present study for the effects of peroxynitrite generated from NO/O2•− in vivo? Obviously, the oxidative chemistry of peroxynitrite, including dityrosine formation, would be expected to be predominant at the relatively low NO/O2•− fluxes that are likely to occur in most in vivo conditions. As a specific marker of oxidation, dityrosine has been detected in human atherosclerotic plaques (41, 42), in the brain of elderly humans (43) or patients affected with Alzheimer's disease (44), in age-related nuclear cataract (45), and in other pathologies thought to be associated with oxidative stress. Formation of dityrosine has been attributed mainly to the activation of the myeloperoxidase/H2O2 system of neutrophils and macrophages (46), but other peroxidases (47) and peroxynitrite (36, 48) have been recognized as additional sources of dityrosine. The present results agree with previous studies suggesting that dityrosine formation together with increased NO synthase expression may be a useful marker for peroxynitrite formation in tissues (49, 50).
With respect to tyrosine nitration, it seems unlikely that the high NO (58), and reports with transgenic mice and SOD knockout mutants showing that both Cu,Zn- and Mn-SOD are protective against stroke (59). Our data render it likely that the molecular mechanisms underlying these pathologies are related to protein oxidation and/or cross-linking rather than nitration. It is conceivable that the latter reaction is triggered by peroxynitrite-independent pathways involving myeloperoxidase (60-62) or other peroxidases (63). As a further alternative, trapping of tyrosyl radicals by NO and subsequent peroxidase-mediated oxidation of nitrosotyrosine could result in the formation of 3-nitrotyrosine (64). The latter mechanism would imply that several pathways have to be activated at the same time to cause significant nitration. In inflammatory tissues, for example, induction of macrophage NO synthase together with the activation of neutrophil NADPH oxidase and secretion of myeloperoxidase would constitute a highly efficient nitrating system operating through several pathways. Further studies should clarify which of these pathways, or which combinations of them, are responsible for tyrosine nitration in human disease.
A Computational Study of the Station Nightclub Fire Accounting for Social Relationships
Using agent-based modeling, this study presents the results of a computational study of social relationships among more than four hundred evacuees in The Station Nightclub building in Rhode Island. The fire occurred on the night of February 20, 2003 and resulted in 100 fatalities. After summarizing and calibrating the computational method used, parametric studies are conducted to quantitatively investigate the influences of the presence of social relationships and familiarity with the building floor plan on the death and injury tolls. It is demonstrated that the proposed model has the ability to reasonably handle the complex social relationships and group behaviors present during egress. The simulations quantify how intimate social affiliations delay the overall egress process and show the extent by which lack of knowledge of a building floor plan limits exit choices and adversely affects the number of safe evacuations.
Introduction
There is widespread consensus that people participate in social gathering during emergency egress (Aguirre et al. a,b; Chu & Law ; Moussaïd et al. ; Pluchino et al. ). Group members are often connected through pre-existing social relationships, e.g. familial or friendship, and their behavior is significantly affected by such social affiliations (Santos & Aguirre ; Moussaïd et al. ; Aguirre et al. b; Chu & Law ). Participants tend to interact with each other and stay together, potentially increasing the dangers they collectively face (Johnson et al. ; Cornwell ). Yet, of the many egress models that have been published to date, only a few are able to adequately handle social interaction and social emergence involving groups of evacuees (Santos & Aguirre ; Aguirre et al. a). Moreover, aside from a few cases, most existing models lack validation of their simulated results by real-world processes (Aguirre et al. a).
To address such gaps, this paper employs the agent-based egress simulation tool, EgressSFM, developed by Fang et al. ( ). This study offers the results of a numerical study using EgressSFM of social relationships among the more than four hundred evacuees of The Station Nightclub fire. The theory behind the computational platform is first summarized and background about the social organizational features of the gathering at the venue is presented. Key modeling parameters in EgressSFM are calibrated to detailed data from The Station Nightclub event. After calibration, parametric studies are conducted to quantitatively investigate the influences of the presence of social relationships and familiarity with the building floor plan on the death and injury tolls. Field information first collected by Aguirre et al. ( b) about the persons and groups that were present at the Station during the fire is used to evaluate the validity of the results of this study.
Background
The Scalar Field Method (SFM): Theory and validation

The SFM developed by Fang et al. ( , ) assumes that the behavior of each evacuee is controlled by a rational thinking process. Agents representing evacuees can perceive surrounding entities, evaluate different potential avenues of action, desired goals and social relationships, and represent those factors through locomotion. These goals may comprise the evacuee's need to escape through an exit, avoid collision with walls and obstacles, move towards related agents, keep given spacing to other agents, and respond to social relationships. By assuming that agents are analogous to charged particles in an electrical field, the SFM quantitatively evaluates these effects as a series of scalar quantities, termed virtual potential energies (VPEs). The VPEs from various sources can be directly added together to form a comprehensive field around an agent that signifies the additive or subtractive effects of issues of importance to the agent. Based on the charged-particle-in-field analogy, an agent will seek to minimize its VPE, i.e. the lower the value of VPE, the greater will be the desire of the agent to take action, and vice versa.
The VPEs are computed through a series of functions of distances to other agents or objects in the environment. While detailed equations can be found in Fang et al. ( ), some governing equations are shown next for the sake of completeness: where E1, E2 and E3 are the VPEs of the goals to exit a building, preserve private space, and not collide with other agents and with walls; d1, d2 and d3 are the distances between agent and exit, other agent and wall, respectively. c1, c2 and c3 are strength constants that are assigned to be , , . Δθ1 in Equation is the absolute value of the angle difference between the forward-facing orientation of an agent and the direction pointing to a target object. D20 and D30 are influence distances in Equations and , respectively. Agents and other entities within the influence zone can interact together in a VPE sense; otherwise they are unable to influence one another. D1a is m, and D1b is . m, and both are associated with the orientation of an agent; Ra is the radius of an agent in the direction of interest. To simplify calculation of Ra, an agent is assumed to be enclosed by an ellipse with principal radii RT and RT + RS, where RT and RS are the sizes of the torso and shoulder, respectively, with RT = . m and RS = . m. RT,other is the size of the torso of the other agent in Equation . E2,counter is a term that accounts for an agent's dodging behavior in a counter-flow situation, where agents attempt to prevent face-to-face situations as they are approaching other oncoming agents.
Along with the desired goals, two categories of social relationships are outlined: kin-relationship (such as spouses and dating partners) and friend-relationship (such as friends and co-workers). The former is assumed to be effective over greater distances and stronger than the latter. The friend-relationship is assumed valid for a limited distance beyond which it is considered ineffective. The governing equations for these two cases are as follows: where E4 and E5 are the VPEs of kin-relationship and friend-relationship; d4 and d5 are distances between kin-related agents and between friend-agents, respectively. c4 and c5 are strength constants that are assigned to be and − (negative sign for attractive effect). D4b is the distance within which agents can communicate and decide on their collective action: they can stop moving towards one another and seek to exit as a group. d4 is a term employed to ensure that an agent achieves the correct orientation, towards its target. D50 is the influence distance of E5 in Equation .
The computational process is described in Fang et al. ( ). Each agent in the simulation processes a sequence of algorithmic "decision-making" steps in every time-increment: observe and update perception; refresh sampling points for VPE computation; compute an evacuation route; estimate others' movements; calculate VPEs to reach a locomotion decision; and execute the decision. In the second-to-last step, an agent's locomotion is decomposed into translation and rotation. The agent first considers whether to rotate and afterwards translates once an orientation decision is made. Both rotation and translation decisions depend on VPE computations.
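The locomotion decision described above can be illustrated with a minimal sketch. All constants and the exact VPE functional forms below are illustrative placeholders (the published values did not survive extraction); only the mechanics follow the text: candidate moves are scored by summed VPE, and the agent takes the lowest-energy move.

```python
# Hedged sketch of the VPE-minimizing locomotion step. C_EXIT, C_SPACE,
# C_KIN, D_SPACE and the functional forms are assumptions, not the
# paper's calibrated values.
import math

C_EXIT = 1.0     # assumed strength of the exit-seeking VPE
C_SPACE = 2.0    # assumed strength of the private-space VPE
C_KIN = -1.5     # assumed kin strength; negative sign => attraction
D_SPACE = 1.5    # assumed influence distance (m) for private space

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def vpe(pos, exit_pos, others, kin_pos):
    e = C_EXIT * dist(pos, exit_pos)          # goal: reach the exit
    for o in others:                          # goal: keep private space
        d = dist(pos, o)
        if d < D_SPACE:
            e += C_SPACE * (D_SPACE - d)
    e += C_KIN / (dist(pos, kin_pos) + 0.1)   # attraction to kin member
    return e

def step(pos, exit_pos, others, kin_pos, speed=1.0, dt=0.25):
    """One locomotion decision: try 16 headings, keep the lowest-VPE move."""
    best, best_e = pos, vpe(pos, exit_pos, others, kin_pos)
    for k in range(16):
        a = 2.0 * math.pi * k / 16.0
        cand = (pos[0] + speed * dt * math.cos(a),
                pos[1] + speed * dt * math.sin(a))
        e = vpe(cand, exit_pos, others, kin_pos)
        if e < best_e:
            best, best_e = cand, e
    return best

# demo: a lone agent whose kin member is already at the exit
pos = (5.0, 5.0)
for _ in range(20):
    pos = step(pos, (0.0, 0.0), [], (0.0, 0.0))
```

Because the exit and kin terms both decrease toward the origin here, the demo agent walks a straight line to the exit; with other agents inside the private-space zone, the spacing term would deflect that path.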
The value of the parameters used in EgressSFM and detailed validation studies can be found in Fang et al. ( , ). The validation exercises undertaken in Fang et al. ( , ) include comparison of simulation results to those of field experiments and other refined models.
The Station Nightclub Building Fire
Once pyrotechnics ignited polyurethane foam lining the walls and ceiling of the band platform and dance floor, the fire spread aggressively. Film shows that at seconds after initiation, the dense black smoke layer was near the floor, while ". . . the entire club was engulfed in flames within minutes of initiation" (Gill & Laposata ). The fire occurred on February , during a heavily attended night. The building, which had accumulated a number of risks, some quite severe, over the years, mainly through the practice of grandfathering the structure from recent and safer municipal building code requirements (Barylick ), was a single-story wood frame building shown in Figure . It was comprised of multiple spaces or ecologies, including a dance floor and a raised platform in front of it for the performers, a sunroom, a dining room, main bar, kitchen, dart room, bathrooms and office. It had four exit accesses: front door entrance, main bar side, kitchen side, and platform side. The crowd began to evacuate as soon as it became clear that the fire was occurring, or about twenty-five seconds after ignition. The last person to escape came out minutes and seconds after ignition, although most survivors got out during the first seconds (other details of the fire and a timeline are in Tubbs & Meacham ( )). For the purposes of this study, the simulation's timeline count starts the moment the crowd began to evacuate (considered to be seconds after ignition).

The platform's side exit (see Figure , west side of building) was blocked by the spreading fire and became impassable about seconds after ignition. Only occupants escaped through it, while on the other side of the building only escaped through the kitchen door, probably because of its very poor signage and lack of visibility. The remaining survivors evacuated through the front entrance and the main bar side exit, and respectively.
When other would-be evacuees clogged the spaces near the main hall, main bar and the corridor of the front entrance, some of the other occupants at the back of these queues searched for alternative egresses, eventually breaking the windows of the main bar room and sunroom about seconds after the fire started. In this manner another occupants escaped the fire. Clearly, attendees were not blindly following others to the main entrance in a herd-like manner but showed initiative and creativity as they tried to exit the building. Nevertheless, occupants died from severe burns and smoke injuries, making it the fourth deadliest fire in the nation's history.
The social organization of the Station Nightclub the night of the fire .
The Station Nightclub in West Warwick, Rhode Island was a popular dance hall for people in the city and region. percent of the patrons the night of the fire had visited the nightclub previously. An even greater percent ( percent) saw the sparks that started the fire. The gathering was composed of older than average concert goers (median age = years) and had an unusually high degree of sociality, amity and goodwill among its members (for an in-depth, affecting although tragic reconstruction of the often intimate relations among the people in attendance see Barylick ( )). Only percent of the people in the Station that evening were by themselves. The rest were members of groups. percent of them were in groups of persons, percent were in groups of and , and percent were in groups of or more members. Almost half ( percent) of these groups were made up of coworkers and friends, dating partners, and kin and spouses (Aguirre et al. a,b, Unpublished; Torres ; Best ). percent of the members of groups were in close proximity of each other when the fire started (with the average distance of group members to each other less than linear feet). Size of group and distance among group members are highly statistically correlated (Pearson R . ).
The social cohesion produced by the norms shared by members of these groups can be measured "in extremis", even if ghoulishly, by examining the extent to which group members stayed with other members of their groups in the midst of this fire even if by doing so they augmented their chances of death and injury, showing the strengths of systems of social control that operated in this instance. For, as with injury (Pearson R . between size of group and the chance of injury), the mean number of dead increased almost monotonically with the sizes of the groups (Pearson R . ). Thus, the groups of persons had a . mean number of dead persons; groups of had a . mean number of dead; groups of had a mean of . ; groups of a mean of . ; groups of a mean of . ; groups of a mean of . ; groups of had a . mean number of dead; and groups of or more persons had a . mean number of dead members (similar findings are reported, among others, by Cornwell ( )). A somewhat unusual characteristic of this gathering is that when the fire struck, there tended to be a division by space and gender inside the building, for males separated from the other members of their groups tended to congregate at or near the bar while their female counterparts congregated in the dance floor. The result is that once the fire commenced, there was a good deal of movement in opposite directions of men and women searching for each other, unintentionally creating "knots" of people who blocked the paths of other would-be evacuees, in an environment that was deteriorating very rapidly as flames engulfed the building.
Human density (number of persons per square foot) of the ecologies inside the building was also an important predictor of deaths and injuries (Pearson R . ). The highest death rate occurred in the ecology to the north of the main bar in front of the bar windows ( . deaths per square feet). Perhaps many of those who perished in this area migrated to the space trying to reach the main entrance nearby and were overtaken by smoke and fire due to delays in evacuating caused by the large number of people in front of them who were also hoping to exit through the front door. The high percentage of dead and injured in this fire is partly the result of these social organizational features. Many victims lost precious seconds in the search for their group members, while others were inconvenienced by the knots of people that formed in the middle of the building. For these and other reasons, the resulting delays in evacuating placed many of these victims in the back of the throngs of people who were also trying to escape the fire.
In the next sections, the Scalar Field Method is presented and the relationships of these groups are comprehensively modeled and quantitatively analyzed.
Assumptions and Model Implementation
Environmental hazards

Environmental hazards are harmful to an evacuee's health. In particular, fire can lead to burn injuries and fatality, and the toxic effects of smoke will reduce an evacuee's stamina (Bryan ; Best ). An agent's mobility is related to whether or not its "health" is impaired (Pauls ; Klote et al. ). To describe an agent's health, stamina is quantified as a scalar number termed energy level, or EL (terminology adapted from Aguirre et al. a; Best ; Aguirre et al. Unpublished), not to be confused with the VPE used by agents to model their rational thinking process. EL is a non-negative quantity, and the agent's mobility is assumed to be dependent on its EL. The lower the energy level, the more injured the agent is and the less likely it can move and exit the building. Once the energy level is zero, the agent is assumed to have died.
The building and environment model of the EgressSFM takes into account fire and smoke hazards. Fire is presented herein as a series of rectangular areas with stochastic sizes and start times.
An Agent's Energy Level
Before the fire occurs in the simulation, each agent is assumed to have an initial EL based on occupant demographics, with a stochastic element added to account for variability. The initial EL values are taken from Aguirre et al., Torres, and Best. After the simulation begins, each agent in the building suffers smoke damage over time, manifested by a reduction in energy level, until it either evacuates or is killed. An agent's energy level is computed as follows (based on Best): When an agent does not move out of an active fire region, its energy level drops instantaneously to zero, signifying that it is deceased.
Smoke leads to a gradual reduction in agents' energy levels in all building spaces except as noted next.
The EL decreases at three successively specified rates (in EL/second) during three consecutive time periods of the simulation.
Based on an analysis of oxygen volume fractions conducted by Grosshandler et al., as shown in Figure, agents in the main bar room are assumed to suffer damage at a lower rate (a specified fraction of the values above) because: (1) this room is far away from the fire, and the fire and smoke are impeded by the walls of the front entrance corridor and kitchen; and (2) this room has access to one side exit and multiple windows that can provide more fresh air than other rooms.
When an agent is present in an oxygen zone, the damage rates of EL are divided by a negative factor (Gill et al.) to recognize the beneficial effects of oxygen. As a result, the EL gradually increases in oxygen zones.
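As a minimal sketch, the EL rules above can be combined into a single per-time-step update. All numeric rates and factors below are illustrative placeholders, not the paper's calibrated values.

```python
def update_energy(el, dt, in_fire=False, in_main_bar=False, in_oxygen=False,
                  smoke_rate=1.0, main_bar_factor=0.5, oxygen_factor=0.5):
    """One time step of an agent's energy-level (EL) update.

    smoke_rate, main_bar_factor, and oxygen_factor are hypothetical
    values standing in for the paper's elided calibration.
    """
    if in_fire:
        return 0.0                      # active fire region: EL drops to zero
    rate = smoke_rate * (main_bar_factor if in_main_bar else 1.0)
    if in_oxygen:
        # oxygen zones reverse the damage, so EL gradually recovers
        return el + rate * oxygen_factor * dt
    return max(0.0, el - rate * dt)     # smoke damage; EL is non-negative
```

The `max(0.0, ...)` clamp encodes the assumption that EL is non-negative and that a zero EL marks a deceased agent.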
An injured agent is assumed to suffer mobility loss that is linearly dependent on the ratio of its current energy level to its initial energy level. If the energy level is equal to or higher than a threshold percentage of the initial amount of energy, the agent's maximum velocities are not influenced. Otherwise, the agent's maximum velocities in various directions are lowered linearly with the remaining energy level, as shown in the equation below.
max.v / max.v_original = 0.2 + (energy level / initial energy level)
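Read as pseudocode, and assuming the elided threshold is 80% (the value at which the linear factor 0.2 + EL/EL0 reaches exactly one), the mobility reduction might look like:

```python
def max_velocity(v_original, el, el_initial, threshold=0.8):
    """Scale an agent's maximum velocity linearly with remaining energy.

    The 0.8 threshold is an assumption, chosen because the factor
    0.2 + EL/EL0 equals exactly 1.0 there; treat it as illustrative.
    """
    ratio = el / el_initial
    if ratio >= threshold:
        return v_original               # mobility unaffected above threshold
    return v_original * (0.2 + ratio)   # linear degradation below it
```

Note that a fully exhausted agent (EL = 0) still retains 20% of its original speed under this formula, which matches the constant 0.2 offset in the equation.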
Egress model implementation
As modeled in EgressSFM (Fang et al.), the Station Nightclub building model comprises a collection of exits, doors, windows, and interior spaces. Agents that reach exits are considered to have safely exited. Each exit has an open and close time that determines whether this exit is available (passable) or not, respectively. Application of such open/close times is necessary to account for dynamic conditions during the fire, e.g., the side exits became impassable as the fire progressed. Windows are a special set of exits that are normally impassable; they can switch function to enable egress after being broken, at the times listed in the table below. Such times are based on the simulation timeline starting when the crowd begins to evacuate, as initially estimated by Grosshandler et al.
Table: Time of opening and closing of the Front Entrance, Platform Side Exit, Main Bar Side Exit, Kitchen Side Exit, and All Windows.
The agent's normative behavior is controlled by the Scalar Field Method as discussed earlier. The agent model is implemented to address the demographic and interview data of the Station Nightclub fire as follows. Personal demographic information of age, gender, initial energy level, and prior visit experience is considered. The term 'prior visit experience' pertains to whether the agent has visited the building before the night of the fire, i.e., it accounts for familiarity with the floor plan, which presumably facilitates successful evacuation from the building.
Initial location and orientation. The initial location of each occupant is determined based on coding of survivor interviews (Aguirre et al.; Torres). Each agent's initial orientation is randomly selected for each simulation.
Social affiliation. The majority, but not all, of the agents are members of one of the social groups, which are characterized by a specified type of relationship: they were either alone, with co-workers, or with friends, dating partners, family members, or multiple group types. The first term refers to an individual without a pre-existing relationship to others. The last term means an agent is in more than one group type.
. Group leader. A social group can have a leader that influences other members' decisions in this group.
In the case of the Station Nightclub scenario, group leaders were identified and coded based on survivor interview data (Aguirre et al. Unpublished; Torres; Best).
Age determines an agent's mobility before being injured. The maximum speeds of each agent depend on its age category. Adult agents are assumed to have a maximum forward speed that is randomly selected from a specified range to reflect the stochastic nature of moving individuals, and agents in the "children + seniors" category have a lower range of maximum forward speeds. The lateral and backward speed limits are likewise lower for "children + seniors" than for "adults". The maximum rotational capability is randomly determined within a specified range for the "adults" and is half of that value for "children + seniors". These speeds are based upon information in previous studies (Tang & Ren; Thompson). The initial orientation of each agent is allocated randomly.
Prior visit experience influences an agent's awareness of side exits: agents who had never visited the Station Nightclub lacked this awareness and had a higher probability of missing an exit near them. The data generated by Torres and Aguirre et al. show that close to half of the evacuees had no prior visit experience. Grosshandler et al. mention that a large share of the occupants believed the main entrance to be the only exit. In this study, prior visit experience is assumed to determine an agent's knowledge of the floor plan when the evacuation starts: an agent without prior visit experience is aware of the front entrance exit and main bar side exit only, and is assumed to be unaware of other side exits. An agent who visited the building previously is assumed to know all the exits. However, an agent can learn from the surrounding environment, update its knowledge, and consider other exits as alternative potential destinations.
For simplicity, this study assumes a dichotomous coding of social relationships, either friend- or kin-related. Thus, each agent has the same type of social relationship to other group members in the same group. Spouses and dating partners are interpreted as kin-related in the SFM, and co-workers and friends are categorized as friend relationships. Members of more than one group are also assumed to be friend-related. If a group leader is specified in a strongly bonded relationship like spouses, the group leader is responsible for leading the group, and the other group members are assumed to follow the leader. To do so, the leader establishes kin-related interactions with each of the other group members, but non-leader members only set up a social relation with the leader. In addition, the non-leader members duplicate the leader's decision to follow a specific escape route.
An agent has multiple potential choices of destination for egress, since there are four exits and two walls with windows. Selection of exits, particularly the platform exit, was discussed by previous researchers such as Grosshandler et al. and Best. Both studies assumed the occupants to always select the closest exit and applied algorithms to control their decisions. The former used two software packages, buildingEXODUS and Simulex. In the simulation with buildingEXODUS, the platform exit was assumed to become impassable after a specified time, and the front entrance was blocked shortly thereafter. In the Simulex simulation, Grosshandler et al. first calculated the number of occupants who would use the platform exit and then made the platform exit visible only to these occupants. The latter study, conducted by Best, assumed that only a subset of the occupants were aware of the existence of the platform exit and that a fraction of the occupants believed that the main entrance was the only exit.
In this study, an agent generally selects the exit to which the travel distance from the agent's current location is shortest, although the agent is not forced to use it. The final choice of exits depends on the availability of exits, prior visit experience, and group leadership, as discussed previously. To address the fact that only a limited number of people escaped through the platform exit, a penalty is added to each agent's perception of this particular exit's distance, so as to make it less desirable as an exit. This empirical approach is motivated by two facts: (1) this exit door swung inwards rather than outwards, and hardware on the door was broken (Grosshandler et al.); and (2) the exit was close to the fire and covered by heavy smoke shortly after the fire ignited. The penalty distance is selected based on the parametric study shown in Figure, in which the number of agents using the platform exit is simulated with various penalty distances. As can be seen, the correct number of agents using the exit corresponds to the selected penalty. Figure: Parametric study of the 'penalty' distance to the platform exit.
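The shortest-perceived-distance rule with a per-exit penalty can be sketched as below; the exit names, coordinates, and penalty value are hypothetical.

```python
import math

def choose_exit(agent_pos, exits, penalties=None):
    """Return the exit with the smallest perceived distance,
    i.e. Euclidean distance plus any per-exit penalty.

    exits maps exit name -> (x, y); penalties maps exit name -> extra
    perceived distance (the calibration lever for the platform exit).
    """
    penalties = penalties or {}
    def perceived(item):
        name, (x, y) = item
        d = math.hypot(x - agent_pos[0], y - agent_pos[1])
        return d + penalties.get(name, 0.0)
    return min(exits.items(), key=perceived)[0]
```

With a large enough penalty on the platform exit, agents that are geometrically nearer to it still prefer another exit, which is exactly the calibration lever used in the parametric study.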
Simulation and Hypothetical Investigations of Social Traits
The egress scenario in the Station Nightclub Building Fire is modeled in EgressSFM. The simulation results are shown in the table below. Because of the stochastic nature of the simulations, twenty simulations are conducted and average values and standard deviations are reported. The computed numbers of occupants using each exit and of people dying are compared to the actual values as reported by Aguirre et al. (see also Best). Note that the sum of 'actual' occupants in the table does not equal the number of agents simulated herein. The difference is due to discrepancies in the published literature about the actual number of patrons in the nightclub, e.g., as reported in Grosshandler et al. Nevertheless, the difference between the sums of actual and simulated people is small. As shown in the table, the simulation results match the actual statistical data reasonably well. The implications of this favorable match are discussed later on.
Table: Simulated and actual data of escaped and deceased occupants (actual value, simulated mean, and standard deviation for the bar exit, kitchen exit, platform exit, windows, and deceased).
To give an impression of how the simulation progresses, snapshots at the initial starting point and a series of intermediate times during one run of the simulation are taken and presented in Figure. An important observation is that, for most victims, the egress process lasted only a short time due to the extreme severity and rapidity of the fire and smoke that enveloped the building. During the egress process, pre-existing social relationships came into play and influenced agents' decisions and behaviors. Group behavior driven by strong interactions influenced neighboring agents and led to clogging and delays in egress (cf. Aguirre et al. Unpublished). For example, agents in social groups engage in kin-related interactions at particular instants of time rather than in prompt evacuation, thereby delaying themselves and others. In the remainder of Figure, the color variation of the agents signifies different degrees of impairment; specifically, an agent changes color gradually from green (and its variations) to yellow to red based on the remaining energy level compared to the initial level. Dead agents are colored light gray. Generally, the main exit, bar exit, and windows played primary roles for egress, and other side exits were ignored by the majority of the agents, perhaps due to loss of visibility. Moreover, the toxic effect of smoke impaired agents' health and lowered injured agents' mobility and their ability to egress from the building.
Two areas, as highlighted in Figure, are found to be critical for the overall egress efficiency of the occupants in the building. These areas are the connection between the main bar and main hall and the connection between the front entrance and main hall. Along with the corridor of the front entrance exit, these areas are filled with agents and become problematic because of the presence of strong social bonding, e.g., spouses and dating partners. Agents driven by such interactions tend to congregate with their groups, and such gatherings lead to traffic congestion in the connection areas. As a result, as initially reported by Best and Aguirre et al. and confirmed herein, the overall egress is delayed by these bottlenecks.
To investigate the influence of social traits in a quantitative manner, two series of parametric simulations are conducted. The first is based on "break down", which is defined as the degree to which an agent ignores its social affiliations. The second focuses on the effect of prior-visit experience. Each simulation shown hereon is conducted twenty times to account for the stochastic nature of the problem.
Break down of social relationships
As shown in the table below, the numbers of agents using various exits and those that are deceased are compared in a sensitivity study of "break down" probabilities ranging from 0% to 100%. In particular, the case of 0% assumes that every agent responds to its pre-existing relationships, and the case of 100% assumes that all agents ignore their social affiliations and egress alone as individuals. Four plots are drawn in Figure to showcase the tendencies of front entrance exit use, main bar exit use, window exit use, and deceased agents versus the "break down" probability, respectively. As shown, they are generally linearly dependent on the break down probability. More agents successfully evacuate through the front entrance exit and main bar exit as the break down probability increases, i.e., as more of them become free agents.
Table: Sensitivity study of the effect of the "break down" probability (probability; main exit, bar exit, kitchen exit, platform exit, and window use; deceased).
An example of the 0% case (pre-existing relationships fully active) is given in Figures a and b. In Figure a, like-colored agents are members of the same group. As they attempt to congregate, they slow down all other agents by blocking their path. In Figure b, the variation in green color indicates agents with differing EL: the darker the shade, the lower the EL. These variations in mobility contribute to an overall slowdown in the evacuation process. Figure: Sensitivity study of the effect of the "break down" probability. The number of agents using windows to evacuate decreases because fewer agents remain in the building by the time the windows become passable. As a result, the number of deceased agents decreases and, when every agent drops its social relationships, is almost half of that in the 0% condition. Clearly, the presence of social relationships increases potential risk and delays the overall egress. This result is consistent with many previous studies.
Prior visit experience
An agent who has no prior visit experience is considered to be aware only of the main and bar exits and unaware of others such as the kitchen and platform exits. To explore the influence of such limitations, a set of control tests, comprising the 0% and 100% "break down" cases, is conducted under a hypothetical situation in which all agents are assumed to have prior visit experience and awareness of the full floor plan. The simulation results are drawn in four pie charts as shown in Figure, in which the numbers of agents using various exits and the number of deceased agents are divided by the total number of agents and presented as different components. Figures a and b correspond to the 0% and 100% "break down" conditions, respectively. As expected, significantly more agents evacuate through the platform exit and kitchen exit, so fewer agents are deceased. On the other hand, the numbers of agents who use the front entrance exit and the main bar exit are not affected.
Meaningfulness and Implications of the Computational Study
Even though EgressSFM was extensively validated in Fang et al., it was not validated under the same conditions for which it was exercised in this work. The Station Nightclub scenario modeled is complex and incorporates many levels of multi-dimensional interactions that occur between numerous actors in the simulation, i.e., agents, physical building components such as walls and exits, fire regions, and oxygen zones. Each of these interactions is modeled based on key assumptions as outlined in the manuscript. Thus, it is naturally difficult to draw strong conclusions about the fidelity of the simulation results. Yet, the observed reasonable comparison to field data lends credence to the simulation model and suggests that it is capable of capturing some key aspects of the event. Obviously, this is not a rigorous validation of the model given the extent of uncertainties and assumptions. However, it is acceptable since the methodology employed is the only way available at present to carry out ethical studies of these crisis evacuations and conduct quantitative parametric research on a uniquely complicated and multi-disciplinary problem that has implications for life safety within facilities.
Summary and Conclusions
This paper reports on the use of the EgressSFM platform, which uses the Scalar Field Method (SFM), to model a historical egress scenario, the Station Building Fire. The platform is modified to incorporate the environmental hazards of fire and smoke, and computes each agent's stamina as an energy level, which impacts the agent's mobility. The study considers the demographics and social relationships of the occupants in the building when the fire happened. When calibrated, the simulation captures the realism of the actual data and shows EgressSFM's ability to reasonably handle the complex social relationships and group behaviors present during egress. The parametric simulation exercises show in a quantitative manner that the presence of intimate social affiliations delays the overall egress, and that lack of knowledge of the building floor plan limits exit choices and adversely affects the number of safe evacuations.
On the Application of Directional Antennas in Multi-Tier Unmanned Aerial Vehicle Networks
This paper evaluates the performance of downlink information transmission in three-dimensional (3D) unmanned aerial vehicle (UAV) networks, where multi-tier UAVs of different types and flying altitudes employ directional antennas for communication with ground user equipments (UEs). We introduce a novel tractable antenna gain model, which is a nonlinear function of the elevation angle and the directivity factor, for directional antenna-based UAV communication. Since the transmission range of a UAV is limited by its antenna gain and the receiving threshold of the UEs, only UAVs located in a finite region in each tier can successfully communicate with the UEs. The communication connectivity, association probability as well as coverage probability of the considered multi-tier UAV networks are derived for both line-of-sight (LoS) and non-line-of-sight (NLoS) propagation scenarios. Our analytical results unveil that, for UAV networks employing directional antennas, a necessary tradeoff between connectivity and coverage probability exists. Consequently, UAVs flying at low altitudes require a large elevation angle in order to successfully serve the ground UEs. Moreover, by employing directional antennas an optimal directivity factor exists for maximizing the coverage probability of the multi-tier UAV networks. Simulation results validate the analytical derivations and suggest the application of high-gain directional antennas to improve downlink transmission in the multi-tier UAV networks.
I. INTRODUCTION
Unmanned aerial vehicles (UAVs) have gained increasing interest in both academia [1], [2] and industry [3], [4]. By flying in the sky at moderate to high speeds, UAVs can provide flexible short-term services such as wireless information collection, traffic surveillance, and disaster information dissemination to wherever demand occurs at a low cost [5]. An energy-efficient data collection method was proposed for UAV-aided networks in [6], which can ensure fairness for ground sensors. For surveillance of multi-domain Internet-of-Things (IoT) devices, an approach based on linear integer programming was proposed in [7] to minimize the maximum flying range of UAVs. In [8], joint optimization of the trajectory and scheduling of UAVs was investigated for a UAV-assisted emergency network, where UAVs are deployed to re-establish communication between ground devices and surviving BSs in the aftermath of natural disasters. Exploiting the highly flexible and low-cost deployment of UAVs, several works have suggested employing UAVs as aerial base stations (ABSs) to serve ground user equipments (UEs) directly and offload traffic for terrestrial cellular networks. A novel three-dimensional (3D) ABS deployment was investigated in [9] for maximizing the number of UEs within the ABSs' coverage while fulfilling the quality-of-service (QoS) requirements of UEs. The performance of a two-tier network consisting of ABSs and terrestrial cellular base stations (BSs) was analyzed in [10], where ABSs are deployed to offload cellular traffic in hotspot areas. A distributed algorithm was further proposed to minimize the average distance between UAVs and UEs without degrading the communication between UAVs and cellular BSs [11].
The existing works [9]-[11] have considered omni-directional antennas for UAV communication. However, as omni-directional antennas employ uniform antenna gains in all directions, the performance of UAV communication is severely limited due to excess interference from neighboring UAVs and terrestrial nodes, especially at high altitudes with abundant line-of-sight (LoS) propagation [4]. To tackle this issue, the application of directional antennas for efficient UAV communication has recently gained tremendous attention due to the associated advantages of enhanced signal transmission, interference mitigation, and payload deployment. In particular, different from omni-directional antennas, a UAV with a directional antenna generates highly directive beams with strengthened signal power in the main lobe and reduced power leakage in the side lobes. Consequently, directional antennas with high antenna gain can improve the communication distance and the data rate in the downlink transmission without increasing the power consumption of the UAVs. Moreover, due to the small footprint, the interference to other UAVs and the terrestrial cellular system is reduced by employing directional antennas. The resulting interference mitigation capability can significantly enhance the performance of UAV networks. Furthermore, due to the size, weight, and power (SWAP) constraints of UAVs, an on-board deployment of large-scale antenna arrays is usually difficult, whereas directional antennas with large antenna gain and flexible payload deployment provide a promising alternative for UAVs. Considering directional antennas in UAV networks, joint optimization of UAVs' flying altitude and antenna beamwidth was investigated in [12] for maximizing the throughput of downlink multicasting, downlink broadcasting, and uplink multiple access, respectively.
A long-range broadband aerial communication system using directional antennas was proposed in [13], where a Wi-Fi infrastructure established in the air is exploited for real-time communication. Directional antennas have also been combined with millimeter-wave techniques for high-resolution 3D localization in [14], where multi-level beamforming with compressive-sensing-based channel estimation is usually employed [15], [16].
Motivated by its huge potential, this paper investigates the application of directional antennas in UAV networks with a focus on evaluating the network performance in the downlink. Different from terrestrial cellular networks, a UAV network occupies a range of altitudes in the air and inherently has a 3D network topology. This is usually captured by a multi-tier network model for performance evaluation, where UAVs of a given tier keep flying at a certain altitude while communicating with the ground UEs. Considering a multi-tier UAV network deployed atop terrestrial heterogeneous networks (HetNets), a cell management framework was proposed in [17] to improve the communication coverage and retransmission time for UEs in congested networks. In [18], the authors investigated the spectral efficiency of downlink multi-tier UAV networks and derived the optimal intensities and altitudes for UAVs in different tiers. Assuming omni-directional antennas, the association probability, successful transmission probability, and area spectral efficiency of multi-tier UAV networks were analyzed in [19]. Despite the fruitful developments in the aforementioned works [17]-[19], a comprehensive performance evaluation for multi-tier UAV wireless networks employing directional antennas is still lacking, which may be hindered by two potential challenges. In particular, in the existing literature, especially for millimeter-wave communication, the gain of a directional antenna was usually modeled as a flat-top antenna pattern with the maximum antenna gain attained in the main lobe [20]. Although this model is suitable for receivers at a fixed location and within a short transmission distance, it is not applicable for UAV communication networks. This is because a UAV flying in the sky usually changes its position and hence the angle of arrival (AoA) and angle of departure (AoD), which in turn affect the antenna gain.
In this case, the antenna gain model should capture the complicated channel variations associated with UAV communications along the flying trajectory, including both line-of-sight (LoS) and non-line-of-sight (NLoS) propagation conditions. Therefore, a novel tractable antenna gain model should be introduced for UAV communication networks.
On the other hand, battery-powered UAVs usually suffer from a severely limited energy supply. To reduce energy consumption and prolong the lifetime, signal transmission at UAVs has to respect a maximum transmit power budget and, at the same time, ensure at least a minimum receiving power at the ground UEs required to activate the receiving circuits. Therefore, transmit power management at different flying heights is crucial for UAVs and has been extensively studied in the literature while assuming omni-directional antennas [21]-[23]. In [21], optimal power control for UAV-assisted networks serving underlaying D2D communication was investigated for minimizing the energy consumption and increasing the battery's service time. In space-air-ground three-tier HetNets, the hovering altitude and transmit power of UAVs were jointly optimized to reduce the cross-tier interference [22]. In [23], joint trajectory and transmit power optimization was investigated for UAV communication to maximize the average secrecy rate between the UAV and ground UEs. Due to the signal enhancement and interference mitigation enabled by directional antennas, which are further affected by the 3D mobility of UAVs, the impact of transmit power management on UAV communication, particularly at different flight heights, needs to be newly investigated but has not been reported in the literature.
To address both challenges, in this paper, we propose a framework for modeling the downlink of UAV networks where, different from [9]-[11], [14]-[16], [18], UAVs are equipped with directional antennas. We assume that the ground UEs are associated with the serving UAV that provides the maximal receiving power while the UAVs in each tier employ the same transmit power for communication. The beam shaped by a directional antenna has a complicated impact on the user association in UAV networks. For example, UEs may prefer a UAV located far away as serving ABS if the UAV is transmitting to the UEs in the main lobe. Moreover, for ground UEs located in the main lobe, directional antennas deployed at UAVs will improve the connectivity probability as the UEs can easily activate their circuits, i.e., satisfy the receiving signal threshold. In contrast, for UEs located out of the main lobe, the connectivity probability reduces. Therefore, by employing directional antennas, there exists an interesting tradeoff between connectivity and coverage of UAV networks, whereas such a tradeoff is unavailable for wireless networks employing omni-directional antennas. In this paper, we present a detailed analysis of the coverage and connectivity for the downlink of K-tier UAV networks. Stochastic geometry, which has been widely used for analyzing cellular networks [24], UAV networks [25], and millimeter-wave communication [26], is adopted in this paper to obtain closed-form results for performance evaluation. Our derived results take into account the directivity of antenna elements, the receiving threshold, and the transmit output power. We note that, due to the impact of its antenna pattern, the application of directional antennas leads to a much more complicated performance analysis than in [9]-[11], [14]-[16], [18]. In [27], directional antennas with flat-top, sinc, and cosine pattern functions are considered for millimeter-wave and cellular networks.
However, the bounds for the achievable transmission rate are obtained by utilizing these approximate pattern functions, which fail to capture the directivity factor of the directional antenna and its impact on the performance of wireless networks. In this paper, we adopt a novel tractable pattern function for directional antennas, which enables us to characterize the connectivity, association probability, and coverage probability of the considered UAV networks while capturing the impact of the antenna directivity factor.
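For intuition, one standard tractable pattern with an explicit directivity factor is the cosine-power pattern mentioned above. This is not the paper's own gain model; it is shown only to illustrate how a directivity factor trades beamwidth for peak gain.

```python
import math

def cosine_gain(theta, m):
    """Cosine-power gain G(theta) = 2(m+1) cos(theta)^m for |theta| < pi/2.

    theta is the angle off boresight and m the directivity factor.
    The 2(m+1) normalization keeps the gain averaged over the full
    sphere equal to one, so a larger m narrows the beam while raising
    the peak gain.
    """
    if abs(theta) >= math.pi / 2:
        return 0.0                      # no radiation into the back hemisphere
    return 2.0 * (m + 1) * math.cos(theta) ** m
```

With m = 0 the pattern radiates uniformly over one hemisphere with gain 2, while m = 9 yields a peak gain of 20 at boresight, illustrating the connectivity/coverage tradeoff discussed in the text.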
The contributions of this paper are as follows: • We propose a tractable framework for modeling downlink transmission employing directional antennas in Ktier UAV networks. The associated probability distribution of communication distance between UEs and UAVs is analyzed taking into account the transmit output power, flying height, and directional antenna pattern of the UAVs. We find that a maximal transmission height of UAVs exists, independent of the directivity factor of directional antennas, within which UAVs can successfully connect the UEs. • The probabilities of connecting the UAV network and associating with given tier of UAVs are both derived in closed form. We show that, with the adopting of directional antennas, ground UEs are prone to connect to UAVs flying at a large height. • By investigating the coverage probability of downlink transmission in K-tier UAV networks, we show that UAV network equipped with directional antennas of large directivity factor can achieve much higher coverage probability than that with omni-directional anten-nas. The remainder of this paper is organized as follows. In Section II, the system model of the considered multi-tier UAV networks employing directional antenna is presented. The probability distribution of communication distance for ground UEs served by the considered UAV networks is derived in Section III, where the impact of transmit output power, receiving threshold, and directivity of antenna element is revealed. In Section IV, the connectivity and the association probabilities of the considered UAV networks are analyzed, based on which the total coverage probability is further derived in Section V for LoS and NLoS transmissions. The derived results are validated via Monte Carlo simulation in Section VI, where the impact of maximal transmission distance, density and height of UAVs, and directivity factor of directional antenna on the downlink system performance is revealed. Finally, Section VII concludes the paper.
A. NETWORK MODEL
As shown in Fig. 1, we consider downlink transmission in a 3D UAV network comprising K tiers of ABSs. The ABSs in each tier are located at a given height but are randomly distributed horizontally. Let (m i,k , h k ) be the 3D location of ABS i in tier k ∈ {1, · · · , K}, where h k is the height and m i,k denotes the horizontal coordinate. We assume that the horizontal locations of ABSs in tier k, denoted by Φ k = {m i,k ; i = 1, 2, 3, · · ·}, follow a homogeneous Poisson point process (PPP) with density λ k . The ABSs in tier k transmit signals with output power P k . The UEs are randomly distributed on the ground with a height assumed to be zero. The locations of the UEs are modeled by a homogeneous PPP with density λ u , denoted as Φ u = {x i }, which is independent of Φ k . For a tractable analysis, we assume that the ABSs of each tier move only horizontally while providing wireless communications in the considered multi-tier UAV networks. Since the speed of UAVs is relatively low, the locations of the ABSs in the air are considered to be fixed during the transmission of a data packet, whereas the ABSs may fly horizontally to different locations for the transmission of multiple data packets. Hence, the spatial distributions of the ABSs under random horizontal movements can still be captured by the PPP model. We assume that the channel fading remains constant within a time slot, as commonly adopted for performance analysis of UAV networks [28], [29].
The ground UEs are associated with the ABSs in the tier that provides the strongest average receiving power. In this paper, we aim to analyze the performance of the considered multi-tier UAV networks and ignore terrestrial BSs, which either do not exist in the considered area or employ radio resources orthogonal to those of the ABSs. Moreover, we focus on analyzing a typical UE U 0 located at the origin O and the typical cell C 0 , where the ground UEs within C 0 , including the typical UE, are served by the same UAV. The derived results for the typical UE can be extended to other UEs on the ground by applying Palm theory [30]. We assume open access within the UAV network, whereby the ground UEs are allowed to access ABSs in all tiers to maximize the coverage probability. However, all downlink transmissions within the UAV network occupy the same spectrum, whereby the UAVs in the network can interfere with each other while serving the ground UEs.
B. DIRECTIONAL ANTENNA
The ABSs are equipped with directional antennas for communication with the ground UEs. In general, the antenna gain of a directional antenna, denoted as G(ϕ, ψ), is a highly nonlinear function of the azimuth angle ϕ ∈ [−π, π] and the incidence angle ψ ∈ [0, π/2], which complicates the performance analysis. To simplify the derivations, in this paper, we consider conic directional antenna elements. The antenna gain of a conic antenna is given as G(ϕ, ψ) = A_er cos^m(ψ), where A_er is the maximal gain of the antenna element and m is the directivity factor, which depends on the beam shape [31].
The directivity D of the antenna element is obtained from the following antenna equation [32]. By substituting G(ϕ, ψ) into (1), the directivity of the conic antenna is D = 2(m + 1), which depends only on the directivity factor. The considered conic antenna gain model enables us to evaluate the impact of the shaped beam on the downlink performance of UAV networks. Fig. 2 shows the normalized power pattern G(ϕ, ψ)/A_er for different m.
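To make the pattern model concrete, the sketch below (an illustrative rendering, not the authors' code) evaluates the conic antenna gain G(ϕ, ψ) = A_er cos^m(ψ) and its directivity D = 2(m + 1):

```python
import math

def conic_antenna_gain(psi: float, a_er: float = 1.0, m: float = 2.0) -> float:
    """Conic directional antenna gain G(phi, psi) = A_er * cos(psi)**m.

    The gain does not depend on the azimuth angle phi; psi in [0, pi/2]
    is the incidence angle and m is the directivity factor."""
    return a_er * math.cos(psi) ** m

def directivity(m: float) -> float:
    """Directivity of the conic antenna: D = 2(m + 1).

    Follows from the antenna equation, since cos(psi)**m integrates to
    2*pi/(m + 1) over the hemisphere (weighted by sin(psi))."""
    return 2.0 * (m + 1.0)
```

For m = 0 the element is effectively omni-directional over the hemisphere (D = 2), while larger m concentrates the beam toward ψ = 0.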
C. CHANNEL MODEL
The ABSs with a large elevation angle usually have a high likelihood of establishing LoS communication with the ground UEs [33]. As shown in Fig. 3, let θ be the elevation angle of the typical UAV in rad, i.e., the angle formed by the line from the UAV to the typical UE and the ground plane. In this paper, we assume that the ABSs always point their directional antennas toward the ground and, hence, we have θ = π/2 − ψ. According to [33], the probability of establishing LoS communication from the ABS at m i,k to the UE at O, denoted by P L (θ), is given as follows, where a and b are constants capturing the statistical properties of the signal propagation environment. Consequently, the probability of NLoS communication is given by P N (θ) = 1 − P L (θ).
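The closed form of P_L(θ) is not reproduced above; the sketch below assumes the widely used sigmoid form of the cited elevation-angle model, with illustrative urban values for the constants a and b (both the form and the constants are assumptions, environment dependent):

```python
import math

def p_los(theta_rad: float, a: float = 9.61, b: float = 0.16) -> float:
    """Assumed sigmoid LoS-probability model versus elevation angle.

    P_L = 1 / (1 + a * exp(-b * (theta_deg - a))), with the elevation
    angle expressed in degrees inside the exponent; a and b capture the
    propagation environment (values here are illustrative)."""
    theta_deg = math.degrees(theta_rad)
    return 1.0 / (1.0 + a * math.exp(-b * (theta_deg - a)))

def p_nlos(theta_rad: float, a: float = 9.61, b: float = 0.16) -> float:
    """Complement of the LoS probability: P_N = 1 - P_L."""
    return 1.0 - p_los(theta_rad, a, b)
```

As expected, P_L increases monotonically with the elevation angle and approaches 1 when the UAV is directly overhead (θ = π/2).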
Considering the path loss and fading effects in the channel model, the receiving signal power at the typical UE is given as follows, where P k is the transmit power of the ABS located at m i,k and A_er cos^m(π/2 − θ) denotes the transmit antenna gain of the ABS with incidence angle ψ = π/2 − θ. Note that the receiving antenna gain of the ground UEs is assumed to be 1, and the small-scale fading power, distributed as exp(1), is exponentially distributed with unit mean. For deriving the analytical results, the path loss exponents α L and α N are assumed to remain the same across different tiers.
D. DOWNLINK SIGNAL TRANSMISSION
The ABSs should utilize a large enough transmit output power to activate the receiver circuit at the UEs after combating the path loss; hence, the maximal allowable path loss between UAVs in tier k and the typical UE is given by L max,k = P k /ρ c , where ρ c is the receiving signal threshold of the UEs. UAVs will fail to communicate with the UEs when their distances exceed L max,k . This implies that only a portion of the UAVs can successfully connect to the typical UE. Based on the PPP, the ABSs in tier k that can successfully connect to the typical UE are uniformly distributed within a disc b k (o k , r k ) centered at o k = (0, 0, h k ). The radius of the disc, r k , can be derived based on the channel model (3), as will be revealed in Section III. We note that the transmit power P k and the receiving signal threshold ρ c jointly impact the performance of the considered multi-tier UAV networks. For example, the maximum distance between a UAV and the UEs, which impacts the connectivity of the UAV networks, is limited by the receiving signal threshold and the transmit power. Moreover, a higher transmit power P k at the UAVs enables a larger access region but leads to more interfering UAVs competing for the allocated spectrum. This reduces the signal-to-interference-plus-noise ratio (SINR) and the coverage for UAV communication. Therefore, the joint impact of the transmit power and the receiving signal threshold on coverage and connectivity should be investigated in detail, which is the aim of the rest of this paper. The key notations used in this paper are listed in Table I.
III. COMMUNICATION DISTANCE DISTRIBUTION
In this section, we first derive the maximal distance within which the ABSs can successfully connect to the UEs. Then we characterize the probability distribution of the distance between the closest ABS in the k-th tier of the UAV networks and the ground UEs.
A. MAXIMAL COMMUNICATION DISTANCE
For the UAVs in tier k with height h k , the maximum communication distances to the typical UE U 0 are calculated for LoS and NLoS propagation channels separately. When the ABS located at m i,k transmits over a LoS channel, the receiving signal power at the typical UE is given as follows. Since U 0 is located at the origin, hereafter we simply denote L L (x, o) and L N (x, o) as L L (x) and L N (x), respectively. In order to activate the receiver circuit, the average receiving power P r should exceed the receiving threshold ρ c , where cos(π/2 − θ) = h k / m i,k according to Fig. 3. Therefore, the maximal communication distance for the k-th tier ABSs under LoS transmission follows. Given the flying height h k of the ABSs in tier k, the radius of the disc of ABSs in tier k that can activate U 0 is further derived. Similarly, the maximal communication distance and radius for the k-th tier ABSs under NLoS transmission are obtained. Since α L ≤ α N , we have r L k ≥ r N k . That is, the ABSs implementing LoS and NLoS communications to the UE may occupy different (though overlapping) regions. Therefore, the performance of LoS and NLoS communications should be evaluated separately.
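Since the closed-form expressions referenced above are not reproduced here, the sketch below rederives them from the stated channel model: with cos(π/2 − θ) = h_k/d for an ABS at 3D distance d, the activation condition P_k A_er (h_k/d)^m d^(−α) ≥ ρ_c yields the maximal distance and the disc radius (an assumed reconstruction, not the authors' code):

```python
import math

def max_comm_distance(p_k, a_er, h_k, rho_c, alpha, m):
    """Maximal 3D communication distance for tier-k ABSs.

    Solves P_k * A_er * (h_k / d)**m * d**(-alpha) = rho_c for d,
    reading cos(pi/2 - theta) = h_k / d for an ABS at 3D distance d."""
    return (p_k * a_er * h_k**m / rho_c) ** (1.0 / (alpha + m))

def disc_radius(d_max, h_k):
    """Horizontal radius r_k of disc b_k: sqrt(d_max^2 - h_k^2),
    or 0 if even an ABS directly overhead cannot reach the UE."""
    gap = d_max * d_max - h_k * h_k
    return math.sqrt(gap) if gap > 0.0 else 0.0
```

With α_L ≤ α_N and the remaining parameters unchanged, this reproduces r_k^L ≥ r_k^N whenever the link budget P_k A_er h_k^m / ρ_c exceeds one.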
We note that, constrained by the maximal transmit power of the UAVs, P max , a maximal transmission height h max also exists for the ABSs. If the height of the UAVs in tier k exceeds the maximal transmission height, i.e., h k > h max , no UAV in this tier can successfully transmit signals to the typical UE. Based on (7) and (8), the maximal transmission height in tier k under LoS and NLoS communications can be obtained. Note that the maximal transmission height is independent of the directivity factor m of the directional antenna. This is because the maximal transmission height is attained when the ABS is located at the center of disc b k . In this case, the elevation angle is θ = π/2 and the maximal antenna gain A er is achieved independently of the directivity factor m.
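At the disc center the elevation angle is π/2, so the gain equals A_er regardless of m; solving ρ_c = P_max A_er h^(−α) then gives the maximal transmission height. A minimal sketch under that reading (parameter values in the test are illustrative):

```python
def max_transmission_height(p_max: float, a_er: float, rho_c: float, alpha: float) -> float:
    """Maximal flying height h_max = (P_max * A_er / rho_c)**(1/alpha).

    The directivity factor m does not appear: directly overhead the UE,
    the conic antenna radiates its maximal gain A_er for any m."""
    return (p_max * a_er / rho_c) ** (1.0 / alpha)
```

The independence of m is visible structurally: the function has no m argument, matching the observation above.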
B. DISTRIBUTION OF COMMUNICATION DISTANCE
Recall that, in tier k, only the ABSs located within disc b k can successfully communicate with the typical UE. The communication distances between the ABSs and the UE under LoS and NLoS communication, denoted by the random variables D L k and D N k , have the following probability density functions (PDFs), where r L k , r N k , m L k , and m N k are given in (6)–(8). Note that (10) and (11) can be obtained similarly to Lemmas 1 and 2 in [28], which conclude that a given number of UAV nodes uniformly distributed within a finite region follow a binomial point process (BPP). Lemma 1: Based on the maximal transmission distance and the uniform location distribution of UAVs in tier k, the average numbers of ABSs implementing LoS and NLoS communications are given as follows. Proof: Recall that a UAV with elevation angle θ can implement LoS communication with the probability given in (2). The elevation angle can be expressed as θ = arctan(h k /d), where d is the distance between the UAV in tier k and o k . The number of UAVs in tier k implementing LoS communication then follows, where φ k is the angle subtended by the line from the ABS to o k and the x-axis in disc b k , and is uniformly distributed within [0, 2π]. The UAVs failing to implement LoS communication will communicate under NLoS conditions, which gives the number of UAVs in tier k implementing NLoS communication and completes the proof. The typical UE is associated with the ABS providing the maximal average receiving power. As all UAVs in tier k employ the same transmit power P k , the ABS in tier k at the closest distance provides the maximal average receiving power for the typical UE. This motivates us to derive the probability distribution of the closest distance between the ABSs in tier k and the typical UE, and the result is included in the following lemma.
Lemma 2: The PDF of the closest distance between the ABSs in tier k and the typical UE under LoS transmission is given as follows, where E[n L k ] is the average number of UAVs in tier k implementing LoS communication, as given in (14).
Proof: For the ABS in tier k having the closest distance, the cumulative distribution function (CDF) of the closest distance to the typical UE is obtained as (17). The PDF can be obtained by differentiating (17). Similarly, the PDF of the closest distance between the ABSs in tier k and the typical UE under NLoS transmission can be obtained.
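Lemma 1's expectations can be checked numerically: integrating the LoS probability over the disc in polar form gives E[n_k^L] = λ_k ∫ P_L(arctan(h_k/d)) 2πd dd over [0, r_k^L], with E[n_k^N] as the complement. The sigmoid P_L and its constants are carried-over assumptions about the elided channel-model equation:

```python
import math

def p_los(theta_rad, a=9.61, b=0.16):
    """Assumed sigmoid LoS model; theta in radians, degrees in exponent."""
    t = math.degrees(theta_rad)
    return 1.0 / (1.0 + a * math.exp(-b * (t - a)))

def expected_los_nlos_abss(lam_k, h_k, r_l, n=20_000):
    """Midpoint-rule evaluation of Lemma 1's integral over disc b_k."""
    dd = r_l / n
    e_los = 0.0
    for i in range(n):
        d = (i + 0.5) * dd                  # horizontal distance to o_k
        theta = math.atan2(h_k, d)          # elevation angle of the ABS
        e_los += p_los(theta) * 2.0 * math.pi * d * dd
    e_los *= lam_k
    e_total = lam_k * math.pi * r_l * r_l   # mean number of ABSs in b_k
    return e_los, e_total - e_los           # (E[n_L], E[n_N])
```

By construction the two expectations sum to the mean number of ABSs in the disc, λ_k π (r_k^L)², mirroring the complement argument in the proof.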
IV. CONNECTIVITY AND ASSOCIATION PROBABILITIES OF UAV NETWORKS
In the considered UAV networks, a ground UE can connect to an ABS provided its receiver circuit can be activated by the ABS. Meanwhile, according to the association policy, the ground UE will associate with the ABS, in whichever tier, that provides the maximum average receiving power. In this section, we analyze the connection and association probabilities for the considered multi-tier UAV networks.
A. CONNECTIVITY PROBABILITY OF UAV NETWORKS
Based on the analysis in Section III, the typical UE can connect to the considered UAV networks if and only if there exists k ∈ {1, · · · , K} such that at least one ABS in tier k is located within disc b k . Conversely, for ABSs spatially distributed according to a PPP, the probability that no ABS in any tier can connect with the typical UE, given by P{Φ i ∩ b i = ∅, ∀i = 1, · · · , K}, is characterized in Lemma 3. Lemma 3: The probability that the typical UE cannot connect with the UAV networks is given as follows. Proof: The typical UE cannot connect with the ABSs in tier k if and only if it can connect with neither the ABSs implementing LoS communication nor those implementing NLoS communication. Since b k,L and b k,N have the same center o k , b k,L and b k,N overlap with each other. Moreover, as r N k ≤ r L k , b k,L covers b k,N and hence P{Φ k ∩ b k = ∅} = P{Φ k ∩ b k,L = ∅}. Consequently, the typical UE cannot connect with the ABSs if all ABSs in tier k are located outside the circle b k,L . Thus, the probability that the typical UE cannot connect with the ABSs implementing LoS communication in tier k follows. As the point processes of ABSs in different tiers are independent of each other, the result follows, which completes the proof. Based on Lemma 3, the probability that the typical UE can connect with the K-tier UAV networks is given in (24). From (24) we observe that the directivity factor m of the directional antenna highly impacts the connection probability of the ground UEs. It can further be proved that the probability of a UE connecting to the UAV networks decreases with m. This is because, when utilizing directional antennas, the radius r k of b k decreases with m and more UAVs are located outside b k . Therefore, by employing directional antennas, the region in which UAVs can successfully connect with the UE shrinks and fewer ground UEs can communicate with the UAV networks.
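Since b_{k,L} covers b_{k,N}, the void probability of each tier reduces to that of the LoS disc, giving the connection probability in (24). A small numeric sketch of that expression:

```python
import math

def connection_probability(densities, radii_los):
    """P{connect} = 1 - prod_k exp(-lam_k * pi * (r_k^L)^2).

    Each factor is the PPP void probability that no tier-k ABS lies in
    the disc b_{k,L}; tiers are independent, so the factors multiply."""
    p_empty = 1.0
    for lam_k, r in zip(densities, radii_los):
        p_empty *= math.exp(-lam_k * math.pi * r * r)
    return 1.0 - p_empty
```

Shrinking the radii (i.e., a larger directivity factor m) drives every factor toward 1 and the connection probability down, matching the observation above; adding tiers can only increase it.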
B. ASSOCIATION PROBABILITY OF UAV NETWORKS
When the typical UE can connect to the UAV networks, it will associate with the serving ABS providing the maximal average receiving power. The probability that the typical UE is associated with an ABS in tier k under LoS communication, denoted as P{A L = k}, is given in the following theorem.
VOLUME 4, 2016
Theorem 1: Under LoS communication, the typical UE will be associated with the ABSs in tier k with a probability given as follows. Proof: As the UAVs can implement both LoS and NLoS communications, the associated ABS in tier k serving the typical UE has to satisfy two independent conditions. For association with an ABS under LoS transmission, the serving ABS has to provide the maximal average receiving power among all ABSs. Meanwhile, the receiving power from the serving ABS has to exceed that from all ABSs under NLoS transmission. Let R be the closest distance between the UAVs in tier k and the typical UE; we then have (26). As the ABSs implement LoS and NLoS communications independently, we can calculate (26) as (27), where f R (r) has been obtained in (16). Theorem 1 is proved by substituting (16) into (27).
Under NLoS transmission, the probability of association with the ABSs in tier k can be obtained similarly. Let X k be the distance between the typical UE and its serving ABS when the typical UE is associated with an ABS in tier k. Recall that the ABSs that can connect to the typical UE must be located within disc b k . Given the location distribution of the ABSs in tier k, cf. Lemma 2, the PDF of X k can be derived in the following theorem.
Theorem 2: Under LoS transmission, the PDF of X k , the distance between the typical UE and its serving ABS in tier k, is given as follows. Proof: Let R k be the distance between the typical UE and its serving ABS. Under the condition that the associated ABS is located in tier k and has LoS communication with the UE, we obtain (31). Moreover, substituting (25) into (30), we obtain (32). Finally, Theorem 2 is proved by taking f X k (x) = d(1 − P{X k > x})/dx. Meanwhile, under the condition that the serving ABS with LoS transmission is in tier k and has distance R k to the UE, the other ABSs in the UAV networks will interfere with the desired signals due to the full spectrum reuse among the ABSs. The receiving interference power at the typical UE caused by any interfering ABS must be less than the receiving power from the serving ABS. Define X I,j as the distance between the typical UE and an interfering ABS in tier j. The PDF of X I,j is described in the following lemma. Lemma 4: Given that the serving ABS is located in tier k at a distance of R k , the PDF of the distance between the typical UE and an interfering ABS in tier j, X I,j , with LoS transmission is given as follows. Proof: As the distance between the typical UE and the serving ABS in tier k is R k , the distance from an interfering ABS in tier j to the UE should satisfy (34). We thereby obtain the lower bound on X I,j in (35), which completes the proof.
Similarly, the PDF of the distance between the typical UE and an interfering ABS in tier j, X I,j , under NLoS transmission can be obtained.
V. ANALYSIS OF COVERAGE PROBABILITY
Based on the association probabilities in Section IV, we can further derive the coverage probability of the K-tier UAV networks. For this purpose, we first derive the Laplace transform of the aggregate interference power caused by the interfering ABSs. Based on the distance distributions of the serving and interfering ABSs, the total coverage probability of the K-tier UAV networks, P c , can be calculated as in (37), where P L (A = k) (P N (A = k)) is the coverage probability conditioned on the serving ABS being in tier k with LoS (NLoS) transmission, and A L k and A N k are the association probabilities that the serving ABS in tier k has LoS and NLoS transmission, respectively.
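Once the four conditional pieces are available, the total coverage probability in (37) is a weighted sum over tiers. A trivial sketch (the input lists would come from the conditional coverage and association analysis; the numbers in the test are placeholders):

```python
def total_coverage_probability(p_cov_los, p_cov_nlos, assoc_los, assoc_nlos):
    """P_c = sum_k [ P^L(A=k) * A_k^L + P^N(A=k) * A_k^N ], cf. (37).

    Each list holds one entry per tier k = 1, ..., K."""
    return sum(pl * al + pn * an
               for pl, pn, al, an in zip(p_cov_los, p_cov_nlos,
                                         assoc_los, assoc_nlos))
```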
Given that the distance from the serving ABS in tier k with LoS or NLoS transmission is R k,o , the signal-to-interference ratio (SIR) at the typical UE is given as follows, where I L and I N are the aggregate interference powers from interfering ABSs under LoS and NLoS transmissions, respectively. Based on [34], the coverage probability P L (A = k) is defined as follows, where T is the SIR threshold for downlink transmission. Moreover, given that the distance to the serving ABS is R, the Laplace transform of the interference power under LoS transmission, I L , is given in the following lemma. Lemma 5: Given that the serving ABS with LoS transmission is located at a distance R from the typical UE, the Laplace transform of the interference power I L is given in (40). Proof: The Laplace transform of the interference power I L + I N can be derived as in (43). Herein, the Laplace transform of the interference power caused by ABSs with LoS transmission is given in (44). Moreover, the interference power caused by ABSs with NLoS transmission has the Laplace transform in (45), where U L N,j is the set of interfering UAVs implementing NLoS transmission in tier j. Substituting (44) and (45) into (43), we obtain (40), which completes the proof.
Similarly, given that the serving ABS with NLoS transmission is at distance R, the Laplace transform of the interference power I L + I N is given in (46). Based on (40) and (46), the coverage probability of the K-tier UAV networks can be obtained in the following theorem. Theorem 3: The total coverage probability of the K-tier UAV network, defined as P c = Σ_{k=1}^{K} [ P L (A = k) A L k + P N (A = k) A N k ], cf. (37), can be obtained by substituting the derived Laplace transforms and the SIR threshold T. Moreover, A L k and A N k are given in (22) and (25), respectively.
Proof: The coverage probability of the typical UE associated with the serving ABS under LoS transmission in tier k can be calculated as follows, where f R L k (r) is given in Lemma 2. Moreover, we have the following, where (a) follows from Lemma 6 in [34] and η = m 0 (m 0 !)^{−1/m 0 }. Similarly, the NLoS case follows. Based on Lemma 5 and (46), Theorem 3 is thus proved.
VI. NUMERICAL AND SIMULATION RESULTS
In this section, we evaluate the performance of a two-tier UAV network, where the ABSs in tiers 1 and 2 are located at heights h 1 and h 2 , respectively. Unless otherwise specified, the simulation parameters are set according to Table II. We present both analytical and simulation results, where Monte Carlo simulations are employed to validate the analytical results obtained in Sections III–V. Thereby, the horizontal locations of the UAVs and UEs are randomly generated on a large plane and the height of the UAVs in each tier is set based on the empirical data from Qualcomm [35]. We simulate 10 4 spatial realizations of the locations of ground UEs and UAVs. For each spatial realization, the locations of the ground UEs and UAVs are fixed, which enables us to obtain the empirical connection and coverage probabilities. The final results are gathered by averaging over all simulation realizations. We note that Monte Carlo simulations have been widely adopted to evaluate performance and validate analytical derivations for cellular networks and UAV networks [29], [36], [37]. Fig. 4 shows the connection probability of the considered UAV network as a function of the height of the first-tier ABSs when different receiving thresholds are employed at the ground UEs. From Fig. 4 we observe that the Monte Carlo simulation results match the analytical results well, implying that the derivations in Section III are valid.
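As a sanity check on the methodology, the sketch below runs a toy single-tier Monte Carlo experiment: scatter a Poisson number of ABSs in a disc around the typical UE and compare the empirical connection probability against the analytical void-probability expression 1 − exp(−λπr²). Parameter values are illustrative, not those of Table II:

```python
import math, random

def sample_poisson(rng, mean):
    """Knuth's Poisson sampler (adequate for small means)."""
    limit, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def mc_connection_probability(lam, r_connect, r_sim=300.0, trials=20_000, seed=7):
    """Empirical P{at least one ABS within horizontal distance r_connect}.

    ABSs are dropped uniformly in a disc of radius r_sim >= r_connect;
    radii of uniform points are drawn via the sqrt trick."""
    rng = random.Random(seed)
    hits = 0
    mean_pts = lam * math.pi * r_sim * r_sim
    for _ in range(trials):
        n = sample_poisson(rng, mean_pts)
        if any(r_sim * math.sqrt(rng.random()) <= r_connect for _ in range(n)):
            hits += 1
    return hits / trials
```

With λ = 10⁻⁵ m⁻² and r = 100 m, the empirical estimate lands near the analytical 1 − exp(−λπr²) ≈ 0.27, mirroring the close agreement reported in Fig. 4.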
A. CONNECTION PROBABILITY OF THE UAV NETWORK
As the height of the first-tier ABSs increases in the small-value regime, the connection probability increases quickly before it saturates. This result is due to the high directivity of the directional antenna. In particular, the ABSs at a low height would transmit signals over the sidelobe of the generated beam, resulting in only a small antenna gain. In this case, the connection probability of the UAV network is small. As the ABSs' height increases, the connection probability of the UAV network improves, as more UEs receive signals from the main lobes of the UAVs such that the associated antenna gain increases. On the other hand, we also observe from Fig. 4 that, after the flying height exceeds a limit, e.g., 1300 m for receiving threshold ρ u = 40 dBm, the connection probability quickly drops to zero. This is consistent with our derivation of the maximum flying height in Section III. In particular, due to the receiving threshold of the typical UE and the maximum output power of the ABSs, only ABSs within a given flying height can activate the receiving circuit at the UEs. Consequently, the region where ABSs can overcome the path loss to a UE shrinks when the height of the ABSs increases, and further vanishes when the flying height of the ABSs exceeds h L max . In the latter case, the connection probability of the UAV network decreases quickly. Fig. 5 evaluates the connection probability of the considered UAV network as a function of the antenna's directivity factor when the ABSs adopt different flying heights. From Fig. 5 we observe that, for directional antennas, the connection probability decreases with the antenna's directivity factor. This is because the main lobe of the generated directional beam shrinks as the directivity factor increases. Consequently, a UE has to communicate with the ABS via the side lobe with a high probability, which reduces the connection probability.
In contrast, for omni-directional antennas with m = 0, the connection probability is always close to 1 for the considered simulation setup. Moreover, when employing directional antennas, it is interesting to observe that the ABSs flying at a large height can provide a high connection probability, especially when the directivity factor is large. This is because, as the flying height increases, only the ABSs close to the top of their served UEs can successfully connect with the UEs. Although a high directivity factor leads to a disc of small radius, the large antenna gain enables the ABSs to connect with the UE at large heights. This result implies that a UAV network employing highly directive antennas is suited to flying at a large height, provided it stays within the maximum flying height limit. Fig. 6 shows the connectivity probability as a function of the receiving threshold when the UAVs employ antennas of different directivity factors. From Fig. 6 we observe that the connection probability of the considered UAV network decreases with the receiving threshold of the UE. However, as the directivity factor of the antenna elements increases, the connection probability tends to decrease with the receiving threshold at a slower rate. This is because the radius of the disc decreases with the directivity factor; consequently, fewer ABSs can successfully connect with the ground UEs using their transmit output power. In the large regime of the receiving threshold, the connection probability of the UAV network decreases sharply and approaches 0. In the latter case, the ground UE cannot connect to the UAV network. This is because the maximal transmission height decreases with the receiving threshold. Consequently, more ABSs at large flying heights will fail in signal transmission until, when the receiving threshold of the ground UEs is large enough, none of the ABSs can successfully connect with the ground UEs.
Fig. 7 shows the total coverage probability as a function of the flying height of the first-tier ABSs when the ground UEs employ different receiving thresholds. From Fig. 7 we observe that there exists an optimal flying height that achieves the maximal coverage probability. This is because, with directional antennas, the radius of disc b k increases with the height of the ABSs such that more ABSs can serve the UEs. When the flying height of the ABSs is low (e.g., 100 m < h 1 < 200 m), the radius of the disc is small such that few ABSs can successfully activate the ground UE. As the flying height of the ABSs increases, the radius of the disc as well as the connection probability of the UAV network improves. Nevertheless, as the number of interfering ABSs also increases with the radius of the disc, the coverage probability deteriorates. Therefore, a tradeoff between the connection probability and the coverage probability exists when employing directional antennas in UAV networks, and the maximal coverage probability of the UAV networks is achieved at the optimal flying height by balancing signal enhancement against interference mitigation. From Fig. 7 we also observe that the coverage probability decreases quickly for large receiving thresholds of the ground UEs. This is because, with large receiving thresholds, the distances between the ground UEs and their serving ABSs decrease. Consequently, the ABSs at a large flying height have to overcome a large path loss, which reduces the coverage probability of the UAV network. Fig. 8 shows the coverage probability as a function of the SIR threshold for different densities of UAVs. From Fig. 8 we observe that the coverage probability decreases with the SIR threshold. Similar results for a two-tier terrestrial cellular network have been reported in [38].
We note that, in the same SIR threshold regime, the coverage probability of the considered two-tier UAV network is always larger than that of the two-tier cellular network considered in [38]. This is because the directional antennas provide additional antenna gain to overcome the path loss and, at the same time, reduce the impact of interfering ABSs. Moreover, as the density of the ABSs increases, the coverage probability decreases due to the increased number of interfering ABSs. Fig. 9 compares the coverage probability of the considered two-tier UAV network with the baseline one-tier UAV network considered in [28]. For both networks, the coverage probability is evaluated as a function of the SIR threshold when deploying the UAVs at different flight heights, where the path loss exponent is α N = 2.5. It can be seen from Fig. 9 that the coverage probabilities of the considered UAV networks show the same tendency, decreasing with the SIR threshold. However, the proposed two-tier UAV network with directional antennas always outperforms the baseline UAV network for the considered SIR thresholds.
B. COVERAGE PROBABILITY OF THE UAV NETWORK
Finally, Fig. 10 shows the coverage probability as a function of the antenna's directivity factor for different path loss exponents under NLoS transmission. From Fig. 10 we observe that, for directional antennas, the coverage probability increases with the antenna's directivity factor, as the antenna gain of the serving ABS increases. When the directivity factor is high enough, the radius of the disc shrinks. In this case, both the serving ABS and the interfering ABSs are located close to the center of the disc and the number of interfering ABSs decreases. Consequently, the SIR increases with the directivity factor of the antennas, and the application of directional antennas leads to a much higher coverage probability than that of omni-directional antennas. From Fig. 10 we also observe that, with a large path loss exponent, the UAV network can obtain a high coverage probability. This result implies that UAVs equipped with directional antennas can achieve a high coverage probability even in propagation scenarios with large path loss.
VII. CONCLUSIONS AND FUTURE WORK
This paper developed a novel analytical framework for evaluating the distance distribution, connectivity probability, and coverage probability of K-tier UAV networks that employ directional antennas. To facilitate a tractable performance analysis, we introduced a simple elevation-angle-based antenna pattern model to capture the antenna gain provided by directional antennas. It was revealed that the directivity factor can highly impact the connection probability, especially when the UAVs deployed at low flying heights have a large elevation angle. However, the coverage probability of K-tier UAV networks can be enhanced by adopting directional antennas, as the interference power caused by other ABSs is reduced. Both the analytical and simulation results showed that the application of directional antennas for UAVs at large flying heights can provide excess antenna gain to overcome the propagation path loss and, at the same time, mitigate the impact of interfering UAVs. These results demonstrate the huge potential of employing directional antennas to enhance the performance of multi-tier UAV networks. In the future, the application of directional antennas for uplink communication and the associated performance evaluation of multi-tier UAV networks in the uplink are interesting extensions of this work. Moreover, selecting the optimal directivity factor of directional antennas for a given density and flying height of UAVs is another compelling research direction.
JING ZHANG received degrees from Huazhong University of Science and Technology (HUST) in 2002 and 2010, respectively. He is currently an associate professor with HUST. His current research interests include unmanned aerial vehicle communications, green communications, device-to-device communications, and millimeter-wave communications.
HUAN XU received his undergraduate degree from Wuhan University of Technology and is currently pursuing a master's degree at Huazhong University of Science and Technology. His current research interests include green communication, stochastic geometry, and UAV wireless network performance analysis.
Second-degree branch structure blockchain expansion model
The blockchain runs in a complex topological network governed by the consensus principle, and data storage between nodes must maintain global consistency across the entire network, which makes data storage inefficient. At the same time, information exchange between large-scale groups of communicating nodes leads to problems of bandwidth expropriation and excessive network load. In response to these problems, this article proposes a second-degree branch structure blockchain expansion model. First, a ternary storage structure is established: data are stored using fully integrated storage, multi-cell storage, and fully split storage, and are classified and stored in parallel across the structures. Second, a second-degree branch chain model is constructed: the main chain forks into multiple sub-chains, a free competition chain structure and a Z-type chain structure are defined, and a two-way rotation mechanism is introduced to realize the integration of and transition between the chain structures. Finally, a set of malicious attacks is simulated to derive the security constraints of the blockchain and to verify the security of the second-degree branch chain model. Experiments show that the second-degree branch structure expansion model proposed in this article has great advantages in data storage efficiency and network load.
Introduction
Since the concept of blockchain was put forward, it has attracted widespread attention around the world. In terms of data storage, it is a distributed storage ledger; in terms of protocol, it is a decentralized consensus protocol; and in terms of economy, it is an Internet of value that improves cooperation efficiency. From a technical perspective, blockchain 1 data storage uses hash compression, asymmetric encryption, 2 and other cryptographic principles to ensure reliability, and adopts distributed data storage 3,4 through point-to-point connections. 5 The blockchain ledger is jointly maintained by all nodes and stores data based on a credible consensus mechanism. 6 In recent years, blockchain applications have become more and more extensive, such as Bitcoin, Ethereum, and Litecoin, which use proofs of computing power, 7 and the technology has gradually evolved from encrypted digital currency into a credible service platform applied across industries. But facing the challenges of complex environments, 8 blockchain expansion 9 problems have become more prominent.
Although the blockchain adopts expansion mechanisms such as Segregated Witness, 10 Lightning Network (LN), 11 and data sharding, 12 as the scale and production speed of blockchain data continue to increase, the storage rate lags ever further behind the rate of real-time data production, which results in more serious data storage and network load problems. Therefore, how to further improve storage efficiency and reduce network load has become a difficult point in current blockchain research.
In response to these problems, this article proposes a second-degree branch structure blockchain expansion model. The main contributions are as follows: 1. In view of the inefficiency of blockchain data storage, a ternary storage structure is designed to significantly improve data storage efficiency through data shunting and task distribution. 2. For the actual structure of the blockchain, a second-degree branch chain model is proposed, including the free competition chain structure and the Z-type chain structure; the expansion of the blockchain is based on these two chain structures. 3. On this basis, in response to the incompatibility of the second-degree chains caused by structural differences, a two-way rotation mechanism is proposed to enable smooth switching between structural chains, and the double-chain structure fusion is demonstrated through the second-degree chain fusion transition process. 4. Finally, the security of the second-degree branch structure blockchain expansion model is further analyzed, and on the basis of this analysis, security constraints are derived that have instructive significance for practical blockchain applications.
Related works
At present, many scholars have conducted in-depth research on blockchain expansion technology and achieved many results. In the isolation authentication expansion mechanism, 13 the block body extracts the signature information from the main chain space and stores it in a new data structure to achieve expansion, but the block space saved by this mechanism is limited. Seres et al. 11 quantitatively analyze the structural characteristics of the LN and address the expansion problem by improving data throughput through multiple payment channels, but the LN topology and its security still need to be improved. Min et al. 14 propose a multicenter dynamic consensus mechanism in the permission chain that achieves expansion by optimizing the consensus mechanism to reduce block confirmation delays; but its dependence on master nodes is risky, and system reliability is difficult to guarantee. Jia et al. 15 propose a scalable blockchain model that achieves expansion by optimizing the storage structure and reducing communication cost, but it requires highly credible data storage nodes, which reduces the stability of the system. Burchert et al. 16 propose a new layer of micropayment channels between the blockchain and payment channels to deal with the expansion problem, which achieves delay-free payment. Kim et al. 17 propose a distributed storage blockchain (DSB) system, which improves storage efficiency by combining secret sharing, private key encryption, and information dispersal algorithms; but when peer failure occurs due to denial-of-service attacks, DSB incurs serious communication costs. Fadhil et al. 18 propose the Bitcoin Clustering Based Super Node (BCBSN) protocol, which reduces transaction propagation delay by a reasonable ratio; but when the super node suffers a transaction failure, normal transactions are seriously affected. Zhao et al. 19 propose a security strategy for DSBs that deletes part of the blockchain so that each node stores only a portion of it; but because multiple node modes coexist, the operational burden of the management node increases. Zhang et al. 20 analyze blockchain transaction databases and propose a storage optimization scheme that divides the transaction database into a cold zone and a hot zone and achieves storage optimization by moving unspent transaction outputs outside the in-memory transaction databases; but data queries to the cold zone are inefficient. Shah et al. 21 propose a consensus-ADMM (alternating direction method of multipliers) based distributed optimization algorithm, which decomposes the optimization problem into sub-problems, solves each sub-problem locally, and exchanges information with neighboring regions to compute the global update.
In summary, this article conducts research on the basis of existing research, further optimizes the blockchain structure, and proposes a second-degree branch structure blockchain expansion model.
Second-degree branch model
For the construction of the blockchain, this article first designs a ternary storage method to optimize the blockchain storage structure, then builds a second-degree branch chain mechanism for expansion, and uses a two-way rotation mechanism to carry out the transition between the differentiated chain structures.
Ternary storage structure
After the blockchain is bifurcated into multiple sub-chains, the original blockchain information is distributed among the sub-chains for data storage, and the data storage rate is significantly improved. Definition 1. Blockchain data flow circulates through multiple channels and is stored in parallel at the same time, which is called data shunting. The number of blocks added per unit time before the blockchain fork is Nblock, and the effective data amount of a single block body is Dvalid, so the data storage rate before the fork is Nblock × Dvalid. The blockchain classifies the stored data information Dvalid as Dvalid1, . . ., Dvalidn. After bifurcation, n sub-chains are formed; the number of blocks added per unit time on each sub-chain is Nblock, and the effective data amount of a single block body on sub-chain i is Dvalidi. Since Dvalid and Dvalidi tend to be the same, after the blockchain is forked the data storage rate is Nblock × (Dvalid1 + ··· + Dvalidn), which is close to n × Nblock × Dvalid; thus data shunting significantly improves the efficiency of blockchain storage.
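As a worked illustration of Definition 1, the storage-rate arithmetic can be sketched as follows; the concrete values of Nblock, Dvalid, and n are hypothetical examples, not taken from the paper's experiments:

```python
# Sketch of Definition 1 (data shunting); all numbers are illustrative.
def storage_rate(n_block, d_valid_per_chain):
    # Data stored per unit time: blocks added per unit time on each chain,
    # times the effective data of a single block, summed over the chains.
    return n_block * sum(d_valid_per_chain)

N_BLOCK = 6      # blocks added per unit time on each chain (assumed)
D_VALID = 1.0    # effective data amount of a single block body (assumed units)

before = storage_rate(N_BLOCK, [D_VALID])      # main chain only
after = storage_rate(N_BLOCK, [D_VALID] * 4)   # forked into n = 4 sub-chains

assert after == 4 * before  # shunting scales the storage rate roughly n-fold
```

Because each sub-chain keeps producing blocks at the original rate, the aggregate rate grows linearly in the number of sub-chains.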
After the blockchain is bifurcated into multiple subchains, at least one complete storage unit needs to be stored when the node data on the sub-chains are stored.
Definition 2. Each entire single chain that traces back to the genesis block is called a storage unit. When a node stores transaction information, it must be ensured that the current block can be traced back to the genesis block of the blockchain. The storage unit composed of this entire single chain reflects the integrity and atomicity of the blockchain information.
A ternary accounting structure is proposed according to the node accounting method and environmental impact: 1. Fully integrated storage structure: each node records all the storage units of the entire blockchain. Under this structure, a global view of the second-degree branch blockchain is obtained. Nodes of this type are mostly global management and service nodes, used to dynamically monitor the overall operation of the multi-tree blockchain and to provide subsequent maintenance and updates. 2. Multi-cell storage structure: each node records a limited number of storage units and dynamically updates and stores them, obtaining a partial view of the second-degree branch blockchain. On one hand, nodes of this type act as local management and service nodes, dynamically monitoring and managing the local operation of the blockchain and reporting the current storage status to the global management service nodes; on the other hand, they act as information storage nodes of the blockchain and exist in the Z-type chain structure proposed below. 3. Fully split storage structure: each node records its own separate storage unit. In this mode, the node is only responsible for block storage on its sub-chain and does not concern itself with the security status of the chain where it is located. Nodes of this type are information storage nodes and exist in the free competition chain structure proposed below.
The ternary storage structure is shown in Figure 1:
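The three accounting modes can be summarized in a small sketch. The `cell_limit` rule for multi-cell nodes is a hypothetical illustration; the paper does not fix how many storage units such a node keeps:

```python
from enum import Enum

class StorageMode(Enum):
    FULL_INTEGRATED = 1  # global management/service node: keeps every storage unit
    MULTI_CELL = 2       # local management node (Z-type chains): a limited set of units
    FULLY_SPLIT = 3      # free-competition storage node: its own storage unit only

def units_stored(mode, total_units, cell_limit=3):
    # How many storage units a node keeps under each mode (illustrative rule).
    if mode is StorageMode.FULL_INTEGRATED:
        return total_units
    if mode is StorageMode.MULTI_CELL:
        return min(cell_limit, total_units)
    return 1

assert units_stored(StorageMode.FULL_INTEGRATED, 8) == 8
assert units_stored(StorageMode.MULTI_CELL, 8) == 3
assert units_stored(StorageMode.FULLY_SPLIT, 8) == 1
```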
Second-degree branch chain structure construction
The second-degree branch chain expansion structure comprises two sub-chain structures: a free competition chain and a Z-type chain. On one hand, both structure chains use data shunting for effective expansion; on the other hand, they have different characteristics due to structural differences.
Free competition chain structure. In the free competition chain, the blockchain forms multiple sub-chains after branching, and the sub-chains are independent of each other. When a storage node downloads sub-chain data, it must store a complete storage unit, which reflects the good traceability and integrity of the blockchain. A sub-chain only packs information on its own chain and does not interact with other sub-chains. Because of the competition between the sub-chains, the distribution of computing power resources becomes obviously uneven, which is not conducive to the stability of the blockchain, so system default allocation is introduced to balance the computing power gap.
Definition 3
System default allocation. Blockchain computing power is unevenly distributed, which causes the storage rates of the sub-chains to differ significantly and in turn creates security problems under malicious attacks. The system allocates newly added accounting nodes to the chains with weaker computing power to maintain the safety and reliability of the blockchain. This process is called system default allocation.
The schematic diagram of the free competition chain structure is shown in Figure 2. ''Free-competition'' denotes the free competition chain structure. Each single chain creates blocks in top-down order, and each child block stores the hash value of its parent block. The single chains do not affect each other, and a single chain with weak computing power will by default be allocated computing power by the system, so that the overall blockchain computing power tends to be consistent. A coordinate of the form (x, y) uniquely identifies the position of each block.
In the free competition chain structure, a node that stores blocks only needs to download one storage unit to carry out data storage services. Of all the chain structures, its storage overhead is the smallest, but it faces the security issues brought about by the shunting of computing power, which will be discussed later. The chain-type Construct field of the stored block header information is ''Free-competition.''

Z-type chain structure. After the Z-type chain is branched, information is stored in the blocks on each child chain. In addition to storing the hash value of the parent block, it is also necessary to store the hash value of the adjacent block, which is the pseudo-hash value.
Data storage can only be performed on the premise of obtaining two hash values.
Definition 4. Pseudo-hash. In the Z-type chain structure, the sequence of block generation is from left to right, from top to bottom, and the hash value of the block generated in the previous order is the pseudo-hash.
As shown in Figure 3, unlike the free competition chain structure, blocks are generated sequentially from top to bottom on each child chain. Every node on the Z-type chain structure saves the hash value of the parent block, stores the hash value of the adjacent block to its left in the same layer (or of the rightmost block of the upper layer), and then performs the block data storage process. Therefore, the block generation sequence under the entire chain structure is from left to right and from top to bottom.
In the Z-type chain structure, honest computing power is concentrated to sequentially store and generate blocks, which prevents concentrated malicious attacks from gaining an advantage over any single block and significantly improves the security of the blockchain. The Construct field of the block header information is ''Z-type,'' and a new hash supplement field HashExtra is added, whose value in this structure is ''Fakehash.'' Finally, a block location field Location is defined as a coordinate pair [Xline, Yrow], where Xline represents the row index of the block in the Z-type chain structure and Yrow represents the column index. Through the Location field, each sub-chain block can locate the block where its pseudo-hash resides.
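A minimal sketch of the header fields described above; the field names (Construct, HashExtra, Location) follow the paper, while the Python structure and the `ready_to_store` helper are hypothetical illustrations:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BlockHeader:
    parent_hash: str
    construct: str                               # "Free-competition" or "Z-type"
    hash_extra: Optional[str] = None             # pseudo-hash ("Fakehash"); None means "null"
    location: Optional[Tuple[int, int]] = None   # [row, column] in the fork structure

def ready_to_store(h: BlockHeader) -> bool:
    # On a Z-type chain, storage may proceed only once both hashes are known.
    if h.construct == "Z-type":
        return bool(h.parent_hash) and h.hash_extra is not None
    return bool(h.parent_hash)

assert ready_to_store(BlockHeader("abc", "Free-competition"))
assert not ready_to_store(BlockHeader("abc", "Z-type"))
assert ready_to_store(BlockHeader("abc", "Z-type", hash_extra="Fakehash"))
```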
Inter-chain rotation transition
Due to the structural differences between the two chain structures, compatibility problems arise when the chain structure is switched. This article proposes a two-way rotation mechanism to make the transition between different chain structures smooth and to dynamically display the state changes of the chain structure.
Two-way rotation mechanism. In the second-degree blockchain, it is inevitable that one chain structure will become the other after chain forks, so transitions occur between the following chain structures: the free competition chain and the Z-type chain.
Two-way rotation between the free competition chain and the Z-type chain. When the free competition chain rotates to the Z-type chain, the data in the free competition chain are allocated to each sub-chain of the Z-type chain in an orderly manner. A node on a sub-chain cannot store only the storage unit of its local sub-chain, because storing block information requires the pseudo-hash provided by other sub-chains; therefore, the storage units of all sub-chains after the fork must be stored. The corresponding Construct field changes from ''Free-competition'' to ''Z-type,'' and the HashExtra field changes from ''null'' to ''Fakehash.'' When the Z-type chain rotates to the free competition chain, the data storage tasks of the Z-type chain are distributed in an orderly manner to the free competition sub-chains. Nodes on the free competition chain only need to store the one storage unit where their sub-chain is located (without interference from the hash values of other sub-chains); the corresponding Construct field changes from ''Z-type'' to ''Free-competition,'' and the HashExtra field changes from ''Fakehash'' to ''null.''

Second-degree chain integration transition. The second-degree branch structure blockchain model is accompanied by fusion between the two chain structures. When the blockchain transitions between the chain structures, the corresponding attributes of the blocks change. A sub-chain forms another chain after the fork, and the structure changes accordingly.
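The field changes under two-way rotation can be sketched as a pure function over a header dictionary; this is a hypothetical illustration of the Construct/HashExtra transitions described above, not the paper's implementation:

```python
def rotate(header: dict, target: str) -> dict:
    # Flip the chain-structure fields when a chain rotates to the other structure.
    if target == "Z-type":
        return {**header, "Construct": "Z-type", "HashExtra": "Fakehash"}
    if target == "Free-competition":
        return {**header, "Construct": "Free-competition", "HashExtra": None}
    raise ValueError("unknown chain structure: " + target)

h = {"Construct": "Free-competition", "HashExtra": None}
h = rotate(h, "Z-type")
assert h == {"Construct": "Z-type", "HashExtra": "Fakehash"}
h = rotate(h, "Free-competition")
assert h["HashExtra"] is None  # back to "null" in the paper's notation
```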
With all fields other than the original block timestamp, random number, and Merkle Root omitted, the new block header information in this article is shown in Table 1.
In Figure 4, the initial block first bifurcates through a free competition chain structure, and then each sub-chain further bifurcates through a Z-type chain structure. In this process, the corresponding attributes in the blocks (the double chain type Construct, taking ''Free-competition'' or ''Z-type,'' and the block position Location [Xline, Yrow]) inevitably change. Table 2 lists the block attribute changes based on Table 1.
In Table 2, the state changes of the fusion transition from the two chain structures are dynamically displayed. With the change of the chain structure, the attribute fields of the corresponding blocks have also undergone necessary changes. It changes the sub-chains structure and achieves the smooth transition.
Model safety analysis and constraints
Based on the second-degree branch structure blockchain expansion model proposed above, the model is further analyzed in terms of security, and security constraints are constructed on the basis of that analysis.
Safety analysis of free competition chain structure
Each sub-chain in the free competition chain structure runs independently, and the system defaults to uniform computing power among the sub-chains (i.e., the honest computing power is evenly distributed across the branch sub-chains). Malicious nodes attack a forked single chain, as shown in Figure 5. The security analysis of the free competition chain is carried out for this case.
Suppose the workload of the entire blockchain model is 1, the proportion of malicious computing power is q, and the chain is bifurcated into n sub-chains. The malicious nodes compete with the honest nodes on one sub-chain, and z represents the number of blocks by which the malicious nodes trail the honest nodes. At this time, the computing power of the honest nodes attacked by the malicious nodes is (1 − q)/n, the proportion of malicious computing power is q1 = q/((1 − q)/n + q), and the proportion of honest computing power is p1 = 1 − q1. In the gambler's ruin model, a gambler can gamble countless times and tries to make up the shortfall; the probability that the gambler makes up the shortfall is the probability that the attacker catches up with the honest nodes, as shown in equation (1):

P_catch = 1, if q1 ≥ p1; P_catch = (q1/p1)^z, if q1 < p1. (1)

The number of blocks a malicious node has mined while z blocks are being confirmed follows a Poisson distribution with expected value λ, so the probability that the malicious node has filled k blocks is shown in equation (2):

P(k) = λ^k e^(−λ) / k!, (2)

where q1 = q/(q + (1 − q)/n) and p1 = 1 − q1. Using the transfer confirmation chase model, the success rate of a malicious node attack is shown in equation (3):

P = Σ_{k=0..∞} (λ^k e^(−λ)/k!) × { (q1/p1)^(z−k), if k ≤ z; 1, if k > z }. (3)

Because λ = (q1/p1) × z, equation (4) is obtained after equation (3) is rearranged:

P = 1 − Σ_{k=0..z} (λ^k e^(−λ)/k!) × (1 − (q1/p1)^(z−k)). (4)

The functional relationships in the free competition chain structure are shown in Figures 6 and 7.
In Figure 6, the malicious computing power is held constant at 0.1 (i.e. 10%), and the relationship between the two variables z, n and the malicious node attack success probability p is obtained. The number z of blocks confirmed by honest nodes is set to 1, 3, 5, 7, 9, 11, 13, 15, and 17, and the fork number n of the chain is set to 1, 2, 4, and 6. When the malicious computing power q is constant, each function line decreases, and the success rate p on an upper function line is higher than that on a lower one.
In Figure 7, the number of bifurcations of the chain structure is held constant, and the relationship between the two variables q, z and the malicious node attack success probability p is studied. The block number z confirmed by honest nodes is set to 1, 3, 5, 7, 9, 11, 13, 15, and 17, and the proportion of malicious computing power is set to 0.2, 0.1, 0.05, and 0.01. Figure 7 is obtained statistically. Each function line decreases, and the success probability p of an upper function line is always higher than that of a lower one. The function analysis is as follows: 1. With the proportion of malicious computing power q and the number of forks n unchanged, the probability of a successful malicious attack p decreases monotonically with the number of blocks z confirmed by the honest chain. 2. With q and z unchanged, the probability of a successful malicious node attack p increases monotonically with the number of forks n. 3. With n and z unchanged, the success rate of a malicious attack p increases monotonically with the proportion of malicious computing power q.
Combining these inferences, to ensure the safety and reliability of the blockchain, the measures that can be taken are to reduce the number of blockchain forks, increase the proportion of honest computing power, and wait for as many confirmed blocks as possible.
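The three monotonicity claims above can be checked numerically. The sketch below implements the Nakamoto-style success probability of equation (4) with the attacker's effective share q1 = q/(q + (1 − q)/n); the specific q, z, n values are illustrative:

```python
import math

def attack_success(q, z, n=1):
    # Double-spend success probability against one free-competition sub-chain
    # when honest power (1 - q) is split evenly across n forks (equation (4)).
    q1 = q / (q + (1 - q) / n)  # attacker's effective share on the sub-chain
    p1 = 1 - q1
    if q1 >= p1:
        return 1.0
    lam = z * q1 / p1
    caught = sum(
        (lam**k * math.exp(-lam) / math.factorial(k)) * (1 - (q1 / p1) ** (z - k))
        for k in range(z + 1)
    )
    return 1 - caught

# Splitting honest power across more forks helps the attacker...
assert attack_success(0.1, z=5, n=4) > attack_success(0.1, z=5, n=1)
# ...while waiting for more confirmations hurts the attacker.
assert attack_success(0.1, z=9, n=2) < attack_success(0.1, z=3, n=2)
```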
Safety analysis of Z-type chain structure
When all the sub-chains on the Z-type chain store data, the forked sub-chains logically connect into a Z-type pseudo-chain, and a security feasibility analysis is carried out on the characteristics of the Z-type chain structure.
The Z-type chain packs blocks from left to right and top to bottom, and all honest computing power is concentrated on data storage on the Z-type chain. However, as each block is packaged, the hash value of the parent block and the pseudo-hash value from another sub-chain block must be recorded at the same time. When a malicious node attacks a sub-chain, it must likewise wait for the pseudo-hash provided by other sub-chains before competing with the honest nodes. The security analysis of the sub-chain is performed for this case. Figure 8 is a diagram of the attack mode on the Z-type chain.
It can be seen from Figure 8 that the blocks on the Z-type chain structure are generated in an orderly manner, so the structure is logically a single chain. Therefore, the success rate of a double-spending attack has nothing to do with the number of forked blocks. Suppose the total computing power of the blockchain is 1, the proportion of malicious computing power is q, and the number of blocks confirmed by honest nodes is z. The probability that malicious nodes successfully chase z blocks in the gambler's ruin model is shown in equation (5):

P_catch = (q/p)^z, where p = 1 − q and q < p. (5)

Confirming the pursuit through the transfer model gives equation (6):

P = Σ_{k=0..∞} (λ^k e^(−λ)/k!) × { (q/p)^(z−k), if k ≤ z; 1, if k > z }, (6)

where λ = q × z/p. Converting equation (6) gives equation (7):

P = 1 − Σ_{k=0..z} (λ^k e^(−λ)/k!) × (1 − (q/p)^(z−k)). (7)

Figure 9 shows the functional relationship diagram for the Z-type chain structure.
From Figure 9, the success probability of a malicious node attack has nothing to do with the number of forks n, so only the relationship between the variables q, z and the attack success probability is considered. The block number z confirmed by honest nodes is set to 1, 3, 5, 7, 9, 11, 13, 15, and 17, and the proportion of malicious computing power in the Z-type chain is set to 0.2, 0.1, 0.05, and 0.01. Figure 9 is obtained statistically. Each function line decreases, and the success probability p of an upper function line is always higher than that of a lower one. The function analysis for the Z-type chain structure is as follows: 1. With the proportion of malicious computing power q unchanged, the probability of a successful malicious node attack p decreases monotonically with the number of blocks z confirmed by the honest chain. 2. With z unchanged, the success rate of a malicious attack p increases monotonically with the proportion of malicious computing power q.
Combining these inferences, the security of the Z-type chain structure has nothing to do with the number of forked sub-chains. Against malicious attacks, the measures that can be taken are to increase the proportion of honest computing power and to wait for as many block confirmations as possible before completing a transaction.
Second-degree chain security constraints
The safety analysis of the two chain structures was carried out above. In practical applications, the second-degree chain should meet safety constraints, which are constructed here on the basis of that analysis.
1. Construction of the security constraints in the free competition chain: in practice, there are few nodes whose malicious computing power exceeds 1% of the total (large mining pools are honest nodes by default). If malicious computing power accounting for less than 1% starts an attack and the attack success rate is less than 1%, the blockchain is considered safe by default. The security constraint relationship between the number of confirmed blocks z and the number of fork chains n is shown in Figure 10.
The relationship model in Figure 10 satisfies the security constraints of the blockchain: as the number of confirmed blocks increases, the maximum permissible number of forks also increases. In practical applications, the security relationship between the number of sub-chains and the number of confirmed blocks should be considered before the blockchain forks. For example, suppose that with the rapid increase in data scale, second-degree branch expansion is carried out and the system decides to adopt the free competition chain, transforming the blockchain from one master chain into five branches. To prevent malicious double-spending attacks, the sub-chains should wait for at least two confirmed blocks after the fork to ensure that a transaction cannot be double-spent before conducting normal transactions.
2. Construction of the security constraints in the Z-type chain: if the proportion of malicious computing power is less than 1% and the attack success rate is less than 1%, the blockchain is considered safe by default. According to the above security analysis, the number of forked sub-chains n has nothing to do with the security of the blockchain, so before the blockchain forks, the constraint on the number of confirmed blocks should be the main consideration, as shown in Table 3. The table gives the influence of the number of confirmed blocks and the proportion of malicious computing power on the success probability of double-spending attacks. For example, chains with weak computing power are vulnerable to malicious attacks, so the system decides to adopt the Z-type chain for expansion. If the proportion of malicious computing power is less than 1%, then after the chain forks, in order to prevent malicious double-spending attacks, the sub-chains should wait for at least two confirmed blocks (or more) to ensure that a transaction cannot be double-spent before performing normal transactions.
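The Z-type constraint can be reproduced with a short search over equation (7). The 1% thresholds follow the text; the function names are illustrative:

```python
import math

def z_attack_success(q, z):
    # Double-spend success probability on the Z-type chain (equation (7));
    # independent of the number of forked sub-chains n.
    p = 1 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    caught = sum(
        (lam**k * math.exp(-lam) / math.factorial(k)) * (1 - (q / p) ** (z - k))
        for k in range(z + 1)
    )
    return 1 - caught

def min_confirmations(q, threshold=0.01):
    # Smallest number of confirmed blocks pushing the success rate below the threshold.
    z = 0
    while z_attack_success(q, z) >= threshold:
        z += 1
    return z

# With malicious power at 1%, two confirmations suffice for a <1% success rate,
# matching the "at least two confirmed blocks" rule above.
assert min_confirmations(0.01) == 2
```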
Experiment and analysis
The experimental environment consists of 20 servers, each with a 32-core CPU, 128 GB of memory, and 10 TB of storage space. Docker virtualization technology is used to deploy the three-dimensional chain nodes, Kubernetes is used to manage the Docker clusters, and the servers use gigabit networks and communicate between containers through flannel. Twenty hosts with TestRPC installed serve as the 20 master nodes, NodeDB. NodeDB is mainly responsible for managing and maintaining the internal container nodes, and the internal nodes exchange information through NodeDB. The experimental architecture diagram is shown in Figure 11.
In this experiment, there are 20 main nodes (NodeDB), each with about 10 container nodes, and the network is composed of 250 nodes. The second-degree branch chain branches into four sub-chains for expansion. The blockchain uses the proof-of-work (PoW) consensus mechanism and does not use random numbers for verification; it reduces the amount of data transmission by broadcasting only the block header data and adds additional information to verify the consistency of transaction information. The mechanisms compared against the two-degree branch chain (TDBC) are the Segregated Witness Expansion (SWE) mechanism and the directed acyclic graph (DAG) expansion mechanism, analyzed in terms of transactions per second (TPS), communication overhead, confirmation delay, and effective data rate.
Storage rate
In the second-degree branch chain (TDBC), each node only broadcasts its own block header information to the entire network, and transaction information can be packaged into a block after the valid transaction identifier is verified. Without a redundant verification process, the transaction volume is compressed and stored in the block body, which raises the upper limit of data throughput. The TDBC mechanism effectively shunts data, and tasks are allocated to each sub-chain for parallel storage. The DAG mechanism uses a network data structure, and the blockchain consensus is converted from longest-chain consensus to heaviest-chain consensus, which retains a degree of independence and autonomy for the local network and allows the parallel creation of blocks. The SWE mechanism isolates the transaction signature from the stored block, and the extra space is used for the storage of transaction information. In the experiment, this article deploys an environment with 20 master nodes and tests the data throughput per unit time under the three expansion mechanisms over different periods; the results are shown in Figure 12. Owing to shunting and parallel storage, the TDBC blockchain storage rate reaches nearly 10,000 transactions per second at its peak, much higher than the roughly 100 transactions per second of SWE and the nearly 1000 transactions per second of DAG. The advantage of TDBC in data storage throughput is therefore obvious.
Communication overhead
As the number of communication nodes increases, the number of communication channels grows exponentially, and the number of communication nodes becomes the main factor restricting communication overhead. In the TDBC mechanism, blockchain information is distributed to each sub-chain in an orderly manner; after an internal container node recognizes that a block belongs to its sub-chain, data communication is carried out, and otherwise it does not respond. The DAG mechanism adopts a directed acyclic structure; due to the independence of the local blockchain, blocks are allowed to be created in parallel, so blocks can be confirmed without recognition by all nodes. The SWE mechanism broadcasts block information to the entire network before it is confirmed by the entire network; if a conflicting block occurs, it must enter a second confirmation. This experiment sets up environments of 10 and 20 main nodes. Some communication nodes may fail, and the number of communication nodes required directly reflects the communication overhead of the blockchain.
As shown in Figure 13(a), TDBC requires about 30 communication nodes on average for confirmation, significantly fewer than the roughly 60 nodes of DAG and 100 nodes of SWE. It can be seen that the data shunting and task classification in TDBC greatly reduce communication overhead. In Figure 13(b), the number of communication nodes under the TDBC mechanism remains stable at a minimum level, which reflects the good scalability of TDBC.
In the information exchange of communication nodes, the number of valid transactions carried by the global block in each communication also reflects communication efficiency. In Figure 14, the size of the global block under the TDBC and SWE mechanisms is relatively stable, whereas the block size under the DAG mechanism shows obvious volatility because DAG does not support strong consistency, which introduces instability into the communication of the blockchain.
Confirmation delay
In the blockchain, transaction information is written into the block body after validity verification; the block generates broadcast header information, the node transmits the block header to other nodes for verification, and finally it is verified by other nodes across the entire network. The time this process takes is the confirmation delay. In TDBC, the information stored in a block is strictly classified, and a storage node only stores block data on its sub-chain, which does not have to be submitted to all nodes in the entire network for verification. The DAG mechanism benefits from the independence of the blockchain's local network, where blocks are allowed to be verified by nodes in the local network. The SWE mechanism has relatively high requirements for network verification nodes, and blocks need to be confirmed by the entire network. To determine the impact of different blockchain network scales on confirmation delay, environments of 5, 10, and 20 NodeDB are deployed; the abscissa is the period, and the ordinate is the statistical range of confirmation delay in that period, giving the average block confirmation delay under the three mechanisms. Figure 15 shows that the average confirmation delay of TDBC is about (8, 10, 13), that of SWE is about (10, 14, 20), and that of DAG is about (9, 12, 16). The confirmation delay of the TDBC blockchain, which uses data shunting and task allocation, is smaller than that of the other two mechanisms, and as the network scale grows, the increase in TDBC's confirmation delay is also slightly smaller than that of the DAG and SWE mechanisms.
Effective data rate
Generated transaction information is validated and stored in the block body. Blocks that are verified by the network nodes and then stored on the blockchain are valid blocks. However, between generation and entering the chain, transaction data inevitably suffer losses: network instability leads to packet loss, network congestion leads to the data cache being cleared, and double-spending attacks produce invalid data. A blockchain adopting the TDBC mechanism strictly classifies data, keeping the network orderly and non-redundant, which greatly alleviates network congestion. A blockchain using the DAG mechanism adopts a directed acyclic graph and allows blocks to be generated in parallel, but parallel blocks may contain duplicate valid transaction information, wasting block storage space to a certain extent. For blockchains using the SWE mechanism, the degree of network congestion is the main factor restricting the effective data rate. In this experiment, four blockchain environments with 5, 10, 15, and 20 main nodes are deployed, and the effective data rate is measured as the ratio of the effective data amount to the total data amount. Figure 16 shows the results. The data efficiency of the TDBC mechanism is better than that of the other mechanisms, and it remains relatively stable as the number of blockchain nodes increases. The data efficiency of the DAG mechanism is better than that of the SWE mechanism over the long term. The data efficiency of the SWE mechanism is higher in the early stage, but as the blockchain network expands, its data effectiveness decreases significantly.
Conclusion and future work
This article conducts an in-depth study of the deficiencies of existing blockchain system expansion and proposes a second-degree branch structure blockchain expansion model. To improve data storage efficiency, the model optimizes the data structure through ternary storage. A second-degree branch chain model is constructed with two structures: a free-competition chain structure and a Z-type chain structure. The model expands the blockchain in a way that alleviates the communication burden on the network. The two structural chains are merged and transitioned through a two-way rotation mechanism, which ensures the stability of the blockchain expansion. Based on simulated malicious attacks on the blockchain, safety constraints are put forward to ensure the security of the expansion, finally realizing effective expansion of the blockchain.
The second-degree branch structure blockchain expansion model in this article is mainly researched under the POW consensus. Future work will study extending the expansion to other mainstream blockchains, such as proof-of-stake (POS) and delegated proof-of-stake (DPOS) blockchains. In addition, with the rapid increase in blockchain transactions, the huge global storage ledger of the second-degree branch chain poses management and maintenance problems. Another direction for future work is to study the global ledger of the second-degree branch chain and, through distributed services, further optimize the storage structure.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Controlling T$_c$ through band structure and correlation engineering in collapsed and uncollapsed phases of iron arsenides
Recent observations of the selective emergence (suppression) of superconductivity in the uncollapsed (collapsed) tetragonal phase of LaFe$_2$As$_2$ have rekindled interest in understanding which features of the band structure control the superconducting T$_c$. We show that the proximity of the narrow Fe-d$_{xy}$ state to the Fermi energy emerges as the primary factor. In the uncollapsed phase this state sits at the Fermi energy, where it is the most strongly correlated and is the source of enhanced scattering in both the single- and two-particle channels. The resulting intense and broad low-energy spin fluctuations suppress magnetic ordering and simultaneously provide the glue for Cooper pair formation. In the collapsed tetragonal phase, the d$_{xy}$ state is driven far below the Fermi energy, which suppresses the low-energy scattering and blocks superconductivity. A similar source of broad spin excitations appears in the uncollapsed and collapsed phases of CaFe$_{2}$As$_{2}$. This suggests that controlling coherence provides a way to engineer T$_c$ in unconventional superconductors mediated primarily through spin fluctuations.
Through careful control of growth and annealing conditions, LaFe$_2$As$_2$ (LFA) can be grown in a tetragonal phase with a markedly longer c-axis than the value in the equilibrium "collapsed" tetragonal (CT) phase (c=11.01 Å). This "uncollapsed" tetragonal phase (UT) has c=11.73 Å. Moreover, the UT phase is shown to superconduct at 12.1 K, while the CT phase is not a superconductor1. A parallel phenomenon was observed in undoped CaFe$_2$As$_2$ (CFA). At room temperature, the equilibrium phase is UT, but it was recently shown that a CT phase can be induced by quenching films grown at high temperature2. In this case, the undoped CT phase superconducts with T$_c$=25 K. The UT phase does not exist at low temperature because CFA undergoes a transition from the tetragonal (I4/mmm) to the orthorhombic (Fmmm) phase at 170 K3, with a concomitant transition to an ordered antiferromagnetic state4. It is also possible to induce a CT phase at low temperature by applying pressure5,6: superconductivity was reported with T$_c$ ≈ 12 K at 0.3 GPa. Taken together, these findings rekindle the longstanding question as to whether universal band features can explain unconventional superconductivity.
Here we use a recently developed ab initio technique to show that there is indeed a universal feature, namely incoherence originating from the Fe d$_{xy}$ state. By 'incoherence' we refer to the fuzzy spectral features and momentum-broadened spin excitations caused by enhanced single- and two-particle scattering. Superconductivity depends critically on the alignment of this state with the Fermi level. We are able to make these findings thanks to recent developments that couple (quasiparticle) self-consistent GW (QSGW) with dynamical mean field theory (DMFT)[7][8][9][10]. Merging these two state-of-the-art methods captures the effect of both strong local dynamic spin fluctuations (captured well in DMFT) and non-local dynamic correlation11,12 effects captured by QSGW13. We use QSGW and not some other form of GW, e.g. GW based on DFT: it has been well established that QSGW overcomes limitations of DFT-GW when correlations become strong (see in particular Section 4 of Ref. 12). On top of the DMFT self-energy, charge and spin susceptibilities are obtained from vertex functions computed from the two-particle Green's function generated in DMFT, via solutions of the non-local Bethe-Salpeter equation. Additionally, we compute the particle-particle vertex functions and solve the linearized Eliashberg equation9,14,15 to compute the superconducting susceptibilities and the eigenvalues of superconducting gap instabilities. For the CT and UT phases we use a single value for U and J (3.5 eV and 0.62 eV respectively), obtained from bulk FeSe (and LiFeAs) within a constrained RPA implementation following Ersoy et al.16. DMFT is performed in the Fe-3d subspace, solved using a rotationally invariant Coulomb interaction generated by these U and J. The full implementation of the four-tier process (QSGW, DMFT, BSE, and BSE-SC) is discussed in Pashov et al.12, and the codes are available in the open-source electronic structure suite Questaal17.
Expressions we use for the response functions are presented in Ref. 9. Our all-electron GW implementation was adapted from the original ecalj package18; the method and basis set are described in detail in Ref. 12. For the one-body part a k-mesh of 12 × 12 × 12 was used; to compute the (much more weakly k-dependent) self-energy, we used a mesh of 6 × 6 × 6 divisions, employing the tetrahedron method for the susceptibility.
We perform calculations in the tetragonal phases of LFA and CFA: the CT phase (CT-LFA and CT-CFA) and the corresponding UT phase (UT-LFA and UT-CFA). Structural parameters for each phase are given in the SM, Table 1. The DMFT self-energy, the spin and charge susceptibilities, and finally the superconducting instability are computed as functions of temperature. CT-QMC samples more electronic diagrams at reduced temperature and provides insight into the emerging coherence/incoherence in single- and two-particle instabilities; however, it cannot provide knowledge about entrant structural (or structural+magnetic) transitions. In the UT phase, the circular pocket around Γ has Fe-d$_{xy}$ character, while the chickpea-shaped pockets have Fe-d$_{xz,yz}$ character. These pockets disappear in the CT phase, where superconductivity is absent. Simultaneously, the effective band width W of the Fe-3d manifold increases significantly in the CT phase (∼4 eV; in the UT phase W ∼2.4 eV), leading to larger electronic itinerancy. Also shown is the partial local density of states projected onto the Fe-3d orbitals. The bandwidth W of the narrow d$_{xy}$ states is further narrowed in the UT-LFA phase, marking an enhancement in the effective correlation (U/W), where U is the Hubbard parameter.
Magnetic order preempts the UT phase at low temperature (T$_N$=170 K in CFA), but we can still estimate T$_c$ in the hypothetical UT phase of undoped CFA below T$_N$.
In brief, we find that CT-LFA has no superconducting instability, while UT-CFA, CT-CFA and UT-LFA are all predicted to be superconducting. All of these findings are consistent with experiment. In the experimentally known cases where the systems do superconduct (UT-LFA and CT-CFA), our estimated T$_c$'s appear to be a factor of two to three larger than the experimental T$_c$. A similar discrepancy is observed in the estimation of T$_c$ in the doped single-band Hubbard model14, where it stems from the local approximations of DMFT; a better momentum-dependent vertex is needed to circumvent it19. Apart from this constant scaling, all of these findings are consistent with experiment. Moreover, we find that the hypothetical UT-CFA phase may have the highest T$_c$ of all. We conclude that UT-CFA would be superconducting if it did not make a transition to an antiferromagnetically ordered state. The superior quality of the QSGW bath combined with non-perturbative DMFT has been shown to possess a high degree of predictive power for one- and two-particle spectral functions[7][8][9]12, and, as in other cases, we are able to replicate the experimental observations of spectral functions, including a reasonable estimate for T$_c$. The remainder of the paper uses this machinery to explain the origins of superconductivity.
The three systems predicted to have non-negligible T$_c$ (CT-CFA, UT-LFA, UT-CFA) have two things in common. First, the Fe-d$_{xy}$ state contributes to the hole pocket around the Γ point (the Fermi surface is shown in Fig. 1; see also the blue band in Fig. 2). Second, the imaginary part of the spin susceptibility Im χ(q, ω) has intense peaks centered at q=(1/2, 1/2, 0)·2π/a in the energy window (2, 25) meV. The latter is a consequence of the former: low-energy spin-flip transitions involving d$_{xy}$ are accessible, which give rise to strong peaks in Im χ(q, ω) around the antiferromagnetic nesting vector q$_{AFM}$=(π/a, π/a, 0). Im χ(q, ω) is diffuse in q around q$_{AFM}$. This broadening in momentum space suppresses antiferromagnetism, allowing superconductivity to form. CT-LFA is the only one of the four systems that has negligible instability to superconductivity. In CT-LFA the Fe-d$_{xy}$ state is pushed down (Fig. 2). As a consequence the peak in Im χ(q$_{AFM}$, ω) occurs at a much higher energy, too high to provide the low-energy glue for Cooper pairs. Also appearing is a pronounced dispersive paramagnon branch around q=0. This branch is present in all four systems, but it is strongest in CT-LFA; nevertheless the ab initio calculations predict no superconductivity there. This establishes that the paramagnon branch contributes little to the glue for superconductivity in these 122-As based compounds. Reducing the c-axis in the LFA phase pushes d$_{xy}$ below the Fermi energy E$_F$ (top left panel, Fig. 2); the remaining hole pocket at Γ is without d$_{xy}$ character (see Fig. 1). Quasi-particles in CT-LFA are much more coherent (see Fig. 3), with a small scattering rate Γ (extracted from the imaginary part of the self-energy at ω→0) and large quasi-particle weights Z relative to the other cases (see SM Table 2 for the orbitally resolved numbers). This further confirms that the CT-LFA phase is itinerant with small correlation, using U/W as a measure.
When the d$_{xy}$ state crosses E$_F$, the single-particle spectral functions A(q, ω) become markedly incoherent. This originates from enhanced single-particle scattering induced by local moment fluctuations within DMFT and suppressed orbitally resolved Z (SM Table 2). In the superconducting cases the d$_{xy}$ orbital character is the primary source of incoherence, with a high scattering rate (Γ>60 meV) and a quasi-particle weight as low as ∼0.4.
The peak in Im χ(q$_{AFM}$, ω) can be observed in almost all iron based superconductors15,20. However, what varies significantly across systems is the dispersion of the branches: the less itinerant the system, the smaller the dispersion in Im χ(q, ω) (with a typical spin-exchange scale J ∼ t²/U), and the more strongly correlated it is.
In the UT-LFA phase, Im χ(q, ω) has a dispersive magnon branch extending to ∼70 meV. As can be observed in Fig. 9, both the branch and the low-energy peak at (1/2, 1/2, 0) are significantly broad. The dispersion is significantly smaller than in undoped BaFe$_2$As$_2$ (BFA)21, where the dispersion survives up to 200 meV at (1/2, 0, 0). This suppression of the branches and the concomitant broadening suggest that UT-LFA is more correlated than BFA. In contrast with UT-LFA, CT-LFA has a Stoner-like continuum of spin excitations (in the figure the intensity is scaled by a factor of five to make it comparable to the UT phase) without any well defined low-energy peak. Similar spin excitations can be observed in the phosphorus compounds (BaFe$_2$P$_2$, LiFeP), where the system either does not superconduct or T$_c$ is fairly low when it does15. These are among the most itinerant of all iron based superconductors, and both the quasi-particle and spin excitations are band-like. In both phases we find weak to no q$_z$-dispersion of the susceptibilities, making the spin fluctuations effectively two dimensional.
In Fig. 5 we compare Im χ(q, ω) at (1/2, 1/2, 0) for the four candidates. UT-CFA has the most intense low-energy peak, followed by CT-CFA and UT-LFA. Low-energy spin excitations for CT-LFA are gapped at (1/2, 1/2, 0). Further, we take three energy cuts of Im χ(q, ω) at ω=15, 30, 60 meV along the path (H, K, L=0) = (0,0)-(1/2, 0)-(1/2, 1/2)-(0,0). At 15 meV, the UT-CFA peak is significantly stronger than the rest; CT-LFA has a weak uniform spin excitation at q=0, which is almost entirely suppressed at (1/2, 1/2, 0). It appears that an intense low-energy peak that is simultaneously broadened in momentum space provides the most favorable glue for superconducting ordering. For the higher-energy cuts at ω=30 and 60 meV, the sharp difference between UT-CFA and the others starts to diminish, and the spin excitations for all systems become broad, incoherent and nearly comparable. CT-LFA shows a clear two-peak structure associated with the high-energy paramagnon branch, which disperses to ∼500 meV (see SM). The eigenvalues and eigenfunctions of the superconducting susceptibility, and hence the superconducting pairing symmetries, cannot be extracted from the spin dynamics alone. We compute the full two-particle scattering amplitude in the particle-particle channel within our DMFT framework, and we solve the Eliashberg equations in the BCS low-energy approximation9,14,15. We resolve the eigenfunctions of the gap equation into different inter- and intra-orbital channels, and observe the trend in the leading eigenvalues with temperature in both the CT and UT phases. We observe that there are two dominant eigenvalues of the gap equation. The eigenvalues increase with decreasing T in UT-LFA, UT-CFA and CT-CFA, while in the CT-LFA phase they are vanishingly small (at least one order of magnitude smaller than in the UT phase) and insensitive to T.
The corresponding eigenfunctions in the UT-LFA phase have extended s-wave (leading eigenfunction ∆$_1$ for eigenvalue λ$_1$) and d$_{x^2-y^2}$ (lagging eigenfunction ∆$_2$ for eigenvalue λ$_2$) characters (see Fig. 6). We also find that these instabilities are primarily in the intra-orbital d$_{xy}$-d$_{xy}$ channel; the inter-orbital components are negligible. In both the UT- and CT-CFA phases the only instability appears to be of extended s-wave nature. We track the temperature at which the superconducting susceptibility diverges (the leading eigenvalue approaches one) to estimate T$_c$ (see Fig. 6). We find that the pairing vertex Γ rises steeply with lowering temperature and the leading eigenvalue λ follows the temperature dependence of Γ (see SM). Suppression of the charge component of Γ leads to no qualitative change in the temperature dependence of λ and only weakly changes its magnitude (see SM). Our results suggest that T$_c$ is directly proportional to the strength of the low-energy peak at (1/2, 1/2), which is in turn controlled by the correlations and scattering in the Fe-3d$_{xy}$ state. To conclude, we establish the interplay between band structure and correlations that leads to the emergence (suppression) of superconductivity in the UT-LFA (CT-LFA) phase. We establish a direct correspondence with the proximity of the d$_{xy}$ state to the Fermi energy, and show that it contributes to enhanced low-energy scattering and significantly incoherent quasi-particles. The incoherence affects two-particle features: the spin susceptibilities show broad and intense low-energy spin fluctuations centered at (1/2, 1/2). As the phase is quenched, in CT-LFA, d$_{xy}$ is pushed below E$_F$, which causes coherent spectral features to emerge with a broad continuum of spin excitations. These do not provide glue conducive to Cooper pair formation. Our conclusions find further validation in our calculations for the UT and CT phases of CFA.
UT-CFA was found to have the most intense low-energy susceptibility peak among the four candidates and is predicted to have the highest T$_c$, were the superconducting instability not suppressed by the entrant first-order structural transition.
This work was supported by the Simons Many-Electron Collaboration. We acknowledge PRACE for awarding us access to SuperMUC at GCS@LRZ, Germany, the STFC Scientific Computing Department's SCARF cluster, and the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1.
In this supplemental material, we list the input structural parameters for our calculations, and the orbitally resolved quasi-particle weights and scattering rates in the different compounds, as extracted from QSGW+DMFT. We also show Im χ(q, ω) up to 500 meV to demonstrate the itinerant character of the spin excitations in CT-LFA.
Note on U and J
We performed calculations with U and J taken from constrained RPA values for FeSe and LiFeAs. Subsequent constrained RPA calculations on the pnictides considered here (Table II above) indicate that they are about 15% larger than the values we used, with the change in U and J uniform across all four compounds. The ratio J/U ∼ 0.17 is essentially unchanged. While the correlations should increase, the conclusions will not change, since the adjustment is small and uniform, and we cannot justify repeating the calculations in light of their high cost.
[Table residue: orbitally resolved scattering rates Γ and quasi-particle weights z for the d$_{x^2-y^2}$, d$_{xz,yz}$, d$_{z^2}$ and d$_{xy}$ variants.] Im χ(q, ω) is shown for the CT and UT phases respectively over much higher energies to stress the absence of low-energy glue in the CT-LFA phase, and its nearly band-like spin excitation character. The intensity in the CT phase is artificially multiplied by five to bring the excitations for the CT and UT phases to the same scale.
A note on T$_c$ estimation
We compute the pairing eigenvalues by solving the linearized Eliashberg equation at different temperatures in the normal phase. The temperature at which the leading eigenvalue becomes one is where the particle-particle ladder sum (the superconducting pairing susceptibility) diverges, and it corresponds to the T$_c$ for that material (the entire method is detailed in our recent work9 and Park's thesis14). The local three-frequency, orbital-dependent vertex functions needed to solve the linearized Eliashberg equations are computed from CT-QMC, a continuous-time quantum Monte Carlo finite-temperature solver. Sampling the desired vertex functions from CT-QMC is fairly expensive: at 300 K, for example, we sample the CT-QMC vertex by launching the calculation on 40,000 cores for 4 hours.
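The final T$_c$ extraction reduces to locating the temperature at which the leading eigenvalue λ(T) crosses unity. A minimal sketch of that step; the λ(T) samples below are hypothetical placeholders, not computed results:

```python
import numpy as np

# Hypothetical leading eigenvalues of the linearized Eliashberg
# equation, sampled at decreasing temperatures (illustrative values).
T = np.array([580.0, 400.0, 300.0, 200.0, 100.0, 50.0])   # K
lam = np.array([0.20, 0.35, 0.48, 0.65, 0.90, 1.15])

# Tc is where lambda(T) crosses 1: linearly interpolate on the
# bracketing temperature interval.
idx = np.argmax(lam >= 1.0)            # first sample with lambda >= 1
T_hi, T_lo = T[idx - 1], T[idx]        # bracketing temperatures
l_hi, l_lo = lam[idx - 1], lam[idx]
Tc = T_hi + (1.0 - l_hi) * (T_lo - T_hi) / (l_lo - l_hi)
print(f"Estimated Tc ~ {Tc:.0f} K")    # prints: Estimated Tc ~ 80 K
```

In practice λ(T) rises steeply near T$_c$, so sampling a few bracketing temperatures suffices.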
The superconducting pairing susceptibility χ$^{p-p}$ is computed by dressing the non-local pairing polarization bubble χ$^{0,p-p}$(k, iν) with the pairing vertex Γ$^{irr,p-p}$ using the Bethe-Salpeter equation in the particle-particle channel.
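In matrix form (with orbital-pair indices flattened), this dressing is the particle-particle ladder resummation χ$^{p-p}$ = ((χ$^{0,p-p}$)$^{-1}$ − Γ)$^{-1}$. A schematic sketch with hypothetical, randomly generated inputs in place of the actual bubble and vertex:

```python
import numpy as np

n = 4  # flattened orbital-pair dimension; illustrative only
rng = np.random.default_rng(0)

# Hypothetical bare particle-particle bubble (diagonal for simplicity)
# and a symmetric irreducible pairing vertex, scaled weakly enough
# that the ladder series converges.
chi0 = np.diag(rng.uniform(0.1, 0.3, n))
g = rng.normal(size=(n, n))
gamma = 0.05 * (g + g.T)

# Closed-form Bethe-Salpeter solution of chi = chi0 + chi0 @ gamma @ chi
chi_pp = np.linalg.inv(np.linalg.inv(chi0) - gamma)

# Cross-check against the explicit ladder (geometric) series
chi_series = sum(np.linalg.matrix_power(chi0 @ gamma, m) @ chi0 for m in range(60))
assert np.allclose(chi_pp, chi_series)
```

The closed form diverges exactly when the leading eigenvalue of Γχ$^0$ reaches one, which is the instability criterion used below.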
Γ$^{irr,p-p}$ in the singlet (s) channel is obtained from the magnetic (spin) and density (charge) particle-hole reducible vertices as Γ$^{irr,p-p,s}$ = (3/2)Γ$^{p-h,m}$ − (1/2)Γ$^{p-h,d}$. Finally, χ$^{p-p}$ can be represented in terms of the eigenvalues λ and eigenfunctions φ$_λ$ of the Hermitian particle-particle pairing matrix.
The pairing susceptibility diverges when the leading eigenvalue approaches unity. When the particle-particle ladder sum χ$^{p-p}$ = ((χ$^{0,p-p}$)$^{-1}$ − Γ$^{p-p}$)$^{-1}$ diverges, the normal state becomes unstable towards superconductivity. In a temperature-dependent calculation this corresponds to the T$_c$ at which the leading eigenvalue of the matrix Γ$^{p-p}$χ$^0$ reaches one (as shown in Eqn. (3)). The eigenvector corresponding to the leading eigenvalue λ gives the symmetry of the superconducting order parameter ∆$_{αβ}$(k, ν). In an ideal scenario we would need to solve the following eigenvalue problem:
−k$_B$T Σ$_{k'ν'}$ Σ$_{α'β'γδ}$ Γ$^{p-p,s}$(αβkν; α'β'k'ν') χ$^{0,p-p}_{α'β'γδ}$(k', ν') ∆$_{γδ}$(k', ν') = λ∆$_{αβ}$(k, ν)
The matrix to be diagonalized has size (n$_{orb}^2$·n$_ω$·n$_k$) × (n$_{orb}^2$·n$_ω$·n$_k$). For our materials, even at β=20, which is roughly T = 580 K, with n$_{orb}$=5, n$_ω$=200 and n$_k$=1000 this is a (25·200·1000) × (25·200·1000) matrix. We therefore employ the BCS low-energy approximation to diagonalize the matrices at different temperatures and extract the eigenvalue spectrum. The BCS approximation amounts to taking Γ$^{p-p}$ strictly at the lowest energy (ν = 0$^+$, ν' = 0$^+$, ω = 0). However, Γ$^{p-p}$ contains all relevant momentum and orbital structure, and the bubble contains information from all energies, momenta and orbitals. This appears to be a reasonable approximation, as the vertex captures the essential features of superconductivity, which is a low-energy phenomenon. Additionally, the pairing vertex shows the essential temperature-dependent enhancement that is a prerequisite to Cooper pairing.
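Under the BCS low-energy approximation the problem collapses to diagonalizing Γ$^{p-p}$χ$^{0,p-p}$ at the lowest frequency. A schematic with small, randomly generated (hypothetical) matrices standing in for the true flattened (n$_{orb}^2$·n$_k$)-dimensional objects:

```python
import numpy as np

n = 6  # stands in for n_orb**2 * n_k after the BCS frequency collapse
rng = np.random.default_rng(1)

g = rng.normal(size=(n, n))
gamma = 0.5 * (g + g.T)                    # hypothetical pairing vertex at nu=nu'=0+, w=0
chi0 = np.diag(rng.uniform(0.1, 0.5, n))   # bubble with all energies folded in

# Pairing instability: leading eigenvalue of Gamma @ chi0; the
# corresponding eigenvector gives the symmetry of the gap Delta.
evals, evecs = np.linalg.eig(gamma @ chi0)
order = np.argsort(-evals.real)
lam_max = evals[order[0]].real
delta = evecs[:, order[0]].real            # leading gap function, up to normalization
```

When λ$_{max}$ is tracked against temperature, the crossing of unity marks T$_c$, and the structure of `delta` across momenta and orbitals identifies the pairing symmetry.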
We project the computed bubble and vertex functions onto the leading superconducting pairing-symmetry channel ∆$_k$ ∼ (cos k$_x$ + cos k$_y$) and show their temperature-dependent behaviour:
Γ̄ = Σ$_{k,k'}$ ∆$_k$ Γ(k, k') ∆$_{k'}$ / Σ$_k$ (∆(k))$^2$ (5)
FIG. 8. The particle-particle bubble χ̄$^0$ and the particle-particle interaction vertex Γ̄ (projected onto the leading pairing-symmetry channel), plotted as functions of temperature. The leading eigenvalue of the Eliashberg gap equation λ primarily follows the steep rise in Γ̄ with lowering temperature.
Further, we suppress the charge (density) component of Γ$^{p-p}$ to show that the eigenvalues (λ$^{-c}$) are only very weakly affected. This is in complete consistency with what we show in the main paper: the superconducting instability is in one-to-one correspondence with the spin instability, and the pairing is primarily mediated via spin fluctuations.
* swagata.acharya@kcl.ac.uk